
I have heard of ensemble methods, such as XGBoost, for binary or categorical machine learning models. However, does this exist for regression? If so, how are the weights for each model in the process of predictions determined?

I am looking to do this manually, as I was planning on training two different models using separate frameworks (YoloV3 aka Darknet and Tensorflow for bounding box regression). Is there a way I can establish a weight for each model in the overall prediction for these boxes?

Or is this a bad idea?
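As a sketch of what "establishing a weight for each model" could look like for bounding boxes: one simple option is a weighted average of the two models' box coordinates, with the weights chosen from validation performance. The function name, the fixed `[x1, y1, x2, y2]` box format, and the weights below are assumptions for illustration, not part of either YoloV3/Darknet or Tensorflow.

```python
import numpy as np

def weighted_box_average(box_a, box_b, w_a=0.5, w_b=0.5):
    """Combine two [x1, y1, x2, y2] boxes by a weighted average.

    w_a and w_b would typically be chosen by comparing each
    model's validation error (e.g. mean IoU) and giving the
    better model a larger weight.
    """
    box_a = np.asarray(box_a, dtype=float)
    box_b = np.asarray(box_b, dtype=float)
    return (w_a * box_a + w_b * box_b) / (w_a + w_b)
```

With equal weights this is just the coordinate-wise mean of the two predicted boxes; pushing one weight toward 1 recovers that model's prediction alone.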

  • There are plenty of ensemble methods for regression, e.g. bagged trees, random forests, and gradient boosting such as XGBoost and AdaBoost. These are all applicable to both classification and regression. However, the example towards the end of your question does not sound like that is actually what you are looking for; rather, it sounds like you would like to combine two totally different models. Can you please clarify? – Jonathan Dec 04 '19 at 21:01

2 Answers


Table ? suggests that there were no promising versions of this algorithm for regression until 2012. After reading your question, I found a survey paper on ensemble methods for regression; the table is extracted from that paper. Read the paper, it will help you a lot more.

This is the latest paper published on object detection with an ensemble approach.


There are similar boosting classes in XGBoost for regression. You can use its built-in classes for your problem rather than implementing the ensemble from scratch; you can read more about them on the XGBoost website. You can also take a look at CatBoost, which implements a different boosting approach.