
Normally, when using an ensemble method such as bagging or boosting for binary classification, there is a requirement that each weak classifier have accuracy better than 50%.

In the multiclass classification setting, this is often infeasible. Is there a way to improve upon multiclass classification with ensembles?

To make this concrete with an example: say I have a problem with 1000 classes, and I train 50 models, each with 10% accuracy, which is 100x better than random guessing.

Is there a way to combine these models to form a better classification algorithm?

chessprogrammer

1 Answer


Any classifier that performs slightly better than random guessing is a weak classifier. In a system with N classes, a random-guessing actor will have an accuracy of 1/N, i.e. (100/N)%.

For example, if there are 1000 classes, a random guesser will have an accuracy of 0.1%. Hence, in this example, any classifier with more than 0.1% accuracy is a weak classifier.
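
As for combining such models: under the (strong, and here assumed) condition that the models' errors are independent, even a simple plurality vote turns many weak multiclass classifiers into a much stronger one. Below is a minimal NumPy simulation sketch using the numbers from the question (1000 classes, 50 models, 10% individual accuracy); the independence and uniform-error assumptions are mine, not something guaranteed by real trained models.

```python
import numpy as np

# Minimal simulation: 50 weak classifiers over 1000 classes, each with
# 10% individual accuracy, combined by plurality vote.
# Key assumption: the models' errors are independent and spread
# uniformly over the wrong classes.
rng = np.random.default_rng(0)

N_CLASSES = 1000
N_MODELS = 50
P_CORRECT = 0.10
N_SAMPLES = 5000

true_labels = rng.integers(0, N_CLASSES, size=N_SAMPLES)

# Each model predicts the true class with probability P_CORRECT,
# otherwise a uniformly random wrong class.
preds = np.empty((N_MODELS, N_SAMPLES), dtype=np.int64)
for m in range(N_MODELS):
    correct = rng.random(N_SAMPLES) < P_CORRECT
    wrong = (true_labels + rng.integers(1, N_CLASSES, size=N_SAMPLES)) % N_CLASSES
    preds[m] = np.where(correct, true_labels, wrong)

# Plurality vote: for each example, predict the class most models chose.
ensemble_pred = np.array([
    np.bincount(preds[:, i], minlength=N_CLASSES).argmax()
    for i in range(N_SAMPLES)
])

print("mean individual accuracy:", (preds == true_labels).mean())
print("ensemble accuracy:       ", (ensemble_pred == true_labels).mean())
```

The intuition: with independent errors, the true class receives on average 50 × 0.1 = 5 votes, while each wrong class receives roughly 50 × 0.9 / 999 ≈ 0.045 votes, so the plurality vote is almost always right. In practice the gain is smaller, because real models trained on the same data tend to make correlated errors.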