Abstract
One problem in the field of machine learning is that performance on the training and validation sets often lacks robustness when a model is applied in real-life situations. Recent advances in ensemble methods have demonstrated that robust behavior can be improved by combining a large number of weak classifiers. The key insight of this paper is that the performance enhancement due to combining multiple classifiers is considerably greater in multi-category situations than in binary classification, as long as the classifiers' errors are conditionally independent. This paper provides experimental and theoretical analysis of performance under majority vote, paying special attention to the effect of several parameters: the number of combined classifiers, the weakness of the combined classifiers, and the number of classes. These insights can provide guidance for the analysis and design of multi-classifier systems.
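The abstract's central claim — that voting over conditionally independent weak classifiers helps, and helps more as the number of classes grows — can be illustrated with a small Monte-Carlo sketch. This is not the paper's own code or experimental setup; the error model (each classifier's mistakes spread uniformly over the wrong classes) and the function name are illustrative assumptions.

```python
import random
from collections import Counter

def plurality_vote_accuracy(n_classifiers, n_classes, p_correct,
                            trials=2000, seed=0):
    """Estimate the accuracy of a plurality vote over n_classifiers
    conditionally independent weak classifiers on an n_classes problem.

    Assumed error model: each classifier is correct with probability
    p_correct, and when wrong it picks uniformly among the other classes.
    """
    rng = random.Random(seed)
    wrong_classes = list(range(1, n_classes))  # class 0 is the true class
    hits = 0
    for _ in range(trials):
        votes = Counter()
        for _ in range(n_classifiers):
            if rng.random() < p_correct:
                votes[0] += 1
            else:
                votes[rng.choice(wrong_classes)] += 1
        # Winner is the most-voted class (ties broken arbitrarily).
        winner = max(votes, key=votes.get)
        hits += (winner == 0)
    return hits / trials

# A single weak classifier at 40% accuracy on a 10-class problem is far
# from reliable, but a plurality vote over 25 such independent classifiers
# recovers the true class almost every time.
print(plurality_vote_accuracy(1, 10, 0.4))
print(plurality_vote_accuracy(25, 10, 0.4))
```

Under this toy model, the same base accuracy gain (relative to chance) yields a larger ensemble improvement when the wrong votes are diluted across many classes, which is the multi-category effect the abstract describes.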
| Original language | English (US) |
|---|---|
| Pages (from-to) | 750-757 |
| Number of pages | 8 |
| Journal | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
| Volume | 3316 |
| State | Published - Dec 1 2004 |
ASJC Scopus subject areas
- Theoretical Computer Science
- General Computer Science