An empirical evaluation of four algorithms for multi-class classification: Mart, abc-mart, robust logitboost, and abc-logitboost

P Li - arXiv preprint arXiv:1001.1020, 2010 - arxiv.org
This empirical study is mainly devoted to comparing four tree-based boosting algorithms: mart, abc-mart, robust logitboost, and abc-logitboost, for multi-class classification on a variety of publicly available datasets. Some of those datasets have been thoroughly tested in prior studies using a broad range of classification algorithms including SVM, neural nets, and deep learning. In terms of the empirical classification errors, our experimental results demonstrate:
1. Abc-mart considerably improves mart.
2. Abc-logitboost considerably improves (robust) logitboost.
3. (Robust) logitboost considerably improves mart on most datasets.
4. Abc-logitboost considerably improves abc-mart on most datasets.
5. These four boosting algorithms (especially abc-logitboost) outperform SVM on many datasets.
6. Compared to the best deep learning methods, these four boosting algorithms (especially abc-logitboost) are competitive.
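For readers who want a quick baseline in the spirit of these comparisons, the sketch below is illustrative only, not the paper's implementation. Mart is gradient boosting with regression trees (Friedman's MART), so scikit-learn's GradientBoostingClassifier serves as a rough mart-style stand-in against an SVM baseline; the abc- variants and robust logitboost have no stock scikit-learn equivalent. The dataset and hyperparameters are assumptions chosen for illustration, not the paper's experimental setup.

```python
# Minimal sketch: a mart-style boosting baseline vs. an SVM baseline.
# Dataset, split, and hyperparameters are illustrative assumptions only.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Mart-style boosting: fits one regression tree per class per iteration.
mart = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                  max_depth=3, random_state=0)
mart.fit(X_tr, y_tr)

# SVM baseline with an RBF kernel, the kind of comparison the abstract cites.
svm = SVC(kernel="rbf", gamma="scale", C=10.0)
svm.fit(X_tr, y_tr)

# Report test errors (1 - accuracy), the metric compared in the paper.
print("mart-style boosting error:", 1 - mart.score(X_te, y_te))
print("SVM error:               ", 1 - svm.score(X_te, y_te))
```

This small benchmark will not reproduce the paper's numbers; it only illustrates the boosting-versus-SVM comparison the abstract summarizes.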