Table 3 Comparison of the weighted accuracy of different classifiers using i) subsets of our 7 genes and ii) all 25,000 genes

From: A voting approach to identify a small number of highly predictive genes using multiple classifiers

| Classifier | 7 genes: Test set (19) | 7 genes: All data (5-fold CV) | 25,000 genes: Test set (19) | 25,000 genes: All data (5-fold CV) |
|---|---|---|---|---|
| C4.5 | 84.52% | 88.49% | 79.17% | 62.36% |
| C4.5 with boosting (AdaBoost) | 91.67% | 89.54% | 63.10% | 62.89% |
| C4.5 with bagging | 84.52% | 88.94% | 48.81% | 63.98% |
| Naïve Bayes | 84.52% | 92.13% | 50.00% | 52.17% |
| Naïve Bayes with bagging | 88.69% | 86.82% | 50.00% | 52.17% |
| Naïve Bayes with boosting | 84.52% | 87.65% | 50.00% | 52.17% |
| LMT | 84.52% | 88.11% | 77.38% | 60.29% |
| NBTree | 84.52% | 83.69% | 66.07% | 58.76% |
| Random Forest | 84.52% | 90.59% | 66.07% | 62.47% |
| Random Forest with bagging | 88.69% | 90.59% | 73.21% | 64.75% |
| Random Forest with boosting | 84.52% | 88.48% | 66.07% | 62.45% |
| k-NN | 80.36% | 83.00% | 63.69% | 61.94% |
| Logistic Regression | 81.55% | 88.11% | Out of memory* | Out of memory* |
| ANN | 77.38% | 83.44% | Out of memory* | Out of memory* |
| SVM | 83.33% | 76.23% | 63.69% | 68.12% |
*Our experiments were carried out on a standard Intel Core 2 Duo 2.4 GHz desktop computer with 2 GB of RAM.
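The evaluation protocol behind this table (5-fold cross-validation and a held-out test set, weighted accuracy, a mix of tree-based, Bayesian, instance-based and margin-based classifiers) can be sketched generically. The snippet below is a minimal illustration, not the authors' code: it uses scikit-learn stand-ins for the classifiers listed above, synthetic placeholder data in place of the real gene-expression matrix, and scikit-learn's balanced-accuracy score as an approximation of the weighted accuracy reported here; LMT and NBTree have no direct scikit-learn equivalent and are omitted.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Synthetic placeholder data standing in for the real 7-gene expression
# matrix and class labels (sample size chosen arbitrarily).
rng = np.random.default_rng(0)
X_7genes = rng.normal(size=(100, 7))   # samples x 7 selected genes
y = rng.integers(0, 2, size=100)       # binary class labels

# scikit-learn stand-ins for (a subset of) the classifiers in the table.
classifiers = {
    "Decision tree (C4.5-like)": DecisionTreeClassifier(),
    "Boosted trees (AdaBoost)": AdaBoostClassifier(),
    "Bagged trees": BaggingClassifier(),
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(),
    "k-NN": KNeighborsClassifier(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
}

# 5-fold stratified cross-validation, scored with balanced accuracy
# (the mean of per-class recalls) as a proxy for weighted accuracy.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X_7genes, y, cv=cv, scoring="balanced_accuracy")
    print(f"{name}: {scores.mean():.2%} (+/- {scores.std():.2%})")
```

Restricting the feature matrix to a small gene subset, as in the left half of the table, keeps all of these models comfortably within memory; repeating the loop on a 25,000-feature matrix is what pushes memory-hungry solvers toward the out-of-memory failures noted above.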