Table 2 Comparison of eight classifiers for 2D and 3D image classification. The average classification accuracy on test data (from 10-fold cross-validation) is shown for the optimal parameter settings (given in parentheses) for each classification approach. The parameters are: nhu – number of hidden nodes in the neural network, stop-fract – the fraction of the training data used to stop neural network training, C – error penalty in SVMs, sigma – kernel variance in SVMs, nboost – total number of iterations in AdaBoost, nbag – total number of iterations in Bagging, nhug – number of hidden nodes in the gating network of Mixtures-of-Experts, and ne – total number of experts in Mixtures-of-Experts. The accuracies across the 10-fold cross-validation trials were compared by a paired t-test to those for the previously described neural network configuration (nhu = 20, stop-fract = 0.3), which achieved 88% for SLF13, 86% for SLF8, 93% for SLF10, and 84% for SLF14. The best performance for each feature set is shown in bold. *CPU times listed for each classifier cover training and testing for all images in each cross-validation trial (training times include the time required to calculate features); all times were measured on an Athlon 1.7 GHz processor with 1.5 GB of memory running Red Hat Linux 7.1.

From: Boosting accuracy of automated classification of fluorescence microscope images for location proteomics
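
The P-values in the table come from this paired comparison: each classifier's per-fold accuracies are matched against the baseline neural network's per-fold accuracies. A minimal sketch of such a test, using SciPy with hypothetical per-fold accuracies (not values from the paper), might look like:

```python
# Paired t-test over matched 10-fold cross-validation accuracies.
# The per-fold accuracies below are hypothetical illustrations only.
from scipy.stats import ttest_rel

baseline_nn = [0.88, 0.87, 0.89, 0.88, 0.86, 0.88, 0.89, 0.87, 0.88, 0.88]
candidate   = [0.90, 0.89, 0.90, 0.89, 0.88, 0.89, 0.91, 0.88, 0.90, 0.90]

# Folds are matched, so the test operates on per-fold differences.
t_stat, p_value = ttest_rel(candidate, baseline_nn)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```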

| Feature set | Classifier (optimal parameters) | Classification accuracy (%) | Average training time* (s) | Average testing time* (s) | P-value |
|---|---|---|---|---|---|
| SLF13 (2D DNA) | Neural network (nhu = 16, stop-fract = 0.1) | 87.8 | 116.3 | 0.001 | 0.43 |
| | SVM (linear, DAG, C = 1) | 87.9 | 0.7 | 0.088 | 0.36 |
| | SVM (rbf, DAG, sigma = 8, C = 16) | 89.4 | 1.1 | 0.470 | 0.03 |
| | SVM (exprbf, maxwin, sigma = 4, C = 4) | 89.2 | 3.5 | 0.530 | 0.04 |
| | SVM (poly, maxwin, degree = 2, C = 0.01) | 88.6 | 4.7 | 0.140 | 0.21 |
| | AdaBoost (nhu = 8, nboost = 64) | 88.9 | 55.2 | 0.018 | 0.10 |
| | Bagging (nhu = 64, nbag = 32) | 88.9 | 111.0 | 0.078 | 0.09 |
| | Mixtures-of-Experts (nhu = 16, nhug = 64, ne = 16) | **89.7** | 38.3 | 0.010 | 0.02 |
| SLF8 (2D) | Neural network (nhu = 16, stop-fract = 0.3) | 86.1 | 139.1 | 0.001 | 0.53 |
| | SVM (linear, DAG, C = 1) | 84.9 | 0.7 | 0.075 | 0.83 |
| | SVM (rbf, maxwin, sigma = 8, C = 64) | 87.9 | 11.4 | 1.600 | 0.15 |
| | SVM (exprbf, maxwin, sigma = 8, C = 16) | 88.1 | 4.0 | 0.540 | 0.02 |
| | SVM (poly, maxwin, degree = 2, C = 0.01) | 86.7 | 5.2 | 0.170 | 0.37 |
| | AdaBoost (nhu = 32, nboost = 128) | **88.2** | 412.0 | 0.190 | 0.12 |
| | Bagging (nhu = 64, nbag = 64) | 87.2 | 238.2 | 0.160 | 0.17 |
| | Mixtures-of-Experts (nhu = 32, nhug = 16, ne = 4) | 87.0 | 11.6 | 0.002 | 0.22 |
| SLF10 (3D DNA) | Neural network (nhu = 32, stop-fract = 0.1) | **95.3** | 740.3 | 0.001 | 0.06 |
| | SVM (linear, DAG, C = 8) | 93.3 | 0.3 | 0.043 | 0.47 |
| | SVM (rbf, maxwin, sigma = 2, C = 64) | 95.0 | 2.3 | 0.230 | 0.08 |
| | SVM (exprbf, DAG, sigma = 1, C = 1) | 95.2 | 0.5 | 0.081 | 0.06 |
| | SVM (poly, maxwin, degree = 2, C = 1) | 93.1 | 2.0 | 0.067 | 0.51 |
| | AdaBoost (nhu = 32, nboost = 32) | 93.2 | 43.2 | 0.016 | 0.46 |
| | Bagging (nhu = 64, nbag = 4) | 89.4 | 6.8 | 0.003 | 0.99 |
| | Mixtures-of-Experts (nhu = 32, nhug = 64, ne = 16) | 92.2 | 45.8 | 0.007 | 0.74 |
| SLF14 (3D) | Neural network (nhu = 32, stop-fract = 0) | 88.4 | 172.0 | 0.001 | 0.02 |
| | SVM (linear, DAG, C = 32) | 86.5 | 1.0 | 0.047 | 0.12 |
| | SVM (rbf, maxwin, sigma = 2, C = 32) | 86.6 | 4.6 | 0.290 | 0.17 |
| | SVM (exprbf, maxwin, sigma = 2, C = 8) | **89.1** | 1.4 | 0.170 | 0.05 |
| | SVM (poly, maxwin, degree = 2, C = 2) | 87.3 | 8.3 | 0.068 | 0.05 |
| | AdaBoost (nhu = 64, nboost = 64) | 87.7 | 144.3 | 0.085 | 0.03 |
| | Bagging (nhu = 64, nbag = 256) | 82.2 | 505.7 | 0.340 | 0.82 |
| | Mixtures-of-Experts (nhu = 16, nhug = 8, ne = 2) | 83.8 | 2.9 | 0.001 | 0.59 |
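
To make the table's configurations concrete, the sketch below runs 10-fold cross-validation for two entries using scikit-learn (not the toolchain used in the paper): the SLF13 rbf SVM (sigma = 8, C = 16) and a neural network resembling the baseline (nhu = 20, stop-fract = 0.3). The data are random placeholders, validation_fraction only approximates stop-fract, and gamma = 1/(2·sigma²) is an assumed mapping from the paper's kernel variance to scikit-learn's gamma parameter.

```python
# Sketch: 10-fold cross-validation for two of the table's configurations.
# All data are random placeholders, NOT the SLF feature sets from the paper.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 31))      # placeholder feature matrix
y = rng.integers(0, 10, size=300)   # placeholder labels for 10 classes

# Baseline-like neural network: nhu = 20 hidden nodes; stop-fract = 0.3 is
# approximated by scikit-learn's early-stopping validation_fraction.
nn = MLPClassifier(hidden_layer_sizes=(20,), early_stopping=True,
                   validation_fraction=0.3, max_iter=1000, random_state=0)

# SLF13 rbf SVM entry (sigma = 8, C = 16); gamma = 1 / (2 * sigma**2) is an
# assumed translation of the paper's kernel variance parameter.
sigma, C = 8.0, 16.0
svm = SVC(kernel="rbf", gamma=1.0 / (2.0 * sigma**2), C=C)

for name, clf in [("neural network", nn), ("rbf SVM", svm)]:
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"{name}: mean 10-fold accuracy = {scores.mean():.3f}")
```

scikit-learn's SVC resolves multiclass problems by one-vs-one voting, which corresponds to the max-wins strategy in the table; the DAG variants would require a different decision procedure.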