Table: NER results on the PharmaCoNER corpus (from "Deep learning with language models improves named entity recognition for PharmaCoNER")
Method | Mean P (%) ± SD | Mean R (%) ± SD | Mean F1 (%) ± SD | Max P (%) | Max R (%) | Max F1 (%)
---|---|---|---|---|---|---
BERT(Cased) | 89.31 ± 0.26 | 88.00 ± 0.16 | 88.65 ± 0.12 | 89.51 | 88.06 | 88.78\(^*\) |
BERT(Uncased) | 89.60 ± 0.81 | 88.13 ± 0.40 | 88.86 ± 0.57 | 90.32 | 88.65 | 89.48\(^*\) |
NCBI BERT(P+M,Uncased) | 89.29 ± 0.67 | 87.11 ± 0.60 | 88.18 ± 0.35 | 89.58 | 87.30 | 88.42\(^*\) |
NCBI BERT(P,Uncased) | 90.20 ± 0.38 | 88.88 ± 0.52 | 89.53 ± 0.37 | 90.76 | 89.58 | 90.16\(^*\) |
Spanish BERT(Uncased) | 89.69 ± 0.74 | 90.56 ± 0.58 | 90.12 ± 0.37 | 90.47 | 90.72 | 90.59\(^*\) |
Spanish BERT(Cased) | 90.42 ± 0.77 | 90.51 ± 0.69 | 90.47 ± 0.69 | 91.76 | 91.31 | 91.54 |
MultiBERT(Cased) | 89.53 ± 0.27 | 89.99 ± 0.43 | 89.76 ± 0.19 | 89.75 | 90.34 | 90.04\(^*\) |
MultiBERT(Uncased) | 90.74 ± 0.35 | 90.39 ± 0.37 | 90.56 ± 0.25 | 91.02 | 90.77 | 90.89 |
SciBERT(Bertvoc,Cased) | 90.36 ± 0.75 | 89.55 ± 0.30 | 89.96 ± 0.40 | 91.66 | 89.52 | 90.58\(^*\) |
SciBERT(Bertvoc,Uncased) | 91.07 ± 0.71 | 89.00 ± 0.45 | 90.02 ± 0.55 | 91.85 | 89.36 | 90.59\(^*\) |
SciBERT(Scivoc,Uncased) | 90.75 ± 0.86 | 90.27 ± 0.32 | 90.51 ± 0.40 | 92.03 | 90.28 | 91.15 |
SciBERT(Scivoc,Cased) | 91.25 ± 0.69 | 90.30 ± 0.58 | 90.77 ± 0.40 | 92.40 | 89.74 | 91.05 |
BioBERTv1.0(+PMC,Cased) | 90.54 ± 0.71 | 89.59 ± 0.31 | 90.06 ± 0.45 | 91.09 | 89.90 | 90.49\(^*\) |
BioBERTv1.0(+P,Cased) | 90.44 ± 0.34 | 89.98 ± 0.64 | 90.21 ± 0.36 | 90.75 | 90.55 | 90.65\(^*\) |
BioBERTv1.0(+P+PMC,Cased) | 91.08 ± 0.86 | 89.76 ± 0.52 | 90.41 ± 0.42 | 91.13 | 90.34 | 90.73 |
BioBERTv1.1(+P,Cased) | 91.40 ± 0.81 | 90.90 ± 0.47 | 91.15 ± 0.60 | 92.44 | 91.59 | 92.01 |
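The F1 column is the harmonic mean of precision and recall, the standard NER evaluation metric (the definition is not restated in this excerpt). A minimal sketch of the computation; note that the "Max" columns are per-run maxima, so F1 of the max P and max R need not equal the max F1, though for the best-scoring row it happens to reproduce the reported value:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (values in percent)."""
    return 2 * precision * recall / (precision + recall)

# Max P and R for BioBERTv1.1(+P,Cased) from the table above.
print(round(f1_score(92.44, 91.59), 2))  # 92.01, matching the reported max F1
```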