Table 3 Cumulative observed model quality scores for each MQAP (TS1 and AL1 models)

From: Benchmarking consensus model quality assessment for protein fold recognition

| Method | TM-score | MaxSub | GDT | Combined |
| --- | --- | --- | --- | --- |
| Maximum MQAP Score | **62.30** | **52.98** | **56.25** | **57.18** |
| Zhang-Server_TS1 | 58.21 | 48.77 | 52.03 | 53.00 |
| 3D-Jury † | **58.02** | **48.32** | **51.96** | **52.77** |
| Pcons*† | **55.55** | **47.00** | **50.08** | **50.87** |
| LEE*† | **55.20** | **45.77** | **49.60** | **50.19** |
| ModFOLD | **55.39** | **45.47** | **49.62** | **50.16** |
| HHpred2_TS1 | 54.95 | 45.22 | 49.16 | 49.78 |
| Pcons6_TS1 | 54.67 | 45.08 | 48.52 | 49.42 |
| Pmodeller6_TS1 | 54.77 | 44.76 | 48.73 | 49.42 |
| ROBETTA_TS1 | 54.92 | 44.43 | 48.85 | 49.40 |
| CIRCLE_TS1 | 54.69 | 44.59 | 48.49 | 49.26 |
| HHpred3_TS1 | 54.33 | 44.76 | 48.52 | 49.20 |
| BayesHH_TS1 | 54.39 | 44.33 | 48.41 | 49.04 |
| MetaTasser_TS1 | 55.17 | 43.80 | 48.15 | 49.04 |
| HHpred1_TS1 | 54.18 | 44.48 | 48.04 | 48.90 |
| UNI-EID_expm_TS1 | 54.06 | 44.58 | 47.95 | 48.86 |
| ModSSEA | **54.30** | **43.88** | **48.35** | **48.84** |
| beautshot_TS1 | 54.37 | 44.25 | 47.75 | 48.79 |
| FAMSD_TS1 | 54.07 | 44.08 | 48.05 | 48.73 |
| PROQ* | **53.47** | **44.50** | **48.15** | **48.71** |
| RAPTOR-ACE_TS1 | 54.05 | 43.80 | 47.69 | 48.52 |
| FAMS_TS1 | 53.84 | 43.70 | 47.84 | 48.46 |
| SP3_TS1 | 53.51 | 43.48 | 47.41 | 48.13 |
| SP4_TS1 | 53.44 | 43.19 | 47.11 | 47.91 |
| shub_TS1 | 53.35 | 43.31 | 46.87 | 47.84 |
| RAPTOR_TS1 | 53.48 | 42.88 | 47.16 | 47.84 |
| UNI-EID_bnmx_TS1 | 52.33 | 43.72 | 46.88 | 47.64 |
| beautshotbase_TS1 | 52.46 | 43.05 | 46.59 | 47.37 |
| RAPTORESS_TS1 | 53.17 | 42.44 | 46.46 | 47.36 |
| FUNCTION_TS1 | 52.75 | 42.59 | 46.57 | 47.30 |
| SPARKS2_TS1 | 52.47 | 42.49 | 46.19 | 47.05 |
| PROQ-LG | **51.49** | **43.04** | **46.43** | **46.99** |
| 3Dpro_TS1 | 51.81 | 42.16 | 46.34 | 46.77 |
| FOLDpro_TS1 | 51.77 | 42.06 | 46.10 | 46.64 |
| GeneSilicoMetaServer_TS1 | 51.75 | 42.09 | 45.87 | 46.57 |
| UNI-EID_sfst_AL1.pdb | 50.39 | 42.55 | 45.37 | 46.10 |
| PROTINFO_TS1 | 51.28 | 41.36 | 45.60 | 46.08 |
| Ma-OPUS-server_TS1 | 51.23 | 40.96 | 45.30 | 45.83 |
| SAM_T06_server_TS1 | 51.35 | 40.66 | 45.12 | 45.71 |
| PROQ-MX | **49.89** | **41.60** | **44.89** | **45.46** |
| PROTINFO-AB_TS1 | 50.64 | 40.65 | 44.65 | 45.32 |
| Phyre-2_TS1 | 50.26 | 40.32 | 44.38 | 44.99 |
| ROKKY_TS1 | 49.66 | 40.42 | 44.16 | 44.75 |
| mGen-3D_TS1 | 49.29 | 40.15 | 44.22 | 44.55 |
| Bilab-ENABLE_TS1 | 49.59 | 39.16 | 43.26 | 44.00 |
| SAM-T02_AL1.pdb | 48.13 | 40.12 | 43.03 | 43.76 |
| LOOPP_TS1 | 48.44 | 38.64 | 42.73 | 43.27 |
| FUGUE_AL1.pdb | 47.55 | 38.79 | 42.53 | 42.96 |
| nFOLD_TS1 | 47.40 | 38.46 | 41.95 | 42.60 |
| keasar-server_TS1 | 47.84 | 38.20 | 41.59 | 42.54 |
| Phyre-1_TS1 | 46.87 | 38.16 | 41.63 | 42.22 |
| MODCHECK | **47.03** | **37.76** | **41.65** | **42.15** |
| NN_PUT_lab_TS1 | 46.95 | 37.72 | 41.26 | 41.98 |
| CaspIta-FOX_TS1 | 46.53 | 37.47 | 41.01 | 41.67 |
| FUGMOD_TS1 | 46.37 | 37.42 | 41.10 | 41.63 |
| FORTE1_AL1.pdb | 46.51 | 37.06 | 40.66 | 41.41 |
| FORTE2_AL1.pdb | 46.30 | 36.89 | 40.56 | 41.25 |
| 3D-JIGSAW_POPULUS_TS1 | 44.74 | 35.44 | 39.34 | 39.84 |
| karypis.srv_TS1 | 44.43 | 35.20 | 38.95 | 39.53 |
| 3D-JIGSAW_RECOM_TS1 | 43.70 | 35.55 | 38.84 | 39.36 |
| 3D-JIGSAW_TS1 | 43.53 | 34.50 | 38.37 | 38.80 |
| SAM-T99_AL1.pdb | 42.60 | 35.81 | 37.64 | 38.69 |
| karypis.srv.2_TS1 | 42.77 | 33.54 | 37.50 | 37.94 |
| Huber-Torda-Server_TS1 | 41.78 | 34.40 | 37.21 | 37.80 |
| forecast-s_AL1.pdb | 41.00 | 33.38 | 36.48 | 36.95 |
| Distill_TS1 | 39.75 | 27.26 | 31.94 | 32.98 |
| Ma-OPUS-server2_TS1 | 33.35 | 26.75 | 29.77 | 29.96 |
| panther2_TS1 | 28.87 | 23.67 | 25.85 | 26.13 |
| CPHmodels_TS1 | 27.75 | 23.49 | 24.55 | 25.26 |
| Frankenstein_TS1 | 23.55 | 17.66 | 20.33 | 20.52 |
| gtg_AL1.pdb | 20.55 | 16.66 | 17.81 | 18.34 |
| ABIpro_TS1 | 21.88 | 12.35 | 17.45 | 17.22 |
| MIG_FROST_AL1.pdb | 16.68 | 12.11 | 14.75 | 14.51 |
| FPSOLVER-SERVER_TS1 | 14.91 | 6.78 | 10.97 | 10.89 |
| karypis.srv.4_TS1 | 14.71 | 6.55 | 10.66 | 10.64 |
| POMYSL_TS1 | 9.64 | 6.00 | 8.35 | 8.00 |
| panther3_TS1 | 5.75 | 4.58 | 5.05 | 5.12 |
| MIG_FROST_FLEX_AL1.pdb | 1.05 | 0.97 | 1.07 | 1.03 |

Results in bold indicate the cumulative observed model quality scores of the top-ranked models for each target (Σm), obtained by using each MQAP method to rank the top models from all fold recognition servers. The maximum achievable MQAP score, obtained by consistently selecting the best model for each target, is also highlighted. All other results are the cumulative scores of the TS1 or AL1 models from each fold recognition server taking part in the automated category at CASP7. Each column indicates the method used to measure the observed model quality; rows are sorted by the combined observed model quality score. *The MQAP scores for these methods were downloaded from the CASP7 website; all other MQAP methods were run in house during the CASP7 experiment. †MQAP methods that rely on the comparison of multiple models or additional information from multiple servers; all other methods can produce a score from a single model.
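To make the scoring procedure concrete, the sketch below shows, under assumed data structures and with hypothetical function and variable names (this is not the authors' code), how a cumulative score Σm in this table can be computed: for each target, the model ranked highest by a given MQAP is selected and its observed quality (TM-score, MaxSub or GDT) is added to a running total; the maximum achievable score instead always selects the genuinely best model for each target.

```python
from typing import Dict

# Hypothetical inputs (illustration only):
#   observed_quality[target][model] -> observed quality of that model (e.g. TM-score)
#   mqap_scores[target][model]      -> score assigned to that model by one MQAP method


def cumulative_score(
    observed_quality: Dict[str, Dict[str, float]],
    mqap_scores: Dict[str, Dict[str, float]],
) -> float:
    """Sum, over all targets, the observed quality of the model the MQAP ranks first."""
    total = 0.0
    for target, models in observed_quality.items():
        # Model ranked highest by the MQAP for this target ...
        best_by_mqap = max(models, key=lambda m: mqap_scores[target].get(m, float("-inf")))
        # ... contributes its *observed* quality to the cumulative score.
        total += models[best_by_mqap]
    return total


def maximum_achievable_score(observed_quality: Dict[str, Dict[str, float]]) -> float:
    """Upper bound: always pick the genuinely best model for each target."""
    return sum(max(models.values()) for models in observed_quality.values())
```

A server row in the table corresponds to the simpler case where, for every target, the score of that server's own TS1 or AL1 model is summed directly, with no selection step involved.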