Predictive tools reviewed in this study

| Model | Category | Performance (AUC) | Year [Citation] |
|---|---|---|---|
| ProteaSMM c/i | i | 0.71/0.74 | 2005 [39] |
| NetCTLpan | i | 0.94 | 2010 [40] |
| NetMHCstabpan | i | 0.97* | 2016 [43] |
| HLAthena | i | N/A | 2020 [44] |
| iPCPS | i | N/A | 2020 [45] |
| MHCflurry BA/AP | i | 0.91/0.85 | 2020 [46] |
| NetCleave | i | 0.58 | 2021 [47] |
| NetMHCpanExp | i | 0.82* | 2022 [50] |
| NetMHCpan | N/A | 0.99* | 2017 [55] |
| MixMHCpred | N/A | 0.98* | 2017 [54] |
| Kernel | ii | 0.8§ | 2012 [64] |
| Antigen.garnish dissimilarity/IEDB score | ii | 0.85/0.70 | 2019 [65] |
| Pairwise sequence similarity | ii | N/A | 2019 [20] |
| IEDB immunogenicity | iii | 0.61 | 2013 [66] |
| DeepNetBim | iii | 0.94§ | 2021 [70] |
| DeepImmuno | iii | 0.85 | 2021 [72] |
| DeepHLApan | iv | 0.81* | 2019 [75] |
| INeo-Epp | iv | 0.78 | 2020 [76] |
| TA predictor | iv | 0.82 | 2021 [77] |
| PRIME | iv | 0.81 | 2021 [80] |
| iTTCA-RF | iv | 0.78 | 2021 [81] |

Tools are grouped by the categories established in this article (i: biological features; ii: similarity metrics; iii: pathogen immunogenicity; iv: tumor immunogenicity) and sorted by year of publication. The AUCs are those reported by the authors in the original articles. Where available, the performance corresponds to independent evaluations on epitope or neoepitope datasets; if multiple evaluations were made, the average AUC is displayed. *: these methods were evaluated on datasets containing immunogenic peptides as positives and other peptides as negatives, and the latter may not bind to MHC molecules; †: these methods were evaluated on datasets containing immunogenic peptides as positives and non-immunogenic peptides as negatives, where both categories may have the same likelihood of binding to MHC (an approach comparable to the evaluation performed in this work); ‡: this method was evaluated as part of the pTuneos pipeline, so its performance cannot be assessed individually; §: the AUC corresponds to performance in cross-validation; N/A: not applicable; AUC: area under the curve.
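For readers less familiar with the metric, the AUCs in the table can be interpreted as the probability that a randomly chosen positive (e.g., immunogenic) peptide receives a higher prediction score than a randomly chosen negative. A minimal sketch of this rank-based (Mann-Whitney) formulation, using made-up scores rather than data from any cited study:

```python
def roc_auc(scores, labels):
    """Rank-based ROC AUC (Mann-Whitney U / (n_pos * n_neg)); ties count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need both positive and negative examples")
    # Count positive-vs-negative score comparisons won by the positive class.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative example: 3 of 4 positive/negative pairs are ranked correctly.
print(roc_auc([0.9, 0.3, 0.4, 0.1], [1, 1, 0, 0]))  # → 0.75
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is why values such as 0.58 (NetCleave) and 0.97 (NetMHCstabpan) in the table represent very different discriminative power.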