mhcnuggets scores on TESLA samples #23
Thank you @RachelKarchin. Looking forward to hearing from you.
@RachelKarchin, do you perhaps have any update on this? Thank you.
I have only one student working with MHCnuggets. It is in her queue.
Apologies that I can’t offer you a faster response.
--
Rachel Karchin, Ph.D.
Professor of Biomedical Engineering, Oncology and Computer Science
Institute for Computational Medicine
Johns Hopkins University
I was very curious to test the performance of mhcnuggets 2.3 on the latest dataset of TESLA-validated neoantigen candidates published in Cell ("Key Parameters of Tumor Epitope Immunogenicity Revealed Through a Consortium Approach Improve Neoantigen Prediction", Tables S4 and S7). In TESLA, flow cytometry and microscopy were used to confirm (validate) the neoantigen candidates; more information is available in the paper.
Here are the IC50 scores obtained by mhcnuggets.
The distributions for confirmed and not-confirmed candidates fall in a very similar range:
Confirmed candidates (n=41): Median=5598.78, Mean=7441.88 (+-9090.58)
Not confirmed candidates (n=871): Median=5605.33, Mean=9153.18 (+-10031.29)
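(For clarity, a minimal sketch of how such summary statistics can be computed from the per-peptide IC50 predictions; the input arrays below are placeholder data, not the actual TESLA values.)

```python
import numpy as np

def summarize(ic50_values):
    """Median, mean and standard deviation of a set of IC50 predictions."""
    vals = np.asarray(ic50_values, dtype=float)
    return np.median(vals), vals.mean(), vals.std()

# Placeholder arrays standing in for the mhcnuggets IC50 predictions of the
# validated and non-validated candidates (real values would be read from the
# mhcnuggets output files).
confirmed_ic50 = [120.0, 4300.0, 9800.0]
not_confirmed_ic50 = [650.0, 5600.0, 12000.0, 18000.0]

for name, vals in [("Confirmed", confirmed_ic50), ("Not confirmed", not_confirmed_ic50)]:
    med, mean, sd = summarize(vals)
    print(f"{name} (n={len(vals)}): Median={med:.2f}, Mean={mean:.2f} (+-{sd:.2f})")
```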
When the mhcnuggets scores are normalized (with a minus sign added, since a smaller IC50 is better) and compared against the validation labels, the AUC is 0.513 and the precision is 0.0596; the output comparison figure is attached to the issue.
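(For reference, a minimal sketch of that kind of comparison, assuming scikit-learn and placeholder data rather than the real TESLA labels: the IC50 scores are negated before computing ROC AUC, and precision is computed at an illustrative 500 nM cutoff, which is an assumption here since the original post does not state the threshold used.)

```python
import numpy as np
from sklearn.metrics import precision_score, roc_auc_score

# Placeholder data: labels would come from the TESLA validation tables
# (1 = validated neoantigen, 0 = not validated) and ic50 from the mhcnuggets output.
labels = np.array([1, 0, 0, 1, 0, 0, 0, 1])
ic50 = np.array([120.0, 8300.0, 45.0, 950.0, 15000.0, 3300.0, 610.0, 70.0])

# Smaller IC50 means stronger predicted binding, so negate the scores for AUC.
auc = roc_auc_score(labels, -ic50)

# Precision needs a binary call; the 500 nM cutoff below is purely illustrative.
calls = (ic50 <= 500).astype(int)
precision = precision_score(labels, calls)

print(f"AUC = {auc:.3f}, precision = {precision:.4f}")
```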
Here is the table with the obtained scores for reference, and the command used for one of the mhcnuggets runs:
python /mhcnuggets/mhcnuggets/src/predict.py -c I --allele HLA-C*05:01 --peptides merged.fasta -o mhcnuggets_HLA-C*05:01.peps
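(Since this is described as one of several runs, here is a minimal sketch of how the same CLI could be looped over a set of alleles; the allele list below is hypothetical, while the script path, flags, and input file mirror the command above.)

```python
import subprocess

# Hypothetical subset of patient HLA alleles; the real list would come from the
# TESLA supplementary tables.
alleles = ["HLA-A*02:01", "HLA-B*07:02", "HLA-C*05:01"]

for allele in alleles:
    output_file = f"mhcnuggets_{allele}.peps"
    subprocess.run(
        [
            "python", "/mhcnuggets/mhcnuggets/src/predict.py",
            "-c", "I",
            "--allele", allele,
            "--peptides", "merged.fasta",
            "-o", output_file,
        ],
        check=True,  # raise if a run fails
    )
```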
Are these results expected? Could it be that this use case is out of scope for mhcnuggets, or does it require some additional tuning?