Hi, thanks for your excellent work. I have tried to run your models as well as the other models provided in this repo, but I found a large inconsistency between the results from your models and those from the original nnFormer repo, even with the same patch size, spacing, and other parameters. For your nnFormer, the average over 5-fold cross validation is around 0.5 DSC, while the original nnFormer reports 0.62. This makes me wonder: did you fine-tune only your MedFormer, while the results from the other models were not fine-tuned?
Thanks a lot.
The MedFormer in this repo is trained from scratch on all datasets without any pretrained weights. For nnFormer, I copied their original model code with very minor modifications to make it work in our repo. The performance difference between our repo and the nnFormer repo is likely due to other training hyper-parameters, such as the learning rate, optimizer, and number of epochs. In my experience, nnFormer is sensitive to hyper-parameters and needs special tuning, in contrast to ResUNet or MedFormer. Some recent papers report similar findings: https://arxiv.org/pdf/2304.03493.pdf. You might need to try other training hyper-parameters to see if they can match the performance reported in the original nnFormer repo.
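As a rough illustration, here is a minimal sketch of the kind of hyper-parameter changes worth trying, i.e., an SGD optimizer with a polynomial learning-rate decay in the style of nnU-Net/nnFormer training. All concrete values and the placeholder model below are assumptions for illustration, not the settings used in either repo.

```python
import torch

# Placeholder model; substitute the nnFormer instance built by this repo.
model = torch.nn.Conv3d(1, 2, kernel_size=3, padding=1)

# Assumed optimizer settings; tune these against the original nnFormer repo.
base_lr = 0.01
max_epochs = 1000
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=base_lr,
    momentum=0.99,
    weight_decay=3e-5,
    nesterov=True,
)

def poly_lr(epoch: int, base_lr: float, max_epochs: int, exponent: float = 0.9) -> float:
    """Polynomial LR decay, as commonly used in nnU-Net-style training."""
    return base_lr * (1 - epoch / max_epochs) ** exponent

for epoch in range(max_epochs):
    # Update the learning rate before each epoch.
    for group in optimizer.param_groups:
        group["lr"] = poly_lr(epoch, base_lr, max_epochs)
    # ... run one training epoch here ...
```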