About prediction.py #21
Are you dealing with the LiTS dataset with a 3D model? How do the training and validation curves look? Does the validation Dice look correct? I have never had this issue before, but this block pattern looks like something is wrong with the window-based inference. You can debug in inference/inference3d.py to make sure everything works as expected. I'll check the prediction code on LiTS later, but I've been very busy recently, so I can't guarantee a time to figure it out.
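For context on what "window-based inference" means here: inference/inference3d.py presumably does something like the following sliding-window scheme, where overlapping patch logits are accumulated and averaged. This is a minimal, hypothetical numpy-only sketch (function name and signature are illustrative, not the repo's API); the blocky seams described in this issue are the typical symptom when the window placement or the averaging step is wrong.

```python
import numpy as np

def sliding_window_inference(volume, predict_fn, patch_size, stride):
    """Run a patch-wise model over a 3D volume and average overlapping logits.

    volume:     (D, H, W) array
    predict_fn: maps a (d, h, w) patch to (C, d, h, w) logits
    """
    D, H, W = volume.shape
    pd, ph, pw = patch_size
    sd, sh, sw = stride
    # Probe one patch to learn the channel count C.
    C = predict_fn(volume[:pd, :ph, :pw]).shape[0]
    logits = np.zeros((C, D, H, W), dtype=np.float32)
    counts = np.zeros((D, H, W), dtype=np.float32)
    # Start positions that cover the whole volume, clamped at the border.
    zs = sorted({min(z, D - pd) for z in range(0, D, sd)})
    ys = sorted({min(y, H - ph) for y in range(0, H, sh)})
    xs = sorted({min(x, W - pw) for x in range(0, W, sw)})
    for z in zs:
        for y in ys:
            for x in xs:
                patch = volume[z:z + pd, y:y + ph, x:x + pw]
                logits[:, z:z + pd, y:y + ph, x:x + pw] += predict_fn(patch)
                counts[z:z + pd, y:y + ph, x:x + pw] += 1.0
    # Forgetting this division (or miscounting overlaps) produces exactly
    # the kind of block-pattern artifacts discussed in this issue.
    return logits / counts
```

Checking that the averaged output of a constant dummy model is itself constant is a quick way to verify the window bookkeeping.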
The learning curve suggests the model is learning something, but the accuracy is lower than in my experiments. As I commented on line 164 of prediction.py, you need to manually modify the intensity normalization during preprocessing to be consistent with training (the training preprocessing code can be found in training/dataset/dim3/dataset_lits.py if you are dealing with the LiTS dataset). So you need to modify lines 169-171 in prediction.py to the following:
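For reference, a minimal sketch of that LiTS-style CT normalization. The clip window and foreground mean/std below are the values quoted later in this thread; confirm them against training/dataset/dim3/dataset_lits.py for your own data, and the helper name is illustrative:

```python
import numpy as np

def normalize_ct(np_img):
    # Clip to the LiTS foreground intensity window, then standardize with
    # the dataset's foreground mean/std (values quoted in this thread;
    # verify against your training preprocessing).
    np_img = np.clip(np_img, -17, 201)
    np_img = (np_img - 99.40) / 39.39
    return np_img.astype(np.float32)
```

After this step, test-time inputs should have roughly the same distribution the model saw during training.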
This is because I followed the preprocessing pipeline proposed in nnUNet, and every dataset has its own foreground intensity mean and std, so you need to modify the preprocessing code in prediction.py according to your dataset. The current normalization is for MR images. If you don't modify lines 169-171 of prediction.py accordingly, the test image distribution will differ from the training distribution, resulting in nonsensical predictions.
I'll consider unifying the preprocessing of the different datasets in a future commit, as I find that different intensity normalization values don't have a significant impact on the final performance.
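Until such a unification lands, one way to make the per-dataset normalization explicit is a small lookup table. This is a hypothetical sketch, not the repo's code: only the "lits" entry uses the numbers quoted in this thread, and the MR fallback uses per-image z-scoring as a common convention.

```python
import numpy as np

# Hypothetical per-dataset CT statistics; only the LiTS values come from
# this thread. Fill in other datasets from their training preprocessing.
CT_STATS = {
    "lits": {"clip": (-17, 201), "mean": 99.40, "std": 39.39},
}

def preprocess(np_img, dataset):
    if dataset in CT_STATS:
        # CT: clip to the foreground window, then use fixed dataset stats.
        s = CT_STATS[dataset]
        np_img = np.clip(np_img, *s["clip"])
        return (np_img - s["mean"]) / s["std"]
    # MR fallback: per-image z-score normalization instead of fixed stats.
    return (np_img - np_img.mean()) / (np_img.std() + 1e-8)
```

Keeping the statistics in one table makes it harder for training and prediction to silently diverge, which is the root cause discussed in this issue.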
Hello author! Sorry to bother you again, and thank you for your earlier reply. I modified the code as you suggested, moving the liver preprocessing code into the prediction script so that it is consistent with training, but I really can't figure out what causes the predicted images to come out so blurry. A couple of days ago I noticed you updated the prediction.py code. I've been working on liver segmentation recently and urgently need this prediction code; if you have time, I hope you can release a version that can produce liver segmentation predictions. Thank you very much!
Unifying the preprocessing at prediction time with the preprocessing used during training should, in theory, solve this problem. I previously tested on KiTS and ACDC without any issues. I'll try LiTS again soon and will do my best to post a result this weekend.
Thank you for your reply!
Hello @yhygao,
Thank you for your work!
I ran prediction on the liver data, but I got a blurry visualization result, as shown in the following figure. Note that I had already modified the code in the preprocessing function according to your method. This is what I changed:
np_img = np.clip(np_img, -17, 201)
np_img = np_img - 99.40
np_img = np_img / 39.39
I don't know what caused this. Could you please advise? Thank you!