Replies: 4 comments 4 replies
-
Hi @matt3o, this is a great initiative! DeepEdit has the disadvantage that it needs the whole volume for training, so a sliding-window or patch-based training style is very much needed. However, this requires major changes in the transforms and the training workflow.

Here is the DeepEdit architecture: https://github.com/Project-MONAI/MONAILabel/wiki/DeepEdit#training-schema

We use two loops for training. The transforms for the click simulation after the first forward pass are the following: https://github.com/Project-MONAI/MONAILabel/blob/main/sample-apps/radiology/lib/trainers/deepedit.py#L83-L98

With regards to your comment:
Are you sure you have considered the correct image axis and orientation when processing the clicks? Otherwise, I don't understand why this is happening in your case. I'm happy to discuss this further over a video call if you think that's faster. We can then post a summary of what was discussed here :)
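For context, the DeepEdit-style guidance signal boils down to appending one extra channel per interaction type, with the click voxels marked in it. Here is a minimal numpy sketch of that idea (a hypothetical helper for illustration only, not the actual MONAILabel AddGuidanceSignal transform, which also applies Gaussian smoothing to the signal):

```python
import numpy as np

def add_guidance_channel(image: np.ndarray, clicks: list) -> np.ndarray:
    """Append a binary guidance channel to a channel-first volume:
    1.0 at each clicked voxel, 0.0 elsewhere (DeepEdit-style idea)."""
    guidance = np.zeros((1,) + image.shape[1:], dtype=image.dtype)
    for z, y, x in clicks:
        guidance[0, z, y, x] = 1.0
    # Concatenate along the channel axis: image channels + guidance.
    return np.concatenate([image, guidance], axis=0)

# A 1-channel 8x8x8 volume plus one click -> 2-channel input tensor.
vol = np.random.rand(1, 8, 8, 8).astype(np.float32)
out = add_guidance_channel(vol, [(4, 4, 4)])
```

The key point for the discussion below: the click coordinates used here must already be expressed in the coordinate system of the (possibly transformed) image the network actually sees.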
-
Hey @diazandr3s, thanks for the quick response! I might have figured it out by changing the order of the transforms :)

Let me explain the problem again before the solution: the clicks in the UI are placed on the initial, non-transformed image. When I add these clicks to the transformed image, they no longer make sense, since quite a few transforms have been run in between. So in my case, when I clicked on the left side of the patient, the network registered a click on the right side (the image got transformed, the clicks did not). You can see this well in the attached image: the click was on the right, but the small pixel on the left popped up after inference. For me this could only mean the network received the click at the wrong position, which I verified with some debug NIfTI output.

I actually already pasted the working transform solution above; yesterday evening I didn't yet know it would work. The important step was moving ScaleIntensityRanged() above the transform that adds the guidance signal.

I'll publish the code soon as well; it's part of my master's thesis. I spent most of the time speeding up the previous code to run the transforms on the GPU, so the network now runs inference on the full volume in under 5 seconds. Dice is at 87% on the validation set with 10 clicks per label as additional input on AutoPET, which I'm pretty happy about :)
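To make the ordering issue concrete, here is a small numpy sketch of why intensity scaling must run before the guidance channel is appended. The `scale_intensity` helper is a stand-in I wrote to mimic what ScaleIntensityRanged does (clip to a window, rescale to [0, 1]); the window values are made up for the example:

```python
import numpy as np

def scale_intensity(img, a_min=-100.0, a_max=200.0):
    """Clip to [a_min, a_max] and rescale to [0, 1],
    mimicking ScaleIntensityRanged (illustrative stand-in)."""
    img = np.clip(img, a_min, a_max)
    return (img - a_min) / (a_max - a_min)

image = np.full((1, 4, 4, 4), 150.0, dtype=np.float32)
guidance = np.zeros((1, 4, 4, 4), dtype=np.float32)
guidance[0, 2, 2, 2] = 1.0  # one simulated click

# Wrong order: scaling AFTER adding guidance distorts the 0/1 click signal.
wrong = scale_intensity(np.concatenate([image, guidance], axis=0))
# Right order: scale the image first, then append the untouched guidance.
right = np.concatenate([scale_intensity(image), guidance], axis=0)

assert wrong[1, 2, 2, 2] != 1.0  # click value corrupted by rescaling
assert right[1, 2, 2, 2] == 1.0  # click value preserved
```

With the wrong order, the click channel's 0/1 values get mapped into the intensity window along with the image, so the network never sees a clean guidance signal.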
-
Btw, I also saw that MONAILabel's BasicInferTask always clears the CUDA cache. I'm not sure you have to do that. I don't know if you are following MONAI as well, but I opened this bug report there: Project-MONAI/MONAI#6626
-
@diazandr3s Is there any functionality to let MONAILabel track how long a user takes to finish an annotation (i.e. the time between fetching the next sample and writing the result)? Bonus question: I was wondering why 3D Slicer blocks while waiting for the new segmentation / inference result. Are there any plans to make that non-blocking, or does Slicer not support this behaviour?
-
Hey guys! I am currently trying to convert my sliding-window interactive model based on DeepEdit to work with MONAILabel. I've gotten to the point where the input works and the label outputs match in terms of NIfTI.

However, while debugging I realized that the clicks on the input image are not set / transformed correctly. More specifically, when I click on a tumor on the right side, the network adds a prediction exactly opposite, on the left side. So the clicks obviously do not fit the transformed input image.

If I include the clicks in the transforms (setting the signal to 1 in the corresponding input channel, DeepEdit style, so the image then has 1+len(labels) channels), I get even weirder results: the click gets smeared across many pixels and in the end the network no longer recognizes it as a click. I don't understand, however, how the previous DeepEdit code deals with this problem.

I'll append my current preprocessing code. The setting below does not work: the guidance signal is added before the transforms. If I move AddGuidanceSignal to the end of the transforms, it works, but then the clicks are based on the original image whereas AddGuidanceSignal works on the transformed one. Any help here would be appreciated; I'll try to find a working solution with the current transforms by only converting the signal or something.
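Here is a minimal numpy sketch of the mirroring effect I'm describing: if a spatial transform (here just a left-right flip, the kind of thing an orientation change can do) is applied to the volume but not to the click coordinate, the coordinate ends up pointing at the mirrored voxel. This is only an illustration of the symptom, not my actual pipeline:

```python
import numpy as np

vol = np.zeros((8, 8, 8), dtype=np.float32)
click = (2, 3, 1)   # voxel the user clicked on the original image
vol[click] = 1.0

# Spatial transform: flip the last (left-right) axis of the volume.
flipped = vol[:, :, ::-1]

# The raw click coordinate now points at the wrong (mirrored) voxel...
assert flipped[click] == 0.0

# ...unless the same transform is applied to the point as well.
z, y, x = click
mapped = (z, y, vol.shape[2] - 1 - x)
assert flipped[mapped] == 1.0
```

So either the clicks have to be mapped through the same spatial transforms as the image, or the guidance channel has to be added before any spatial transform runs, so it gets flipped/resampled together with the image.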