Implement mixed precision for training and inference #20

Merged: 2 commits, merged on Jul 20, 2023
20 changes: 11 additions & 9 deletions src/membrain_seg/segmentation/segment.py
@@ -91,16 +91,18 @@ def segment(
     print("Performing 8-fold test-time augmentation.")
     for m in range(8):
         with torch.no_grad():
-            predictions += (
-                get_mirrored_img(
-                    inferer(get_mirrored_img(new_data.clone(), m).to(device), pl_model)[
-                        0
-                    ],
-                    m,
-                )
-                .detach()
-                .cpu()
-            )
+            with torch.cuda.amp.autocast():
+                predictions += (
+                    get_mirrored_img(
+                        inferer(
+                            get_mirrored_img(new_data.clone(), m).to(device), pl_model
+                        )[0],
+                        m,
+                    )
+                    .detach()
+                    .cpu()
+                )

     predictions /= 8.0

     # Extract segmentations and store them in an output file.
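For context, the change above wraps the test-time-augmentation inference loop in an autocast region, so the forward passes run in half precision where that is safe. A minimal sketch of the same pattern, assuming a CUDA GPU; the Conv3d model and random volume are placeholders, not membrain-seg's API:

```python
import torch

# Hypothetical model and input volume, only to isolate the pattern used
# above: inference under no_grad with autocast on a CUDA device.
model = torch.nn.Conv3d(1, 1, kernel_size=3, padding=1).cuda().eval()
volume = torch.randn(1, 1, 64, 64, 64, device="cuda")

with torch.no_grad():
    with torch.cuda.amp.autocast():
        # Convolutions and matmuls run in float16; numerically sensitive
        # ops are kept in float32 by autocast.
        prediction = model(volume)

# Move results back to the CPU in full precision before accumulating them,
# mirroring the .detach().cpu() calls in the diff above.
prediction = prediction.float().cpu()
```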
4 changes: 2 additions & 2 deletions src/membrain_seg/segmentation/train.py
@@ -91,8 +91,7 @@ def train(
         save_top_k=-1,  # Save all checkpoints
         every_n_epochs=100,
         dirpath="checkpoints/",
-        filename=checkpointing_name
-        + "-{epoch}-{val_loss:.2f}",  # Customize the filename of saved checkpoints
+        filename=checkpointing_name + "-{epoch}-{val_loss:.2f}",
         verbose=True,  # Print a message when a checkpoint is saved
     )

@@ -106,6 +105,7 @@ def on_epoch_start(self, trainer, pl_module):
     print_lr_cb = PrintLearningRate()
     # Set up the trainer
     trainer = pl.Trainer(
+        precision="16-mixed",
         logger=[csv_logger, wandb_logger],
         callbacks=[
            checkpoint_callback_val_loss,
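On the training side, precision="16-mixed" is the only flag needed: the Lightning Trainer then manages the autocast context and gradient scaling itself. A minimal sketch under the assumption of Lightning 2.x and a CUDA GPU; TinyModule and some_dataloader are made-up placeholders, not the project's train() setup:

```python
import torch
import pytorch_lightning as pl


class TinyModule(pl.LightningModule):
    """Hypothetical stand-in for the project's pl_model."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        # Under "16-mixed", Lightning wraps this forward/loss computation
        # in torch.autocast and applies gradient scaling automatically.
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


trainer = pl.Trainer(
    precision="16-mixed",  # the only change needed for mixed-precision training
    accelerator="gpu",
    devices=1,
    max_epochs=1,
)
# trainer.fit(TinyModule(), train_dataloaders=some_dataloader)
```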
11 changes: 9 additions & 2 deletions src/membrain_seg/segmentation/training/optim_utils.py
Collaborator Author commented:
Needed to switch to binary_cross_entropy_with_logits because the regular binary_cross_entropy is not compatible with mixed precision.
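For reference, current PyTorch versions refuse to run binary_cross_entropy inside an autocast region (they raise a RuntimeError pointing to binary_cross_entropy_with_logits), because a float16 sigmoid output can saturate and make the subsequent log unstable. A small standalone sketch, assuming a CUDA GPU; the tensors are made up and this is not membrain-seg code:

```python
import torch
import torch.nn.functional as F

# Made-up logits/targets on the GPU, only to show the autocast behaviour.
logits = torch.randn(4, 1, device="cuda")
targets = torch.rand(4, 1, device="cuda")

with torch.cuda.amp.autocast():
    # Autocast-safe: sigmoid and BCE are fused and computed in float32
    # internally, so this is the call used in the new loss.
    loss_ok = F.binary_cross_entropy_with_logits(logits, targets)

    try:
        # Disallowed under autocast: PyTorch raises a RuntimeError that
        # recommends binary_cross_entropy_with_logits instead.
        loss_bad = F.binary_cross_entropy(torch.sigmoid(logits), targets)
    except RuntimeError as err:
        print("binary_cross_entropy under autocast:", err)
```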

@@ -2,7 +2,10 @@
 from monai.losses import DiceLoss, MaskedLoss
 from monai.networks.nets import DynUNet
 from monai.utils import LossReduction
-from torch.nn.functional import binary_cross_entropy, sigmoid
+from torch.nn.functional import (
+    binary_cross_entropy_with_logits,
+    sigmoid,
+)
 from torch.nn.modules.loss import _Loss

@@ -79,14 +82,18 @@ def forward(self, data: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
             The calculated loss.
         """
         # Create a mask to ignore the specified label in the target
+        orig_data = data.clone()
         data = sigmoid(data)
         mask = target != self.ignore_label

         # Compute the cross entropy loss while ignoring the ignore_label
         target_comp = target.clone()
         target_comp[target == self.ignore_label] = 0
         target_tensor = torch.tensor(target_comp, dtype=data.dtype, device=data.device)
-        bce_loss = binary_cross_entropy(data, target_tensor, reduction="none")
+
+        bce_loss = binary_cross_entropy_with_logits(
Collaborator commented on the line above:
If I understand correctly, this function applies a sigmoid to the prediction before computing the loss. Have you checked how this impacts training performance?

https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html#torch.nn.BCEWithLogitsLoss

Collaborator Author replied:

I haven't checked in detail how it influences training performance, except that I re-ran training to see whether the loss curves behave the same: in this sense, there is no difference.

Including the sigmoid in the loss function is supposed to be more numerically stable, as the optimization uses the log-sum-exp trick to compute the loss: https://gregorygundersen.com/blog/2020/02/09/log-sum-exp/

I guess this helps especially in the case of exploding or vanishing gradients, which I didn't observe during training.

Other than that, the loss function is the same as the one I used previously. Note that binary_cross_entropy_with_logits takes the orig_data variable as input instead of data, to which the sigmoid has already been applied above. So both the previous and the new version effectively apply a sigmoid followed by cross entropy; the new version just computes both in a single fused pass, which gives more stable gradients.

+            orig_data, target_tensor, reduction="none"
+        )
         bce_loss[~mask] = 0.0
         bce_loss = torch.sum(bce_loss) / torch.sum(mask)
         dice_loss = self.dice_loss(data, target, mask)
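To make the numerical-stability argument from the thread above concrete: applying sigmoid first and then taking a log can underflow to 0, while the fused loss evaluates a log-sum-exp style expression that stays finite. A small standalone check, not part of the PR; the logit value is chosen purely for illustration, and the internal clamping of binary_cross_entropy is a PyTorch implementation detail:

```python
import torch
import torch.nn.functional as F

# A strongly negative logit with a positive target. In float32,
# sigmoid(-200) underflows to exactly 0; in float16 this already
# happens around logits of about -17, which is why it matters for
# mixed precision.
x = torch.tensor([-200.0])
y = torch.tensor([1.0])

# Two-step version (old code path): sigmoid first, then BCE on the
# underflowed probability. The log of 0 is clamped internally by
# PyTorch, so the reported loss is far from the true value.
p = torch.sigmoid(x)
two_step = F.binary_cross_entropy(p, y)

# Fused version (new code path): evaluated in a log-sum-exp style,
# roughly max(x, 0) - x * y + log(1 + exp(-|x|)), so the log of an
# underflowed probability is never taken.
fused = F.binary_cross_entropy_with_logits(x, y)

print(p.item())         # 0.0
print(two_step.item())  # ~100.0 (clamped, inaccurate)
print(fused.item())     # 200.0 (the true loss)
```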