
Critical Bug Fix fixed image fusion on mac#411

Merged
didymo merged 5 commits into dev from fix-transfer-roi
Oct 26, 2025

Conversation

@Nowhere4Nothing
Collaborator

@Nowhere4Nothing Nowhere4Nothing commented Oct 23, 2025

Fixed image fusion not loading on macOS by switching to Python threading
Fixed auto fusion not loading

Summary by Sourcery

Add threaded loading and progress queue to Image Fusion UI to fix macOS loading issues and improve error handling

New Features:

  • Introduce a progress queue polled by ImageFusionProgressWindow via QTimer
  • Enable manual fusion loading in a background Python thread with interrupt flag and progress queue
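The queue-plus-thread pattern described above can be sketched in plain Python. This is an illustration only; the worker function and the `(message, percent)` tuple shape are assumptions, not the PR's exact API:

```python
import threading
import queue

progress_q = queue.Queue()

def worker(q):
    # Background load: post progress tuples to the queue instead of
    # touching UI objects from the worker thread
    for pct in (10, 50, 100):
        q.put(("Loading...", pct))
    q.put(None)  # sentinel: loading finished

t = threading.Thread(target=worker, args=(progress_q,), daemon=True)
t.start()

# In the real UI a QTimer slot drains the queue on the main thread;
# here the main thread drains it directly for illustration.
updates = []
while (item := progress_q.get()) is not None:
    updates.append(item)
t.join()
```

The key property is that only the main thread ever consumes the queue, so UI updates stay on the GUI thread.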

Bug Fixes:

  • Fix manual fusion loading hang on macOS by moving loader into a separate thread with proper progress reporting
  • Fix auto fusion branch not loading by handling boolean result types and showing error dialogs
  • Replace commonprefix with commonpath in MovingImageLoader to correctly compute shared directory
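The `commonprefix` to `commonpath` change matters because `os.path.commonprefix` compares strings character by character and can return a path that is not a real directory, while `os.path.commonpath` compares whole path components. A minimal demonstration (the file names below are made up):

```python
import os.path

files = [
    "/data/patient1/ct/slice_01.dcm",
    "/data/patient1/ct/slice_02.dcm",
]

# Character-wise: stops mid-filename, so the result is not a usable directory
prefix = os.path.commonprefix(files)   # '/data/patient1/ct/slice_0'

# Component-wise: the actual shared directory
shared = os.path.commonpath(files)     # '/data/patient1/ct'
```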

Enhancements:

  • Unify progress_callback usage in ManualFusionLoader and MovingImageLoader as direct callback calls
  • Add robust exception handling in MovingImageLoader.load with helper method to emit progress and errors
  • Connect fusion mode radio buttons to revalidate and toggle the Confirm button upon mode changes

mattbrgn and others added 3 commits October 23, 2025 13:22
- Qt threading was causing bus errors in macOS environments during Image Fusion loading
Fixed auto fusion to work with the new manual fusion Python thread

Fixed a bug where the user could bypass the warning that stopped auto fusion without an RTSTRUCT file selected.
Removed traceback, as it's no longer needed
@sourcery-ai
Contributor

sourcery-ai bot commented Oct 23, 2025

Reviewer's Guide

This PR fixes image fusion loading on Mac by refactoring manual and auto fusion workflows to use Python threading with queue-based progress updates, unifying callback handling and robust error management.

Sequence diagram for threaded manual image fusion loading with progress queue

sequenceDiagram
    participant User
    participant ImageFusionWindow
    participant ManualFusionLoader
    participant ProgressWindow
    participant Thread
    participant Queue

    User->>ImageFusionWindow: Clicks confirm (manual fusion)
    ImageFusionWindow->>ProgressWindow: show()
    ImageFusionWindow->>Queue: set_progress_queue(queue)
    ImageFusionWindow->>Thread: Start thread(run_loader)
    Thread->>ManualFusionLoader: loader.load(interrupt_flag, queue.put)
    ManualFusionLoader->>Queue: queue.put(progress update)
    ProgressWindow->>Queue: _poll_progress_queue()
    Queue->>ProgressWindow: update_progress(msg)
    ManualFusionLoader-->>ImageFusionWindow: signal_loaded or signal_error
    ImageFusionWindow->>ProgressWindow: close() (on error or completion)

Class diagram for updated ManualFusionLoader and MovingImageLoader

classDiagram
    class ManualFusionLoader {
        +load(interrupt_flag, progress_callback)
        +_load_with_vtk(progress_callback)
        -_interrupt_flag
        +signal_loaded
        +signal_error
    }
    class MovingImageLoader {
        +load(interrupt_flag, progress_callback, manual=False)
        +load_manual_mode(interrupt_flag, progress_callback)
        +_extracted_from_load_17(...)
        +get_common_path_and_datasets()
        +create_model_and_rtss(path)
        +handle_dvh(...)
        +handle_rtss(...)
        +init_container(...)
        +parent_window
        +signal_request_calc_dvh
        +advised_calc_dvh
        +calc_dvh
    }
    ManualFusionLoader --|> MovingImageLoader : uses
    MovingImageLoader --> parent_window : signal_advise_calc_dvh

Class diagram for ImageFusionProgressWindow with progress queue polling

classDiagram
    class ImageFusionProgressWindow {
        +set_progress_queue(progress_queue)
        +_poll_progress_queue()
        +update_progress(msg)
        -_progress_queue
        -_progress_timer
    }
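The body of `_poll_progress_queue` can be sketched as a non-blocking drain. In the PR this runs as a QTimer slot on the main thread; here it is shown as a free function with a hypothetical `update_progress` callback:

```python
import queue

def poll_progress_queue(progress_q, update_progress):
    # Drain every pending update without blocking the UI thread;
    # queue.Empty just means "nothing new yet", not an error.
    while True:
        try:
            msg = progress_q.get_nowait()
        except queue.Empty:
            return
        update_progress(msg)
```

Because `get_nowait` never blocks, the timer slot returns immediately when the queue is empty and the event loop stays responsive.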

File-Level Changes

Change Details Files
Refactor UI to use Python threading and queue-based progress updates
  • Replaced progress_window.start with custom Thread and queue in confirm_button_clicked
  • Added set_progress_queue and _poll_progress_queue in progress window
  • Wrapped loader.load in run_loader with try/except to emit errors
  • Added fusion mode toggle handler _on_fusion_mode_changed for selection revalidation
src/View/ImageFusion/ImageFusionWindow.py
src/View/ImageFusion/ImageFusionProgressWindow.py
Unify and harden progress_callback usage in MovingImageLoader
  • Switched path calculation from commonprefix to commonpath
  • Moved model creation after successful RTSS loading
  • Added exception handling around init_container and handle_rtss extracting to helper
  • Conditional invocation of progress_callback.emit vs direct call based on context
src/View/ImageFusion/MovingImageLoader.py
Ensure progress callbacks execute on the main thread in manual loader
  • Wrapped callback into main_thread_progress_callback for thread safety
  • Replaced progress_callback.emit calls with direct callback invocations
  • Adjusted signal_error emits to include formatted exception messages
src/View/ImageFusion/ManualFusionLoader.py
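The "wrapped loader.load in run_loader with try/except" change can be sketched as follows. The `loader` object, the `("error", message)` queue shape, and the function name `start_fusion_load` are illustrative assumptions, not the PR's exact signatures:

```python
import threading
import queue

def start_fusion_load(loader, progress_q):
    """Run loader.load on a background thread; report failures via the queue."""
    interrupt_flag = threading.Event()

    def run_loader():
        try:
            loader.load(interrupt_flag, progress_q.put)
        except Exception as exc:
            # Surface the error to the UI instead of letting the thread die silently
            progress_q.put(("error", str(exc)))

    thread = threading.Thread(target=run_loader, daemon=True)
    thread.start()
    return thread, interrupt_flag
```

The interrupt flag lets the UI request cancellation, and the try/except guarantees the progress consumer always sees either completion or an error message.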

Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey there - I've reviewed your changes - here's some feedback:

  • Consider stopping the QTimer in ImageFusionProgressWindow (e.g. in closeEvent) to avoid polling an invalid queue after the window is closed.
  • The repeated error‐handling pattern (e.g. _extracted_from_load_17 and duplicate try/except blocks) could be refactored into a shared helper or wrapper to reduce duplication.
  • When switching fusion modes via _on_fusion_mode_changed during an active load, verify that the confirm button and UI state remain stable and don’t inadvertently cancel or block ongoing operations.
## Individual Comments

### Comment 1
<location> `src/View/ImageFusion/ManualFusionLoader.py:66-67` </location>
<code_context>
                """
-         # Store interrupt flag
+
+        # Wrap the progress_callback so it always runs in the main thread
+        def main_thread_progress_callback(*args, **kwargs):
+            if progress_callback is not None:
+                progress_callback(*args, **kwargs)
</code_context>

<issue_to_address>
**issue (bug_risk):** The main_thread_progress_callback does not guarantee main thread execution.

If progress_callback interacts with UI elements, this may cause thread-safety issues. Use QMetaObject.invokeMethod or a similar approach to ensure execution in the main Qt thread.
</issue_to_address>
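The reviewer's point can be demonstrated in plain Python: wrapping a callback in another function does not change which thread executes it. In Qt you would instead emit a signal (or use `QMetaObject.invokeMethod`) so the connected slot runs on the GUI thread:

```python
import threading

main_thread = threading.current_thread()
executed_on = {}

def progress_callback(msg):
    # Record which thread actually runs the callback
    executed_on["thread"] = threading.current_thread()

def main_thread_progress_callback(*args, **kwargs):
    # Despite the name, this runs on whatever thread calls it
    progress_callback(*args, **kwargs)

t = threading.Thread(target=main_thread_progress_callback, args=("50%",))
t.start()
t.join()
# The callback ran on the worker thread, not the main thread
```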

### Comment 2
<location> `src/View/ImageFusion/MovingImageLoader.py:207-208` </location>
<code_context>
+                return False
         return True

+    # TODO Rename this here and in `load`
+    def _extracted_from_load_17(self, arg0, e, progress_callback, arg3):
+        logging.exception(arg0, e)
+        if hasattr(progress_callback, "emit"):
</code_context>

<issue_to_address>
**nitpick:** The helper method _extracted_from_load_17 has a non-descriptive name.

Consider renaming _extracted_from_load_17 to a more meaningful name, such as handle_container_init_error, to improve clarity.
</issue_to_address>

### Comment 3
<location> `src/View/ImageFusion/ManualFusionLoader.py:85` </location>
<code_context>
    def _load_with_vtk(self, progress_callback):
        """
                Loads the fixed and moving images using VTK for manual fusion and optionally extracts a saved transform.

                This function loads the fixed and moving image series from the selected directories using VTKEngine,
                validates that all selected files are from the same directory, and checks for a selected transform DICOM.
                If a transform DICOM is found, it extracts the transformation data for use in restoring a previous session.
                Emits the loaded VTKEngine and any extracted transform data via the signal_loaded signal.

                Args:
                    progress_callback: A callback function to report progress updates.

                Returns:
                    None. Emits the result via the signal_loaded signal.
                """
        # Check for interrupt before starting
        if self._interrupt_flag is not None and self._interrupt_flag.is_set():
            self.signal_error.emit((False, "Loading cancelled"))
            return

        # Progress: loading fixed image
        if progress_callback is not None:
            progress_callback(("Loading fixed image (VTK)...", 10))

        # Check for interrupt before loading fixed
        if self._interrupt_flag is not None and self._interrupt_flag.is_set():
            self.signal_error.emit((False, "Loading cancelled"))
            return

        # Gather fixed filepaths (directory)
        patient_dict_container = PatientDictContainer()
        fixed_dir = patient_dict_container.path
        moving_dir = None

        # An array for the image files to filter out the transform file
        selected_image_files = []
        transform_file = None

        if self.selected_files:
            # Validate all selected files are from the same directory
            dirs = {os.path.dirname(f) for f in self.selected_files}
            if len(dirs) > 1:
                # Emit error message if invalid
                error_msg = (
                    f"Selected files span multiple directories: {dirs}. "
                    "Manual fusion requires all files to be from the same directory."
                )
                if progress_callback is not None:
                    progress_callback(("Error loading images", error_msg))
                    logging.error("manualFusionLoader.py_load_with_vtk: ", error_msg)
                self.signal_error.emit((False, error_msg))
                return
            moving_dir = dirs.pop()

            # Separate image series and transform DICOMs
            #NOTE: This needs to be here as moving load expects only image series so we need to filter out the transform (if selected)
            # then only after that apply the transform
            for f in self.selected_files:
                try:
                    ds = dcmread(f, stop_before_pixels=True)
                    modality = getattr(ds, "Modality", "").upper()
                    sop_class = getattr(ds, "SOPClassUID", "")
                    if modality == "REG" or sop_class == "1.2.840.10008.5.1.4.1.1.66.1":
                        transform_file = f
                    else:
                        selected_image_files.append(f)
                except (InvalidDicomError, AttributeError, OSError) as e:
                    logging.warning("<manualFusionLoader.py_load_with_vtk>Error reading DICOM file", e)
                    continue


        # Populate moving model container before processing with VTK so origin can be read the same way as ROI Transfer logic
        moving_image_loader = MovingImageLoader(selected_image_files, None, self)
        moving_model_populated = moving_image_loader.load_manual_mode(self._interrupt_flag,
                                                                          progress_callback)

        if not moving_model_populated:
            # Check if interrupted, emit cancel signal immediately
            if self._interrupt_flag is not None and self._interrupt_flag.is_set():
                self.signal_error.emit((False, "Loading cancelled"))
                return
            # Emit error if loading failed
            logging.error("<manualFusionLoader.py_load_with_vtk> Failed to populate Moving Model Container")
            raise RuntimeError("Failed to populate Moving Model Container")

        # Use VTKEngine to load images
        engine = VTKEngine()

        fixed_loaded = engine.load_fixed(fixed_dir)
        if not fixed_loaded:
            logging.error("<manualFusionLoader.py>Could not load fixed image")
            raise RuntimeError("Failed to load fixed image with VTK.")

        if progress_callback is not None:
            progress_callback(("Loading overlay image (VTK)...", 50))

        # Check for interrupt before loading moving
        if self._interrupt_flag is not None and self._interrupt_flag.is_set():
            self.signal_error.emit((False, "Loading cancelled"))
            return

        moving_loaded = engine.load_moving(moving_dir)
        if not moving_loaded:
            logging.error("<manualFusionLoader.py_load_with_vtk>Failed to load moving image with VTK.")
            raise RuntimeError("Failed to load moving image with VTK.")

        # If a transform DICOM was found and is ticked, extract transform data only
        transform_data = None
        if transform_file is not None:
            try:
                if progress_callback is not None:
                    progress_callback(("Extracting saved transform...", 80))
                ds = pydicom.dcmread(transform_file)

                # See explanation at top for more details on private tags
                if hasattr(ds, "RegistrationSequence") or (0x7777, 0x0020) in ds or (0x7777, 0x0021) in ds:
                    transform_data = self._extracted_from__load_with_vtk_62(ds, np, transform_file)
                else:
                    raise ValueError("Selected transform.dcm is not a valid Spatial Registration Object.")
            except Exception as e:
                logging.error(f"Error extracting transform from {transform_file}: {e}")
                if progress_callback is not None:
                    progress_callback(("Error extracting transform", 80))
                self.signal_error.emit((False, f"Error extracting transform: {e}"))
                return

        # Last message before overlay loaded
        if progress_callback is not None:
            progress_callback(("Preparing overlays...", 90))
            QtCore.QCoreApplication.processEvents()
            time.sleep(0.05)

        # Final interrupt check before emitting loaded signal
        if self._interrupt_flag is not None and self._interrupt_flag.is_set():
            self.signal_error.emit((False, "Loading cancelled"))
            return

        # Only emit the VTKEngine for downstream use; overlays will be generated on-the-fly
        self.signal_loaded.emit((True, {
            "vtk_engine": engine,
            "transform_data": transform_data,
        }))

        # Emit 100% progress just before closing/loading is complete
        if progress_callback is not None:
            progress_callback(("Complete", 100))
            QtCore.QCoreApplication.processEvents()

</code_context>

<issue_to_address>
**issue (code-quality):** Low code quality found in ManualFusionLoader.\_load\_with\_vtk - 13% ([`low-code-quality`](https://docs.sourcery.ai/Reference/Default-Rules/comments/low-code-quality/))

<br/><details><summary>Explanation</summary>The quality score for this function is below the quality threshold of 25%.
This score is a combination of the method length, cognitive complexity and working memory.

How can you solve this?

It might be worth refactoring this function to make it shorter and more readable.

- Reduce the function length by extracting pieces of functionality out into
  their own functions. This is the most important thing you can do - ideally a
  function should be less than 10 lines.
- Reduce nesting, perhaps by introducing guard clauses to return early.
- Ensure that variables are tightly scoped, so that code using related concepts
  sits together within the function rather than being scattered.</details>
</issue_to_address>

### Comment 4
<location> `src/View/ImageFusion/MovingImageLoader.py:118` </location>
<code_context>
    def load(self, interrupt_flag, progress_callback, manual=False):
        # initial callback
        if manual:
            progress_callback(("Generating Moving Model", 20))
        elif hasattr(progress_callback, "emit"):
            progress_callback.emit(("Creating datasets...", 0))

        # load datasets and common path
        try:
            path, read_data_dict, file_names_dict = self.get_common_path_and_datasets()
        except ImageLoading.NotAllowedClassError as e:
            raise ImageLoading.NotAllowedClassError from e

        try:
            moving_dict_container = self.init_container(path, read_data_dict, file_names_dict)
        except Exception as e:
            return self._extracted_from_load_17(
                "TRACE: Exception in init_container:",
                e,
                progress_callback,
                "Error initializing container",
            )
        if interrupt_flag.is_set():
            return False

        # check for RTSS and RTDOSE, ask to calculate DVH if both present
        if 'rtss' in file_names_dict and 'rtdose' in file_names_dict:
            self.parent_window.signal_advise_calc_dvh.connect(self.update_calc_dvh)
            self.signal_request_calc_dvh.emit()
            while not self.advised_calc_dvh:
                pass

        if 'rtss' in file_names_dict:
            if manual:
                progress_callback(("Getting ROI + Contour data...", 25))
            elif hasattr(progress_callback, "emit"):
                progress_callback.emit(("Getting ROI + Contour data...", 10))

            try:
                dataset_rtss, rois, dict_thickness = self.handle_rtss(
                    file_names_dict, read_data_dict, moving_dict_container
                )
            except Exception as e:
                return self._extracted_from_load_17(
                    "TRACE: Exception in handle_rtss:",
                    e,
                    progress_callback,
                    "Error loading RTSS",
                )
            if interrupt_flag.is_set():
                logging.error("TRACE: MovingImageLoader.load - interrupted after handle_rtss")
                return False

            if 'rtdose' in file_names_dict and self.calc_dvh:
                if manual:
                    progress_callback(("Calculating DVHs...", 40))
                elif hasattr(progress_callback, "emit"):
                    progress_callback.emit(("Calculating DVHs...", 60))
                ok = self.handle_dvh(dataset_rtss, rois, dict_thickness,
                                     file_names_dict, moving_dict_container,
                                     interrupt_flag)
                if not ok or interrupt_flag.is_set():
                    return False
            create_moving_model()
        else:
            # no RTSS present, create temporary RTSS
            if manual:
                progress_callback(("Generating temporary rtss...", 40))
            elif hasattr(progress_callback, "emit"):
                progress_callback.emit(("Generating temporary rtss...", 20))

            ok = self.create_model_and_rtss(path)
            if not ok or interrupt_flag.is_set():
                return False

            # Show moving model loading
            if manual:
                progress_callback(("Loading Moving Model", 45))
            elif hasattr(progress_callback, "emit"):
                progress_callback.emit(("Loading Moving Model", 85))

            if interrupt_flag.is_set() and manual:
                progress_callback(("Stopping", 85))
                return False
            elif hasattr(progress_callback, "emit"):
                progress_callback.emit(("Stopping", 85))
                return False
        return True

</code_context>

<issue_to_address>
**issue (code-quality):** Low code quality found in MovingImageLoader.load - 16% ([`low-code-quality`](https://docs.sourcery.ai/Reference/Default-Rules/comments/low-code-quality/))

<br/><details><summary>Explanation</summary>The quality score for this function is below the quality threshold of 25%.
This score is a combination of the method length, cognitive complexity and working memory.

How can you solve this?

It might be worth refactoring this function to make it shorter and more readable.

- Reduce the function length by extracting pieces of functionality out into
  their own functions. This is the most important thing you can do - ideally a
  function should be less than 10 lines.
- Reduce nesting, perhaps by introducing guard clauses to return early.
- Ensure that variables are tightly scoped, so that code using related concepts
  sits together within the function rather than being scattered.</details>
</issue_to_address>


Collaborator

@Kahreiru Kahreiru left a comment


Looks OK provided the mentioned items are addressed, but not a big deal.

progress_callback.emit(("Preparing overlays...", 90))
progress_callback(("Preparing overlays...", 90))
QtCore.QCoreApplication.processEvents()
time.sleep(0.05)
Collaborator


While it is only 0.05 seconds, is this thread-blocking? Could a non-blocking option be used here?

Collaborator Author


It doesn't need to be there at all; loading just goes too quickly and would jump from 50% straight to done. I thought it would be better for the user to see it reach 90% briefly before completing. It can be removed if that's best.

- Added checks to see whether the progress callback is a Qt signal and change the invocation method accordingly.
Collaborator

@sjswerdloff sjswerdloff left a comment


I created an issue for the renaming of the helper method.
While the bot might consider this to be a nit, this type of thing prevents abstraction.
In other words, I can't read the code calling it and have any idea of what's going to happen, so I have to go find the function and read it. It doesn't really help anyone who needs to understand the code; you might as well have copy/pasted it wherever it was being used.

@didymo didymo merged commit db4fe0c into dev Oct 26, 2025
7 checks passed
@didymo didymo deleted the fix-transfer-roi branch February 13, 2026 02:23