
Conversation

@mimiliaogo (Contributor) commented on Jan 2, 2026

This PR introduces a non-invasive BCI visualization application, developed in collaboration with Kernel, that reconstructs and renders HbO (oxygenated hemoglobin) activity on a 3D brain in real time.

What’s included

  1. Kernel reconstruction pipeline: end-to-end processing from fNIRS moments input to reconstructed HbO volumes.
  2. Voxel-to-volume conversion: converts the reconstructed HbO output into the format required by the volume renderer.
  3. Volume renderer enhancements: enables streaming and multithreaded execution, adds CUDA stream handling, and includes several bug fixes identified during integration.

TODO

  1. Add a more detailed README (Kernel Flow2, fNIRS, and pipeline overview).
  2. Refactor and consolidate data inputs, and upload datasets to cloud storage.

Summary by CodeRabbit

  • New Features

    • Adds a complete BCI visualization app with runnable entry, Docker image, and real-time pipeline from NIRS/SNIRF input to 3D visualization.
  • Improvements

    • GPU-accelerated reconstruction, voxelization, and regularized solver for lower latency.
    • Renderer: asynchronous GPU uploads, camera-pose caching, and backpressure-safe passthrough to avoid deadlocks.
  • Documentation

    • New README, metadata, config, and pinned Python requirements.


Copilot AI review requested due to automatic review settings, January 2, 2026 04:09.

Copilot AI (Contributor) left a comment


Pull request overview

This PR introduces a new BCI (Brain-Computer Interface) visualization application that processes and renders brain hemodynamic activity in real-time using Kernel Flow2 fNIRS data. The implementation includes a complete reconstruction pipeline from raw fNIRS measurements to 3D volume rendering with ClaraViz.

Key Changes:

  • End-to-end Kernel reconstruction pipeline: processes fNIRS moment data through regularized inverse solver to produce HbO/HbR voxel volumes
  • Volume renderer enhancements: adds CUDA stream propagation, streaming support, memory leak fixes, and camera pose caching
  • New VoxelStreamToVolume operator: converts sparse voxel data to dense volumes with affine transformations and GPU-accelerated resampling
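
To picture the sparse-to-dense step, here is a minimal CuPy sketch of the scatter it implies; the `ijk` index array and `resolution` tuple mirror the asset names mentioned later in this thread, and the function itself is an illustration, not the operator's actual API:

```python
import cupy as cp

def voxels_to_dense(hb_values: cp.ndarray, ijk: cp.ndarray,
                    resolution: tuple[int, int, int]) -> cp.ndarray:
    """Scatter sparse per-voxel HbO values into a dense volume.

    hb_values: (n_voxels,) reconstructed values
    ijk:       (n_voxels, 3) integer grid coordinates of each voxel
    """
    volume = cp.zeros(resolution, dtype=cp.float32)
    volume[ijk[:, 0], ijk[:, 1], ijk[:, 2]] = hb_values
    return volume
```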

Reviewed changes

Copilot reviewed 35 out of 37 changed files in this pull request and generated 20 comments.

| File | Description |
|---|---|
| `operators/volume_renderer/volume_renderer.cpp` | Enhanced with CUDA stream handling, camera pose caching, streaming support, and fixes for deadlock prevention |
| `operators/volume_renderer/dataset.cpp` | Added CUDA stream parameter and async memory copy with explicit synchronization workaround |
| `operators/volume_renderer/dataset.hpp` | Updated SetVolume signature to accept CUDA stream |
| `operators/volume_renderer/python/volume_renderer.cpp` | Added EmitterReceiverRegistry for custom type registration |
| `applications/bci_visualization/bci_visualization.py` | Main application orchestrating reconstruction pipeline and volume rendering with interactive visualization |
| `applications/bci_visualization/reconstruction.py` | Holoscan application wrapper for reconstruction pipeline operators |
| `applications/bci_visualization/operators/voxel_stream_to_volume/` | New operator converting voxel data to volume tensors with affine transformations and GPU resampling |
| `applications/bci_visualization/operators/reconstruction/` | Reconstruction pipeline operators: BuildRHS, Normalize, Solver, ConvertToVoxels |
| `applications/bci_visualization/operators/color_buffer_passthrough/` | Pass-through operator with queue policy to prevent backpressure |
| `applications/bci_visualization/processing/reconstruction/` | Core reconstruction logic: regularized solver, HbO conversion, GPU utilities, data loading |
| `applications/bci_visualization/streams/` | SNIRF file reader and base NIRS stream interface |
| `applications/bci_visualization/metadata.json` | Application metadata with Healthcare AI category |
| `applications/CMakeLists.txt` | Added bci_visualization to build system |

greptile-apps bot (Contributor) commented on Jan 2, 2026

Greptile Summary

  • Adds a comprehensive BCI Visualization application with a real-time fNIRS brain-imaging processing pipeline, including GPU-accelerated reconstruction, voxelization, and 3D volume rendering
  • Enhances the existing volume renderer operator with CUDA stream support, asynchronous memory transfers, camera pose caching, and memory leak fixes for streaming applications
  • Implements 7 new reconstruction operators and utilities with Python bindings, SNIRF data streaming support, and proper multi-threading configuration to prevent visualization deadlocks

Important Files Changed

| Filename | Overview |
|---|---|
| `applications/bci_visualization/operators/stream.py` | New StreamOperator raises StopIteration instead of handling end-of-stream gracefully, which will terminate the application |
| `applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py` | New operator with cumulative HbO accumulation that has no reset mechanism, and hardcoded GPU usage despite a use_gpu parameter |
| `applications/bci_visualization/operators/reconstruction/normalize_operator.py` | New normalization operator that returns early without emitting output under certain conditions, potentially stalling the pipeline |
| `applications/bci_visualization/streams/snirf.py` | SNIRF data parser lacks error handling for data-structure validation and array bounds checking |
| `operators/volume_renderer/volume_renderer.cpp` | Enhanced volume renderer with CUDA stream support and memory-leak fixes critical for real-time streaming performance |

Confidence score: 3/5

  • This PR requires careful review due to several pipeline reliability issues including improper error handling, potential deadlock conditions, and hardcoded GPU assumptions
  • Score lowered due to StopIteration handling in StreamOperator, missing validation in SNIRF parser, cumulative data handling without reset mechanisms, and early returns in normalization operator
  • Pay close attention to the streaming components and error handling in the reconstruction pipeline operators

Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant StreamOp as "StreamOperator"
    participant BuildRHS as "BuildRHSOperator"
    participant Normalize as "NormalizeOperator"
    participant Solver as "RegularizedSolverOperator"
    participant ConvertVoxels as "ConvertToVoxelsOperator"
    participant VoxelToVolume as "VoxelStreamToVolumeOp"
    participant VolumeRenderer as "VolumeRendererOp"
    participant ColorBuffer as "ColorBufferPassthroughOp"
    participant Holoviz as "HolovizOp"

    User->>StreamOp: "Start BCI Visualization"
    StreamOp->>BuildRHS: "samples"
    BuildRHS->>Normalize: "batch"
    Normalize->>Solver: "normalized"
    Solver->>ConvertVoxels: "result"
    ConvertVoxels->>VoxelToVolume: "affine_4x4, hb_voxel_data"
    VoxelToVolume->>VolumeRenderer: "volume, spacing, permute_axis, flip_axes"
    VoxelToVolume->>VolumeRenderer: "mask_volume, mask_spacing, mask_permute_axis, mask_flip_axes"
    VolumeRenderer->>ColorBuffer: "color_buffer_out"
    ColorBuffer->>Holoviz: "color_buffer_out"
    Holoviz->>VolumeRenderer: "camera_pose_output"
    Holoviz->>User: "Display 3D brain visualization"
```
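
The render↔camera feedback loop in the diagram can be sketched as a self-contained Holoscan app. The two operators below are invented stand-ins, and only the `color_buffer_out` and `camera_pose_output` port names come from the diagram (the renderer's `camera_pose` input name is an assumption):

```python
from holoscan.conditions import CountCondition
from holoscan.core import Application, ConditionType, Operator, OperatorSpec


class FakeRenderer(Operator):
    """Stand-in for VolumeRendererOp: consumes an optional camera pose, emits a frame."""

    def setup(self, spec: OperatorSpec):
        # The feedback edge must not gate execution, or the cycle would
        # deadlock before the first frame is ever produced.
        spec.input("camera_pose").condition(ConditionType.NONE)
        spec.output("color_buffer_out")

    def compute(self, op_input, op_output, context):
        pose = op_input.receive("camera_pose")  # None on the first tick
        op_output.emit({"pose": pose}, "color_buffer_out")


class FakeViewer(Operator):
    """Stand-in for HolovizOp: consumes the frame, emits a camera pose."""

    def setup(self, spec: OperatorSpec):
        spec.input("color_buffer_in")
        spec.output("camera_pose_output")

    def compute(self, op_input, op_output, context):
        op_input.receive("color_buffer_in")
        op_output.emit("identity-pose", "camera_pose_output")


class LoopApp(Application):
    def compose(self):
        renderer = FakeRenderer(self, CountCondition(self, 10), name="renderer")
        viewer = FakeViewer(self, name="viewer")
        self.add_flow(renderer, viewer, {("color_buffer_out", "color_buffer_in")})
        self.add_flow(viewer, renderer, {("camera_pose_output", "camera_pose")})


if __name__ == "__main__":
    LoopApp().run()
```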

greptile-apps bot left a comment


Additional Comments (4)

  1. operators/volume_renderer/volume_renderer.cpp, line 286

    logic: cached_camera_pose_ is not explicitly initialized. clara::viz::Matrix4x4 might not default to identity, which could cause undefined behavior on first use when no camera pose is received.

  2. operators/volume_renderer/dataset.cpp, lines 149-150

    style: Explicit cudaStreamSynchronize blocks the CPU thread until all GPU operations complete, which defeats the purpose of async operations and can create performance bottlenecks in the pipeline. The FIXME comment acknowledges this needs better handling.

    Consider using CUDA events for proper synchronization instead, or investigate why the stream isn't being properly synchronized elsewhere in the pipeline.

  3. applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py, lines 66-67

    logic: Raises exception during start() if mask_nifti_path is not provided, but this parameter is listed as "Optional" in the argument parser help text (line 204 of bci_visualization.py). Either make it truly optional with fallback behavior, or update the help text to indicate it's required.

  4. applications/bci_visualization/bci_visualization.py, lines 111-122

    style: Hardcoded data paths to /workspace/holohub/data/kernel won't work outside the Docker container or if users organize data differently. Consider adding command-line arguments for these paths or loading from a configuration file.


37 files reviewed, 4 comments


coderabbitai bot (Contributor) commented on Jan 2, 2026

Walkthrough

Adds a new bci_visualization application with end-to-end real-time NIRS reconstruction and visualization (Python + C++), new operators and streams, GPU-accelerated solver utilities, a color-buffer passthrough operator with Python bindings, and updates to volume_renderer to accept and propagate CUDA streams and cache camera pose.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Top-level app registration**<br>`applications/CMakeLists.txt` | Registers new bci_visualization holohub application dependent on volume_renderer. |
| **Application project & config**<br>`applications/bci_visualization/CMakeLists.txt`, `applications/bci_visualization/bci_visualization.yaml` | New CMake project entry and YAML config for visualization and renderer parameters. |
| **App entry, metadata & docs**<br>`applications/bci_visualization/bci_visualization.py`, `applications/bci_visualization/metadata.json`, `applications/bci_visualization/README.md`, `applications/bci_visualization/requirements.txt`, `applications/bci_visualization/Dockerfile` | New Python Application (BciVisualizationApp) with compose/main, metadata, README, pinned requirements, and Dockerfile. |
| **Streams (interfaces & SNIRF)**<br>`applications/bci_visualization/streams/*`, `applications/bci_visualization/streams/base_nirs.py`, `applications/bci_visualization/streams/snirf.py` | New BaseNirsStream, ChannelInfo types, and SNIRFStream implementation that parses SNIRF files and yields moment/wavelength tensors. |
| **Stream operator**<br>`applications/bci_visualization/operators/stream.py` | New Holoscan StreamOperator that emits samples and channel metadata from a BaseNirsStream. |
| **Reconstruction operators & types**<br>`applications/bci_visualization/operators/reconstruction/__init__.py`, `.../types.py`, `.../build_rhs_operator.py`, `.../normalize_operator.py`, `.../solver_operator.py`, `.../convert_to_voxels_operator.py` | New GPU-enabled reconstruction pipeline: BuildRHSOperator, NormalizeOperator, RegularizedSolverOperator, ConvertToVoxelsOperator and dataclasses (BuildRHSOutput, NormalizedSolveBatch, SolverResult, VoxelMetadata). |
| **Voxel → Volume processing**<br>`applications/bci_visualization/operators/voxel_stream_to_volume/__init__.py`, `.../voxel_stream_to_volume/voxel_stream_to_volume.py` | VoxelStreamToVolumeOp: mask NIfTI loading, affine handling, EMA stats, GPU resampling; emits dense volume and mask with spatial metadata. |
| **Color buffer passthrough (C++)**<br>`applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt`, `.../cpp/color_buffer_passthrough.hpp`, `.../cpp/color_buffer_passthrough.cpp` | New ColorBufferPassthroughOp C++ operator that forwards rendered color buffers (input queue size=1, pop policy). |
| **Color buffer Python bindings**<br>`applications/bci_visualization/operators/color_buffer_passthrough/python/CMakeLists.txt`, `.../python/color_buffer_passthrough.cpp` | Pybind11 module exposing ColorBufferPassthroughOp for Python. |
| **Reconstruction utilities & assets**<br>`applications/bci_visualization/utils/*`, `.../reconstruction/assets.py`, `.../hbo.py`, `.../reg_inv.py`, `.../types.py` | Asset loader singleton, extinction coefficient parsing, HbO/HbR conversion, cached GPU regularized solver, and type aliases. |
| **Volume renderer API & implementation**<br>`operators/volume_renderer/dataset.hpp`, `operators/volume_renderer/dataset.cpp`, `operators/volume_renderer/volume_renderer.cpp`, `operators/volume_renderer/python/volume_renderer.cpp` | Dataset::SetVolume signature now accepts cudaStream_t and uses async cudaMemcpy3DAsync with stream-aware blob Access. VolumeRendererOp's receive_volume now receives ExecutionContext to extract the CUDA stream, caches the camera pose, and resets volumes when needed; the Python binding registers custom emitter/receiver types. |
| **Minor header/license updates**<br>multiple `operators/volume_renderer/*` files | SPDX/copyright year bumps and small header edits across volume_renderer files. |
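
The "EMA stats" in the Voxel → Volume row above are exponential-moving-average statistics; as a generic illustration only (the operator's actual state names and smoothing constant are not shown in this summary):

```python
import cupy as cp

def update_ema(state: cp.ndarray | None, sample: cp.ndarray,
               alpha: float = 0.1) -> cp.ndarray:
    """One EMA step: state <- (1 - alpha) * state + alpha * sample."""
    if state is None:
        return sample.copy()  # first sample seeds the running statistic
    return (1.0 - alpha) * state + alpha * sample
```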

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested reviewers

  • bhashemian
  • jjomier
🚥 Pre-merge checks: ✅ 2 passed

| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title accurately describes the main change: a new BCI visualization application built with Kernel Flow2 and real-time volume rendering, which directly aligns with the substantial additions in the changeset. |




coderabbitai bot left a comment


Actionable comments posted: 10

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (6)
operators/volume_renderer/python/volume_renderer.cpp (1)

1-16: Fix copyright year to include 2026.

The copyright header year range must include the current year (2026) to pass compliance checks.

🔎 Proposed fix
 /*
- * SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 2023-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
  * SPDX-License-Identifier: Apache-2.0
operators/volume_renderer/dataset.hpp (1)

1-1: Fix copyright header to include current year.

The pipeline is failing because the copyright header is incomplete. The header shows 2023-2025 but should include the current year (2026) per the compliance check.

🔎 Proposed fix
-/* SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+/* SPDX-FileCopyrightText: Copyright (c) 2023-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
operators/volume_renderer/dataset.cpp (2)

1-1: Fix copyright header to include current year.

Same issue as the header file - update the year range to include 2026.

🔎 Proposed fix
-/* SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+/* SPDX-FileCopyrightText: Copyright (c) 2023-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.

241-243: Off-by-one error in frame index bounds check.

The condition frame_index > density_.size() should be frame_index >= density_.size(). With the current check, if density_.size() is 5 and frame_index is 5, the check passes but accessing density_[5] is undefined behavior. The same issue exists at line 255 for segmentation_.

🔎 Proposed fix
-      if (frame_index > density_.size()) {
+      if (frame_index >= density_.size()) {
         throw std::runtime_error("Invalid density frame index");
       }

And similarly for segmentation:

-      if (frame_index > segmentation_.size()) {
+      if (frame_index >= segmentation_.size()) {
         throw std::runtime_error("Invalid segmentation frame index");
       }
operators/volume_renderer/volume_renderer.cpp (1)

1-2: Fix copyright header to include current year.

Same issue as the other files - update the year range to include 2026.

🔎 Proposed fix
-/* SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights
+/* SPDX-FileCopyrightText: Copyright (c) 2023-2026 NVIDIA CORPORATION & AFFILIATES. All rights
applications/CMakeLists.txt (1)

1-1: Update copyright year range to include 2026.

The pipeline failure indicates the copyright header should include the current year.

Proposed fix
-# SPDX-FileCopyrightText: Copyright (c) 2022-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 2022-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
🟠 Major comments (15)
applications/bci_visualization/Dockerfile-30-37 (1)

30-37: Missing cupy installation; metadata.json lists it as a requirement.

The metadata.json specifies cupy as a Python requirement, but it is not installed in this Dockerfile. Additionally, consider adding --no-cache-dir to reduce image size and pinning versions for reproducibility.

Proposed fix
 # Install Python dependencies
-RUN pip install nibabel nilearn
+RUN pip install --no-cache-dir nibabel nilearn

 # Install Kernel's dependencies
 RUN pip install \
+    --no-cache-dir \
     numpy \
     scipy \
-    h5py
+    h5py \
+    cupy-cuda12x

Note: Adjust the cupy package name based on the target CUDA version (e.g., cupy-cuda11x or cupy-cuda12x).

applications/bci_visualization/processing/reconstruction/hbo.py-21-34 (1)

21-34: File handle leak: open(path) is never closed.

The file opened in the list comprehension is not explicitly closed, which can lead to resource leaks, especially when processing many files.

Proposed fix
     @classmethod
     def from_csv(cls, path: Path) -> list[ExtinctionCoefficient]:
-        return [
-            cls(
-                Wavelength=float(row["Wavelength"]),
-                HbO=float(row["HbO"]),
-                deoxyHb=float(row["deoxyHb"]),
-                Water=float(row["Water"]),
-                Lipids=float(row["Lipids"]),
-                LuTex=float(row["LuTex"]),
-                GdTex=float(row["GdTex"]),
-            )
-            for row in csv.DictReader(open(path))
-        ]
+        with open(path) as f:
+            return [
+                cls(
+                    Wavelength=float(row["Wavelength"]),
+                    HbO=float(row["HbO"]),
+                    deoxyHb=float(row["deoxyHb"]),
+                    Water=float(row["Water"]),
+                    Lipids=float(row["Lipids"]),
+                    LuTex=float(row["LuTex"]),
+                    GdTex=float(row["GdTex"]),
+                )
+                for row in csv.DictReader(f)
+            ]
applications/bci_visualization/processing/reconstruction/gpu.py-15-21 (1)

15-21: Docstring is misleading; update to reflect GPU-only behavior.

The docstring states the function returns "NumPy or CuPy" but the implementation only supports CuPy. Update the documentation to match the GPU-only contract.

Proposed fix
 def get_array_module(use_gpu: bool = False) -> tuple[ModuleType, bool]:
-    """Get the appropriate array module (NumPy or CuPy) based on GPU availability.
+    """Get the CuPy array module for GPU computations.
+
+    This function enforces a GPU-only workflow. Calling with use_gpu=False
+    will raise an error.
+
     Args:
-        use_gpu: Whether to attempt to use GPU arrays.
+        use_gpu: Must be True. GPU is required for the reconstruction pipeline.
+
     Returns:
-        A tuple containing the array module and a boolean indicating if GPU is used.
+        A tuple containing the CuPy module and True.
+
+    Raises:
+        ValueError: If use_gpu is False.
+        ImportError: If CuPy is not installed.
+        RuntimeError: If no CUDA devices are found.
     """
applications/bci_visualization/operators/stream.py-48-51 (1)

48-51: Avoid raising StopIteration from compute(); use proper end-of-stream signaling.

Raising StopIteration from a non-generator function violates PEP 479 and can cause RuntimeError in async contexts. Additionally, the codebase pattern shows operators should return without emitting when no data is available. The scheduler handles missing emissions properly.

Proposed fix
     def compute(
         self, op_input: InputContext, op_output: OutputContext, context: ExecutionContext
     ) -> None:
-
         sample = next(self._iter, None)
         if sample is None:
-            raise StopIteration("No more samples available in the stream.")
+            logger.info("Stream exhausted, no more samples available.")
+            return

         op_output.emit(SampleOutput(sample, self._channels), "samples")
applications/bci_visualization/processing/reconstruction/data_utils.py-39-42 (1)

39-42: Chain the exception to preserve context.

When re-raising a different exception type within an except clause, use raise ... from err to preserve the original exception chain for debugging.

🔎 Proposed fix
-    except KeyError:
+    except KeyError as err:
         raise ValueError(
             f"Channel without jacobian mapping (src_module={src_module}, src={src}, det_module={det_module}, det={det})",
-        )
+        ) from err
applications/bci_visualization/processing/reconstruction/data_utils.py-46-46 (1)

46-46: Replace assert with explicit validation.

Assertions can be disabled with Python's -O flag and should not be used for runtime data validation. Replace with an explicit check and raise a ValueError.

🔎 Proposed fix
-    assert np.any(channel_mask)
+    if not np.any(channel_mask):
+        raise ValueError("No channels were mapped - channel_mask is empty")
     return channel_mask
applications/bci_visualization/operators/color_buffer_passthrough/color_buffer_passthrough.cpp-30-38 (1)

30-38: Propagate CUDA streams in emit call.

The cuda_streams variable retrieved on line 35 is unused and should be propagated with the output message using the tuple pattern: output.emit(std::make_tuple(color_message.value(), cuda_streams), "color_buffer_out"); Alternatively, if stream propagation is not required, remove the unused receive_cuda_streams call to avoid confusion and unnecessary overhead.

applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp-2-2 (1)

2-2: Update copyright year to include 2026.

The pipeline is failing because the copyright year doesn't include 2026. Update line 2 to:

 * SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES.
applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py-1-2 (1)

1-2: Add the required copyright header.

The pipeline is failing due to missing copyright header. Add the standard NVIDIA SPDX header.

applications/bci_visualization/bci_visualization.py-1-3 (1)

1-3: Update copyright year to include 2026.

The pipeline is failing because the copyright year in the docstring doesn't include 2026. Update line 2 to:

SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES.
applications/bci_visualization/reconstruction.py-1-2 (1)

1-2: Add the required copyright header.

The pipeline is failing due to missing copyright header. Add the standard NVIDIA SPDX header.

applications/bci_visualization/operators/reconstruction/normalize_operator.py-1-2 (1)

1-2: Add the required copyright header.

The pipeline is failing because the copyright header is missing. Based on other files in this PR, the expected format is:

# SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# ...
applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py-72-81 (1)

72-81: Affine computation may fail silently or raise unexpectedly.

The retry loop samples up to 100 random point subsets looking for a non-singular matrix; if all 100 fail, the check on line 80 sees the last sampled B and raises.

More concerning: the loop never breaks on success, so it always runs all 100 iterations and keeps only the last sample regardless of any earlier valid ones. This is almost certainly unintended.

Proposed fix
     while ctr < 100:
         ctr += 1
         inds = rng.choice(np.arange(len(ijk)), size=n, replace=False)
         ins = ijk[np.array(inds), :]  # <- points
         out = xyz[np.array(inds), :]  # <- mapped to
         B = np.vstack([np.transpose(ins), np.ones(n, dtype=np.float32)])
         if np.linalg.det(B) == 0:
             continue
+        break  # Found a valid non-singular B
     if np.linalg.det(B) == 0:
         raise RuntimeError("Cannot compute affine, algorithm failed after 100 attempts")
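
As an aside, the ijk→xyz affine could also be fit over all points with a single least-squares solve instead of sampling four points; a NumPy sketch of that alternative (not the PR's code):

```python
import numpy as np

def fit_affine(ijk: np.ndarray, xyz: np.ndarray) -> np.ndarray:
    """Least-squares 4x4 affine mapping (n, 3) grid indices to (n, 3) coordinates."""
    n = len(ijk)
    A = np.hstack([ijk, np.ones((n, 1))])        # homogeneous inputs, shape (n, 4)
    M, *_ = np.linalg.lstsq(A, xyz, rcond=None)  # solves A @ M ~= xyz, M is (4, 3)
    affine = np.eye(4)
    affine[:3, :] = M.T                          # top 3 rows express xyz in terms of ijk
    return affine
```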
applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py-141-141 (1)

141-141: GPU synchronization in hot path - use cp.min/cp.max or remove debug print.

Using np.min(hb_voxel) on a CuPy array triggers an implicit device-to-host transfer and GPU synchronization. This degrades streaming performance. Either use CuPy functions or remove this debug statement.

🔎 Proposed fix (remove debug print)
-        print("VoxelStreamToVolume:  hb_voxel range:", np.min(hb_voxel), np.max(hb_voxel))

Or if debugging is needed, use CuPy and guard with a flag:

if self._debug:
    logger.debug("hb_voxel range: %f to %f", float(cp.min(hb_voxel)), float(cp.max(hb_voxel)))
applications/bci_visualization/streams/snirf.py-35-42 (1)

35-42: HDF5 file handle is never closed; use logger instead of print.

The file opened at line 36 is stored but never closed, which can cause resource leaks. Consider implementing a stop() or close() method. Also, use logger.info() for consistency with the rest of the codebase.

🔎 Proposed fix
     def start(self) -> None:
         self._snirf_file = h5py.File(self._snirf_file_path, "r")

         self._channels = self._get_channels()
         self._unique_channels = [
             ch for ch in self._channels if ch.moment == 0 and ch.wavelength == 0
         ]
-        print("Got {} unique channels".format(len(self._unique_channels)))
+        logger.info("Got %d unique channels", len(self._unique_channels))
+
+    def stop(self) -> None:
+        """Close the SNIRF file handle."""
+        if hasattr(self, "_snirf_file") and self._snirf_file:
+            self._snirf_file.close()
+            self._snirf_file = None
🟡 Minor comments (18)
operators/volume_renderer/python/volume_renderer.cpp-124-129 (1)

124-129: Remove unused type registrations or provide justification for their inclusion.

The std::array<uint32_t, 3> and std::array<bool, 3> types registered in the register_types function are not emitted or received by VolumeRendererOp or any other operator in the codebase. The operator actually emits color_message and depth_message objects, not the registered array types.

While the register_types function correctly follows the Holoscan SDK pattern (it is automatically invoked by the framework at module import time via io_type_registry in Holoscan SDK >= 2.1.0), the type registrations appear to be dead code. Either remove these unused registrations or provide clear documentation/implementation showing how these types are intended to be used in the data flow.

applications/bci_visualization/README.md-44-44 (1)

44-44: Typo: "GRAY MATER" should be "GRAY MATTER".

Proposed fix
-   a. `activeRegions`:  (0: SKIN, 1: SKULL, 2: CSF, 3: GRAY MATER, 4: WHITE MATTER, 5: AIR). Here, we select [3, 4] for our ROI. Set everything else as opacity=0 (default).
+   a. `activeRegions`:  (0: SKIN, 1: SKULL, 2: CSF, 3: GRAY MATTER, 4: WHITE MATTER, 5: AIR). Here, we select [3, 4] for our ROI. Set everything else as opacity=0 (default).
applications/bci_visualization/processing/reconstruction/hbo.py-1-10 (1)

1-10: Add missing SPDX copyright header.

The pipeline failure indicates the copyright header is missing.

Proposed fix
+# SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 from __future__ import annotations
applications/bci_visualization/operators/stream.py-1-8 (1)

1-8: Add missing SPDX copyright header.

The pipeline failure indicates the copyright header is missing.

Proposed fix
+# SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 from __future__ import annotations
applications/bci_visualization/Dockerfile-1-4 (1)

1-4: Update copyright year to include 2026.

The pipeline failure indicates the copyright header is incomplete. Since the current year is 2026, update the copyright notice.

Proposed fix
-# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
applications/bci_visualization/processing/reconstruction/gpu.py-1-4 (1)

1-4: Add missing SPDX copyright header.

The pipeline failure indicates the copyright header is missing. Add the standard Apache 2.0 header at the top of the file.

Proposed fix
+# SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 import logging
 from types import ModuleType
applications/bci_visualization/bci_visualization.py-8-8 (1)

8-8: Remove unused imports to fix linter errors.

The pipeline is failing due to unused imports. Remove os and MultiThreadScheduler.

Proposed fix
-import os
 import argparse
 from pathlib import Path


 from holoscan.core import Application
 from holoscan.operators import HolovizOp
-from holoscan.resources import CudaStreamPool, UnboundedAllocator
-from holoscan.schedulers import EventBasedScheduler, MultiThreadScheduler
+from holoscan.resources import CudaStreamPool, UnboundedAllocator
+from holoscan.schedulers import EventBasedScheduler

Also applies to: 16-16

applications/bci_visualization/reconstruction.py-32-33 (1)

32-33: _tol parameter is stored but never used.

The tol parameter is accepted in __init__ and stored as self._tol (line 43), but it's never passed to RegularizedSolverOperator or any other operator. Either remove it or wire it to the solver.

Option 1: Remove unused parameter
     def __init__(
         self,
         *,
         stream: BaseNirsStream,
         jacobian_path: Path | str,
         channel_mapping_path: Path | str,
         voxel_info_dir: Path,
         coefficients_path: Path | str,
         reg: float = REG_DEFAULT,
-        tol: float = 1e-4,  # Tolerance for the regularized solver
         use_gpu: bool = False,
     ):
         ...
-        self._tol = tol
Option 2: Pass to solver (if supported)
         regularized_solver_operator = RegularizedSolverOperator(
             reg=self._reg,
+            tol=self._tol,
             use_gpu=self._use_gpu,
             fragment=fragment,
         )

Note: This requires adding tol parameter support to RegularizedSolverOperator.

Also applies to: 43-43, 66-70

applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py-11-11 (1)

11-11: Remove unused import gpu.

The pipeline is failing because gpu is imported but never used. Remove it.

Proposed fix
-from processing.reconstruction import ExtinctionCoefficient, HbO, gpu
+from processing.reconstruction import ExtinctionCoefficient, HbO
applications/bci_visualization/reconstruction.py-48-48 (1)

48-48: Remove unused variable emit_voxel_frames.

The linter flagged this variable as assigned but never used. Remove it to fix the pipeline failure.

Proposed fix
     def compose(self, application: Application, voxel_to_volume: Operator):
         fragment = application
-        emit_voxel_frames = True

         pipeline_assets = get_assets(
applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py-83-84 (1)

83-84: Replace lambda with a named function to fix linter error.

The pipeline is failing due to E731 (lambda assigned to variable). Rewrite as a def.

Proposed fix
+    def entry(r: NDArray, d: int) -> float:
+        return np.linalg.det(np.delete(np.vstack([r, B]), (d + 1), axis=0))
+
     D = 1.0 / np.linalg.det(B)
-    entry = lambda r, d: np.linalg.det(np.delete(np.vstack([r, B]), (d + 1), axis=0))
     M = [[(-1) ** i * D * entry(R, i) for i in range(n)] for R in np.transpose(out)]

Note: B needs to be in scope for the nested function, which it already is.

applications/bci_visualization/processing/reconstruction/__init__.py-1-8 (1)

1-8: Add the required SPDX copyright header.

The pipeline is failing due to the missing copyright header.

🔎 Proposed fix
+# SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES.
+# SPDX-License-Identifier: Apache-2.0
+
 import json
 import logging
 import pathlib
applications/bci_visualization/streams/snirf.py-1-13 (1)

1-13: Add the required SPDX copyright header.

The pipeline is failing due to the missing copyright header.

🔎 Proposed fix
+# SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES.
+# SPDX-License-Identifier: Apache-2.0
+
 import logging
 from pathlib import Path
applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py-1-6 (1)

1-6: Update the copyright header to include the current year.

The pipeline indicates the copyright header is incomplete. Update to include 2026.

🔎 Proposed fix
 """
-SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES.
+SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES.
 SPDX-License-Identifier: Apache-2.0
applications/bci_visualization/operators/reconstruction/__init__.py-1-10 (1)

1-10: Add the required SPDX copyright header.

The pipeline is failing due to the missing copyright header. Add the standard Holoscan copyright header at the top of the file.

🔎 Proposed fix
+# SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES.
+# SPDX-License-Identifier: Apache-2.0
+
 from .build_rhs_operator import BuildRHSOperator
 from .convert_to_voxels_operator import ConvertToVoxelsOperator
applications/bci_visualization/processing/reconstruction/reg_inv.py-1-9 (1)

1-9: Add the required SPDX copyright header; remove or use MAX_REASONABLE_COND_RATIO.

The copyright header is missing. Also, MAX_REASONABLE_COND_RATIO is defined but never used in this module.

🔎 Proposed fix
+# SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES.
+# SPDX-License-Identifier: Apache-2.0
+
 import logging
 from types import ModuleType

 logger = logging.getLogger(__name__)

 # Cache holds CuPy arrays when running GPU-only. (Keyed by wavelength index.)
 _HESSIAN_CACHE: dict[int, object] = {}
-
-MAX_REASONABLE_COND_RATIO = 10
applications/bci_visualization/operators/reconstruction/types.py-1-7 (1)

1-7: Add the required SPDX copyright header.

The pipeline is failing due to the missing copyright header.

🔎 Proposed fix
+# SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES.
+# SPDX-License-Identifier: Apache-2.0
+
 from __future__ import annotations

 from dataclasses import dataclass
applications/bci_visualization/processing/reconstruction/__init__.py-10-13 (1)

10-13: Add __all__ to explicitly declare re-exports and fix linting errors.

The linter flags get_channel_mask and HbO as unused imports (F401). Adding an __all__ list makes the re-export intention explicit and silences the warnings.

🔎 Proposed fix
 # reexport
 from .data_utils import get_channel_mask
-from .hbo import ExtinctionCoefficient, HbO 
+from .hbo import ExtinctionCoefficient, HbO
 from .types import ChannelHeadsetMapping
+
+__all__ = [
+    "get_channel_mask",
+    "ExtinctionCoefficient",
+    "HbO",
+    "ChannelHeadsetMapping",
+    "Assets",
+    "get_assets",
+    "REG_DEFAULT",
+    "RESHAPING_ORDER",
+    "USE_GPU_DEFAULT",
+]

Committable suggestion skipped: line range outside the PR's diff.

🧹 Nitpick comments (35)
operators/volume_renderer/dataset.cpp (1)

144-151: Address the FIXME/TODO for stream synchronization.

The synchronization is necessary because CudaMemoryBlob::AccessGuard requires the data to be ready when it goes out of scope. However, synchronizing here blocks the calling thread and partially negates the benefits of asynchronous copy.

A more elegant approach would be to synchronize at the rendering boundary (in VolumeRendererOp::compute()) or use CUDA events to coordinate with ClaraViz's internal stream. This would allow better overlap of copy and other CPU work.

Would you like me to help design a deferred synchronization pattern, or open an issue to track this improvement?
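
As a runnable illustration of the event-based pattern (in CuPy rather than the operator's C++, purely to show the ordering):

```python
import cupy as cp

src = cp.arange(1 << 20, dtype=cp.float32)
copy_stream = cp.cuda.Stream(non_blocking=True)
render_stream = cp.cuda.Stream(non_blocking=True)
copy_done = cp.cuda.Event()

with copy_stream:
    dst = cp.empty_like(src)
    dst[...] = src                 # asynchronous device copy on copy_stream
    copy_done.record(copy_stream)  # mark the point the render work must wait for

render_stream.wait_event(copy_done)  # GPU-side ordering; the CPU never blocks
with render_stream:
    result = dst * 2.0             # "render" work, guaranteed to see the copy
```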

operators/volume_renderer/volume_renderer.cpp (5)

228-228: Missing space after comma.

Minor formatting issue for consistency.

🔎 Proposed fix
-  bool receive_volume(InputContext& input, ExecutionContext& context,Dataset::Types type);
+  bool receive_volume(InputContext& input, ExecutionContext& context, Dataset::Types type);

289-289: Missing space after comma (same as declaration).

🔎 Proposed fix
-bool VolumeRendererOp::Impl::receive_volume(InputContext& input, ExecutionContext& context,Dataset::Types type) {
+bool VolumeRendererOp::Impl::receive_volume(InputContext& input, ExecutionContext& context, Dataset::Types type) {

542-542: Improve the inline comment for clarity.

The comment (Mimi) refers to the author but doesn't add context. Consider making it more descriptive for future maintainers.

🔎 Proposed fix
-  spec.output<holoscan::gxf::Entity>("color_buffer_out").condition(ConditionType::kNone); // (Mimi): remove backpressure to avoid deadlock when running in a loop
+  // ConditionType::kNone prevents backpressure deadlock when operator runs in a feedback loop
+  spec.output<holoscan::gxf::Entity>("color_buffer_out").condition(ConditionType::kNone);

551-552: Missing space after comma in function calls.

🔎 Proposed fix
-  bool new_volume = impl_->receive_volume(input, context,Dataset::Types::Density);
-  bool new_mask = impl_->receive_volume(input, context,Dataset::Types::Segmentation);
+  bool new_volume = impl_->receive_volume(input, context, Dataset::Types::Density);
+  bool new_mask = impl_->receive_volume(input, context, Dataset::Types::Segmentation);

846-861: Consider reducing log level for no-volume case.

Using HOLOSCAN_LOG_INFO to log every time there's no density volume could flood logs during startup or when the upstream operator is slow. Consider using HOLOSCAN_LOG_DEBUG or logging only on state transitions.

🔎 Proposed fix
-    HOLOSCAN_LOG_INFO("No density volume, sleeping for {} milliseconds", access->time_slot.Get());
+    HOLOSCAN_LOG_DEBUG("No density volume, sleeping for {} milliseconds", access->time_slot.Get());

Alternatively, log only on first occurrence:

static bool logged_once = false;
if (!logged_once) {
  HOLOSCAN_LOG_INFO("No density volume available yet, waiting...");
  logged_once = true;
}
applications/bci_visualization/README.md (1)

33-37: Add language specifier to the fenced code block.

Specifying a language (e.g., text or plaintext) improves markdown linting compliance and rendering.

Proposed fix
-```
+```text
 Reconstruction → VoxelStreamToVolume → VolumeRenderer → Holoviz
                                              ↑               ↓
                                              └─── camera ────┘
applications/bci_visualization/processing/reconstruction/hbo.py (2)

73-73: Remove unnecessary round() call; wavelength is already an int.

The parameter `wavelength` is typed as `int`, making the `round()` call redundant.

Proposed fix
-        wavelength_rows = [ext for ext in self._coefficients if ext.Wavelength == round(wavelength)]
+        wavelength_rows = [ext for ext in self._coefficients if ext.Wavelength == wavelength]

126-128: Replace assert with explicit validation for production code.

Using assert for runtime validation can be disabled with -O flag. Additionally, verify that wavelengths always has exactly 2 elements (for HbO and HbR), otherwise sample_hb[0] and sample_hb[1] may not correspond to the expected outputs.

Proposed fix
-        assert sample_hb.shape == (len(wavelengths), num_voxels)
+        if sample_hb.shape != (2, num_voxels):
+            raise ValueError(
+                f"Expected sample_hb shape (2, {num_voxels}), got {sample_hb.shape}"
+            )
         data_hbo = sample_hb[0]
         data_hbr = sample_hb[1]
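
For context on the two-wavelength expectation: recovering HbO/HbR from two absorption measurements is a per-voxel 2×2 linear solve against the extinction-coefficient matrix. A generic sketch with placeholder coefficients (not the CSV's actual values):

```python
import numpy as np

# Extinction coefficients: rows = wavelengths, columns = (HbO, HbR).
# The numbers here are placeholders, not real spectra.
E = np.array([[1.2, 3.1],
              [2.4, 1.0]])

def mua_to_hb(delta_mua: np.ndarray) -> np.ndarray:
    """Map absorption changes of shape (2, n_voxels) to stacked (HbO, HbR) rows."""
    return np.linalg.solve(E, delta_mua)

delta_mua = np.ones((2, 4))          # dummy per-voxel absorption changes
data_hbo, data_hbr = mua_to_hb(delta_mua)
```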
applications/bci_visualization/streams/__init__.py (1)

1-2: Add a docstring and consider using math.sqrt.

The function lacks documentation explaining its purpose and parameters. Additionally, using math.sqrt() is more idiomatic and readable than ** 0.5.

🔎 Proposed refactor
+import math
+
 def dist3d(x1: float, y1: float, z1: float, x2: float, y2: float, z2: float) -> float:
+    """
+    Calculate the Euclidean distance between two 3D points.
+    
+    Args:
+        x1, y1, z1: Coordinates of the first point
+        x2, y2, z2: Coordinates of the second point
+    
+    Returns:
+        float: The Euclidean distance between the two points
+    """
-    return ((x2 - x1) ** 2 + (y2 - y1) ** 2 + (z2 - z1) ** 2) ** 0.5
+    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2 + (z2 - z1) ** 2)

As per coding guidelines, code must include clear documentation of functionality and use idiomatic constructs.

applications/bci_visualization/processing/reconstruction/types.py (1)

3-3: Document the type alias structure.

The deeply nested type alias lacks documentation explaining what each level represents and how it should be used.

🔎 Proposed addition
+"""
+Type alias for mapping channel-headset relationships.
+
+Structure: dict[level1, dict[level2, dict[level3, dict[level4, Tuple[int]]]]]
+where each level represents a specific mapping dimension used by the reconstruction pipeline.
+"""
 ChannelHeadsetMapping = dict[str, dict[str, dict[str, dict[str, Tuple[int]]]]]

As per coding guidelines, code must include clear documentation of functionality. Consider adding a module-level docstring explaining the structure and usage of this type alias.

applications/bci_visualization/operators/color_buffer_passthrough/python/CMakeLists.txt (1)

24-26: Remove trailing blank lines.

Multiple trailing blank lines at the end of the file are unnecessary.

applications/bci_visualization/streams/base_nirs.py (1)

20-21: Consider adding a docstring to clarify intent.

The start() method is intentionally a no-op providing an optional initialization hook for subclasses. Adding a docstring would clarify this design choice and address the static analysis hint.

🔎 Proposed enhancement
 def start(self) -> None:
+    """Optional initialization hook for subclasses. Default implementation does nothing."""
     pass
applications/bci_visualization/operators/reconstruction/normalize_operator.py (3)

28-29: Consider renaming to NormalizeOp per coding guidelines.

Per the coding guidelines for **/operators/**/*.py, class names for operators should use TitleCase with 'Op' suffix (e.g., NormalizeOp instead of NormalizeOperator). This ensures consistency with other operators in the holohub repository.


164-166: Use explicit return None for clarity.

The implicit return (bare return with no value) on line 166 should be explicit to match the function's return type annotation (Tuple[...] | None).

Proposed fix
         if row_normalizers is None:
-            return  # early exit if max_rhs is all zeros
+            return None  # early exit if max_rhs is all zeros

28-82: Unit tests are missing for this operator.

Per the coding guidelines for **/operators/**/*.py, Python operators must include unit tests in the same directory using the pytest framework. Consider adding a test_normalize_operator.py file with tests covering the normalization logic, edge cases (e.g., all-zero RHS), and caching behavior.

Would you like me to generate a skeleton test file for this operator?
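
A minimal skeleton along those lines; everything marked "assumed" is a guess at the operator's API and must be adjusted to the real implementation before the tests are enabled:

```python
"""Skeleton for test_normalize_operator.py (names and signatures are assumptions)."""
import pytest

normalize_module = pytest.importorskip("normalize_operator")  # import path assumed


class RecordingOutput:
    """Minimal stand-in for Holoscan's OutputContext that records emissions."""

    def __init__(self) -> None:
        self.emitted: list[tuple[str, object]] = []

    def emit(self, value: object, port: str) -> None:
        self.emitted.append((port, value))


class StaticInput:
    """Minimal stand-in for Holoscan's InputContext returning a fixed payload."""

    def __init__(self, payload: object) -> None:
        self._payload = payload

    def receive(self, port: str) -> object:
        return self._payload


@pytest.mark.skip(reason="skeleton: build a real NormalizedSolveBatch payload first")
def test_all_zero_rhs_emits_nothing() -> None:
    op = normalize_module.NormalizeOperator(fragment=None)  # constructor args assumed
    out = RecordingOutput()
    batch = None  # TODO: construct a batch whose max RHS is all zeros
    op.compute(StaticInput(batch), out, context=None)
    assert out.emitted == []  # the early return should not emit anything
```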

applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt (1)

42-45: Remove trailing blank lines.

There are multiple trailing blank lines at the end of the file. Consider removing them for cleaner formatting.

applications/bci_visualization/operators/reconstruction/solver_operator.py (3)

21-22: Consider renaming to RegularizedSolverOp per coding guidelines.

Per the coding guidelines, operator class names should use TitleCase with 'Op' suffix. Consider renaming RegularizedSolverOperator to RegularizedSolverOp for consistency.


28-33: _use_gpu flag is stored but never used.

The use_gpu parameter is stored in self._use_gpu (line 33) but the code always uses CuPy unconditionally (line 58: xp = cp). Either remove the unused flag or implement the conditional GPU/CPU logic.

Option 1: Remove unused parameter
     def __init__(
         self,
         *,
         reg: float = REG_DEFAULT,
-        use_gpu: bool = False,
         fragment: Any | None = None,
     ) -> None:
         super().__init__(fragment, name=self.__class__.__name__)
         self._reg = reg
-        self._use_gpu = use_gpu
Option 2: Implement conditional logic
     def _solve_batch(self, batch: NormalizedSolveBatch) -> SolverResult:
         ...
-        # GPU-only: always use CuPy.
-        xp = cp
+        xp = cp if self._use_gpu else np

Note: This would also require importing numpy and handling CPU arrays.

Also applies to: 57-58


64-65: Consider replacing assertions with proper validation.

Using assert statements for runtime validation can be problematic as they are disabled with -O optimization. Consider using explicit validation with ValueError or similar exceptions for production code.

Proposed fix
-            assert system.rhs.ndim == 1
-            assert system.jacobian.shape[1] == num_cols_expected
+            if system.rhs.ndim != 1:
+                raise ValueError(f"Expected 1D rhs, got shape {system.rhs.shape}")
+            if system.jacobian.shape[1] != num_cols_expected:
+                raise ValueError(
+                    f"Jacobian columns {system.jacobian.shape[1]} != expected {num_cols_expected}"
+                )
applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp (1)

39-47: Clarify the purpose of discarded args parameter.

The constructor accepts py::args but silently discards it with (void)args. If this is intentional for API compatibility, consider adding a comment explaining why. If not needed, remove it from the signature to prevent confusion.

Option: Add clarifying comment
   PyColorBufferPassthroughOp(Fragment* fragment, const py::args& args,
                              const std::string& name = "color_buffer_passthrough")
       : ColorBufferPassthroughOp(ArgList{}) {
-    (void)args;
+    (void)args;  // Unused: accepted for API compatibility with other operators
     name_ = name;
     fragment_ = fragment;
applications/bci_visualization/bci_visualization.py (2)

111-122: Avoid hardcoded paths and move imports to module level.

Several concerns here:

  1. Hardcoded path: /workspace/holohub/data/kernel is not portable. Consider making this configurable via CLI arguments or environment variables.

  2. Imports inside method: Moving from streams.snirf import SNIRFStream and from reconstruction import ReconstructionApplication to module level is more idiomatic and improves startup-time error detection.

Proposed refactor

Move imports to module level (after line 17):

from streams.snirf import SNIRFStream
from reconstruction import ReconstructionApplication

Make the data path configurable:

     parser.add_argument(
+        "-d",
+        "--data_dir",
+        action="store",
+        type=str,
+        dest="data_dir",
+        default="/workspace/holohub/data/kernel",
+        help="Path to the Kernel data directory containing SNIRF and model files.",
+    )

Then in compose():

kernel_data = Path(self._data_dir)

26-36: Add type hints to __init__ parameters.

Per the coding guidelines, code should include type hints for clarity. The __init__ parameters are missing type annotations.

Proposed fix
     def __init__(self, 
-        argv=None,
+        argv: list[str] | None = None,
         *args,
-        render_config_file,
-        density_min,
-        density_max,
-        label_path=None,
-        roi_labels=None,
-        mask_path=None,
+        render_config_file: str | None,
+        density_min: int | None,
+        density_max: int | None,
+        label_path: str | None = None,
+        roi_labels: list[int] | None = None,
+        mask_path: str | None = None,
         **kwargs,
-    ):
+    ) -> None:
applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py (2)

91-92: Consider renaming to ConvertToVoxelsOp per coding guidelines.

Per the coding guidelines, operator class names should use TitleCase with 'Op' suffix.


106-106: _cached_affine is defined but never used.

The instance variable self._cached_affine is initialized to None but is never assigned or read anywhere in the class. Either remove it or implement the intended caching logic.

Proposed fix (remove if not needed)
         self._affine_sent: bool = False
-        self._cached_affine: NDArray[np.float32] | None = None
         self._use_gpu = use_gpu
applications/bci_visualization/operators/reconstruction/__init__.py (1)

12-21: Consider sorting __all__ alphabetically.

Ruff flags that __all__ is not sorted (RUF022). Sorting improves readability and makes merge conflicts easier to resolve.

🔎 Proposed fix
 __all__ = [
     "BuildRHSOperator",
+    "BuildRHSOutput",
     "ConvertToVoxelsOperator",
+    "NormalizedSolveBatch",
     "NormalizeOperator",
     "RegularizedSolverOperator",
-    "BuildRHSOutput",
-    "NormalizedSolveBatch",
     "SolverResult",
     "VoxelMetadata",
 ]
applications/bci_visualization/processing/reconstruction/__init__.py (1)

84-90: Remove redundant pathlib.Path() wrapping.

The variables mua_path, musp_path, etc. are already Path objects (constructed on lines 76-82). Wrapping them again in pathlib.Path() is unnecessary.

🔎 Proposed fix
-    _mua = np.load(pathlib.Path(mua_path))
-    _musp = np.load(pathlib.Path(musp_path))
-    _idxs_significant_voxels = np.load(pathlib.Path(idxs_significant_voxels_path))
-    _ijk = np.load(pathlib.Path(ijk_path))
-    _xyz = np.load(pathlib.Path(xyz_path))
-    _wavelengths = np.load(pathlib.Path(wavelengths_path))
-    _resolution = tuple(np.load(pathlib.Path(resolution_path)).tolist())
+    _mua = np.load(mua_path)
+    _musp = np.load(musp_path)
+    _idxs_significant_voxels = np.load(idxs_significant_voxels_path)
+    _ijk = np.load(ijk_path)
+    _xyz = np.load(xyz_path)
+    _wavelengths = np.load(wavelengths_path)
+    _resolution = tuple(np.load(resolution_path).tolist())
applications/bci_visualization/streams/snirf.py (1)

139-150: Use logger instead of print; rename unused ts to _ts and add strict=True to zip().

For consistency, use the module logger. The unused loop variable ts should be prefixed with _. Adding strict=True to zip() ensures mismatched lengths are caught early.

🔎 Proposed fix
-        print("Streaming {} samples from SNIRF".format(len(data)))
-        for ts, sample in zip(times, data):
+        logger.info("Streaming %d samples from SNIRF", len(data))
+        for _ts, sample in zip(times, data, strict=True):
             # sample is shape (n_channels,)
             # send (n_moments, n_unique_channels, n_wavelengths)
             to_send = np.full((NUM_MOMENTS, len(self._unique_channels), NUM_WAVELENGTHS), np.nan)
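
For readers unfamiliar with the format, here is a standalone sketch of pulling samples out of a SNIRF file's standard HDF5 layout (`/nirs/data1/...`); group names can vary across files, and this is not the class's actual implementation:

```python
import h5py
import numpy as np

def iter_snirf_samples(path: str):
    """Yield (timestamp, sample) pairs from a SNIRF file.

    Assumes the single-data-block layout /nirs/data1/{time,dataTimeSeries};
    multi-block files use data2, data3, etc.
    """
    with h5py.File(path, "r") as f:
        times = np.asarray(f["nirs/data1/time"])
        data = np.asarray(f["nirs/data1/dataTimeSeries"])  # (n_samples, n_channels)
        for ts, sample in zip(times, data):
            yield float(ts), sample
```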
applications/bci_visualization/processing/reconstruction/reg_inv.py (2)

117-128: Replace assert with explicit validation for production code.

Assertions are stripped when Python runs with optimization flags (-O). For input validation that must always run, use explicit checks with proper exceptions.

🔎 Proposed fix
-    # Validate input
-    assert (A.ndim == 2) and (A.shape[0] == A.shape[1])
-    assert b.ndim in {1, 2} and b.shape[0] == A.shape[0]
-    assert xp.all(xp.isfinite(A))
-    assert xp.all(xp.isfinite(b))
+    # Validate input
+    if not ((A.ndim == 2) and (A.shape[0] == A.shape[1])):
+        raise ValueError(f"A must be a square 2D matrix, got shape {A.shape}")
+    if not (b.ndim in {1, 2} and b.shape[0] == A.shape[0]):
+        raise ValueError(f"b shape {b.shape} incompatible with A shape {A.shape}")
+    if not xp.all(xp.isfinite(A)):
+        raise ValueError("A contains non-finite values")
+    if not xp.all(xp.isfinite(b)):
+        raise ValueError("b contains non-finite values")

     # Ensure symmetry for numerical stability
     A = 0.5 * (A + A.T)

     # Regular inverse
     result = xp.linalg.solve(A, b)
-    assert xp.all(xp.isfinite(result))
+    if not xp.all(xp.isfinite(result)):
+        raise RuntimeError("Solver produced non-finite result")
     return result

12-18: Consider stronger return type annotations.

The return type object is too generic. Using a union type or the actual array type would improve type checking and IDE support.

from typing import TYPE_CHECKING

import numpy as np
from numpy.typing import NDArray

if TYPE_CHECKING:
    import cupy as cp  # only needed by the type checker

# At function level:
def solve_regularized_system(...) -> "NDArray[np.float32] | cp.ndarray":
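
For reference, the solve in this module appears to follow the regularized normal-equations pattern (Hessian plus a regularization term, symmetrized as in the excerpt above); a generic CuPy sketch, with the `reg` placement assumed rather than taken from the file:

```python
import cupy as cp

def regularized_normal_solve(J: cp.ndarray, b: cp.ndarray, reg: float) -> cp.ndarray:
    """Solve (J^T J + reg * I) x = J^T b for x."""
    A = J.T @ J + reg * cp.eye(J.shape[1], dtype=J.dtype)
    A = 0.5 * (A + A.T)  # enforce symmetry for numerical stability
    return cp.linalg.solve(A, J.T @ b)
```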
applications/bci_visualization/operators/reconstruction/build_rhs_operator.py (2)

26-27: Class name should use 'Op' suffix per coding guidelines.

Per the coding guidelines for **/operators/**/*.py, operator class names should use TitleCase with 'Op' suffix (e.g., BuildRHSOp instead of BuildRHSOperator).

Based on coding guidelines, class naming should follow the pattern *Op.


132-136: Replace assert with explicit validation.

Assertions are stripped with -O flag. Use explicit validation for production code.

🔎 Proposed fix
         # Validate that jacobian features dimension matches realtime moments
         # 5D jacobian shape: (channels, features, wavelengths, voxels, simulation_types)
         num_features = realtime_moments.shape[0]
-        assert self._mega_jacobians_cpu.shape[1] == num_features
+        if self._mega_jacobians_cpu.shape[1] != num_features:
+            raise ValueError(
+                f"Jacobian features dimension ({self._mega_jacobians_cpu.shape[1]}) "
+                f"does not match realtime moments ({num_features})"
+            )
applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py (3)

52-57: Remove deprecated code.

These fields are marked as deprecated but still initialized. Consider removing them entirely or adding a deprecation warning if they must remain for backward compatibility.
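
If the fields must remain for backward compatibility for now, a minimal sketch of the warning approach (class and attribute names are illustrative, not the operator's actual fields):

import warnings

class ExampleOp:  # stand-in for VoxelStreamToVolumeOp
    def __init__(self):
        self._legacy_field = None  # deprecated storage kept only for compatibility

    @property
    def legacy_field(self):
        warnings.warn(
            "legacy_field is deprecated and will be removed in a future release",
            DeprecationWarning,
            stacklevel=2,
        )
        return self._legacy_field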


76-79: Replace print() statements with logger calls throughout.

Multiple print() statements are used (lines 76-79, 119, 180, 199-200). For consistency and proper log level control, use the logging module. As per coding guidelines, code must include appropriate error handling and clear documentation.

🔎 Example fix for this block
+import logging
+
+logger = logging.getLogger(__name__)
+
 # In start():
-            print(
-                f"VoxelStreamToVolume: Loaded mask from {self.mask_nifti_path}, "
-                f"shape: {self.mask_voxel_raw.shape}, values: {np.unique(self.mask_voxel_raw)}"
-            )
+            logger.info(
+                "VoxelStreamToVolume: Loaded mask from %s, shape: %s, values: %s",
+                self.mask_nifti_path, self.mask_voxel_raw.shape, np.unique(self.mask_voxel_raw)
+            )

106-109: Use TypeError for type validation.

When validating that hb_voxel is a CuPy array, TypeError is more appropriate than ValueError.

🔎 Proposed fix
         # Check voxel data is valid
         if not isinstance(hb_voxel, cp.ndarray):
-            raise ValueError("VoxelStreamToVolume: Invalid voxel data, expected cupy array")
+            raise TypeError(f"VoxelStreamToVolume: Expected cupy array, got {type(hb_voxel).__name__}")

@@ -0,0 +1,26 @@
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.

⚠️ Potential issue | 🔴 Critical

Update copyright year.

The copyright header must include the current year (2026).

🔎 Proposed fix
-# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.

As per pipeline failure logs.

🧰 Tools
🪛 GitHub Actions: Check Compliance

[error] 1-1: Copyright header incomplete: current year not included in the header.

🤖 Prompt for AI Agents
In applications/bci_visualization/CMakeLists.txt around line 1, the SPDX
copyright header currently lists the year 2025; update the year to 2026 so the
header reads "... Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights
reserved." and ensure formatting/spelling of the SPDX line remains unchanged.

Comment on lines 1 to 3
from .voxel_stream_to_volume import VoxelStreamToVolumeOp



⚠️ Potential issue | 🔴 Critical

Explicitly re-export the imported class using __all__.

The linter flags the import as unused because it's not explicitly marked for re-export. Add an __all__ declaration to fix the F401 error.

🔎 Proposed fix
 from .voxel_stream_to_volume import VoxelStreamToVolumeOp
 
-
+__all__ = ["VoxelStreamToVolumeOp"]
🧰 Tools
🪛 GitHub Actions: Check Compliance

[error] 1-1: Copyright header missing or formatted incorrectly (manual fix required).

🪛 GitHub Actions: Check linting

[error] 1-1: F401: '.voxel_stream_to_volume.VoxelStreamToVolumeOp' imported but unused; consider removing or re-exporting explicitly

🤖 Prompt for AI Agents
In applications/bci_visualization/operators/voxel_stream_to_volume/__init__.py
lines 1-3, the imported VoxelStreamToVolumeOp is intended to be re-exported but
the linter marks it unused; add an explicit __all__ list that includes
"VoxelStreamToVolumeOp" to declare it as part of the public API and silence
F401, and ensure module-level exports use the exact class name as a string in
the __all__ sequence.

@@ -0,0 +1,3 @@
from .voxel_stream_to_volume import VoxelStreamToVolumeOp

⚠️ Potential issue | 🔴 Critical

Add the required copyright header.

All source files must include the Apache 2.0 license header.

🔎 Proposed fix
+# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 from .voxel_stream_to_volume import VoxelStreamToVolumeOp

As per coding guidelines, all code contributions must include appropriate license headers.

🧰 Tools
🪛 GitHub Actions: Check Compliance

[error] 1-1: Copyright header missing or formatted incorrectly (manual fix required).

🪛 GitHub Actions: Check linting

[error] 1-1: F401: '.voxel_stream_to_volume.VoxelStreamToVolumeOp' imported but unused; consider removing or re-exporting explicitly

🤖 Prompt for AI Agents
In applications/bci_visualization/operators/voxel_stream_to_volume/__init__.py
around line 1, the file is missing the required Apache 2.0 license header; add
the standard Apache License, Version 2.0 header comment block at the very top of
the file (before any imports) using the project’s canonical header text
(including copyright owner and year), ensuring formatting matches other files in
the repo.

@@ -0,0 +1,47 @@
import logging

⚠️ Potential issue | 🔴 Critical

Add copyright header.

The copyright header is missing from this file.

🔎 Proposed fix
+# SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 import logging

As per coding guidelines and pipeline failure logs.

🧰 Tools
🪛 GitHub Actions: Check Compliance

[error] 1-1: Copyright header missing or formatted incorrectly (manual fix required).

🤖 Prompt for AI Agents
In applications/bci_visualization/processing/reconstruction/data_utils.py at
lines 1-1, the file is missing the required copyright/header; add the project’s
standard copyright/license header at the very top of the file (matching the
repository’s header template used in other Python files), update the year/owner
placeholders as appropriate, and ensure there is a blank line after the header
before the first import so the file passes linting and pipeline checks.

@@ -0,0 +1,2 @@
def dist3d(x1: float, y1: float, z1: float, x2: float, y2: float, z2: float) -> float:

⚠️ Potential issue | 🔴 Critical

Add the required copyright header.

All source files must include the Apache 2.0 license header as specified in the compliance check.

🔎 Proposed fix
+# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 def dist3d(x1: float, y1: float, z1: float, x2: float, y2: float, z2: float) -> float:
     return ((x2 - x1) ** 2 + (y2 - y1) ** 2 + (z2 - z1) ** 2) ** 0.5

As per coding guidelines, all code contributions must include appropriate license headers.

🧰 Tools
🪛 GitHub Actions: Check Compliance

[error] 1-1: Copyright header missing or formatted incorrectly (manual fix required).

🤖 Prompt for AI Agents
In applications/bci_visualization/streams/__init__.py at line 1, the file is
missing the required Apache 2.0 license header; add the standard Apache 2.0
license header comment block at the very top of the file (before any code),
filling in the correct copyright year(s) and copyright holder/organization, and
ensure the header matches the project's canonical license header format.

@tbirdso tbirdso left a comment

Looking good so far! Took a first high level review pass, will take a second pass for Python app details

coefficients_path: pathlib.Path | str,
voxel_info_dir: pathlib.Path | str,
) -> Assets:
"""Load large reconstruction assets on demand.

Nitpick: suggest adding a "Throws" section to mention possible errors from np.load, such as if any assets do not exist in the provided path location(s). https://numpy.org/devdocs/reference/generated/numpy.load.html
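
For instance, a NumPy-style section along these lines could be appended (a sketch; np.load documents OSError for missing or unreadable files):

Raises
------
OSError
    If any asset file does not exist or cannot be read (raised by np.load).
ValueError
    If a file contains pickled object data that cannot be loaded safely.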

@tbirdso
tbirdso commented Jan 5, 2026

@tbirdso tbirdso requested a review from a team January 5, 2026 20:40
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
operators/volume_renderer/dataset.cpp (2)

236-239: Pre-existing off-by-one: bounds check should use >=.

This isn't part of the current changes, but worth noting: frame_index > density_.size() allows accessing index equal to size(), which is out of bounds for a 0-based vector. The same issue exists at line 251 for segmentation_.

🔎 Suggested fix
-      if (frame_index > density_.size()) {
+      if (frame_index >= density_.size()) {

And similarly at line 251:

-      if (frame_index > segmentation_.size()) {
+      if (frame_index >= segmentation_.size()) {

112-148: Add explicit stream synchronization after async GPU copy to ensure data completeness before blob is used.

The cudaMemcpy3DAsync at line 144 launches an asynchronous copy on cuda_stream but returns immediately without guaranteeing the transfer completes. The AccessGuard scope ends at line 148, and the blob is then stored in density_ or segmentation_ vectors for later rendering access. For device-to-device transfers (line 133), the operation is truly asynchronous and requires explicit synchronization.

Add cudaStreamSynchronize(cuda_stream) after the async copy to ensure the data is ready before the access guard scope ends and the blob is used downstream.

🤖 Fix all issues with AI Agents
In @applications/bci_visualization/metadata.json:
- Around line 40-41: The metadata.json's requirements.python currently lists
only "numpy" and "cupy" which is out of sync with requirements.txt; update the
"requirements.python" array in metadata.json to include the additional packages
found in requirements.txt (at least "nibabel", "scipy", and "h5py") so the field
matches the application's actual Python dependencies.

In
@applications/bci_visualization/operators/color_buffer_passthrough/color_buffer_passthrough.hpp:
- Line 1: Update the copyright header in color_buffer_passthrough.hpp by
changing the year "2025" to "2026" in the top-of-file SPDX/ copyright comment so
the header reflects the current year.

In
@applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py:
- Line 16: The VoxelStreamToVolumeOp operator lacks unit tests; add a pytest
test module in the same directory (e.g., test_voxel_stream_to_volume.py) that
imports VoxelStreamToVolumeOp and covers: processing of valid 3D input producing
expected volume, affine matrix handling for first-frame vs subsequent frames,
error cases for missing mask path / invalid input types / wrong dimensions, and
normalization behavior for positive, negative, and zero-valued data; use small
synthetic arrays and fixtures to exercise methods and assert raised exceptions
and output shapes/values.
- Around line 98-101: Replace the first type check's ValueError with TypeError
and include the actual received type in its message: when verifying hb_voxel is
a cupy array (the check using isinstance(hb_voxel, cp.ndarray)), raise TypeError
and format the message to show the received type (e.g., type(hb_voxel).__name__
or type(hb_voxel)); leave the 3D shape check as a ValueError but ensure its
message already includes hb_voxel.shape as shown.
🧹 Nitpick comments (7)
applications/bci_visualization/CMakeLists.txt (1)

16-22: Consider adding a testing section for functional testing.

Applications should include a testing section in their CMakeLists.txt for functional testing using CTest. This would help verify the BCI visualization pipeline integration.

Based on coding guidelines.

applications/bci_visualization/bci_visualization.py (1)

199-254: LGTM!

The main() function properly uses the HOLOSCAN_INPUT_PATH environment variable with a sensible fallback, and sets up the application with appropriate configuration. The argument parser provides a clean interface for essential parameters.

Consider adding CLI arguments for use_gpu (line 239) and reg (line 238) to provide users more flexibility without editing the code.
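
A sketch of the two flags (defaults shown are placeholders, not the application's actual values):

import argparse

parser = argparse.ArgumentParser(description="BCI visualization")
parser.add_argument("--reg", type=float, default=1e-3,
                    help="regularization weight for the inverse solver (placeholder default)")
parser.add_argument("--use-gpu", action=argparse.BooleanOptionalAction, default=True,
                    help="run reconstruction on GPU via CuPy; pass --no-use-gpu for CPU")
args = parser.parse_args()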

applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py (5)

34-37: Address the TODO: Make normalization range configurable or data-driven.

The hardcoded normalization range (-1e-4 to 1e-4) was flagged in previous reviews. For better generalization across datasets, consider either:

  • Computing these values from data statistics (e.g., percentiles, as sketched below), or
  • Making them required configuration parameters with dataset-specific values
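
A minimal sketch of the percentile option, assuming a buffer of recent frames is available to sample from (the buffering itself is illustrative):

import cupy as cp

def estimate_normalization_range(frames, lower_pct=1.0, upper_pct=99.0):
    """Derive normalization bounds from data instead of the hardcoded +/-1e-4."""
    stacked = cp.stack(frames)  # (n_frames, I, J, K)
    return float(cp.percentile(stacked, lower_pct)), float(cp.percentile(stacked, upper_pct))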

68-71: Prefer logging over print statements.

For production code, replace print statements with proper logging (e.g., logging.info() or logger.info()) to enable configurable log levels and better observability.

🔎 Proposed fix

Add at the top of the file:

import logging
logger = logging.getLogger(__name__)

Then replace the print:

-            print(
-                f"VoxelStreamToVolume: Loaded mask from {self.mask_nifti_path}, "
-                f"shape: {self.mask_voxel_raw.shape}, values: {np.unique(self.mask_voxel_raw)}"
-            )
+            logger.info(
+                f"Loaded mask from {self.mask_nifti_path}, "
+                f"shape: {self.mask_voxel_raw.shape}, values: {np.unique(self.mask_voxel_raw)}"
+            )

111-111: Prefer logging over print statements.

Similar to the start method, use proper logging instead of print statements for production code.

🔎 Proposed fix
-            print("VoxelStreamToVolume: Received affine matrix")
+            logger.info("Received affine matrix")

120-123: Document or extract the magic normalization constants.

The values -99 and 99 appear as magic numbers with a vague comment about "adding a buffer to avoid edge case". A previous review flagged this as unclear. Either:

  • Define these as named constants with detailed documentation explaining the edge case, or
  • Make them configurable parameters
🔎 Proposed fix - Option 1: Named constants

At the class level:

# Buffer range to avoid numerical edge cases in the renderer's transfer function
_NORMALIZE_MIN = -99
_NORMALIZE_MAX = 99

Then use:

-            # Note: set to -99 to 99 to add a buffer avoiding edge case.
             hb_voxel_normalized = self._normalize_and_process_activated_voxels(
-                hb_voxel, normalize_min_value=-99, normalize_max_value=99
+                hb_voxel, normalize_min_value=self._NORMALIZE_MIN, normalize_max_value=self._NORMALIZE_MAX
             )

Option 2: Configuration parameters
Add to __init__:

self.normalize_min = kwargs.pop("normalize_min", -99)
self.normalize_max = kwargs.pop("normalize_max", 99)

194-200: Consider using a named constant for the sentinel value.

The magic number 4 is used as a sentinel to indicate an uninitialized axis. For improved readability, consider defining a named constant like AXIS_UNSET = 4.

🔎 Proposed fix

At the method level or class level:

AXIS_UNSET = 4  # Sentinel value for uninitialized axis

Then use:

-        rl_axis = 4
-        is_axis = 4
-        pa_axis = 4
+        rl_axis = AXIS_UNSET
+        is_axis = AXIS_UNSET
+        pa_axis = AXIS_UNSET
         
         # ...
         
-        if 4 in [rl_axis, is_axis, pa_axis]:
+        if AXIS_UNSET in [rl_axis, is_axis, pa_axis]:
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between dfb9684 and c84ed9c.

📒 Files selected for processing (10)
  • applications/bci_visualization/CMakeLists.txt
  • applications/bci_visualization/Dockerfile
  • applications/bci_visualization/bci_visualization.py
  • applications/bci_visualization/bci_visualization.yaml
  • applications/bci_visualization/metadata.json
  • applications/bci_visualization/operators/color_buffer_passthrough/color_buffer_passthrough.hpp
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/requirements.txt
  • operators/volume_renderer/dataset.cpp
  • operators/volume_renderer/volume_renderer.cpp
✅ Files skipped from review due to trivial changes (1)
  • applications/bci_visualization/requirements.txt
🚧 Files skipped from review as they are similar to previous changes (1)
  • operators/volume_renderer/volume_renderer.cpp
🧰 Additional context used
📓 Path-based instructions (7)
**/metadata.json

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

When reviewing PRs that modify or add metadata.json files, verify that the category (first tag in the tags array) matches one of the approved categories: Benchmarking, Camera, Computer Vision and Perception, Converter, Deployment, Development, Extended Reality, Healthcare AI, Image Processing, Inference, Interoperability, Medical Imaging, Natural Language and Conversational AI, Networking and Distributed Computing, Optimization, Quantum Computing, Rendering, Robotics, Scheduler, Signal Processing, Streaming, Threading, Video, Video Capture, Visualization, XR

Files:

  • applications/bci_visualization/metadata.json
**/operators/**/*.{py,cpp,hpp}

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Class names for operators should use TitleCase with 'Op' suffix (e.g., 'AdaptiveThresholdingOp')

Files:

  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • operators/volume_renderer/dataset.cpp
  • applications/bci_visualization/operators/color_buffer_passthrough/color_buffer_passthrough.hpp
**/operators/**/*.{py,cpp}

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Operator filename should match the directory name with appropriate extension (e.g., 'adaptive_thresholding.py')

Files:

  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • operators/volume_renderer/dataset.cpp
**/operators/**/*.py

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Python operators must include unit tests in the same directory using pytest framework

Files:

  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
**/*.{py,cpp,hpp}

📄 CodeRabbit inference engine (CONTRIBUTING.md)

**/*.{py,cpp,hpp}: Code must adhere to Holoscan SDK coding standards including style compliance, descriptive naming, and minimal abbreviations
Code must include appropriate error handling and validation for all operations
All code must pass linting checks using './holohub lint' before submission
All code contributions must use clear, English descriptive names for functions and variables
Code must include inline comments for complex logic and clear documentation of functionality

Files:

  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/bci_visualization.py
  • operators/volume_renderer/dataset.cpp
  • applications/bci_visualization/operators/color_buffer_passthrough/color_buffer_passthrough.hpp
**/applications/**/CMakeLists.txt

📄 CodeRabbit inference engine (CONTRIBUTING.md)

**/applications/**/CMakeLists.txt: CMakeLists.txt for applications must use 'add_holohub_application()' with optional 'DEPENDS OPERATORS' for dependent operators
Applications should include a testing section in CMakeLists.txt for functional testing using CTest

Files:

  • applications/bci_visualization/CMakeLists.txt
**/operators/**/*.cpp

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Operators with C++ implementations must include CMakeLists.txt for build system integration

Files:

  • operators/volume_renderer/dataset.cpp
🧠 Learnings (23)
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/(@(operators|applications|workflows|gxf_extensions|tutorials))/**/metadata.json : Every application, operator, workflow, and extension must include a 'metadata.json' file following the appropriate schema

Applied to files:

  • applications/bci_visualization/metadata.json
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/(@(operators|applications|workflows|gxf_extensions|tutorials))/**/metadata.json : metadata.json must include 'holoscan_sdk' with 'minimum_required_version' and 'tested_versions' fields

Applied to files:

  • applications/bci_visualization/metadata.json
📚 Learning: 2025-11-24T16:27:43.600Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T16:27:43.600Z
Learning: Applies to **/metadata.json : When reviewing PRs that modify or add `metadata.json` files, verify that the **category** (first tag in the `tags` array) matches one of the approved categories: Benchmarking, Camera, Computer Vision and Perception, Converter, Deployment, Development, Extended Reality, Healthcare AI, Image Processing, Inference, Interoperability, Medical Imaging, Natural Language and Conversational AI, Networking and Distributed Computing, Optimization, Quantum Computing, Rendering, Robotics, Scheduler, Signal Processing, Streaming, Threading, Video, Video Capture, Visualization, XR

Applied to files:

  • applications/bci_visualization/metadata.json
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/(@(operators|applications|workflows|gxf_extensions|tutorials))/**/metadata.json : metadata.json must include 'language' field specifying 'C++', 'Python', or 'GXF'

Applied to files:

  • applications/bci_visualization/metadata.json
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/*.{py,cpp} : Operator filename should match the directory name with appropriate extension (e.g., 'adaptive_thresholding.py')

Applied to files:

  • applications/bci_visualization/Dockerfile
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/ : Directory names for operators should use snake_case (e.g., 'adaptive_thresholding')

Applied to files:

  • applications/bci_visualization/Dockerfile
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/*.py : Python operators must include unit tests in the same directory using pytest framework

Applied to files:

  • applications/bci_visualization/Dockerfile
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/*.{py,cpp,hpp} : Class names for operators should use TitleCase with 'Op' suffix (e.g., 'AdaptiveThresholdingOp')

Applied to files:

  • applications/bci_visualization/Dockerfile
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/CMakeLists.txt : CMakeLists.txt for operators must use 'add_holohub_operator()' with optional 'DEPENDS EXTENSIONS' for dependent extensions

Applied to files:

  • applications/bci_visualization/Dockerfile
  • applications/bci_visualization/CMakeLists.txt
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Contributions must use consistent directory structures based on their type (operators, applications, workflows, extensions, tutorials, packages)

Applied to files:

  • applications/bci_visualization/Dockerfile
📚 Learning: 2025-10-22T16:53:45.393Z
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: operators/video_streaming/streaming_client_enhanced/python/CMakeLists.txt:16-24
Timestamp: 2025-10-22T16:53:45.393Z
Learning: The pybind11_add_holohub_module CMake macro in cmake/pybind11_add_holohub_module.cmake encapsulates all pybind11 setup internally, including finding pybind11, linking against holoscan::pybind11 conditionally, and linking the C++ operator target. Operator Python bindings in holohub should only call this macro without additional pybind11 setup.

Applied to files:

  • applications/bci_visualization/Dockerfile
  • applications/bci_visualization/CMakeLists.txt
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/(@(operators|applications|workflows))/**/metadata.json : Operators, applications, and workflows should use the '<holohub_data_dir>' placeholder in metadata.json 'run' command for referencing data directory

Applied to files:

  • applications/bci_visualization/bci_visualization.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/applications/**/CMakeLists.txt : CMakeLists.txt for applications must use 'add_holohub_application()' with optional 'DEPENDS OPERATORS' for dependent operators

Applied to files:

  • applications/bci_visualization/CMakeLists.txt
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/pkg/CMakeLists.txt : CMakeLists.txt for packages must use 'add_holohub_package()' function with optional APPLICATIONS and OPERATORS dependencies

Applied to files:

  • applications/bci_visualization/CMakeLists.txt
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/workflows/**/CMakeLists.txt : CMakeLists.txt for workflows must use 'add_holohub_application()' with optional 'DEPENDS OPERATORS' for dependent operators

Applied to files:

  • applications/bci_visualization/CMakeLists.txt
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/(@(applications|workflows))/**/CMakeLists.txt : Applications and workflows must include CMakeLists.txt for build system integration

Applied to files:

  • applications/bci_visualization/CMakeLists.txt
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/pkg/**/CMakeLists.txt : CMakeLists.txt for packages must use 'holohub_configure_deb()' with required NAME, DESCRIPTION, VERSION, VENDOR, CONTACT and optional SECTION, PRIORITY parameters

Applied to files:

  • applications/bci_visualization/CMakeLists.txt
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/*.cpp : Operators with C++ implementations must include CMakeLists.txt for build system integration

Applied to files:

  • applications/bci_visualization/CMakeLists.txt
📚 Learning: 2025-11-19T04:58:30.227Z
Learnt from: finalelement
Repo: nvidia-holoscan/holohub PR: 1237
File: applications/surgical_scene_recon/training/utils/sh_utils.py:1-23
Timestamp: 2025-11-19T04:58:30.227Z
Learning: When reviewing files that contain third-party code with original copyright notices (e.g., from PlenOctree, other open-source projects), do not suggest modifying the original copyright years or attribution. The original copyright notice must be preserved exactly as required by redistribution licenses (BSD, MIT, Apache, etc.). If substantial modifications are made by the current project, a separate copyright line can be added, but the original must remain unchanged.

Applied to files:

  • applications/bci_visualization/CMakeLists.txt
  • applications/bci_visualization/operators/color_buffer_passthrough/color_buffer_passthrough.hpp
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/gxf_extensions/**/CMakeLists.txt : CMakeLists.txt for GXF extensions must use 'add_holohub_extension()' function

Applied to files:

  • applications/bci_visualization/CMakeLists.txt
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/applications/**/CMakeLists.txt : Applications should include a testing section in CMakeLists.txt for functional testing using CTest

Applied to files:

  • applications/bci_visualization/CMakeLists.txt
📚 Learning: 2025-11-17T06:32:45.908Z
Learnt from: AndreasHeumann
Repo: nvidia-holoscan/holohub PR: 1220
File: applications/pipeline_visualization/cpp/create_tensor.cpp:64-87
Timestamp: 2025-11-17T06:32:45.908Z
Learning: In CUDA, when `cudaMemcpyAsync` is used with pageable (non-pinned) host memory, the operation is synchronous with respect to the host thread because the runtime must prevent memory modification during transfer. Only pinned (page-locked) memory allows truly asynchronous behavior. Therefore, no explicit `cudaStreamSynchronize` is required after `cudaMemcpyAsync` when the destination is pageable host memory.

Applied to files:

  • operators/volume_renderer/dataset.cpp
📚 Learning: 2025-10-20T22:19:33.520Z
Learnt from: chengronglai
Repo: nvidia-holoscan/holohub PR: 1136
File: benchmarks/realtime_threads_benchmarking/realtime_thread_benchmark.cpp:245-251
Timestamp: 2025-10-20T22:19:33.520Z
Learning: In the Holoscan framework, operators added via add_operator() can execute without explicitly attaching a scheduling condition - they will still be scheduled and their compute() method will be invoked by the runtime.

Applied to files:

  • applications/bci_visualization/operators/color_buffer_passthrough/color_buffer_passthrough.hpp
🧬 Code graph analysis (1)
applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py (3)
applications/bci_visualization/operators/reconstruction/build_rhs_operator.py (2)
  • setup (56-58)
  • compute (99-191)
applications/bci_visualization/operators/reconstruction/normalize_operator.py (2)
  • setup (47-49)
  • compute (51-81)
applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py (2)
  • setup (113-116)
  • compute (118-152)
🪛 Ruff (0.14.10)
applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py

59-59: Avoid specifying long messages outside the exception class

(TRY003)


73-75: Avoid specifying long messages outside the exception class

(TRY003)


92-92: Unused method argument: context

(ARG002)


99-99: Prefer TypeError exception for invalid type

(TRY004)


99-99: Avoid specifying long messages outside the exception class

(TRY003)


101-101: Avoid specifying long messages outside the exception class

(TRY003)


115-115: Avoid specifying long messages outside the exception class

(TRY003)


227-227: Avoid specifying long messages outside the exception class

(TRY003)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
  • GitHub Check: check-cli-py (3.11)
  • GitHub Check: check-cli-py (3.13)
  • GitHub Check: check-cli-py (3.12)
  • GitHub Check: check-metadata
  • GitHub Check: check-cli-py (3.10)
  • GitHub Check: build-and-deploy
  • GitHub Check: check-cli-ctest
  • GitHub Check: compliance-check
  • GitHub Check: check-lint
  • GitHub Check: check-urls
  • GitHub Check: Greptile Review
🔇 Additional comments (12)
applications/bci_visualization/CMakeLists.txt (2)

16-22: Use add_holohub_application() instead of manual CMake configuration.

Application CMakeLists.txt files must use the add_holohub_application() function with optional DEPENDS OPERATORS for dependent operators, rather than manually defining the project setup.

Based on coding guidelines.

⛔ Skipped due to learnings

1-1: Update copyright year to 2026.

The copyright header must include the current year (2026).

⛔ Skipped due to learnings
applications/bci_visualization/Dockerfile (1)

1-27: LGTM!

The Dockerfile structure follows best practices with parameterized base image, proper SPDX headers, and standard environment variable setup.

applications/bci_visualization/operators/color_buffer_passthrough/color_buffer_passthrough.hpp (1)

21-36: LGTM!

The operator follows Holoscan SDK conventions with proper naming (TitleCase with 'Op' suffix), clear documentation explaining its purpose for queue policy control, and appropriate method declarations.

applications/bci_visualization/bci_visualization.yaml (1)

1-9: LGTM!

The configuration file is well-structured with appropriate volume renderer settings and proper SPDX headers.

applications/bci_visualization/metadata.json (2)

1-37: LGTM!

The metadata structure correctly includes all required fields: application name, authors with affiliations, language specification, version information, Holoscan SDK requirements with minimum and tested versions, and platform support.


38-38: LGTM!

The category tag "Visualization" correctly matches one of the approved categories. As per coding guidelines.

applications/bci_visualization/bci_visualization.py (2)

38-73: LGTM!

The constructor is well-structured with clear type hints and appropriate parameter organization. Past review comments suggest moving some parameters to a YAML configuration file, which would be a good-to-have improvement for user experience.


75-196: LGTM!

The compose() method is well-organized with clear separation between reconstruction and visualization pipelines. The explicit deadlock avoidance mechanism (lines 140-148) with detailed comments demonstrates thoughtful design for multi-threaded execution.

applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py (3)

172-190: Well-documented orientation mapping.

The detailed explanation of the nibabel "to" vs. C++ "from" orientation convention directly addresses tbirdso's previous review comment requesting clarification. The mapping logic is clear and well-justified.


240-264: LGTM: Normalization preserves zero baseline.

The normalization logic correctly scales positive and negative values separately while preserving zero as the baseline, which is appropriate for visualizing activation data.
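
For readers unfamiliar with the pattern, a simplified sketch of sign-split scaling that keeps zero fixed (illustrative, not the operator's exact code):

import cupy as cp

def normalize_preserving_zero(v, out_min=-99.0, out_max=99.0):
    out = cp.zeros_like(v)
    pos, neg = v > 0, v < 0
    if bool(pos.any()):
        out[pos] = v[pos] / v.max() * out_max    # positives scaled into (0, out_max]
    if bool(neg.any()):
        out[neg] = v[neg] / -v.min() * -out_min  # negatives scaled into [out_min, 0)
    return out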


266-290: LGTM: Clean GPU-based resampling implementation.

The affine transformation logic correctly computes the mapping matrix and leverages CuPy's GPU-accelerated resampling for performance.
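
A toy sketch of the technique with synthetic values (cupyx.scipy.ndimage.affine_transform maps output coordinates back to input coordinates):

import cupy as cp
from cupyx.scipy.ndimage import affine_transform

volume = cp.random.rand(8, 8, 8).astype(cp.float32)
A_in = cp.eye(4)                                   # source voxel -> world affine
A_out = cp.diag(cp.asarray([2.0, 2.0, 2.0, 1.0]))  # target voxel -> world (coarser grid)
M = cp.linalg.inv(A_in) @ A_out                    # target voxel -> source voxel
resampled = affine_transform(volume, M[:3, :3], offset=M[:3, 3],
                             output_shape=(4, 4, 4), order=1)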

Comment on lines +40 to +41
"requirements": {
"python": ["numpy", "cupy"]

⚠️ Potential issue | 🟡 Minor

Sync Python requirements with requirements.txt.

The requirements.python field lists only numpy and cupy, but the application's requirements.txt file includes additional dependencies: nibabel, scipy, and h5py. This discrepancy may confuse users about the actual dependencies required to run the application.

🔎 Proposed fix
 "requirements": {
-  "python": ["numpy", "cupy"]
+  "python": ["nibabel", "numpy", "scipy", "h5py", "cupy"]
 },
🤖 Prompt for AI Agents
In @applications/bci_visualization/metadata.json around lines 40 - 41, The
metadata.json's requirements.python currently lists only "numpy" and "cupy"
which is out of sync with requirements.txt; update the "requirements.python"
array in metadata.json to include the additional packages found in
requirements.txt (at least "nibabel", "scipy", and "h5py") so the field matches
the application's actual Python dependencies.

@greptile-apps greptile-apps bot left a comment

Additional Comments (15)

  1. applications/bci_visualization/Dockerfile, line 32 (link)

    style: Remove trailing whitespace


  2. applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt, line 43-46 (link)

    style: trailing whitespace at end of file


  3. applications/bci_visualization/operators/reconstruction/solver_operator.py, line 28 (link)

    style: The use_gpu parameter is stored but never used - the operator always uses GPU (CuPy) regardless of this setting. Is this parameter intended for future CPU fallback functionality?


  4. applications/bci_visualization/metadata.json, line 41 (link)

    logic: Missing key Python dependencies - the application uses scipy, nibabel, and h5py but only lists numpy and cupy

  5. applications/bci_visualization/README.md, line 44 (link)

    syntax: typo: 'GRAY MATER' should be 'GRAY MATTER'

  6. applications/bci_visualization/processing/reconstruction/hbo.py, line 33 (link)

    style: File handle not explicitly closed - could cause resource leak with many CSV reads


  7. applications/bci_visualization/processing/reconstruction/reg_inv.py, line 9 (link)

    style: This constant is defined but never used in the implementation. Should this constant be used for condition number validation, or can it be removed?


  8. applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py, line 277 (link)

    style: data_gpu parameter shadows input, creates unnecessary GPU memory copy


  9. applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py, line 63-64 (link)

    syntax: Typo: 'np.nsarray' should be 'np.ndarray', 'defineg' should be 'defining'

  10. applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py, line 106 (link)

    style: _cached_affine is initialized but never used in the implementation


  11. applications/bci_visualization/operators/reconstruction/build_rhs_operator.py, line 115 (link)

    logic: In-place log operation on moments[0] modifies input data permanently - could cause issues if data is reused elsewhere. Is the input payload.data intended to be modified, or should this use a copy? (see the sketch after this list)

  12. applications/bci_visualization/processing/reconstruction/__init__.py, line 50-52 (link)

    style: Parameters documented as 'optional' but function requires all of them - remove 'optional' from docstring

  13. applications/bci_visualization/processing/reconstruction/__init__.py, line 58-60 (link)

    style: Section header should be 'Returns' not 'returns'

  14. applications/bci_visualization/operators/reconstruction/normalize_operator.py, line 136-138 (link)

    style: mixing CuPy array with Python max() function could cause GPU-CPU transfer - use xp.maximum() instead

  15. applications/bci_visualization/streams/snirf.py, line 148 (link)

    logic: If unique_channel_idxs contains -1 values from failed lookups, fancy-index assignment silently writes to the last channel instead of raising an IndexError, corrupting the output
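
Regarding item 11, a sketch of the defensive-copy option (array shapes are synthetic; names mirror the comment, not the operator code):

import cupy as cp

moments = cp.random.rand(3, 16) + 1e-6  # stand-in for payload.data

# Out-of-place: the caller's buffer stays untouched.
m0_log = cp.log(moments[0])

# Or copy once up front if in-place updates are wanted downstream.
moments_copy = moments.copy()
cp.log(moments_copy[0], out=moments_copy[0])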

38 files reviewed, 15 comments


@bhashemian bhashemian (Member) left a comment

Thanks @mimiliaogo for submitting this PR. I reviewed it from the user experience perspective and made a few comments to address.


Would it be possible to create an explicit cpp folder please?


## Running
1. Download data
* Please download data from [here](https://drive.google.com/drive/folders/1RpQ6UzjIZAr90FdW9VIbtTFYR6-up7w2) and put everything under `data/bci_visualization`.

The download link is not accessible to the public. Can we host the data somewhere else?
We cannot give public access to the content on NVIDIA GDrive.

@greptile-apps greptile-apps bot left a comment

38 files reviewed, 21 comments


# Install Python dependencies
COPY applications/bci_visualization/requirements.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt --no-cache-dir

No newline at end of file

style: Trailing whitespace should be removed



channel_mask[jacobian_index] = True

assert np.any(channel_mask)

style: Using assert for runtime validation can cause issues in production when Python is run with -O optimization flag, which removes assert statements. Consider using explicit validation with a proper exception instead.

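A sketch of the explicit check for this site (the mask construction is a stand-in for the real mapping code):

import numpy as np

channel_mask = np.zeros(16, dtype=bool)  # populated from the channel mapping in the real code
channel_mask[3] = True
if not np.any(channel_mask):
    raise ValueError("no SNIRF channels could be mapped to Jacobian channels")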

nibabel==5.3.2
numpy==2.3.3
scipy==1.16.3
h5py==3.15.1 No newline at end of file

style: File should end with a newline character

logger.info("CuPy is not available.")


def get_array_module(use_gpu: bool = False) -> tuple[ModuleType, bool]:

logic: Parameter default use_gpu: bool = False conflicts with the GPU-only enforcement - consider removing the default or changing it to True.

Suggested change
def get_array_module(use_gpu: bool = False) -> tuple[ModuleType, bool]:
def get_array_module(use_gpu: bool = True) -> tuple[ModuleType, bool]:

Is there a reason to keep the default as False when the function always raises an exception for that value?

Comment on lines +48 to +50
sample = next(self._iter, None)
if sample is None:
raise StopIteration("No more samples available in the stream.")

logic: Raising StopIteration will terminate the entire Holoscan application. Consider using a more graceful approach like returning early or emitting a special end-of-stream message.
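
One hedged sketch of the graceful route, using a BooleanCondition attached to the source operator (the condition wiring and class name are assumptions for illustration, not the PR's current code):

from holoscan.conditions import BooleanCondition
from holoscan.core import Operator, OperatorSpec

class SnirfSourceOp(Operator):  # illustrative stand-in for the stream reader
    def __init__(self, fragment, alive_condition: BooleanCondition, samples, **kwargs):
        self._stream_alive = alive_condition
        self._iter = iter(samples)
        super().__init__(fragment, alive_condition, **kwargs)

    def setup(self, spec: OperatorSpec):
        spec.output("out")

    def compute(self, op_input, op_output, context):
        sample = next(self._iter, None)
        if sample is None:
            # Flip the condition off instead of raising StopIteration,
            # so the scheduler retires this operator without killing the app.
            self._stream_alive.disable_tick()
            return
        op_output.emit(sample, "out")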

Comment on lines 136 to 138
self._max_rhs[idx_feature, wavelength_idx] = max(
self._max_rhs[idx_feature, wavelength_idx], batch_feature_max
)

logic: Using Python's max() instead of xp.maximum() forces GPU-to-CPU transfer, breaking GPU-only processing

Suggested change
self._max_rhs[idx_feature, wavelength_idx] = max(
self._max_rhs[idx_feature, wavelength_idx], batch_feature_max
)
self._max_rhs[idx_feature, wavelength_idx] = xp.maximum(
self._max_rhs[idx_feature, wavelength_idx], batch_feature_max
)

Comment on lines 100 to 101
if hb_voxel.ndim != 3:
raise ValueError(f"VoxelStreamToVolume: Invalid voxel data shape: {hb_voxel.shape}, expected 3D")

logic: docstring indicates 4D input (I,J,K,n_channels) but validation expects 3D

Suggested change
if hb_voxel.ndim != 3:
raise ValueError(f"VoxelStreamToVolume: Invalid voxel data shape: {hb_voxel.shape}, expected 3D")
if hb_voxel.ndim != 4:
raise ValueError(f"VoxelStreamToVolume: Invalid voxel data shape: {hb_voxel.shape}, expected 4D (I,J,K,n_channels)")

Should this accept 4D input with channels or is the docstring incorrect?

Comment on lines +50 to +56
mega_jacobian_path : pathlib.Path | str, optional
Path to the mega Jacobian (.npy/.npz). Defaults to the module constant.
channel_mapping_path : pathlib.Path | str, optional
Path to the channel mapping JSON. Defaults to the module constant.
voxel_info_dir : pathlib.Path | str, optional
Directory containing voxel info files (mua, musp, idxs_significant_voxels, ijk, xyz, wavelengths).
Defaults to the module constant.

style: Documentation mentions 'optional' and 'Defaults to the module constant' but all parameters are required and there are no module constants defined

Suggested change

```diff
-    mega_jacobian_path : pathlib.Path | str, optional
-        Path to the mega Jacobian (.npy/.npz). Defaults to the module constant.
-    channel_mapping_path : pathlib.Path | str, optional
-        Path to the channel mapping JSON. Defaults to the module constant.
-    voxel_info_dir : pathlib.Path | str, optional
-        Directory containing voxel info files (mua, musp, idxs_significant_voxels, ijk, xyz, wavelengths).
-        Defaults to the module constant.
+    mega_jacobian_path : pathlib.Path | str
+        Path to the mega Jacobian (.npy/.npz).
+    channel_mapping_path : pathlib.Path | str
+        Path to the channel mapping JSON.
+    voxel_info_dir : pathlib.Path | str
+        Directory containing voxel info files (mua, musp, idxs_significant_voxels, ijk, xyz, wavelengths).
```


Comment on lines +84 to +89
```python
_mua = np.load(pathlib.Path(mua_path))
_musp = np.load(pathlib.Path(musp_path))
_idxs_significant_voxels = np.load(pathlib.Path(idxs_significant_voxels_path))
_ijk = np.load(pathlib.Path(ijk_path))
_xyz = np.load(pathlib.Path(xyz_path))
_wavelengths = np.load(pathlib.Path(wavelengths_path))
```

style: Redundant pathlib.Path() wrapping - variables are already Path objects from lines 76-82


Comment on lines +147 to +148
```python
unique_channel_idxs = channel_idxs[moment][wavelength]["unique_channel_idxs"]
to_send[moment, unique_channel_idxs, wavelength] = sample[snirf_channel_idxs]
```

logic: Any -1 left in unique_channel_idxs by a failed lookup is treated by NumPy as the last index, so the assignment silently writes to the wrong channel rather than failing loudly.
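A defensive sketch, assuming the two index arrays are parallel and -1 marks a failed lookup (names follow the snippet above):

```python
# Mask out failed lookups so the fancy-indexed write never touches
# index -1 (which NumPy would wrap to the last channel).
valid = unique_channel_idxs != -1
to_send[moment, unique_channel_idxs[valid], wavelength] = sample[
    snirf_channel_idxs[valid]
]
```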

@greptile-apps greptile-apps bot left a comment

Greptile Overview

Greptile Summary

This PR introduces a new BCI visualization application, developed with Kernel, that reconstructs and renders HbO brain activity in real time from fNIRS data.

Major Changes

New Application Components:

  • Complete reconstruction pipeline: SNIRF stream reader → RHS builder → normalization → regularized solver → voxel conversion → volume rendering
  • Real-time visualization with interactive 3D volume renderer and camera controls
  • Multi-threaded execution using EventBasedScheduler to decouple reconstruction and rendering pipelines
  • GPU-accelerated processing with CuPy for reconstruction and CUDA streams throughout

Volume Renderer Enhancements:

  • Added CUDA stream handling for async GPU transfers (cudaMemcpy3DAsync)
  • Implemented camera pose caching to avoid resets when no input received
  • Fixed memory leak by calling ResetVolume() before loading new volumes
  • Added check to skip rendering when no density volume available

Queue Management:

  • New ColorBufferPassthroughOp with POP queue policy prevents frame stacking and deadlocks in multi-threaded mode
  • Output port condition set to NONE on volume renderer to remove backpressure

Issues Found

  • README has incomplete placeholder text that needs completion
  • Incorrect wget flag syntax (-0 should be -O)
  • Copyright year 2026 in several files should be 2025

Confidence Score: 4/5

  • This PR is mostly safe to merge with minor documentation fixes needed.
  • The implementation is solid with well-structured reconstruction and visualization pipelines, proper CUDA stream handling, and important bug fixes (memory leak, camera pose caching). The multi-threaded architecture with queue management is correctly implemented. However, the score is 4 (not 5) due to incomplete documentation (placeholder text in README) and minor style issues (copyright years, wget syntax) that should be addressed before merging.
  • README.md needs completion of placeholder text and wget syntax fix. Several Python files have incorrect copyright year (2026 instead of 2025).

Important Files Changed

File Analysis

| Filename | Score | Overview |
| --- | --- | --- |
| applications/bci_visualization/bci_visualization.py | 5/5 | Main application file connecting reconstruction and visualization pipelines with proper threading configuration. Clean implementation with good operator composition. |
| applications/bci_visualization/README.md | 3/5 | Comprehensive documentation with incomplete placeholder text and incorrect wget flag syntax that need fixing. |
| operators/volume_renderer/volume_renderer.cpp | 5/5 | Enhanced with CUDA stream handling, camera pose caching, memory leak fix, and async GPU transfers for improved performance and stability. |
| operators/volume_renderer/dataset.cpp | 5/5 | Added CUDA stream parameter and async memory operations, plus ResetVolume method to prevent memory leaks. |
| applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py | 5/5 | Converts sparse voxel data to dense volumes with adaptive normalization, resampling, and proper orientation handling. |
| applications/bci_visualization/operators/reconstruction/solver_operator.py | 4/5 | Implements regularized inverse solver for reconstruction. Has incorrect copyright year (2026). |
| applications/bci_visualization/operators/reconstruction/build_rhs_operator.py | 4/5 | Converts moments to RHS/Jacobian batches with baseline subtraction and GPU caching. Has incorrect copyright year (2026). |
| applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp | 5/5 | Simple passthrough operator with POP queue policy to prevent frame stacking and avoid deadlocks in multi-threaded mode. |
| applications/bci_visualization/streams/snirf.py | 4/5 | SNIRF file reader for streaming brain activity data. Has incorrect copyright year (2026). |

Sequence Diagram

```mermaid
sequenceDiagram
    participant SNIRF as SNIRF Stream
    participant Stream as Stream Operator
    participant BuildRHS as Build RHS
    participant Normalize as Normalize
    participant Solver as Regularized Solver
    participant Convert as Convert to Voxels
    participant VoxelToVol as Voxel to Volume
    participant VolRenderer as Volume Renderer
    participant Passthrough as Color Buffer Passthrough
    participant Holoviz as HolovizOp
    
    Note over SNIRF,Stream: Reconstruction Pipeline (Thread 1)
    SNIRF->>Stream: Read SNIRF data
    Stream->>BuildRHS: moments
    BuildRHS->>BuildRHS: Apply baseline subtraction
    BuildRHS->>BuildRHS: Map channels to Jacobian
    BuildRHS->>Normalize: batch (RHS + Jacobian)
    Normalize->>Normalize: Normalize for stability
    Normalize->>Solver: normalized batch
    Solver->>Solver: Solve regularized inverse
    Solver->>Convert: result (mua, musp)
    Convert->>Convert: Apply extinction coefficients
    Convert->>VoxelToVol: hb_voxel_data + affine_4x4
    
    Note over VoxelToVol,Holoviz: Visualization Pipeline (Thread 2)
    VoxelToVol->>VoxelToVol: Update running statistics
    VoxelToVol->>VoxelToVol: Normalize with adaptive scaling
    VoxelToVol->>VoxelToVol: Resample to mask dimensions
    VoxelToVol->>VolRenderer: volume + mask + spacing/orientation
    VolRenderer->>VolRenderer: Render with ClaraViz
    VolRenderer->>Passthrough: color_buffer_out
    Passthrough->>Holoviz: color_buffer_out (POP policy)
    Holoviz->>VolRenderer: camera_pose feedback
    VolRenderer->>VolRenderer: Cache camera pose
```

Comment on lines 47 to 48
In this Holohub application, we demonstrate a real-time source reconstruction pipeline that runs on a Jetson Thor at the native framerate of the Kernel Flow system (4.75 Hz) and visualizes the data in 3D using Holoviz. We did this by X, Y,
and Z (@Gabe or @Mimi to add high-level).

Incomplete sentence with placeholder text (@Gabe or @Mimi to add high-level) needs to be completed before merging.

```diff
@@ -0,0 +1,97 @@
"""
SPDX-FileCopyrightText: Copyright (c) 2026 Kernel.
```

Copyright year is 2026, should be 2025.

```diff
@@ -0,0 +1,195 @@
"""
SPDX-FileCopyrightText: Copyright (c) 2026 Kernel.
```

Copyright year is 2026, should be 2025.

```diff
@@ -0,0 +1,155 @@
"""
SPDX-FileCopyrightText: Copyright (c) 2026 Kernel.
```

Copyright year is 2026, should be 2025.

- Downloaded `.snirf` files for use with the included data replayer. Example data can be found on [OpenNeuro](https://openneuro.org/datasets/ds006545) and copied locally to be run through the replayer.

```bash
wget -0 data/examples/data.snirf "https://s3.amazonaws.com/openneuro.org/ds006545/sub-bed8fefe/ses-1/nirs/sub-bed8fefe_ses-1_task-audio_nirs.snirf?versionId=sYFJNjlNNlf8xVOMsIde5hpWZE2clsiu"
```

Flag should be -O (capital letter O for output), not -0 (zero)

Suggested change

```diff
-wget -0 data/examples/data.snirf "https://s3.amazonaws.com/openneuro.org/ds006545/sub-bed8fefe/ses-1/nirs/sub-bed8fefe_ses-1_task-audio_nirs.snirf?versionId=sYFJNjlNNlf8xVOMsIde5hpWZE2clsiu"
+wget -O data/examples/data.snirf "https://s3.amazonaws.com/openneuro.org/ds006545/sub-bed8fefe/ses-1/nirs/sub-bed8fefe_ses-1_task-audio_nirs.snirf?versionId=sYFJNjlNNlf8xVOMsIde5hpWZE2clsiu"
```

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 17

🤖 Fix all issues with AI agents
In @applications/bci_visualization/bci_visualization.py:
- Line 16: The import statement brings in MultiThreadScheduler from
holoscan.schedulers but it is unused and triggers an F401 lint error; remove
MultiThreadScheduler from the import so only EventBasedScheduler is imported
(i.e., change the import line to import EventBasedScheduler alone) to eliminate
the unused-symbol warning.
- Around line 1-6: Update the copyright header in the module docstring to
include 2026 (e.g., change "2025" to "2025-2026" or add 2026) so the
SPDX-FileCopyrightText reflects the correct year range; modify the top-of-file
docstring in bci_visualization.py where the current "2025" appears.

In
@applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt:
- Line 1: Update the SPDX copyright header year in the CMakeLists.txt by
changing the existing "2025" to "2026" so it matches the .cpp file and resolves
the pipeline failure; locate the line starting with "# SPDX-FileCopyrightText:"
and replace the year only.

In
@applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp:
- Line 1: Update the SPDX copyright header at the top of
color_buffer_passthrough.cpp to include the current year 2026 (replace "2025"
with "2026"); locate the file header comment (the /* SPDX-... block) and modify
the year range/text so the header reads the correct year, then save and re-run
the pipeline.
- Around line 30-38: The compute method in ColorBufferPassthroughOp retrieves
CUDA streams via input.receive_cuda_streams("color_buffer_in") but never uses
them; either propagate the streams to the output or remove the call. To fix,
after successfully getting color_message.value() and before returning, call
output.emit_cuda_streams("color_buffer_out", cuda_streams) so the downstream
operator receives the same stream (keep the existing
output.emit(color_message.value(), "color_buffer_out")); alternatively, if
stream propagation is handled elsewhere, delete the
input.receive_cuda_streams("color_buffer_in") call to remove the unused
variable.

In
@applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp:
- Around line 1-16: Update the file header in color_buffer_passthrough.cpp by
changing the copyright year line that currently reads "Copyright (c) 2025 NVIDIA
CORPORATION & AFFILIATES." to include 2026 (e.g., "Copyright (c) 2025-2026
NVIDIA CORPORATION & AFFILIATES."), leaving the rest of the SPDX and license
block untouched so the SPDX-License-Identifier and license text remain
identical.

In
@applications/bci_visualization/operators/reconstruction/build_rhs_operator.py:
- Around line 111-124: When _apply_baseline returns None inside compute, you're
returning without emitting anything on the "batch" output which deadlocks
downstream (normalize_operator and solver_operator). Fix by emitting an explicit
skip/sentinel on the "batch" port whenever _apply_baseline returns None (e.g., a
small payload or metadata object like {'skip': True} or a Batch with a skip
flag) so downstream operators can detect and early-return; update
normalize_operator and solver_operator to check for that sentinel on
op_input.receive("batch") and handle it (no-op propagate or drop) instead of
blocking. Ensure the sentinel uses the same port name "batch" and a documented
field so all operators check the same symbol.
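A compact sketch of that contract, assuming a shared dict sentinel on the "batch" port; the SKIP name and helper functions are illustrative, not existing code:

```python
SKIP = {"skip": True}  # assumed shared sentinel for frames with no usable RHS

def emit_batch_or_skip(op_output, batch):
    # Upstream: always emit on "batch" so downstream receives never block.
    op_output.emit(batch if batch is not None else SKIP, "batch")

def receive_batch_or_none(op_input):
    # Downstream: translate the sentinel back into "nothing to do this tick".
    batch = op_input.receive("batch")
    if isinstance(batch, dict) and batch.get("skip"):
        return None
    return batch
```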

In
@applications/bci_visualization/operators/reconstruction/normalize_operator.py:
- Around line 68-71: The check for "if result is None" is dead because
_normalize_batch always returns a (systems, num_absorbers) tuple; implement the
intended skip by having _normalize_batch detect when the computed max_rhs (or
equivalent per-frame RHS max) is all zeros and return None in that case,
otherwise return the existing (systems, num_absorbers) tuple; then keep the
caller logic that checks for None before proceeding
(self._normalize_batch(batch) -> result, handle None to skip, otherwise
unpack/use the tuple). Ensure the detection uses the same internal variable name
(e.g., max_rhs) and preserves existing return shape when not skipping.
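A sketch of that early exit, assuming batch.rhs is a CuPy array and the remainder of the method is factored into a hypothetical _build_systems helper:

```python
import cupy as cp

def _normalize_batch(self, batch):
    max_rhs = cp.max(cp.abs(batch.rhs))
    if max_rhs == 0:
        # Nothing measurable in this frame; tell the caller to skip it.
        return None
    # _build_systems stands in for the existing normalization body.
    systems, num_absorbers = self._build_systems(batch, max_rhs)
    return systems, num_absorbers
```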

In
@applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py:
- Around line 1-6: Update the copyright header year from 2025 to 2026 in the
top-of-file docstring for the VoxelStreamToVolume operator; specifically change
"SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES." to
use 2026 in voxel_stream_to_volume.py so the file header reflects the current
year.
- Around line 16-167: Add pytest unit tests for the VoxelStreamToVolumeOp class
in test_voxel_stream_to_volume.py. Cover: initialization and start()
(mask_nifti_path handling and the start() error path); setup() port
configuration; and compute() behavior, including receiving affine_4x4 and
hb_voxel_data, validating type/shape errors, exercising
_derive_orientation_from_affine, _update_running_statistics,
_normalize_and_process_activated_voxels, and _cupy_resample (mocking GPU ops
where needed), and emitting the mask and volume outputs. Use fixtures from
conftest.py to provide a fake fragment, an OperatorSpec context, a synthetic
mask NIfTI, CuPy arrays, and a CUDA stream. Include edge cases (missing
affine, non-CuPy hb_voxel, wrong ndim) and assert that op_output.emit is
called with the expected keys and types. A skeletal conftest sketch follows
this item.
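As referenced above, a skeletal conftest.py along these lines could seed such a suite; shapes and dtypes are placeholders, not the operator's real requirements:

```python
# conftest.py -- shared fixtures for the suggested operator tests.
import numpy as np
import pytest


@pytest.fixture
def affine_4x4():
    # Identity affine: voxel indices map straight to world coordinates.
    return np.eye(4, dtype=np.float32)


@pytest.fixture
def hb_voxel():
    # Synthetic 3D HbO volume with a single activated voxel.
    volume = np.zeros((8, 8, 8), dtype=np.float32)
    volume[4, 4, 4] = 1.0
    return volume
```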

In @applications/bci_visualization/README.md:
- Around line 47-48: Replace the placeholder text "@Gabe or @Mimi to add
high-level" in the README sentence describing the Holohub application with a
concise high-level summary of what was done (e.g., outline the real-time source
reconstruction approach, key components used on Jetson Thor, and the role of
Holoviz for 3D visualization), ensuring the final sentence reads naturally and
completes the description of the pipeline and contributions.
- Line 59: Fix the typo in the wget command in the README: change the flag `-0`
to the uppercase `-O` in the line starting with `wget -0
data/examples/data.snirf "https://s3.amazonaws.com/...` so the output filename
is correctly specified (ensure the rest of the command, including the output
path `data/examples/data.snirf` and the quoted URL, remains unchanged).

In @applications/bci_visualization/streams/snirf.py:
- Around line 45-52: The HDF5 file opened in start() is stored in
self._snirf_file and never closed; add a stop(self) method on the class that
checks if self._snirf_file is not None, calls its close() method, and sets
self._snirf_file = None (and optionally clears related state like
self._channels/_unique_channels); also ensure start() handles exceptions by
closing any partially opened file (or call stop() on error) to avoid leaks.
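A sketch of that cleanup method, assuming self._snirf_file holds an h5py.File opened in start():

```python
def stop(self) -> None:
    """Close the SNIRF file handle if start() left one open."""
    if self._snirf_file is not None:
        self._snirf_file.close()
        self._snirf_file = None
```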

In @applications/bci_visualization/utils/reconstruction/reg_inv.py:
- Around line 6-16: Remove the unused ModuleType import and move the cupy import
to the top-level imports: delete "from types import ModuleType" (it is unused)
and relocate "import cupy as cp" to join "import logging" and other imports
above the module-level constants and _HESSIAN_CACHE so all imports are at the
top; ensure references to _HESSIAN_CACHE and MAX_REASONABLE_COND_RATIO remain
unchanged.
🧹 Nitpick comments (25)
applications/bci_visualization/utils/reconstruction/types.py (1)

8-8: Consider using TypedDict or dataclass for the nested structure.

The deeply nested dictionary type (4 levels deep) could be more maintainable and self-documenting with TypedDict or a dataclass hierarchy. This would provide:

  • Better IDE autocomplete
  • More explicit field names
  • Runtime validation options
♻️ Example refactor with TypedDict
```python
from typing import Tuple, TypedDict

class ChannelData(TypedDict):
    coordinates: Tuple[int]

class SourceDetectorPair(TypedDict):
    # Add specific keys here based on domain knowledge
    pass

class HeadsetChannel(TypedDict):
    # Add specific keys here based on domain knowledge
    pass

class ChannelHeadsetMapping(TypedDict):
    # Add specific keys here based on domain knowledge
    pass
```
applications/bci_visualization/README.md (1)

81-81: Add language specifier to fenced code block.

Per markdownlint (MD040), fenced code blocks should have a language specified for proper syntax highlighting.

Proposed fix
````diff
-```
+```text
 data/bci_visualization/
````
applications/bci_visualization/operators/stream.py (2)

30-30: Operator class name should use 'Op' suffix.

Per coding guidelines, operator class names should use TitleCase with 'Op' suffix. Consider renaming StreamOperator to StreamOp or NirsStreamOp for consistency.


44-57: Add stop() method for resource cleanup and reconsider StopIteration usage.

Two concerns:

  1. Missing cleanup: The operator should implement a stop() method to properly close the stream when the pipeline shuts down.

  2. StopIteration handling: Raising StopIteration from compute() may not be the standard Holoscan pattern for signaling stream exhaustion. Consider using ConditionType.kNone on the output port or logging a warning and returning early without emitting.

Proposed fix for resource cleanup
     def start(self) -> None:
         self._stream.start()
         self._channels = self._stream.get_channels()
         self._iter = self._stream.stream_nirs()
 
+    def stop(self) -> None:
+        if hasattr(self._stream, 'stop'):
+            self._stream.stop()
+
     def compute(
         self, op_input: InputContext, op_output: OutputContext, context: ExecutionContext
     ) -> None:
 
         sample = next(self._iter, None)
         if sample is None:
-            raise StopIteration("No more samples available in the stream.")
+            logger.info("Stream exhausted - no more samples available")
+            return
 
         op_output.emit(SampleOutput(sample, self._channels), "samples")
applications/bci_visualization/streams/base_nirs.py (2)

13-20: Consider validating array length consistency in ChannelInfo.

The __len__ method returns only len(self.source_module), which assumes all four arrays have the same length. If they don't, this could cause subtle bugs downstream.

Consider adding validation, either in a factory method or by documenting the invariant:

Option: Add validation method
```python
class ChannelInfo(NamedTuple):
    detector_module: NDArray[np.int_]
    detector_number: NDArray[np.int_]
    source_module: NDArray[np.int_]
    source_number: NDArray[np.int_]

    def __len__(self) -> int:
        return len(self.source_module)

    def validate(self) -> None:
        """Raises ValueError if arrays have inconsistent lengths."""
        lengths = {
            len(self.detector_module),
            len(self.detector_number),
            len(self.source_module),
            len(self.source_number),
        }
        if len(lengths) != 1:
            raise ValueError(f"ChannelInfo arrays have inconsistent lengths: {lengths}")
```

23-25: Clarify intent of non-abstract start() method.

Ruff flags this as an empty method in an ABC without @abstractmethod. The current design (concrete no-op) is valid if start() is optional for subclasses. However, consider adding a docstring to clarify this is intentionally a default no-op that subclasses may override.

Proposed documentation
```diff
 class BaseNirsStream(abc.ABC):
     def start(self) -> None:
+        """Initialize the stream. Override in subclasses if setup is needed."""
         pass
```
applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt (1)

22-42: Consider using add_holohub_operator() for consistency.

Per coding guidelines, CMakeLists.txt for operators should use add_holohub_operator() with optional DEPENDS EXTENSIONS. The current manual approach works but may miss standardized build behaviors provided by the HoloHub macro.

If this operator is intentionally application-scoped and not meant to be reusable across HoloHub, the current approach is acceptable. Based on learnings, the pybind11_add_holohub_module macro is used for Python bindings, so ensure consistency with the python subdirectory.

applications/bci_visualization/operators/reconstruction/solver_operator.py (3)

32-37: Unused use_gpu parameter.

The use_gpu parameter is stored in self._use_gpu but never used - the _solve_batch method always uses CuPy/GPU operations. Either remove the parameter or implement a CPU fallback path.

Option 1: Remove unused parameter
```diff
     def __init__(
         self,
         *,
         reg: float = REG_DEFAULT,
-        use_gpu: bool = False,
         fragment: Any | None = None,
     ) -> None:
         super().__init__(fragment, name=self.__class__.__name__)
         self._reg = reg
-        self._use_gpu = use_gpu
```

67-68: Replace assert statements with explicit validation.

assert statements are removed when Python runs with optimization flags (-O). For production code, use explicit validation with proper error messages.

Proposed fix
```diff
-            assert system.rhs.ndim == 1
-            assert system.jacobian.shape[1] == num_cols_expected
+            if system.rhs.ndim != 1:
+                raise ValueError(f"Expected 1D RHS, got shape {system.rhs.shape}")
+            if system.jacobian.shape[1] != num_cols_expected:
+                raise ValueError(
+                    f"Jacobian columns {system.jacobian.shape[1]} != expected {num_cols_expected}"
+                )
```

79-79: Replace assert with explicit validation.

Same concern as above - this assertion will be skipped with -O.

Proposed fix
```diff
-            assert solution.shape == (num_cols_expected,)
+            if solution.shape != (num_cols_expected,):
+                raise ValueError(
+                    f"Solution shape {solution.shape} != expected ({num_cols_expected},)"
+                )
```
applications/bci_visualization/operators/reconstruction/normalize_operator.py (4)

43-49: Unused instance variables _use_gpu and _max_rhs.

Both self._use_gpu and self._max_rhs are initialized but never used in the class methods. The GPU path is always taken via CuPy, and _max_rhs is never set or checked.

Proposed fix - remove unused variables
```diff
     def __init__(
         self,
         *,
         fragment: Any | None = None,
-        use_gpu: bool = False,
         use_hard_coded_normalizers: bool = True,
     ) -> None:
         super().__init__(fragment, name=self.__class__.__name__)
         self._jacobian_cache: NDArray[np.float32] | None = None
-        self._use_gpu = use_gpu
-        self._max_rhs: NDArray[np.float32] | None = None

         self._use_hard_coded_normalizers = use_hard_coded_normalizers
```

103-104: Replace assert with explicit validation.

assert statements are removed with -O flag. Use explicit error handling.

Proposed fix
```diff
-        assert not cp.any(cp.isnan(row_normalizers))
+        if cp.any(cp.isnan(row_normalizers)):
+            raise ValueError("Row normalizers contain NaN values after initialization")
         self._hard_coded_row_normalizers_cache = row_normalizers
```

110-111: Replace assert with explicit validation.

Same concern - use explicit error handling for production code.

Proposed fix
```diff
         num_absorbers, remainder = divmod(num_cols, num_significant)
-        assert not remainder
+        if remainder:
+            raise ValueError(
+                f"Jacobian columns ({num_cols}) not evenly divisible by "
+                f"significant voxels ({num_significant})"
+            )
```

27-30: Document the meaning of HARD_CODED_NORMALIZERS.

These magic numbers would benefit from documentation explaining what they represent (e.g., units, derivation, or physical meaning in the fNIRS context).

Proposed documentation
```diff
+# Per-feature row normalizers for each wavelength.
+# Shape: [wavelength_idx][feature_idx] where feature corresponds to moment order.
+# Values empirically determined for numerical stability in the inverse solve.
 HARD_CODED_NORMALIZERS = [  # for each feature type (moment)
     np.array([1, 5e2, 5e5]),
     np.array([0.5, 2.5e2, 2.5e5]),
 ]
```
applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py (2)

28-52: Type annotation inconsistency: function returns CuPy array.

The return type annotation indicates NDArray[np.float32] but the function returns a CuPy array (cp.zeros). Consider updating the type hint or adding a docstring clarification.
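A sketch of the annotation fix, using a hypothetical function name to stand in for the real one:

```python
import cupy as cp

def make_voxel_buffer(shape: tuple[int, ...]) -> cp.ndarray:
    # Annotate with cp.ndarray: the buffer is allocated on the GPU,
    # so NDArray[np.float32] would misdescribe the return type.
    return cp.zeros(shape, dtype=cp.float32)
```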


75-84: Floating-point comparison with exact zero is fragile.

Comparing np.linalg.det(B) == 0 can miss near-singular matrices due to floating-point precision. Consider using a tolerance-based check.

Proposed fix
```diff
+    EPS = 1e-10
     while ctr < 100:
         ctr += 1
         inds = rng.choice(np.arange(len(ijk)), size=n, replace=False)
         ins = ijk[np.array(inds), :]  # <- points
         out = xyz[np.array(inds), :]  # <- mapped to
         B = np.vstack([np.transpose(ins), np.ones(n, dtype=np.float32)])
-        if np.linalg.det(B) == 0:
+        if abs(np.linalg.det(B)) < EPS:
             continue
-    if np.linalg.det(B) == 0:
+        break  # found valid B
+    if abs(np.linalg.det(B)) < EPS:
         raise RuntimeError("Cannot compute affine, algorithm failed after 100 attempts")
```
applications/bci_visualization/utils/reconstruction/reg_inv.py (2)

18-52: Consider more specific type annotations.

The function signatures use object as return type and lack parameter type annotations for array arguments. While functional, more specific types (e.g., cp.ndarray) would improve IDE support and documentation.


76-92: Global Hessian cache lacks invalidation mechanism.

The _HESSIAN_CACHE persists across calls without a way to clear it. If the Jacobian data changes between runs (e.g., different datasets), stale cached values could be used. Consider adding a cache-clearing function or documenting this limitation.
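A sketch of an explicit invalidation hook, assuming _HESSIAN_CACHE is a module-level dict keyed by Jacobian identity:

```python
_HESSIAN_CACHE: dict = {}

def clear_hessian_cache() -> None:
    """Drop all cached Hessians, e.g. before loading a new Jacobian dataset."""
    _HESSIAN_CACHE.clear()
```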

applications/bci_visualization/utils/reconstruction/assets.py (1)

98-104: Redundant Path() wrapping.

The paths are already converted to pathlib.Path on lines 90-96. The additional Path() calls are unnecessary.

Proposed fix
```diff
-    _mua = np.load(pathlib.Path(mua_path))
-    _musp = np.load(pathlib.Path(musp_path))
-    _idxs_significant_voxels = np.load(pathlib.Path(idxs_significant_voxels_path))
-    _ijk = np.load(pathlib.Path(ijk_path))
-    _xyz = np.load(pathlib.Path(xyz_path))
-    _wavelengths = np.load(pathlib.Path(wavelengths_path))
-    _resolution = tuple(np.load(pathlib.Path(resolution_path)).tolist())
+    _mua = np.load(mua_path)
+    _musp = np.load(musp_path)
+    _idxs_significant_voxels = np.load(idxs_significant_voxels_path)
+    _ijk = np.load(ijk_path)
+    _xyz = np.load(xyz_path)
+    _wavelengths = np.load(wavelengths_path)
+    _resolution = tuple(np.load(resolution_path).tolist())
```
applications/bci_visualization/streams/snirf.py (2)

52-52: Use logger instead of print() for consistency.

Lines 52 and 144 use print() while a logger is already configured at module level.

Proposed fix
```diff
-        print("Got {} unique channels".format(len(self._unique_channels)))
+        logger.info("Got %d unique channels", len(self._unique_channels))
-        print("Streaming {} samples from SNIRF".format(len(data)))
+        logger.info("Streaming %d samples from SNIRF", len(data))
```

Also applies to: 144-144


145-153: Rename unused loop variable and consider adding strict=True to zip.

The loop variable ts is unused. Rename to _ts to indicate it's intentionally ignored. Adding strict=True to zip() ensures times and data have matching lengths.

Proposed fix
```diff
-        for ts, sample in zip(times, data):
+        for _ts, sample in zip(times, data, strict=True):
```
applications/bci_visualization/utils/reconstruction/hbo.py (2)

59-63: Unused use_gpu parameter.

The use_gpu parameter is stored in self._use_gpu but never referenced. Either remove it or implement the CPU/GPU switching logic if intended.

♻️ Proposed fix
```diff
 class HbO:
-    def __init__(self, coefficients: Dict[int, ExtinctionCoefficient], use_gpu: bool = False) -> None:
+    def __init__(self, coefficients: Dict[int, ExtinctionCoefficient]) -> None:
         self._coefficients = coefficients
         self._cached_coefficients: cp.ndarray | None = None
-        self._use_gpu = use_gpu
```

85-92: Redundant round() call on integer parameter.

Per static analysis, wavelength is typed as int, making round(wavelength) redundant. The call is harmless but can be simplified.

♻️ Proposed fix
```diff
-        coefficient = self._coefficients.get(round(wavelength))
+        coefficient = self._coefficients.get(wavelength)
```
applications/bci_visualization/operators/reconstruction/build_rhs_operator.py (2)

136-139: Replace assert with explicit validation for production code.

Using assert for runtime validation can be bypassed when Python runs with optimizations (-O flag). For production code, use explicit conditional checks with exceptions.

♻️ Proposed fix
```diff
         # Validate that jacobian features dimension matches realtime moments
         # 5D jacobian shape: (channels, features, wavelengths, voxels, simulation_types)
         num_features = realtime_moments.shape[0]
-        assert self._mega_jacobians_cpu.shape[1] == num_features
+        if self._mega_jacobians_cpu.shape[1] != num_features:
+            raise ValueError(
+                f"Jacobian features dimension {self._mega_jacobians_cpu.shape[1]} "
+                f"does not match moments features {num_features}"
+            )
```

30-31: Operator class name should use 'Op' suffix per coding guidelines.

The CONTRIBUTING.md guidelines require operator class names to use TitleCase with 'Op' suffix (e.g., BuildRHSOp). However, the codebase has inconsistent naming: newer operators follow this convention (WebRTCServerOp, UnzipOp, SendMeshToUSDOp), while older operators in the medical_imaging directory use 'Operator' suffix (PublisherOperator, STLConversionOperator, DICOMSeriesToVolumeOperator). This file should be renamed to BuildRHSOp to align with documented guidelines, or the codebase should adopt a consistent convention.

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 44bbc7c and c0e1e86.

⛔ Files ignored due to path filters (3)
  • applications/bci_visualization/docs/brain_activity_example.gif is excluded by !**/*.gif
  • applications/bci_visualization/docs/flow_channel_map.png is excluded by !**/*.png
  • applications/bci_visualization/docs/photon_simulation.gif is excluded by !**/*.gif
📒 Files selected for processing (26)
  • applications/bci_visualization/README.md
  • applications/bci_visualization/bci_visualization.py
  • applications/bci_visualization/bci_visualization.yaml
  • applications/bci_visualization/metadata.json
  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.hpp
  • applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/reconstruction/__init__.py
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
  • applications/bci_visualization/operators/reconstruction/solver_operator.py
  • applications/bci_visualization/operators/reconstruction/types.py
  • applications/bci_visualization/operators/stream.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/requirements.txt
  • applications/bci_visualization/streams/__init__.py
  • applications/bci_visualization/streams/base_nirs.py
  • applications/bci_visualization/streams/snirf.py
  • applications/bci_visualization/utils/__init__.py
  • applications/bci_visualization/utils/reconstruction/__init__.py
  • applications/bci_visualization/utils/reconstruction/assets.py
  • applications/bci_visualization/utils/reconstruction/hbo.py
  • applications/bci_visualization/utils/reconstruction/reg_inv.py
  • applications/bci_visualization/utils/reconstruction/types.py
✅ Files skipped from review due to trivial changes (3)
  • applications/bci_visualization/utils/__init__.py
  • applications/bci_visualization/utils/reconstruction/__init__.py
  • applications/bci_visualization/streams/__init__.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • applications/bci_visualization/bci_visualization.yaml
  • applications/bci_visualization/requirements.txt
🧰 Additional context used
📓 Path-based instructions (8)
**/*.{py,cpp,hpp}

📄 CodeRabbit inference engine (CONTRIBUTING.md)

**/*.{py,cpp,hpp}: Code must adhere to Holoscan SDK coding standards including style compliance, descriptive naming, and minimal abbreviations
Code must include appropriate error handling and validation for all operations
All code must pass linting checks using './holohub lint' before submission
All code contributions must use clear, English descriptive names for functions and variables
Code must include inline comments for complex logic and clear documentation of functionality

Files:

  • applications/bci_visualization/utils/reconstruction/types.py
  • applications/bci_visualization/operators/reconstruction/__init__.py
  • applications/bci_visualization/utils/reconstruction/assets.py
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.hpp
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
  • applications/bci_visualization/operators/stream.py
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/streams/snirf.py
  • applications/bci_visualization/utils/reconstruction/hbo.py
  • applications/bci_visualization/operators/reconstruction/types.py
  • applications/bci_visualization/utils/reconstruction/reg_inv.py
  • applications/bci_visualization/bci_visualization.py
  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/streams/base_nirs.py
  • applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/reconstruction/solver_operator.py
**/operators/**/*.{py,cpp,hpp}

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Class names for operators should use TitleCase with 'Op' suffix (e.g., 'AdaptiveThresholdingOp')

Files:

  • applications/bci_visualization/operators/reconstruction/__init__.py
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.hpp
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
  • applications/bci_visualization/operators/stream.py
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/reconstruction/types.py
  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/reconstruction/solver_operator.py
**/operators/**/*.{py,cpp}

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Operator filename should match the directory name with appropriate extension (e.g., 'adaptive_thresholding.py')

Files:

  • applications/bci_visualization/operators/reconstruction/__init__.py
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
  • applications/bci_visualization/operators/stream.py
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/reconstruction/types.py
  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/reconstruction/solver_operator.py
**/operators/**/*.py

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Python operators must include unit tests in the same directory using pytest framework

Files:

  • applications/bci_visualization/operators/reconstruction/__init__.py
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
  • applications/bci_visualization/operators/stream.py
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/reconstruction/types.py
  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/reconstruction/solver_operator.py
**/metadata.json

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

When reviewing PRs that modify or add metadata.json files, verify that the category (first tag in the tags array) matches one of the approved categories: Benchmarking, Camera, Computer Vision and Perception, Converter, Deployment, Development, Extended Reality, Healthcare AI, Image Processing, Inference, Interoperability, Medical Imaging, Natural Language and Conversational AI, Networking and Distributed Computing, Optimization, Quantum Computing, Rendering, Robotics, Scheduler, Signal Processing, Streaming, Threading, Video, Video Capture, Visualization, XR

Files:

  • applications/bci_visualization/metadata.json
**/operators/**/*.cpp

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Operators with C++ implementations must include CMakeLists.txt for build system integration

Files:

  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp
**/operators/**/CMakeLists.txt

📄 CodeRabbit inference engine (CONTRIBUTING.md)

CMakeLists.txt for operators must use 'add_holohub_operator()' with optional 'DEPENDS EXTENSIONS' for dependent extensions

Files:

  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
**/applications/**/CMakeLists.txt

📄 CodeRabbit inference engine (CONTRIBUTING.md)

**/applications/**/CMakeLists.txt: CMakeLists.txt for applications must use 'add_holohub_application()' with optional 'DEPENDS OPERATORS' for dependent operators
Applications should include a testing section in CMakeLists.txt for functional testing using CTest

Files:

  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
🧠 Learnings (15)
📚 Learning: 2025-11-19T04:58:30.227Z
Learnt from: finalelement
Repo: nvidia-holoscan/holohub PR: 1237
File: applications/surgical_scene_recon/training/utils/sh_utils.py:1-23
Timestamp: 2025-11-19T04:58:30.227Z
Learning: When reviewing files that contain third-party code with original copyright notices (e.g., from PlenOctree, other open-source projects), do not suggest modifying the original copyright years or attribution. The original copyright notice must be preserved exactly as required by redistribution licenses (BSD, MIT, Apache, etc.). If substantial modifications are made by the current project, a separate copyright line can be added, but the original must remain unchanged.

Applied to files:

  • applications/bci_visualization/operators/reconstruction/__init__.py
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.hpp
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/CMakeLists.txt : CMakeLists.txt for operators must use 'add_holohub_operator()' with optional 'DEPENDS EXTENSIONS' for dependent extensions

Applied to files:

  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.hpp
  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
📚 Learning: 2025-10-20T22:19:33.520Z
Learnt from: chengronglai
Repo: nvidia-holoscan/holohub PR: 1136
File: benchmarks/realtime_threads_benchmarking/realtime_thread_benchmark.cpp:245-251
Timestamp: 2025-10-20T22:19:33.520Z
Learning: In the Holoscan framework, operators added via add_operator() can execute without explicitly attaching a scheduling condition - they will still be scheduled and their compute() method will be invoked by the runtime.

Applied to files:

  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.hpp
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/(@(operators|applications|workflows|gxf_extensions|tutorials))/**/metadata.json : Every application, operator, workflow, and extension must include a 'metadata.json' file following the appropriate schema

Applied to files:

  • applications/bci_visualization/metadata.json
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/(@(operators|applications|workflows|gxf_extensions|tutorials))/**/metadata.json : metadata.json must include 'holoscan_sdk' with 'minimum_required_version' and 'tested_versions' fields

Applied to files:

  • applications/bci_visualization/metadata.json
📚 Learning: 2025-11-24T16:27:43.600Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T16:27:43.600Z
Learning: Applies to **/metadata.json : When reviewing PRs that modify or add `metadata.json` files, verify that the **category** (first tag in the `tags` array) matches one of the approved categories: Benchmarking, Camera, Computer Vision and Perception, Converter, Deployment, Development, Extended Reality, Healthcare AI, Image Processing, Inference, Interoperability, Medical Imaging, Natural Language and Conversational AI, Networking and Distributed Computing, Optimization, Quantum Computing, Rendering, Robotics, Scheduler, Signal Processing, Streaming, Threading, Video, Video Capture, Visualization, XR

Applied to files:

  • applications/bci_visualization/metadata.json
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/(@(operators|applications|workflows))/**/metadata.json : Operators, applications, and workflows should use the '<holohub_data_dir>' placeholder in metadata.json 'run' command for referencing data directory

Applied to files:

  • applications/bci_visualization/bci_visualization.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/applications/**/CMakeLists.txt : CMakeLists.txt for applications must use 'add_holohub_application()' with optional 'DEPENDS OPERATORS' for dependent operators

Applied to files:

  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/pkg/CMakeLists.txt : CMakeLists.txt for packages must use 'add_holohub_package()' function with optional APPLICATIONS and OPERATORS dependencies

Applied to files:

  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/workflows/**/CMakeLists.txt : CMakeLists.txt for workflows must use 'add_holohub_application()' with optional 'DEPENDS OPERATORS' for dependent operators

Applied to files:

  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/*.cpp : Operators with C++ implementations must include CMakeLists.txt for build system integration

Applied to files:

  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
📚 Learning: 2025-10-22T16:53:45.393Z
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: operators/video_streaming/streaming_client_enhanced/python/CMakeLists.txt:16-24
Timestamp: 2025-10-22T16:53:45.393Z
Learning: The pybind11_add_holohub_module CMake macro in cmake/pybind11_add_holohub_module.cmake encapsulates all pybind11 setup internally, including finding pybind11, linking against holoscan::pybind11 conditionally, and linking the C++ operator target. Operator Python bindings in holohub should only call this macro without additional pybind11 setup.

Applied to files:

  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
  • applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/*.py : Python operators must include unit tests in the same directory using pytest framework

Applied to files:

  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/test_*.py : Unit test files for Python operators should use 'test_' prefix followed by the operator name (e.g., 'test_adaptive_thresholding.py')

Applied to files:

  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
📚 Learning: 2025-10-22T16:33:55.411Z
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: applications/video_streaming/video_streaming_client/python/streaming_client_demo_replayer.yaml:27-36
Timestamp: 2025-10-22T16:33:55.411Z
Learning: In the video_streaming bidirectional client applications (applications/video_streaming/video_streaming_client), the pipeline has two separate data paths: (1) Outgoing: source → format_converter → streaming_client INPUT (sends to server), and (2) Incoming: streaming_client OUTPUT → holoviz (receives from server). The format_converter prepares data for transmission and does NOT feed directly into holoviz visualization.

Applied to files:

  • applications/bci_visualization/README.md
🧬 Code graph analysis (7)
applications/bci_visualization/operators/reconstruction/normalize_operator.py (4)
applications/bci_visualization/operators/reconstruction/types.py (3)
  • BuildRHSOutput (23-31)
  • NormalizedSolveBatch (42-48)
  • WavelengthSystem (35-38)
applications/bci_visualization/operators/reconstruction/build_rhs_operator.py (2)
  • setup (60-62)
  • compute (103-195)
applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py (2)
  • setup (116-119)
  • compute (121-151)
applications/bci_visualization/operators/reconstruction/solver_operator.py (2)
  • setup (39-41)
  • compute (43-54)
applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp (1)
applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.hpp (2)
  • spec (34-34)
  • input (35-35)
applications/bci_visualization/streams/snirf.py (2)
applications/bci_visualization/streams/base_nirs.py (5)
  • ChannelInfo (13-20)
  • BaseNirsStream (23-33)
  • start (24-25)
  • get_channels (28-29)
  • stream_nirs (32-33)
applications/bci_visualization/operators/stream.py (1)
  • start (44-47)
applications/bci_visualization/bci_visualization.py (5)
applications/bci_visualization/operators/reconstruction/build_rhs_operator.py (1)
  • BuildRHSOperator (30-195)
applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py (1)
  • ConvertToVoxelsOperator (94-193)
applications/bci_visualization/operators/reconstruction/normalize_operator.py (1)
  • NormalizeOperator (32-168)
applications/bci_visualization/operators/stream.py (1)
  • StreamOperator (30-57)
applications/bci_visualization/streams/snirf.py (1)
  • SNIRFStream (35-155)
applications/bci_visualization/streams/base_nirs.py (2)
applications/bci_visualization/operators/stream.py (1)
  • start (44-47)
applications/bci_visualization/streams/snirf.py (3)
  • start (45-52)
  • get_channels (54-60)
  • stream_nirs (110-155)
applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp (1)
applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp (2)
  • setup (21-28)
  • setup (21-21)
applications/bci_visualization/operators/reconstruction/solver_operator.py (2)
applications/bci_visualization/utils/reconstruction/reg_inv.py (1)
  • solve_regularized_system (18-52)
applications/bci_visualization/operators/reconstruction/types.py (2)
  • NormalizedSolveBatch (42-48)
  • SolverResult (52-58)
🪛 Cppcheck (2.19.0)
applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp

[error] 32-32: There is an unknown macro here somewhere. Configuration is required. If HOLOSCAN_OPERATOR_FORWARD_ARGS is a macro then please configure it.

(unknownMacro)

applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp

[error] 32-32: There is an unknown macro here somewhere. Configuration is required. If HOLOSCAN_OPERATOR_FORWARD_ARGS is a macro then please configure it.

(unknownMacro)

🪛 GitHub Actions: Check Compliance
applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.hpp

[error] 1-1: Copyright headers incomplete. Current year not included in the header.

applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp

[error] 1-1: Copyright headers incomplete. Current year not included in the header.

applications/bci_visualization/bci_visualization.py

[error] 1-1: Copyright headers incomplete. Current year not included in the header.

applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt

[error] 1-1: Copyright headers incomplete. Current year not included in the header.

applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp

[error] 1-1: Copyright headers incomplete. Current year not included in the header.

applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py

[error] 1-1: Copyright headers incomplete. Current year not included in the header.

🪛 GitHub Actions: Check linting
applications/bci_visualization/utils/reconstruction/reg_inv.py

[error] 7-7: F401: types.ModuleType imported but unused.


[error] 16-16: E402: Module level import not at top of file.

applications/bci_visualization/bci_visualization.py

[error] 16-16: F401: Remove unused import: holoscan.schedulers.MultiThreadScheduler.

applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py

[error] 86-86: E731: Do not assign a lambda expression, use a def (rewrite entry as a function).

🪛 LanguageTool
applications/bci_visualization/README.md

[uncategorized] ~33-~33: Do not mix variants of the same word (‘travelled’ and ‘traveled’) within a single text.
Context: ...r times correspond to photons that have travelled farther into the tissue. In a given sec...

(EN_EXACT_COHERENCY_RULE)


[style] ~44-~44: Consider a more concise word here.
Context: ... from the perspective of the detector). In order to have a more anatomical representation o...

(IN_ORDER_TO_PREMIUM)


[grammar] ~52-~52: Use a hyphen to join words.
Context: ...son Thor Developer kit. Any Holoscan SDK supported platform should work. To run...

(QB_NEW_EN_HYPHEN)

🪛 markdownlint-cli2 (0.18.1)
applications/bci_visualization/README.md

81-81: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🪛 Ruff (0.14.10)
applications/bci_visualization/operators/reconstruction/__init__.py

17-26: __all__ is not sorted

Apply an isort-style sorting to __all__

(RUF022)

applications/bci_visualization/operators/reconstruction/normalize_operator.py

59-59: Unused method argument: context

(ARG002)


63-63: Avoid specifying long messages outside the exception class

(TRY003)

applications/bci_visualization/operators/stream.py

50-50: Unused method argument: op_input

(ARG002)


50-50: Unused method argument: context

(ARG002)


55-55: Avoid specifying long messages outside the exception class

(TRY003)

applications/bci_visualization/operators/reconstruction/build_rhs_operator.py

94-97: Avoid specifying long messages outside the exception class

(TRY003)


100-100: Avoid specifying long messages outside the exception class

(TRY003)

applications/bci_visualization/streams/snirf.py

43-43: Avoid specifying long messages outside the exception class

(TRY003)


145-145: Loop control variable ts not used within loop body

Rename unused ts to _ts

(B007)


145-145: zip() without an explicit strict= parameter

Add explicit value for parameter strict=

(B905)

applications/bci_visualization/utils/reconstruction/hbo.py

32-32: Value being cast to int is already an integer

Remove unnecessary int call

(RUF046)


86-86: Value being rounded is already an integer

Remove unnecessary round call

(RUF057)


88-90: Avoid specifying long messages outside the exception class

(TRY003)

applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py

84-84: Avoid specifying long messages outside the exception class

(TRY003)


86-86: Do not assign a lambda expression, use a def

Rewrite entry as a def

(E731)


125-125: Unused method argument: context

(ARG002)

applications/bci_visualization/streams/base_nirs.py

24-25: BaseNirsStream.start is an empty method in an abstract base class, but has no abstract decorator

(B027)

applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py

74-74: Avoid specifying long messages outside the exception class

(TRY003)


88-90: Avoid specifying long messages outside the exception class

(TRY003)


107-107: Unused method argument: context

(ARG002)


114-114: Avoid specifying long messages outside the exception class

(TRY003)


116-116: Avoid specifying long messages outside the exception class

(TRY003)


130-130: Avoid specifying long messages outside the exception class

(TRY003)


227-227: Avoid specifying long messages outside the exception class

(TRY003)

applications/bci_visualization/operators/reconstruction/solver_operator.py

47-47: Unused method argument: context

(ARG002)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-and-deploy
🔇 Additional comments (18)
applications/bci_visualization/metadata.json (1)

1-48: LGTM! Metadata structure is correct.

The metadata.json file follows the required schema with:

  • Proper category tag ("Visualization" matches approved categories)
  • Required holoscan_sdk fields (minimum_required_version and tested_versions)
  • Appropriate platform and dependency specifications

Note: The pipeline copyright header check appears to be a false positive for this JSON metadata file.

Based on coding guidelines.

applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.hpp (2)

23-36: LGTM! Well-designed passthrough operator.

The operator correctly implements the Holoscan operator pattern with:

  • Proper naming convention (TitleCase with 'Op' suffix)
  • Clear documentation explaining the queue policy rationale for preventing backpressure
  • Standard operator interface methods

Based on coding guidelines.


1-15: Update copyright year to 2026.

The copyright header shows 2025, but the current year is 2026. The header should be updated to reflect the correct year.

📅 Proposed fix for copyright year
-/* SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES.
+/* SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES.
⛔ Skipped due to learnings
Learnt from: finalelement
Repo: nvidia-holoscan/holohub PR: 1237
File: applications/surgical_scene_recon/training/utils/sh_utils.py:1-23
Timestamp: 2025-11-19T04:58:30.227Z
Learning: When reviewing files that contain third-party code with original copyright notices (e.g., from PlenOctree, other open-source projects), do not suggest modifying the original copyright years or attribution. The original copyright notice must be preserved exactly as required by redistribution licenses (BSD, MIT, Apache, etc.). If substantial modifications are made by the current project, a separate copyright line can be added, but the original must remain unchanged.
applications/bci_visualization/operators/reconstruction/types.py (1)

1-58: LGTM! Well-structured immutable data types.

The frozen dataclasses provide type-safe, immutable data structures for the reconstruction pipeline:

  • Comprehensive type annotations with NDArray
  • Helpful inline dimension comments for array shapes
  • Proper use of frozen=True to prevent accidental mutations
  • Clear hierarchical structure (VoxelMetadata → BuildRHSOutput → NormalizedSolveBatch → SolverResult)
applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py (1)

72-339: LGTM! Robust operator implementation with proper GPU acceleration.

The operator demonstrates excellent engineering practices:

  • Comprehensive validation and error handling throughout
  • Proper CUDA stream management for GPU operations
  • Well-documented complex orientation logic with detailed comments
  • Adaptive statistics using exponential moving average
  • Clear separation of concerns with focused helper methods
  • Appropriate use of CuPy for GPU-accelerated resampling

Note: Static analysis warnings about error message length (TRY003) and unused context parameter (ARG002) are false positives—the detailed error messages provide valuable debugging context, and context is part of the standard operator interface.

applications/bci_visualization/utils/reconstruction/types.py (1)

1-8: LGTM! Clean type alias definition.

The type alias provides a clear contract for channel headset mapping data structures, with proper copyright headers and typing imports.

applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp (1)

35-60: LGTM!

The pybind11 binding follows the established pattern for Holoscan operator bindings. Class naming ColorBufferPassthroughOp correctly follows TitleCase with 'Op' suffix per coding guidelines.

applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py (1)

94-151: LGTM!

The operator implementation is well-structured with clear separation of concerns. The cumulative HbO/HbR accumulation pattern and one-time affine emission are appropriate for real-time streaming visualization. Class naming ConvertToVoxelsOperator follows the coding guidelines.

applications/bci_visualization/utils/reconstruction/reg_inv.py (1)

95-125: LGTM with note on assertion usage.

The solver implementation is numerically sound with appropriate symmetry enforcement. Note that assertions are disabled when Python runs with optimization flags (-O), so the input validation would be skipped in optimized builds.
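For a quick, self-contained confirmation of the -O caveat (plain Python, no project code involved):

```python
import subprocess

# Under normal execution the assert fires; with -O the interpreter strips
# assert statements entirely, so the same snippet exits silently.
subprocess.run(["python", "-c", "assert False"])        # AssertionError traceback
subprocess.run(["python", "-O", "-c", "assert False"])  # no error: assert removed
```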

applications/bci_visualization/operators/reconstruction/__init__.py (1)

1-26: LGTM!

The package initialization correctly exports all reconstruction operators and types. The __all__ list provides clear public API documentation.

applications/bci_visualization/bci_visualization.py (2)

75-204: Well-structured pipeline composition.

The compose method clearly separates reconstruction and visualization pipeline stages with helpful comments. The deadlock avoidance pattern using ConditionType.NONE and the passthrough operator is well-documented.


207-259: LGTM!

The main function follows Holoscan patterns with proper CLI argument handling, environment variable support for data paths, and YAML configuration loading.

applications/bci_visualization/utils/reconstruction/assets.py (1)

21-44: LGTM!

The frozen dataclass with comprehensive field documentation provides a clean, immutable container for reconstruction assets. The singleton caching in get_assets appropriately prevents redundant disk I/O for large files.
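For reference, the singleton-caching pattern being praised reduces to a module-level sketch like the following; `_load_from_disk` is a hypothetical stand-in for the actual numpy loads in get_assets:

```python
import numpy as np

_ASSETS_CACHE = None  # module-level singleton


def _load_from_disk():
    # Stand-in for the real np.load calls on the asset files.
    return {"mua": np.zeros(3), "musp": np.zeros(3)}


def get_assets():
    """Return the shared assets, hitting the disk only on the first call."""
    global _ASSETS_CACHE
    if _ASSETS_CACHE is None:
        _ASSETS_CACHE = _load_from_disk()
    return _ASSETS_CACHE
```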

applications/bci_visualization/streams/snirf.py (1)

62-108: LGTM!

The SNIRF channel parsing correctly handles the HDF5 structure, label decoding, and measurement list sorting. The implementation properly extracts source/detector positions and builds the channel list.

applications/bci_visualization/utils/reconstruction/hbo.py (1)

14-57: LGTM! Clean NamedTuple structure for extinction coefficients.

The ExtinctionCoefficient design with from_csv factory method and get_oxy_deoxy_coefficients is well-structured. Minor note: the static analysis hint about redundant int(round(...)) at line 32 is a false positive since float() returns a float that needs rounding and conversion.

applications/bci_visualization/operators/reconstruction/build_rhs_operator.py (3)

64-75: LGTM! Clean baseline differencing implementation.

The frame-by-frame differencing logic correctly handles the first-frame baseline capture and subsequent differential computation. The copy() calls properly avoid aliasing issues with GPU arrays.
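For readers without the diff open, the first-frame-baseline pattern described here looks roughly like this (a sketch with illustrative names, not the operator's actual fields):

```python
import cupy as cp


class BaselineDiff:
    def __init__(self):
        self._baseline = None

    def apply(self, moments: cp.ndarray) -> cp.ndarray:
        if self._baseline is None:
            # copy() so the baseline does not alias the incoming GPU buffer
            self._baseline = moments.copy()
        return moments - self._baseline
```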


77-83: LGTM! Efficient invalid value sanitization.

Good use of early return to skip processing when no invalid values exist. The in-place nan_to_num correctly handles NaN, positive infinity, and negative infinity.


85-101: LGTM! Robust channel index mapping with good error messages.

The nested dictionary lookup with proper KeyError handling provides clear debugging information when channel mapping fails. The empty indices check is a good defensive measure.
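A stripped-down sketch of that lookup pattern (the map layout and label names are assumptions):

```python
def lookup_channel_index(channel_map, source_label, detector_label):
    """Nested lookup that reports which key was missing on failure."""
    try:
        return channel_map[source_label][detector_label]
    except KeyError as err:
        raise KeyError(
            f"No channel index for source={source_label!r}, "
            f"detector={detector_label!r}"
        ) from err
```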

Comment on lines 16 to 167
class VoxelStreamToVolumeOp(Operator):
    """
    Convert streaming HbO/HbR voxel data [I, J, K, 2] into a 3D volume tensor for VolumeRendererOp.
    Inputs:
    - affine_4x4: np.ndarray shape (4, 4) (processed once if provided)
    - hb_voxel_data: np.ndarray shape (I, J, K, n_channels) where last dim is channels [HbO, HbR] (HbO: 0, HbR: 1)
    Outputs:
    - volume: holoscan.gxf.Entity containing a tensor named "volume" with shape (Z,Y,X)
    - spacing: np.ndarray shape (3,) derived from affine
    - permute_axis: np.ndarray shape (3,) derived from affine
    - flip_axes: np.ndarray shape (3,) derived from affine
    """

    def __init__(self, fragment, *args, **kwargs):
        # Anatomy mask NIfTI file
        self.mask_nifti_path = kwargs.pop("mask_nifti_path", None)

        # Exponential moving average factor for running statistics (0 < alpha <= 1)
        # Higher alpha = faster adaptation, lower alpha = more stable
        self.stats_alpha = kwargs.pop("stats_alpha", 0.1)

        # Visualization scale factor for amplifying activations
        # Needed because global min/max includes the whole brain (larger values),
        # but we only visualize white/gray matter (smaller activations).
        # Higher scale = more sensitive visualization of small activations
        self.visualization_scale = kwargs.pop("visualization_scale", 10)

        # Density range, must be same as the VolumeRendererOp's density range
        self.density_min = kwargs.pop("density_min", -100)
        self.density_max = kwargs.pop("density_max", 100)

        super().__init__(fragment, *args, **kwargs)

        # Internal state
        self.affine = None

        # Metadata, set from the first frame, reused for subsequent frames
        self.dims = None  # np.array([X, Y, Z], dtype=np.uint32)
        self.out_spacing = None  # np.ndarray float32 (3,)
        self.permute_axis = None  # np.ndarray uint32 (3,)
        self.flip_axes = None  # np.ndarray bool (3,)
        self.roi_mask = None  # np.ndarray bool (I, J, K)

        # Raw incoming mask (I, J, K) for pass-through emission (loaded from file if provided)
        self.mask_voxel_raw = None
        self.mask_volume_gpu = None
        self.mask_affine = None
        self.mask_shape = None

        # Running statistics for adaptive normalization (initialized from first frame)
        self.global_min = None
        self.global_max = None
        self.frame_count = 0

    def start(self):
        if not self.mask_nifti_path:
            raise ValueError("VoxelStreamToVolume: No mask NIfTI path provided")

        try:
            img = nib.load(self.mask_nifti_path)
            mask_3d = img.get_fdata()
            # Segmentation volumes must be unsigned 8-bit integer
            self.mask_voxel_raw = np.asarray(mask_3d, dtype=np.uint8)
            self.mask_affine = img.affine
            self.mask_shape = mask_3d.shape
            print(
                f"VoxelStreamToVolume: Loaded mask from {self.mask_nifti_path}, "
                f"shape: {self.mask_voxel_raw.shape}, values: {np.unique(self.mask_voxel_raw)}"
            )
        except Exception as e:
            raise RuntimeError(
                f"VoxelStreamToVolume: Failed to load mask NIfTI '{self.mask_nifti_path}': {e}"
            ) from e

    def setup(self, spec: OperatorSpec):
        spec.input("affine_4x4").condition(ConditionType.NONE)  # (4, 4), only emit at the first frame
        spec.input("hb_voxel_data")  # (I, J, K)

        spec.output("volume")
        spec.output("spacing")
        spec.output("permute_axis")
        spec.output("flip_axes")

        # brain anatomy mask
        spec.output("mask_volume").condition(ConditionType.NONE)
        spec.output("mask_spacing").condition(ConditionType.NONE)
        spec.output("mask_permute_axis").condition(ConditionType.NONE)
        spec.output("mask_flip_axes").condition(ConditionType.NONE)

    def compute(self, op_input, op_output, context):
        # Receive Hb voxel data (cupy array)
        hb_voxel = op_input.receive("hb_voxel_data")  # (I, J, K)
        cuda_stream = op_input.receive_cuda_stream("hb_voxel_data")

        # Check voxel data is valid
        if not isinstance(hb_voxel, cp.ndarray):
            raise TypeError(f"VoxelStreamToVolume: Invalid voxel data type: {type(hb_voxel)}, expected cupy array")
        if hb_voxel.ndim != 3:
            raise ValueError(f"VoxelStreamToVolume: Invalid voxel data shape: {hb_voxel.shape}, expected 3D")

        # Receive affine matrix only at the first frame
        affine = op_input.receive("affine_4x4")
        if affine is not None:
            self.affine = np.array(affine, dtype=np.float32).reshape(4, 4)
            # Derive spacing/orientation from affine - use mask's affine as we will resample data to mask's size
            self.out_spacing, self.permute_axis, self.flip_axes = self._derive_orientation_from_affine(
                self.mask_affine
            )
            print("VoxelStreamToVolume: Received affine matrix")

        # Check if affine has been set at least once
        if self.affine is None:
            raise ValueError("VoxelStreamToVolume: No affine matrix received")

        with cp.cuda.ExternalStream(cuda_stream):
            # Update running statistics from incoming data
            self._update_running_statistics(hb_voxel)

            # Note: +-1 to add a buffer avoiding edge case in ClaraViz boundaries.
            hb_voxel_normalized = self._normalize_and_process_activated_voxels(
                hb_voxel, normalize_min_value=self.density_min + 1, normalize_max_value=self.density_max - 1
            )

            # Resample to mask's size
            volume_gpu = self._cupy_resample(
                hb_voxel_normalized, self.affine, self.mask_affine, self.mask_shape
            )

            volume_gpu = cp.transpose(volume_gpu, (2, 1, 0))
            volume_gpu = cp.ascontiguousarray(volume_gpu, dtype=cp.float32)

        # If we have a mask, emit oriented mask every frame for the renderer
        if self.mask_volume_gpu is None:
            with cp.cuda.ExternalStream(cuda_stream):
                self.mask_volume_gpu = cp.asarray(self.mask_voxel_raw, dtype=cp.uint8)
                self.mask_volume_gpu = cp.transpose(self.mask_volume_gpu, (2, 1, 0))
                self.mask_volume_gpu = cp.ascontiguousarray(self.mask_volume_gpu)

        # Emit mask outputs
        op_output.emit({"volume": self.mask_volume_gpu}, "mask_volume")
        op_output.emit(self.out_spacing, "mask_spacing", "std::array<float, 3>")
        op_output.emit(self.permute_axis, "mask_permute_axis", "std::array<uint32_t, 3>")
        op_output.emit(self.flip_axes, "mask_flip_axes", "std::array<bool, 3>")

        # Emit density outputs
        op_output.emit({"volume": volume_gpu}, "volume")
        op_output.emit(self.out_spacing, "spacing", "std::array<float, 3>")
        op_output.emit(self.permute_axis, "permute_axis", "std::array<uint32_t, 3>")
        op_output.emit(self.flip_axes, "flip_axes", "std::array<bool, 3>")

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for test files in the voxel_stream_to_volume operator directory
find applications/bci_visualization/operators/voxel_stream_to_volume/ -type f -name "test_*.py"

Repository: nvidia-holoscan/holohub

Length of output: 49


🏁 Script executed:

#!/bin/bash
# Check the operator directory structure
ls -la applications/bci_visualization/operators/voxel_stream_to_volume/

Repository: nvidia-holoscan/holohub

Length of output: 318


Add unit tests for VoxelStreamToVolumeOp operator.

Python operators must include unit tests in the same directory using pytest framework. Create test_voxel_stream_to_volume.py covering initialization, port setup, error handling, compute logic (affine matrix reception, voxel normalization, resampling, GPU operations), and edge cases. Reuse common fixtures from conftest.py where applicable.
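A minimal starting point, assuming a fragment fixture from conftest.py and mocked ports (every fixture and mock name here is an assumption, not existing test infrastructure):

```python
# test_voxel_stream_to_volume.py (sketch)
from unittest.mock import MagicMock

import numpy as np
import pytest

from voxel_stream_to_volume import VoxelStreamToVolumeOp


def test_start_without_mask_path_raises(fragment):
    op = VoxelStreamToVolumeOp(fragment, mask_nifti_path=None)
    with pytest.raises(ValueError):
        op.start()


def test_compute_rejects_non_cupy_input(fragment):
    op = VoxelStreamToVolumeOp(fragment, mask_nifti_path="mask.nii.gz")
    op_input = MagicMock()
    op_input.receive.return_value = np.zeros((4, 4, 4))  # numpy, not cupy
    with pytest.raises(TypeError):
        op.compute(op_input, MagicMock(), context=None)
```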

🧰 Tools
🪛 Ruff (0.14.10)

74-74: Avoid specifying long messages outside the exception class

(TRY003)


88-90: Avoid specifying long messages outside the exception class

(TRY003)


107-107: Unused method argument: context

(ARG002)


114-114: Avoid specifying long messages outside the exception class

(TRY003)


116-116: Avoid specifying long messages outside the exception class

(TRY003)


130-130: Avoid specifying long messages outside the exception class

(TRY003)

🤖 Prompt for AI Agents
In
@applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
around lines 16-167, add pytest unit tests for the VoxelStreamToVolumeOp
class: create test_voxel_stream_to_volume.py that covers initialization/start
(mask_nifti_path handling and start() error path), setup() port configuration,
and compute() behavior including receiving affine_4x4 and hb_voxel_data,
validating type/shape errors, calling compute path that exercises
_derive_orientation_from_affine, _update_running_statistics,
_normalize_and_process_activated_voxels, and _cupy_resample (mock GPU ops where
needed), and emission of mask and volume outputs; use fixtures from conftest.py
to provide fake fragment, OperatorSpec context, synthetic mask NIfTI, cupy
arrays, and CUDA stream, and include edge cases (missing affine, non-cupy
hb_voxel, wrong ndim) and assertions that op_output.emit is called with expected
keys and types.

- Downloaded `.snirf` files for use with the included data replayer. Example data can be found on [OpenNeuro](https://openneuro.org/datasets/ds006545) and copied locally to be run through the replayer.

```bash
wget -0 data/examples/data.snirf "https://s3.amazonaws.com/openneuro.org/ds006545/sub-bed8fefe/ses-1/nirs/sub-bed8fefe_ses-1_task-audio_nirs.snirf?versionId=sYFJNjlNNlf8xVOMsIde5hpWZE2clsiu"
```

⚠️ Potential issue | 🟡 Minor

Typo in wget command flag.

The flag -0 should be -O (uppercase letter O, not zero) for specifying the output filename.

Proposed fix
-   wget -0 data/examples/data.snirf "https://s3.amazonaws.com/openneuro.org/ds006545/sub-bed8fefe/ses-1/nirs/sub-bed8fefe_ses-1_task-audio_nirs.snirf?versionId=sYFJNjlNNlf8xVOMsIde5hpWZE2clsiu"
+   wget -O data/examples/data.snirf "https://s3.amazonaws.com/openneuro.org/ds006545/sub-bed8fefe/ses-1/nirs/sub-bed8fefe_ses-1_task-audio_nirs.snirf?versionId=sYFJNjlNNlf8xVOMsIde5hpWZE2clsiu"
🤖 Prompt for AI Agents
In @applications/bci_visualization/README.md at line 59, fix the typo in the
wget command in the README: change the flag `-0` to the uppercase `-O` in the
line starting with `wget -0 data/examples/data.snirf
"https://s3.amazonaws.com/...` so the output filename is correctly specified
(ensure the rest of the command, including the output path
`data/examples/data.snirf` and the quoted URL, remains unchanged).

@greptile-apps greptile-apps bot left a comment

Greptile Overview

Greptile Summary

This PR introduces a comprehensive BCI visualization application for real-time fNIRS brain imaging, developed in collaboration with Kernel. The implementation includes a complete reconstruction pipeline from SNIRF data input to 3D volume rendering.

Major Changes:

  • New Application: Complete BCI visualization application with reconstruction operators, data streaming, and volume rendering integration
  • Volume Renderer Enhancements: Added CUDA stream handling, fixed memory leak by clearing old volumes before receiving new ones, implemented camera pose caching for smooth multi-threaded execution, and added graceful handling when no density volume is present
  • Multi-threaded Architecture: Event-based scheduler with 5 worker threads, using ConditionType.NONE on volume renderer output and QueuePolicy.POP on passthrough to prevent deadlocks
  • Reconstruction Pipeline: Implements end-to-end processing including RHS building, normalization, regularized solving, and voxel conversion
  • Adaptive Normalization: Exponential moving average for running statistics with visualization scaling for white/gray matter activation

Issues Found:

  • Copyright year in 3 files incorrectly set to 2026 (should be 2025)
  • Incomplete sentence in README.md lines 47-48
  • All issues are minor and non-blocking

Confidence Score: 4/5

  • This PR is safe to merge with only minor documentation and copyright year corrections needed
  • Score reflects well-structured code with proper error handling, CUDA stream management, and memory leak fixes in volume renderer. The application demonstrates good software engineering practices with clear separation of concerns. Minor issues are limited to copyright dates and incomplete documentation, which do not affect functionality.
  • No files require special attention - all critical components have proper error handling and resource management

Important Files Changed

File Analysis

| Filename | Score | Overview |
| --- | --- | --- |
| applications/bci_visualization/bci_visualization.py | 4/5 | Main application file integrating reconstruction and visualization pipelines with proper multi-threading setup |
| operators/volume_renderer/volume_renderer.cpp | 4/5 | Added CUDA stream handling, memory leak fix, cached camera pose, and graceful handling when no density volume present |
| operators/volume_renderer/dataset.cpp | 5/5 | Added ResetVolume method to clear old volumes and updated SetVolume to accept CUDA stream parameter |
| applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py | 4/5 | Converts streaming voxel data to dense 3D volumes with adaptive normalization and resampling |
| applications/bci_visualization/operators/reconstruction/solver_operator.py | 3/5 | Regularized solver with copyright year 2026 (should be 2025) |
| applications/bci_visualization/README.md | 4/5 | Comprehensive documentation with an incomplete section on lines 47-48 |

Sequence Diagram

sequenceDiagram
    participant Stream as SNIRF Stream
    participant BuildRHS as Build RHS
    participant Normalize as Normalize
    participant Solver as Regularized Solver
    participant Convert as Convert to Voxels
    participant VoxelToVol as Voxel to Volume
    participant VolRender as Volume Renderer
    participant Passthrough as Color Buffer Passthrough
    participant Holoviz as HolovizOp
    
    Note over Stream,Holoviz: Reconstruction Pipeline (Thread 1)
    Stream->>BuildRHS: moments data
    BuildRHS->>Normalize: RHS batch
    Normalize->>Solver: normalized batch
    Solver->>Convert: solver result
    Convert->>VoxelToVol: HbO voxel data + affine
    
    Note over VoxelToVol,Holoviz: Visualization Pipeline (Thread 2)
    VoxelToVol->>VolRender: volume + mask + spacing
    VolRender->>Passthrough: color buffer (NONE condition)
    Passthrough->>Holoviz: color buffer (POP policy)
    Holoviz->>VolRender: camera pose feedback
    
    Note over Stream,Holoviz: Event-based scheduler with 5 worker threads

@@ -0,0 +1,94 @@
"""
SPDX-FileCopyrightText: Copyright (c) 2026 Kernel.

copyright year should be 2025, not 2026

Suggested change
SPDX-FileCopyrightText: Copyright (c) 2026 Kernel.
SPDX-FileCopyrightText: Copyright (c) 2025 Kernel.

@@ -0,0 +1,187 @@
"""
SPDX-FileCopyrightText: Copyright (c) 2026 Kernel.

copyright year should be 2025, not 2026

Suggested change
SPDX-FileCopyrightText: Copyright (c) 2026 Kernel.
SPDX-FileCopyrightText: Copyright (c) 2025 Kernel.

@@ -0,0 +1,156 @@
"""
SPDX-FileCopyrightText: Copyright (c) 2026 Kernel.

copyright year should be 2025, not 2026

Suggested change
SPDX-FileCopyrightText: Copyright (c) 2026 Kernel.
SPDX-FileCopyrightText: Copyright (c) 2025 Kernel.

Comment on lines 47 to 48
In this Holohub application, we demonstrate a real-time source reconstruction pipeline that runs on a Jetson Thor at the native framerate of the Kernel Flow system (4.75 Hz) and visualizes the data in 3D using Holoviz. We did this by X, Y,
and Z (@Gabe or @Mimi to add high-level).

incomplete sentence - "We did this by X, Y, and Z" needs actual content or should be removed

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 16

Note

Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.

🤖 Fix all issues with AI agents
In
@applications/bci_visualization/operators/reconstruction/build_rhs_operator.py:
- Around lines 22-31: The class BuildRHSOperator should be renamed to BuildRHSOp
to follow the SDK naming convention; update the class declaration from
BuildRHSOperator to BuildRHSOp and search/replace any references to
BuildRHSOperator (imports, instantiations, tests) to use BuildRHSOp instead;
leave the super().__init__(fragment, name=self.__class__.__name__) as-is (it
will reflect the new name), and run the tests and linters to ensure no
references to the old class name remain.
- Line 1: The operators currently in operators/reconstruction must be
reorganized so each operator has its own subdirectory named for the operator
file; create directories build_rhs_operator/, convert_to_voxels_operator/,
normalize_operator/, solver_operator/ and move the corresponding files
(build_rhs_operator.py, convert_to_voxels_operator.py, normalize_operator.py,
solver_operator.py) into their matching folders, add a metadata.json and
README.md into each folder, and move/create the corresponding tests (e.g.,
test_build_rhs_operator.py) alongside the implementation; ensure imports and any
relative module paths are updated to reflect the new package layout.

In
@applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py:
- Around lines 1-189: Add unit tests for ConvertToVoxelsOperator covering affine
computation, full-voxel expansion, HbO/HbR conversion and voxelization: create a
pytest file that imports ConvertToVoxelsOperator and the helper functions
_compute_affine, _convert_to_full_voxels, _voxelize_hbo, _compute_voxel_layout;
test _compute_affine with simple known ijk/xyz mappings (including degenerate
and single-voxel scenarios) and assert the 4x4 affine; test
_convert_to_full_voxels by passing a small trimmed array and
idxs_significant_voxels and asserting placement in a zeros full array; for
HbO/HbR conversion, either construct a deterministic HbO instance or mock
HbO.convert_mua_to_hb to return known arrays and verify cumulative accumulation
behavior in ConvertToVoxelsOperator (self._cum_hbo/_cum_hbr) after multiple
compute calls; finally test _voxelize_hbo/_compute_voxel_layout by supplying
VoxelMetadata-like ijk arrays and asserting scattered volume shape and voxel
values (including edge cases like single voxel and degenerate grids), using
pytest parametrization for variants.
- Around lines 63-88: The affine computation in _compute_affine() is fragile:
remove the fixed global seed usage of rng (or accept a seed parameter and
document it) so behavior is intentional; ensure out and B are only referenced
after a successful non-degenerate selection by assigning them inside the loop
and breaking immediately when a valid B is found (avoid binding empty arrays
outside the loop); when a degenerate set is detected, log a warning with the
chosen indices (use the existing rng/inds) before continuing so failures are
observable; after exhausting attempts, raise a RuntimeError that includes how
many attempts were made and why (e.g., "failed to find non-degenerate point set
after N attempts"); keep the existing entry and affine construction but verify B
is invertible (or check matrix rank) before computing D to avoid using det==0 as
the sole check.
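A sketch of the retry loop this prompt asks for, assuming four homogeneous points define the basis and that rng is a numpy Generator (names such as ijk and max_attempts are illustrative):

```python
import logging

import numpy as np

logger = logging.getLogger(__name__)


def pick_nondegenerate_basis(ijk, rng, max_attempts=100):
    """Draw 4-point candidates until the homogeneous basis has full rank."""
    for attempt in range(max_attempts):
        inds = rng.choice(len(ijk), size=4, replace=False)
        basis = np.hstack([ijk[inds], np.ones((4, 1))])  # 4x4 homogeneous basis
        if np.linalg.matrix_rank(basis) == 4:  # rank check, not just det != 0
            return inds, basis
        logger.warning("Degenerate point set on attempt %d: %s", attempt, inds)
    raise RuntimeError(
        f"Failed to find a non-degenerate point set after {max_attempts} attempts"
    )
```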

In
@applications/bci_visualization/operators/reconstruction/normalize_operator.py:
- Around lines 1-163: Add unit tests for NormalizeOperator exercising its
normalization logic and caching: write tests that instantiate NormalizeOperator
(use use_gpu=False) and feed BuildRHSOutput-like fixtures to _normalize_batch or
through compute to validate (1) hard-coded normalizer path using
HARD_CODED_NORMALIZERS and _get_hard_coded_row_normalizers yields expected
scaled jacobian and rhs; (2) template jacobian multiplication in
_get_template_jacobians applies background_T correctly; (3) caching behavior for
_jacobian_cache and _hard_coded_normalized_jacobian_cache (call twice and assert
caches are reused/unchanged); (4) edge cases: zero RHS returns None or is
skipped and empty/zero-sized batches handled without exception; and (5)
per-wavelength WavelengthSystem construction creates one WavelengthSystem per
wavelength with jacobian, rhs, background shapes matching expectations and
num_absorbers computed from data_jacobians. Use CPU numpy/cupy interoperability
by constructing batch.data_jacobians, data_rhs, model_optical_properties,
idxs_significant_voxels consistent with BuildRHSOutput, and assert types/values
in returned NormalizedSolveBatch and WavelengthSystem objects.

In @applications/bci_visualization/operators/stream.py:
- Around lines 1-50: Add pytest unit tests for the StreamOperator to satisfy
coding guidelines: create a test module that mocks a BaseNirsStream instance
(with get_channels(), start(), and stream_nirs() generator) and instantiates
StreamOperator(stream=mock_stream); verify setup() creates an output port named
"samples" by inspecting the OperatorSpec or using the operator's spec API, call
start() and assert mock_stream.start() was called and that self._channels was
set via get_channels(), then test compute() by driving the mocked stream_nirs()
to yield a numpy array and asserting op_output.emit was called with a
SampleOutput containing the same ndarray and channels; finally test
end-of-stream behavior by having stream_nirs() return None (or exhaust) and
asserting compute() raises StopIteration. Include tests for correct types
(SampleOutput) and use pytest fixtures/mocks for
op_input/op_output/ExecutionContext.
- Around lines 42-50: The compute method currently raises StopIteration when the
iterator self._iter is exhausted; instead catch StopIteration from
next(self._iter), call self.stop_execution(), and return without emitting so the
operator signals termination correctly; update the compute function (referencing
compute, self._iter, self.stop_execution, and op_output.emit(SampleOutput(...),
"samples")) to use a try/except around next(...) that on StopIteration invokes
self.stop_execution() and returns, otherwise proceeds to emit the SampleOutput.
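Concretely, the suggested compute() body amounts to the fragment below (SampleOutput field names are assumed from the surrounding prompts, not confirmed against the class definition):

```python
def compute(self, op_input, op_output, context):
    try:
        sample = next(self._iter)
    except StopIteration:
        # Iterator exhausted: signal completion instead of leaking the exception.
        self.stop_execution()
        return
    op_output.emit(SampleOutput(sample=sample, channels=self._channels), "samples")
```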

In
@applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py:
- Line 2: Update the SPDX copyright header string at the top of the file (the
SPDX-FileCopyrightText line) to include 2026 by changing the year range or
adding 2026 (e.g., "2025-2026") so the copyright covers the current year; modify
the existing header in voxel_stream_to_volume.py accordingly.
- Line 16: Add a pytest-based test module named test_voxel_stream_to_volume.py
alongside the operator that provides unit tests for the VoxelStreamToVolumeOp
class: include tests for operator initialization (valid/invalid params), port
setup verification (setup_ports), mask loading behavior (valid path, missing
file, invalid format via load_mask), first-frame affine handling
(process_first_frame or equivalent), voxel data validation (type and dimension
checks using validate_voxel), running statistics updates and normalization logic
(positive/negative/edge cases via update_stats/normalize), orientation
derivation from differing affine matrices (derive_orientation), resampling
correctness (resample), CUDA stream handling (handle_cuda_stream stubs/mocks),
and output emission (emit_volume/produce output); use pytest fixtures and mocks
for file I/O and CUDA/torch tensors where needed and assert expected exceptions,
return values, and emitted outputs.

In @applications/bci_visualization/streams/snirf.py:
- Around lines 63-109: In _get_channels, add robust validation and error handling
around SNIRF parsing: verify existence and shapes of sourcePos3D, detectorPos3D,
sourceLabels, detectorLabels and that data1 contains measurementList keys before
using them; wrap parsing of sourceLabel and detectorLabel (the decode() and
split("S"/"D") logic for sourceLabels/detectorLabels) in try/except to log and
raise a clear ValueError on malformed labels; check bounds before indexing
sourcePos3D/detectorPos3D and verify channel indices channel["sourceIndex"],
channel["detectorIndex"], channel["dataTypeIndex"], channel["wavelengthIndex"]
are present and within range, and similarly guard conversions (int(...)-1) to
avoid IndexError/TypeError, logging the problematic channel_key and re-raising a
descriptive exception when validation fails.
- Around lines 1-156: Add a pytest test module that exercises SNIRFStream: verify
FileNotFoundError when constructing with a missing path, create a minimal
in-memory/temp HDF5 SNIRF file (matching keys used in _get_channels:
nirs/probe/sourcePos3D, detectorPos3D, sourceLabels, detectorLabels,
momentOrders and nirs/data1 with measurementList* groups containing sourceIndex,
detectorIndex, dataTypeIndex, wavelengthIndex, plus dataTimeSeries and time) and
assert start() populates unique channels (via get_channels()) and stream_nirs()
yields correctly-shaped arrays; also add a test that deliberately
corrupts/misformats a required dataset to assert start()/stream_nirs() raises an
appropriate exception; use pytest fixtures (tmp_path) and h5py to build files
and reference SNIRFStream, _get_channels, start, get_channels, stream_nirs in
test names and assertions so coverage tools trace these symbols.
- Around lines 46-54: The start() method opens an HDF5 file into self._snirf_file
and prints a message but never closes the file and uses print() instead of the
logger; add a stop() or close() method that checks for self._snirf_file (and/or
hasattr(self, '_snirf_file')), calls its close() and sets it to None to release
resources, and replace the print("Got {} unique channels".format(...)) with the
configured logger (e.g., self.logger.info or a module logger) so that start()
logs the channel count instead of printing to stdout.
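Sketched together, with only the touched methods shown and the file-path attribute name assumed:

```python
import logging

import h5py

from .base_nirs import BaseNirsStream

logger = logging.getLogger(__name__)


class SNIRFStream(BaseNirsStream):
    def start(self):
        self._snirf_file = h5py.File(self._file_path, "r")  # attribute name assumed
        self._channels = self._get_channels()
        logger.info("Got %d unique channels", len(self._channels))

    def stop(self):
        """Release the HDF5 handle opened in start()."""
        if getattr(self, "_snirf_file", None) is not None:
            self._snirf_file.close()
            self._snirf_file = None
```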

In @applications/bci_visualization/utils/reconstruction/reg_inv.py:
- Around lines 113-116: Replace the assert statements that validate A and b with
explicit checks that raise informative exceptions (e.g., ValueError) including
the offending shapes/values: verify A is a 2D square array ((A.ndim == 2) and
(A.shape[0] == A.shape[1])) and raise if not; verify b has ndim 1 or 2 and
b.shape[0] == A.shape[0] and raise if not; verify cp.all(cp.isfinite(A)) and
cp.all(cp.isfinite(b)) and raise if not, including which array failed and
summary info (shapes, any non-finite count). Apply the same replacement pattern
to the validation at the later check referenced on line 123 so that no input
validation relies on Python assertions.
- Line 12: The global _HESSIAN_CACHE is not synchronized; import threading, add
a module-level _HESSIAN_CACHE_LOCK = threading.Lock(), and in
_build_regularized_system() wrap the cache read-check, compute, and write
sequence inside with _HESSIAN_CACHE_LOCK: so you check
_HESSIAN_CACHE.get(wavelength_idx) while holding the lock, return if present,
otherwise compute the Hessian and store it into _HESSIAN_CACHE[wavelength_idx]
before releasing the lock; keep the actual heavy GPU computation outside the
lock only if you restructure to a double-checked pattern (check under lock, if
miss release lock, compute, then re-acquire lock to check-and-set) to avoid long
lock holds.
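The locking pattern described above, sketched with a placeholder for the heavy GPU computation:

```python
import threading

_HESSIAN_CACHE = {}
_HESSIAN_CACHE_LOCK = threading.Lock()


def _get_or_build_hessian(wavelength_idx, build):
    """Double-checked caching: compute outside the lock, publish under it."""
    cached = _HESSIAN_CACHE.get(wavelength_idx)
    if cached is not None:
        return cached
    candidate = build(wavelength_idx)  # heavy GPU work runs without the lock
    with _HESSIAN_CACHE_LOCK:
        # setdefault keeps the first writer's result if two threads raced here
        return _HESSIAN_CACHE.setdefault(wavelength_idx, candidate)
```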
🟡 Minor comments (6)
applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py-91-112 (1)

91-112: Class naming convention violation and affine computation risk.

Two issues:

  1. The class is named ConvertToVoxelsOperator instead of ConvertToVoxelsOp as specified in coding guidelines.
  2. Line 104 computes the affine transformation during __init__, which means construction will fail if _compute_affine raises an exception. Consider lazy computation or better error context.
📝 Proposed fix for naming
-class ConvertToVoxelsOperator(Operator):
+class ConvertToVoxelsOp(Operator):

For affine computation, consider wrapping in a try-except to provide better error context:

try:
    self._affine = np.round(_compute_affine(xyz, ijk), 6)
except RuntimeError as e:
    raise RuntimeError(f"Failed to compute affine transformation for voxel grid: {e}") from e

As per coding guidelines.

applications/bci_visualization/operators/reconstruction/normalize_operator.py-27-45 (1)

27-45: Class naming convention violation.

The class is named NormalizeOperator instead of NormalizeOp as specified in the coding guidelines for operators.

📝 Proposed fix
-class NormalizeOperator(Operator):
+class NormalizeOp(Operator):
     """Apply Jacobian/RHS normalization before solver execution."""

Note: This will require updates in imports and references elsewhere in the codebase.

As per coding guidelines.

applications/bci_visualization/operators/stream.py-23-31 (1)

23-31: Class naming convention violation.

The coding guidelines specify that operator class names should use TitleCase with 'Op' suffix (e.g., 'AdaptiveThresholdingOp'). This class is named StreamOperator instead of StreamOp.

📝 Proposed fix
-class StreamOperator(Operator):
+class StreamOp(Operator):
     def __init__(
         self,
         stream: BaseNirsStream,

Note: This change will require updates in any code that imports or references StreamOperator.

As per coding guidelines.

applications/bci_visualization/bci_visualization.py-8-34 (1)

8-34: Fix import sorting and formatting.

The pipeline failure indicates that imports are incorrectly sorted and/or formatted. Per the coding guidelines, all code must pass linting checks using ./holohub lint before submission.

Please run the linting tool to automatically fix the import order:

#!/bin/bash
# Verify and show the expected import order
cd /path/to/holohub
./holohub lint --check applications/bci_visualization/bci_visualization.py
applications/bci_visualization/bci_visualization.py-1-6 (1)

1-6: Fix the copyright year to include 2026.

The copyright header shows 2025, but based on the pipeline failure and the current date (January 2026), the header should include 2026.

📅 Proposed fix
 """
-SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES.
+SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES.
 SPDX-License-Identifier: Apache-2.0
applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py-289-290 (1)

289-290: Fix comment to match actual logging frequency.

The comment states statistics are logged every 100 frames, but the code logs every 10 frames.

📝 Proposed fix
-        # Log statistics every 100 frames for debugging
+        # Log statistics every 10 frames for debugging
         if self.frame_count % 10 == 0:
🧹 Nitpick comments (16)
applications/bci_visualization/streams/snirf.py (3)

12-14: Consolidate redundant imports.

The module imports ChannelInfo from streams.base_nirs at line 12 and then imports from .base_nirs at line 14. This is redundant and can cause confusion.

♻️ Proposed fix
-from streams.base_nirs import ChannelInfo
-
-from .base_nirs import BaseNirsStream
+from .base_nirs import BaseNirsStream, ChannelInfo

44-44: Consider using a custom exception class.

The exception message is specified as a string literal. While this is acceptable, consider defining a custom exception class if this error pattern is common across the codebase.


145-156: Address static analysis warnings and prefer logging.

Line 145 uses print() instead of logger, and line 146 has an unused loop variable ts and missing strict= parameter in zip().

♻️ Proposed fixes
-        print("Streaming {} samples from SNIRF".format(len(data)))
-        for ts, sample in zip(times, data):
+        logger.info("Streaming %d samples from SNIRF", len(data))
+        for _ts, sample in zip(times, data, strict=True):
applications/bci_visualization/utils/reconstruction/reg_inv.py (2)

14-14: Remove or utilize unused constant.

MAX_REASONABLE_COND_RATIO is defined but never referenced in the code. Either remove it or add condition number validation if it was intended for numerical stability checks.

♻️ Proposed fix

If unused, remove:

-MAX_REASONABLE_COND_RATIO = 10

If intended for validation, add checks in _solve_square_system:

cond_number = cp.linalg.cond(A)
if cond_number > MAX_REASONABLE_COND_RATIO * 1e15:
    logger.warning(f"System is ill-conditioned: cond={cond_number:.2e}")

17-37: Inconsistent type hints in docstring vs signature.

The docstring describes parameters as NDArray[np.float32], but the function signature uses object as the type hint. This inconsistency makes the API unclear.

Consider either:

  1. Using proper type hints: NDArray[np.float32] (requires from numpy.typing import NDArray)
  2. Using cp.ndarray if these are always CuPy arrays
  3. Updating docstrings to match the generic object signature

The same issue applies to _build_regularized_system and _solve_square_system.
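Option 1 would look roughly like this; the parameter list is illustrative, not the function's actual signature:

```python
import numpy as np
from numpy.typing import NDArray

Float32Array = NDArray[np.float32]


def _solve_square_system(A: Float32Array, b: Float32Array) -> Float32Array:
    ...
```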

applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py (1)

140-141: Consider bounds checking or overflow protection for cumulative HbO/HbR.

The cumulative HbO and HbR values are accumulated indefinitely without bounds. Over long sessions, this could lead to numerical overflow or loss of precision.

Consider adding:

  • Periodic normalization or windowing of cumulative values
  • Overflow detection and warning
  • Optional reset mechanism

For example:

MAX_CUMULATIVE_VALUE = 1e6
if cp.max(cp.abs(self._cum_hbo)) > MAX_CUMULATIVE_VALUE:
    logger.warning("Cumulative HbO exceeds threshold, consider resetting")
applications/bci_visualization/operators/reconstruction/normalize_operator.py (2)

21-24: Document the hard-coded normalizer values.

The HARD_CODED_NORMALIZERS array contains magic numbers without explanation. Add a comment describing:

  • What these values represent (moment-specific scaling factors?)
  • Why these specific values were chosen
  • When to use hard-coded vs template-based normalization
📝 Suggested documentation
+# Per-feature (moment) normalization factors for each wavelength.
+# Derived from empirical analysis of typical NIRS signal magnitudes.
+# Index [wavelength_idx][feature_idx] = scaling factor
 HARD_CODED_NORMALIZERS = [  # for each feature type (moment)
-    np.array([1, 5e2, 5e5]),
-    np.array([0.5, 2.5e2, 2.5e5]),
+    np.array([1, 5e2, 5e5]),       # Wavelength 0: moment 0, 1, 2
+    np.array([0.5, 2.5e2, 2.5e5]),  # Wavelength 1: moment 0, 1, 2
 ]

102-107: Add explanatory comment for divmod validation.

Line 106 uses divmod to validate a relationship between columns and significant voxels, but the purpose isn't immediately clear. Add a comment explaining what num_absorbers represents and why the remainder must be zero.

📝 Proposed comment
+        # Jacobian columns represent (num_absorbers * num_significant_voxels).
+        # Extract num_absorbers by dividing total columns by voxels.
         num_absorbers, remainder = divmod(num_cols, num_significant)
-        assert not remainder
+        assert not remainder, f"Jacobian columns {num_cols} not evenly divisible by voxels {num_significant}"
applications/bci_visualization/operators/reconstruction/solver_operator.py (2)

25-34: Consider removing the unused use_gpu parameter.

The use_gpu parameter is stored but never referenced in the implementation. Line 58's comment states "GPU-only: always use CuPy", which makes this parameter misleading. Either remove it or document why it exists for future extensibility.

♻️ Proposed fix
 def __init__(
     self,
     *,
     reg: float = REG_DEFAULT,
-    use_gpu: bool = False,
     fragment: Any | None = None,
 ) -> None:
     super().__init__(fragment, name=self.__class__.__name__)
     self._reg = reg
-    self._use_gpu = use_gpu

53-94: Consider replacing assertions with explicit error handling.

The assertions at lines 64, 65, and 76 validate critical shape constraints but can be disabled with Python's -O flag. For production robustness, consider explicit if checks with informative error messages.

♻️ Example refactor for explicit validation
-            assert system.rhs.ndim == 1
+            if system.rhs.ndim != 1:
+                raise ValueError(f"Expected 1D RHS array, got shape {system.rhs.shape}")
-            assert system.jacobian.shape[1] == num_cols_expected
+            if system.jacobian.shape[1] != num_cols_expected:
+                raise ValueError(f"Jacobian column mismatch: expected {num_cols_expected}, got {system.jacobian.shape[1]}")
applications/bci_visualization/utils/reconstruction/assets.py (2)

68-71: Use "Raises" instead of "Throws" in docstring.

Python convention uses "Raises" for exception documentation, not "Throws".

♻️ Proposed fix
-    Throws
+    Raises
     ------
     FileNotFoundError
         If any of the specified files are not found.

99-105: Remove redundant pathlib.Path() wrapping.

Lines 91-97 already construct Path objects, so wrapping them again in pathlib.Path() is redundant.

♻️ Proposed fix
-    _mua = np.load(pathlib.Path(mua_path))
-    _musp = np.load(pathlib.Path(musp_path))
-    _idxs_significant_voxels = np.load(pathlib.Path(idxs_significant_voxels_path))
-    _ijk = np.load(pathlib.Path(ijk_path))
-    _xyz = np.load(pathlib.Path(xyz_path))
-    _wavelengths = np.load(pathlib.Path(wavelengths_path))
-    _resolution = tuple(np.load(pathlib.Path(resolution_path)).tolist())
+    _mua = np.load(mua_path)
+    _musp = np.load(musp_path)
+    _idxs_significant_voxels = np.load(idxs_significant_voxels_path)
+    _ijk = np.load(ijk_path)
+    _xyz = np.load(xyz_path)
+    _wavelengths = np.load(wavelengths_path)
+    _resolution = tuple(np.load(resolution_path).tolist())
applications/bci_visualization/utils/reconstruction/hbo.py (1)

61-67: Consider documenting or removing the unused use_gpu parameter.

Similar to RegularizedSolverOperator, the use_gpu parameter is stored but the implementation always uses CuPy (GPU). Either document that it's reserved for future CPU fallback or remove it for clarity.

applications/bci_visualization/operators/reconstruction/build_rhs_operator.py (1)

128-131: Replace assertion with explicit error handling.

Using assert for runtime validation is problematic because assertions can be disabled with Python's -O optimization flag. For critical dimension validation, use an explicit conditional with a descriptive error message.

🔧 Proposed fix
         # Validate that jacobian features dimension matches realtime moments
         # 5D jacobian shape: (channels, features, wavelengths, voxels, simulation_types)
         num_features = realtime_moments.shape[0]
-        assert self._mega_jacobians_cpu.shape[1] == num_features
+        if self._mega_jacobians_cpu.shape[1] != num_features:
+            raise ValueError(
+                f"Jacobian features dimension ({self._mega_jacobians_cpu.shape[1]}) "
+                f"does not match moment features ({num_features})"
+            )
applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py (2)

109-109: Consider prefixing unused parameter with underscore.

The context parameter is required by the Holoscan operator interface but unused in this implementation. Consider prefixing it with an underscore (_context) to indicate it's intentionally unused and suppress the linter warning.

♻️ Proposed fix
-    def compute(self, op_input, op_output, context):
+    def compute(self, op_input, op_output, _context):

83-86: Replace print statements with logging.

For production code, use Python's logging module instead of print() statements to enable proper log level control and observability. The operator should create a logger instance at module level.

📊 Proposed fix

Add logger at the top of the file after imports:

 from nibabel.orientations import aff2axcodes
+
+import logging
+
+logger = logging.getLogger(__name__)

Then replace print statements with appropriate log levels:

-            print(
-                f"VoxelStreamToVolume: Loaded mask from {self.mask_nifti_path}, "
-                f"shape: {self.mask_voxel_raw.shape}, values: {np.unique(self.mask_voxel_raw)}"
-            )
+            logger.info(
+                "Loaded mask from %s, shape: %s, values: %s",
+                self.mask_nifti_path, self.mask_voxel_raw.shape, np.unique(self.mask_voxel_raw)
+            )

Apply similar changes to lines 132, 193, 275-278, 291-295, and 306.

Also applies to: 132-132, 193-193, 275-278, 291-295, 306-306

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c0e1e86 and 88286b5.

📒 Files selected for processing (17)
  • applications/bci_visualization/bci_visualization.py
  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
  • applications/bci_visualization/operators/reconstruction/__init__.py
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
  • applications/bci_visualization/operators/reconstruction/solver_operator.py
  • applications/bci_visualization/operators/stream.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/__init__.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/streams/__init__.py
  • applications/bci_visualization/streams/snirf.py
  • applications/bci_visualization/utils/__init__.py
  • applications/bci_visualization/utils/reconstruction/__init__.py
  • applications/bci_visualization/utils/reconstruction/assets.py
  • applications/bci_visualization/utils/reconstruction/hbo.py
  • applications/bci_visualization/utils/reconstruction/reg_inv.py
✅ Files skipped from review due to trivial changes (1)
  • applications/bci_visualization/streams/__init__.py
🚧 Files skipped from review as they are similar to previous changes (4)
  • applications/bci_visualization/utils/__init__.py
  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
  • applications/bci_visualization/utils/reconstruction/__init__.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/__init__.py
🧰 Additional context used
📓 Path-based instructions (4)
**/operators/**/*.{py,cpp,hpp}

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Class names for operators should use TitleCase with 'Op' suffix (e.g., 'AdaptiveThresholdingOp')

Files:

  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/stream.py
  • applications/bci_visualization/operators/reconstruction/__init__.py
  • applications/bci_visualization/operators/reconstruction/solver_operator.py
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
**/operators/**/*.{py,cpp}

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Operator filename should match the directory name with appropriate extension (e.g., 'adaptive_thresholding.py')

Files:

  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/stream.py
  • applications/bci_visualization/operators/reconstruction/__init__.py
  • applications/bci_visualization/operators/reconstruction/solver_operator.py
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
**/operators/**/*.py

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Python operators must include unit tests in the same directory using pytest framework

Files:

  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/stream.py
  • applications/bci_visualization/operators/reconstruction/__init__.py
  • applications/bci_visualization/operators/reconstruction/solver_operator.py
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
**/*.{py,cpp,hpp}

📄 CodeRabbit inference engine (CONTRIBUTING.md)

**/*.{py,cpp,hpp}: Code must adhere to Holoscan SDK coding standards including style compliance, descriptive naming, and minimal abbreviations
Code must include appropriate error handling and validation for all operations
All code must pass linting checks using './holohub lint' before submission
All code contributions must use clear, English descriptive names for functions and variables
Code must include inline comments for complex logic and clear documentation of functionality

Files:

  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/utils/reconstruction/assets.py
  • applications/bci_visualization/operators/stream.py
  • applications/bci_visualization/streams/snirf.py
  • applications/bci_visualization/utils/reconstruction/reg_inv.py
  • applications/bci_visualization/operators/reconstruction/__init__.py
  • applications/bci_visualization/operators/reconstruction/solver_operator.py
  • applications/bci_visualization/bci_visualization.py
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/utils/reconstruction/hbo.py
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
🧠 Learnings (8)
📚 Learning: 2025-11-19T04:58:30.227Z
Learnt from: finalelement
Repo: nvidia-holoscan/holohub PR: 1237
File: applications/surgical_scene_recon/training/utils/sh_utils.py:1-23
Timestamp: 2025-11-19T04:58:30.227Z
Learning: When reviewing files that contain third-party code with original copyright notices (e.g., from PlenOctree, other open-source projects), do not suggest modifying the original copyright years or attribution. The original copyright notice must be preserved exactly as required by redistribution licenses (BSD, MIT, Apache, etc.). If substantial modifications are made by the current project, a separate copyright line can be added, but the original must remain unchanged.

Applied to files:

  • applications/bci_visualization/operators/reconstruction/__init__.py
  • applications/bci_visualization/bci_visualization.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/(@(operators|applications|workflows))/**/metadata.json : Operators, applications, and workflows should use the '<holohub_data_dir>' placeholder in metadata.json 'run' command for referencing data directory

Applied to files:

  • applications/bci_visualization/bci_visualization.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/*.py : Python operators must include unit tests in the same directory using pytest framework

Applied to files:

  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/test_*.py : Unit test files for Python operators should use 'test_' prefix followed by the operator name (e.g., 'test_adaptive_thresholding.py')

Applied to files:

  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/test_*.py : Python operator test files should reuse common fixtures from 'conftest.py'

Applied to files:

  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/*.{py,cpp} : Operator filename should match the directory name with appropriate extension (e.g., 'adaptive_thresholding.py')

Applied to files:

  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/test_*.py : Python test files must use 'pytest' framework and include tests for initialization, port setup, error handling, compute logic, and edge cases

Applied to files:

  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/*.cpp : Operators with C++ implementations must include CMakeLists.txt for build system integration

Applied to files:

  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
🧬 Code graph analysis (8)
applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py (2)
applications/bci_visualization/utils/reconstruction/hbo.py (3)
  • ExtinctionCoefficient (15-58)
  • HbO (61-141)
  • convert_mua_to_hb (98-141)
applications/bci_visualization/operators/reconstruction/types.py (2)
  • SolverResult (52-58)
  • VoxelMetadata (16-19)
applications/bci_visualization/operators/stream.py (2)
applications/bci_visualization/streams/base_nirs.py (5)
  • BaseNirsStream (23-33)
  • ChannelInfo (13-20)
  • start (24-25)
  • get_channels (28-29)
  • stream_nirs (32-33)
applications/bci_visualization/streams/snirf.py (3)
  • start (46-53)
  • get_channels (55-61)
  • stream_nirs (111-156)
applications/bci_visualization/streams/snirf.py (2)
applications/bci_visualization/streams/base_nirs.py (5)
  • ChannelInfo (13-20)
  • BaseNirsStream (23-33)
  • start (24-25)
  • get_channels (28-29)
  • stream_nirs (32-33)
applications/bci_visualization/operators/stream.py (1)
  • start (37-40)
applications/bci_visualization/operators/reconstruction/solver_operator.py (2)
applications/bci_visualization/utils/reconstruction/reg_inv.py (1)
  • solve_regularized_system (17-51)
applications/bci_visualization/operators/reconstruction/types.py (2)
  • NormalizedSolveBatch (42-48)
  • SolverResult (52-58)
applications/bci_visualization/bci_visualization.py (5)
applications/bci_visualization/operators/reconstruction/build_rhs_operator.py (1)
  • BuildRHSOperator (22-187)
applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py (1)
  • ConvertToVoxelsOperator (91-189)
applications/bci_visualization/operators/reconstruction/normalize_operator.py (1)
  • NormalizeOperator (27-163)
applications/bci_visualization/operators/stream.py (1)
  • StreamOperator (23-50)
applications/bci_visualization/streams/snirf.py (1)
  • SNIRFStream (35-156)
applications/bci_visualization/operators/reconstruction/build_rhs_operator.py (3)
applications/bci_visualization/utils/reconstruction/assets.py (1)
  • Assets (22-43)
applications/bci_visualization/operators/stream.py (1)
  • SampleOutput (18-20)
applications/bci_visualization/operators/reconstruction/types.py (2)
  • BuildRHSOutput (23-31)
  • VoxelMetadata (16-19)
applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py (3)
operators/medical_imaging/stl_conversion_operator/stl_conversion_operator.py (1)
  • affine (353-355)
operators/volume_renderer/volume_renderer.cpp (5)
  • setup (460-544)
  • setup (460-460)
  • input (228-228)
  • compute (546-903)
  • compute (546-547)
applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py (2)
  • setup (113-116)
  • compute (118-147)
applications/bci_visualization/operators/reconstruction/normalize_operator.py (1)
applications/bci_visualization/operators/reconstruction/types.py (3)
  • BuildRHSOutput (23-31)
  • NormalizedSolveBatch (42-48)
  • WavelengthSystem (35-38)
🪛 GitHub Actions: Check Compliance
applications/bci_visualization/bci_visualization.py

[error] 1-1: Copyright header incomplete: current year not included in the header.

applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py

[error] 1-1: Copyright header incomplete: current year not included in the header.

🪛 GitHub Actions: Check linting
applications/bci_visualization/utils/reconstruction/reg_inv.py

[error] 1-1: Imports are incorrectly sorted and/or formatted.

applications/bci_visualization/bci_visualization.py

[error] 1-1: Imports are incorrectly sorted and/or formatted.

🪛 Ruff (0.14.10)
applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py

78-78: Avoid specifying long messages outside the exception class

(TRY003)


122-122: Unused method argument: context

(ARG002)

applications/bci_visualization/operators/stream.py

43-43: Unused method argument: op_input

(ARG002)


43-43: Unused method argument: context

(ARG002)


48-48: Avoid specifying long messages outside the exception class

(TRY003)

applications/bci_visualization/streams/snirf.py

44-44: Avoid specifying long messages outside the exception class

(TRY003)


146-146: Loop control variable ts not used within loop body

Rename unused ts to _ts

(B007)


146-146: zip() without an explicit strict= parameter

Add explicit value for parameter strict=

(B905)

applications/bci_visualization/operators/reconstruction/__init__.py

12-21: __all__ is not sorted

Apply an isort-style sorting to __all__

(RUF022)

applications/bci_visualization/operators/reconstruction/solver_operator.py

44-44: Unused method argument: context

(ARG002)

applications/bci_visualization/operators/reconstruction/build_rhs_operator.py

86-89: Avoid specifying long messages outside the exception class

(TRY003)


92-92: Avoid specifying long messages outside the exception class

(TRY003)

applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py

74-74: Avoid specifying long messages outside the exception class

(TRY003)


88-90: Avoid specifying long messages outside the exception class

(TRY003)


109-109: Unused method argument: context

(ARG002)


116-118: Avoid specifying long messages outside the exception class

(TRY003)


120-122: Avoid specifying long messages outside the exception class

(TRY003)


136-136: Avoid specifying long messages outside the exception class

(TRY003)


238-240: Avoid specifying long messages outside the exception class

(TRY003)

applications/bci_visualization/utils/reconstruction/hbo.py

34-34: Value being cast to int is already an integer

Remove unnecessary int call

(RUF046)


90-90: Value being rounded is already an integer

Remove unnecessary round call

(RUF057)


92-94: Avoid specifying long messages outside the exception class

(TRY003)

applications/bci_visualization/operators/reconstruction/normalize_operator.py

54-54: Unused method argument: context

(ARG002)


58-58: Avoid specifying long messages outside the exception class

(TRY003)

🔇 Additional comments (22)
applications/bci_visualization/operators/stream.py (1)

43-43: Unused parameters are false positives.

The static analysis tool flags op_input and context as unused, but these are required by the Holoscan Operator.compute() interface signature. This is a false positive and can be safely ignored.
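For reference, the fixed signature looks like the sketch below; the operator class and port names are hypothetical, but the three-parameter compute() contract is the one the Holoscan Python API imposes.

from holoscan.core import ExecutionContext, InputContext, Operator, OperatorSpec, OutputContext


class PassthroughOp(Operator):  # illustrative only, not part of this PR
    def setup(self, spec: OperatorSpec) -> None:
        spec.input("in")
        spec.output("out")

    def compute(
        self, op_input: InputContext, op_output: OutputContext, context: ExecutionContext
    ) -> None:
        # All three parameters must be accepted even when a given operator
        # only uses some of them, which is what trips the ARG002 warnings.
        op_output.emit(op_input.receive("in"), "out")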

applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py (1)

122-122: Unused context parameter is a false positive.

The context parameter is flagged as unused, but it's required by the Holoscan Operator.compute() interface signature. This can be safely ignored.

applications/bci_visualization/operators/reconstruction/normalize_operator.py (1)

54-54: Unused context parameter is a false positive.

The context parameter is flagged as unused, but it's required by the Holoscan Operator.compute() interface. This can be safely ignored.

applications/bci_visualization/utils/reconstruction/reg_inv.py (1)

6-7: Fix import ordering to pass linting.

The pipeline failure indicates that imports are incorrectly sorted. Standard library imports should come before third-party imports.

🔧 Proposed fix
-import logging
 import cupy as cp
+import logging

This should resolve the linting error reported by the pipeline.

Likely an incorrect or invalid review comment.

applications/bci_visualization/operators/reconstruction/solver_operator.py (3)

1-23: LGTM!

The copyright header, imports, and class definition are correctly structured. The class constant REG_DEFAULT provides a sensible default regularization parameter.


36-38: LGTM!

Standard Holoscan operator setup with correctly declared input and output ports.


40-51: LGTM!

The CUDA stream handling is correct, using ExternalStream to properly synchronize GPU operations. The context parameter is unused but required by the Operator interface.

applications/bci_visualization/operators/reconstruction/__init__.py (1)

1-10: LGTM!

The copyright header and imports are correctly structured. The third-party copyright from Kernel is appropriately preserved.

Based on learnings, third-party copyright notices should be preserved exactly as required by redistribution licenses.

applications/bci_visualization/utils/reconstruction/assets.py (2)

1-46: LGTM!

The Assets dataclass is well-designed with:

  • Immutability via frozen=True
  • Comprehensive field documentation
  • Proper type annotations
  • Singleton pattern for efficient caching

123-139: LGTM!

The singleton pattern for asset caching is correctly implemented and appropriate for this use case.
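The asset module itself isn't quoted in this thread, but the approved pattern reduces to something like the sketch below; get_assets is the name imported elsewhere in this PR, while the loader body is a placeholder.

from functools import lru_cache


@lru_cache(maxsize=1)
def get_assets():
    # First call loads the heavy reconstruction assets (Jacobian, voxel grid,
    # extinction tables); every later call returns the same frozen instance.
    return _load_assets_from_disk()  # hypothetical loader, not the real code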

applications/bci_visualization/bci_visualization.py (4)

36-72: LGTM!

The initialization properly stores all configuration parameters with appropriate types and defaults.


74-121: LGTM!

The resource allocation and reconstruction pipeline setup are correctly structured. The asset loading in compose() ensures single initialization, and all operators are properly configured with the fragment parameter.


122-233: LGTM!

The visualization pipeline is well-designed with:

  • Proper deadlock avoidance using ConditionType.NONE and passthrough operator
  • Clear documentation of the multi-threading considerations
  • Correct operator flow connections including camera pose feedback

The comments clearly explain the architectural decisions.
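For readers unfamiliar with the idiom, the non-blocking feedback input reduces to a sketch like this (operator and port names invented for illustration):

from holoscan.core import ConditionType, Operator, OperatorSpec


class PoseConsumerOp(Operator):  # illustrative only
    def setup(self, spec: OperatorSpec) -> None:
        # ConditionType.NONE lets compute() run even when no pose message is
        # queued, so the feedback edge can never stall the sender.
        spec.input("camera_pose").condition(ConditionType.NONE)

    def compute(self, op_input, op_output, context):
        pose = op_input.receive("camera_pose")  # None when nothing has arrived
        if pose is not None:
            ...  # cache and use the most recent pose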


235-292: LGTM!

The main function properly:

  • Parses command-line arguments
  • Configures data paths with environment variable fallback
  • Initializes the application with appropriate parameters
  • Configures the scheduler for multi-threaded execution with deadlock detection

The use of HOLOSCAN_INPUT_PATH with a sensible default is appropriate for application code.

applications/bci_visualization/utils/reconstruction/hbo.py (4)

1-28: LGTM!

The ExtinctionCoefficient NamedTuple is well-defined with clear field names and documentation. The Kernel copyright is appropriately preserved.


29-49: LGTM!

The CSV parsing logic correctly handles scientific notation in wavelength values. The int(round(float(value))) chain is necessary because round() returns a float, contrary to the static analysis hint.


69-96: LGTM!

The wavelength lookup logic is correct with appropriate error handling. The round() call at line 90 is defensive coding for robustness, even though the parameter is typed as int.


98-141: LGTM!

The HbO/HbR conversion logic is well-implemented with:

  • Efficient lazy caching of extinction coefficients
  • Correct matrix operations for voxel space conversion
  • Proper use of cp.linalg.solve for the linear system

The assertion at line 138 validates internal invariants and is acceptable for this context.
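As a worked illustration of that linear system, the per-voxel conversion solves a small extinction-coefficient system; the numbers below are placeholders, not Kernel's calibration values.

import cupy as cp

# Rows: two wavelengths; columns: (HbO, HbR) extinction coefficients.
extinction = cp.asarray([[0.35, 2.10],
                         [1.06, 0.78]], dtype=cp.float32)

# Reconstructed absorption changes (delta mu_a) at the two wavelengths
delta_mua = cp.asarray([0.012, 0.020], dtype=cp.float32)

# Solve extinction @ [dHbO, dHbR] = delta_mua, the same shape of problem
# convert_mua_to_hb hands to cp.linalg.solve for every voxel
delta_hb = cp.linalg.solve(extinction, delta_mua)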

applications/bci_visualization/operators/reconstruction/build_rhs_operator.py (1)

95-187: LGTM: Well-structured compute pipeline with proper CUDA stream handling.

The compute method correctly:

  • Creates and propagates CUDA streams for downstream synchronization
  • Implements lazy GPU loading to defer large asset transfers
  • Handles first-frame baseline capture with appropriate early return
  • Performs channel mapping with error handling
  • Caches transformed Jacobian data for reuse
  • Sanitizes invalid data (NaN/Inf) in-place on GPU
  • Emits properly structured BuildRHSOutput with all required fields
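The stream-handling part of that flow boils down to an idiom like the following sketch (port names and the GPU work function are placeholders; receive_cuda_stream and cp.cuda.ExternalStream are the calls used throughout this PR):

import cupy as cp

def compute(self, op_input, op_output, context):
    sample = op_input.receive("sample")
    stream_ptr = op_input.receive_cuda_stream("sample")
    with cp.cuda.ExternalStream(stream_ptr):
        rhs = build_rhs_on_gpu(sample)  # placeholder for the Jacobian/RHS math
    # Downstream operators pick the same stream back up via receive_cuda_stream
    op_output.emit(rhs, "rhs")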
applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py (3)

16-29: LGTM: Class naming follows conventions.

The class name VoxelStreamToVolumeOp correctly follows the TitleCase with 'Op' suffix convention, and the docstring clearly describes inputs and outputs.

As per coding guidelines, operator class names should use TitleCase with 'Op' suffix.


176-251: LGTM: Comprehensive orientation derivation logic.

The _derive_orientation_from_affine method correctly:

  • Extracts voxel spacing from the affine matrix
  • Parses nibabel orientation codes to determine axis assignments
  • Identifies flip requirements based on anatomical directions (R/L, A/P, S/I)
  • Uses sentinel values (4) to validate all axes are assigned
  • Maps data spacing to world coordinates

The explicit parsing approach makes the logic clear and maintainable.
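A condensed sketch of the same derivation using nibabel's helpers (toy affine, illustrative only):

import nibabel as nib
import numpy as np

affine = np.diag([2.0, 2.0, 2.0, 1.0])  # toy RAS affine, 2 mm isotropic

# Voxel spacing: column norms of the 3x3 rotation/scale block
spacing = np.sqrt((affine[:3, :3] ** 2).sum(axis=0))

# Orientation codes, e.g. ('R', 'A', 'S'), drive axis order and flip decisions
axcodes = nib.aff2axcodes(affine)
print(spacing, axcodes)  # [2. 2. 2.] ('R', 'A', 'S')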


109-174: Well-structured compute pipeline with proper stream handling.

The compute method correctly:

  • Validates input types and shapes with clear error messages
  • Handles first-frame affine setup
  • Executes all GPU operations within the proper CUDA stream context
  • Updates running statistics using exponential moving average
  • Applies normalization with visualization scaling
  • Performs GPU-based resampling to match mask geometry
  • Emits both density and mask volumes with complete spatial metadata
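The running-statistics bullet follows the usual exponential-moving-average recurrence; with a smoothing factor alpha (the value below is a stand-in, not the operator's actual parameter), each frame updates:

alpha = 0.05  # hypothetical smoothing factor
global_max = (1.0 - alpha) * global_max + alpha * current_max
global_min = (1.0 - alpha) * global_min + alpha * current_min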


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Examine the actual directory structure of operators in this project

echo "=== Checking the specific file in question ==="
ls -la applications/bci_visualization/operators/reconstruction/ 2>/dev/null | head -20

echo -e "\n=== Checking full directory structure under reconstruction ==="
find applications/bci_visualization/operators/reconstruction/ -type f -name "*.py" 2>/dev/null | head -20

echo -e "\n=== Checking if reconstruction is a directory or if operators are nested deeper ==="
find applications/bci_visualization/operators/reconstruction/ -maxdepth 2 -type d 2>/dev/null

echo -e "\n=== Comparing with other operator patterns in the codebase ==="
find applications/ -path "*operators*" -type d -name "*operator" 2>/dev/null | head -10

echo -e "\n=== Checking for similar flat structures (multiple operators in one directory) ==="
find applications/ -path "*operators/*" -type f -name "*_operator.py" 2>/dev/null | head -20

echo -e "\n=== Looking at CONTRIBUTING.md for guidance ==="
grep -A 5 -B 5 "operator" CONTRIBUTING.md 2>/dev/null | head -40

Repository: nvidia-holoscan/holohub

Length of output: 3532


🏁 Script executed:

#!/bin/bash
# Search for more detailed operator structure guidance

echo "=== Searching CONTRIBUTING.md for operator directory/structure guidance ==="
grep -i "operator" CONTRIBUTING.md | grep -i "director\|struct\|organizat" | head -20

echo -e "\n=== Looking for operator examples or templates in CONTRIBUTING.md ==="
grep -A 10 -B 2 "operators" CONTRIBUTING.md | head -60

echo -e "\n=== Searching for any directory structure diagrams or examples ==="
grep -A 30 "Directory Structure\|Example\|Layout" CONTRIBUTING.md | head -60

echo -e "\n=== Check if there are other operator groupings in applications ==="
find applications/ -path "*/operators/*" -type d -maxdepth 2 | head -20

echo -e "\n=== Check for any nested operator directories in applications ==="
find applications/ -path "*/operators/*/*" -type d | head -20

Repository: nvidia-holoscan/holohub

Length of output: 5730


Each operator must have its own dedicated subdirectory matching its filename.

The coding guidelines and CONTRIBUTING.md explicitly specify that operator files should be organized with each operator in its own directory, where the filename matches the directory name. Currently, multiple operators (build_rhs_operator.py, convert_to_voxels_operator.py, normalize_operator.py, solver_operator.py) are in a flat structure under operators/reconstruction/.

Restructure to follow the documented pattern:

operators/reconstruction/build_rhs_operator/
├── metadata.json
├── README.md
├── build_rhs_operator.py
├── test_build_rhs_operator.py
└── [additional files]

Apply the same structure for each additional operator in the reconstruction module.

🤖 Prompt for AI Agents
In
@applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
at line 1, The operators currently in operators/reconstruction must be
reorganized so each operator has its own subdirectory named for the operator
file; create directories build_rhs_operator/, convert_to_voxels_operator/,
normalize_operator/, solver_operator/ and move the corresponding files
(build_rhs_operator.py, convert_to_voxels_operator.py, normalize_operator.py,
solver_operator.py) into their matching folders, add a metadata.json and
README.md into each folder, and move/create the corresponding tests (e.g.,
test_build_rhs_operator.py) alongside the implementation; ensure imports and any
relative module paths are updated to reflect the new package layout.

logger = logging.getLogger(__name__)


class BuildRHSOperator(Operator):

⚠️ Potential issue | 🟠 Major

Add required unit tests for the operator.

Python operators must include unit tests in the same directory using the pytest framework. No test file was found for this operator.

Based on learnings, Python operators must include unit tests in the same directory using pytest framework.

📋 Expected test structure

Create test_build_rhs_operator.py (or test_build_rhs_op.py after class rename) in the same directory with tests covering:

  • Operator initialization with valid/invalid assets
  • Port setup verification
  • Channel mapping with valid/invalid optode orders
  • Baseline handling (first frame vs subsequent frames)
  • GPU memory allocation and lazy loading
  • Jacobian cache building
  • Invalid data handling (NaN/Inf)
  • CUDA stream propagation
  • Edge cases (empty channels, mismatched dimensions)

Do you want me to generate a skeleton test file following pytest conventions?
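One possible starting point, with import paths and fixtures that are guesses and would need adjusting to the final package layout:

# test_build_rhs_operator.py -- skeleton only
import pytest

from build_rhs_operator import BuildRHSOperator


@pytest.fixture
def assets():
    pytest.skip("TODO: construct or mock a small Assets bundle")


def test_initialization_requires_assets():
    # assets is keyword-only with no default, so omitting it must fail
    with pytest.raises(TypeError):
        BuildRHSOperator()


def test_first_frame_captures_baseline(assets):
    ...  # feed one frame, assert nothing is emitted and the baseline is stored


def test_nan_inputs_are_sanitized(assets):
    ...  # inject NaN/Inf moments, assert the emitted RHS is finite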

Comment on lines +22 to +31
class BuildRHSOperator(Operator):
    """Convert realtime moments tensors into trimmed Right-Hand Side (RHS)/Jacobian batches."""

    def __init__(
        self,
        *,
        assets: Assets,
        fragment: Any | None = None,
    ) -> None:
        super().__init__(fragment, name=self.__class__.__name__)

⚠️ Potential issue | 🟠 Major

Rename class to follow operator naming convention.

The class name should use 'Op' suffix instead of 'Operator' to align with Holoscan SDK coding standards.

As per coding guidelines, operator class names should use TitleCase with 'Op' suffix.

📝 Proposed fix
-class BuildRHSOperator(Operator):
+class BuildRHSOp(Operator):
     """Convert realtime moments tensors into trimmed Right-Hand Side (RHS)/Jacobian batches."""
 
     def __init__(

Also update the super().__init__ call to reference the correct class name:

-        super().__init__(fragment, name=self.__class__.__name__)
+        super().__init__(fragment, name=self.__class__.__name__)

(The __class__.__name__ will automatically pick up the new name, so no change needed here, but verify all references are updated.)

🤖 Prompt for AI Agents
In
@applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
around lines 22 - 31, The class BuildRHSOperator should be renamed to BuildRHSOp
to follow the SDK naming convention; update the class declaration from
BuildRHSOperator to BuildRHSOp and search/replace any references to
BuildRHSOperator (imports, instantiations, tests) to use BuildRHSOp instead;
leave the super().__init__(fragment, name=self.__class__.__name__) as-is (it
will reflect the new name), and run the test/linters to ensure no remaining
references to the old class name remain.

Comment on lines +63 to +88
    rng = np.random.default_rng(0)

    n = 4
    ctr = 0
    out: NDArray[np.float32] = np.array([])  # bind outside loop
    B: NDArray[np.float32] = np.array([])  # bind outside loop
    while ctr < 100:
        ctr += 1
        inds = rng.choice(np.arange(len(ijk)), size=n, replace=False)
        ins = ijk[np.array(inds), :]  # <- points
        out = xyz[np.array(inds), :]  # <- mapped to
        B = np.vstack([np.transpose(ins), np.ones(n, dtype=np.float32)])
        if np.linalg.det(B) == 0:
            continue
    if np.linalg.det(B) == 0:
        raise RuntimeError("Cannot compute affine, algorithm failed after 100 attempts")
    D = 1.0 / np.linalg.det(B)

    def entry(r, d):
        return np.linalg.det(np.delete(np.vstack([r, B]), (d + 1), axis=0))

    M = [[(-1) ** i * D * entry(R, i) for i in range(n)] for R in np.transpose(out)]

    affine = np.concatenate((M, np.array([0, 0, 0, 1]).reshape(1, -1)), axis=0)
    assert affine.shape == (4, 4)
    return affine

⚠️ Potential issue | 🟠 Major

Improve affine computation robustness and clarity.

Several concerns with _compute_affine():

  1. Lines 67-68: out and B are initialized as empty arrays but used outside the loop without guaranteed assignment if all iterations have det=0.
  2. Line 75: The continue silently skips degenerate point sets without logging.
  3. Lines 77-78: Error message could be more descriptive about why it failed.
  4. Line 63: Random seed is fixed (0), but there's no docstring explaining why this deterministic behavior is needed.
🔧 Proposed improvements
     rng = np.random.default_rng(0)
+    # Fixed seed ensures deterministic affine computation across runs

     n = 4
     ctr = 0
-    out: NDArray[np.float32] = np.array([])  # bind outside loop
-    B: NDArray[np.float32] = np.array([])  # bind outside loop
+    out: NDArray[np.float32] | None = None
+    B: NDArray[np.float32] | None = None
     while ctr < 100:
         ctr += 1
         inds = rng.choice(np.arange(len(ijk)), size=n, replace=False)
         ins = ijk[np.array(inds), :]
         out = xyz[np.array(inds), :]
         B = np.vstack([np.transpose(ins), np.ones(n, dtype=np.float32)])
         if np.linalg.det(B) == 0:
+            logger.debug(f"Degenerate point set at iteration {ctr}, retrying")
             continue
+        break
-    if np.linalg.det(B) == 0:
-        raise RuntimeError("Cannot compute affine, algorithm failed after 100 attempts")
+    if B is None or np.linalg.det(B) == 0:
+        raise RuntimeError(
+            f"Cannot compute affine: all {ctr} sampled point sets were degenerate. "
+            f"Voxel grid may be degenerate or contain too few unique points."
+        )
+
+    assert out is not None  # type narrowing
     D = 1.0 / np.linalg.det(B)

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 Ruff (0.14.10)

78-78: Avoid specifying long messages outside the exception class

(TRY003)

🤖 Prompt for AI Agents
In
@applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
around lines 63 - 88, The affine computation in _compute_affine() is fragile:
remove the fixed global seed usage of rng (or accept a seed parameter and
document it) so behavior is intentional; ensure out and B are only referenced
after a successful non-degenerate selection by assigning them inside the loop
and breaking immediately when a valid B is found (avoid binding empty arrays
outside the loop); when a degenerate set is detected, log a warning with the
chosen indices (use the existing rng/inds) before continuing so failures are
observable; after exhausting attempts, raise a RuntimeError that includes how
many attempts were made and why (e.g., "failed to find non-degenerate point set
after N attempts"); keep the existing entry and affine construction but verify B
is invertible (or check matrix rank) before computing D to avoid using det==0 as
the sole check.

Comment on lines +1 to +163
"""
SPDX-FileCopyrightText: Copyright (c) 2026 Kernel.
SPDX-License-Identifier: Apache-2.0
"""

from __future__ import annotations

import logging
from typing import Any, List, Tuple

import cupy as cp
import numpy as np
from holoscan.core import ExecutionContext, InputContext, Operator, OperatorSpec, OutputContext
from numpy.typing import NDArray

from .types import BuildRHSOutput, NormalizedSolveBatch, WavelengthSystem

logger = logging.getLogger(__name__)


HARD_CODED_NORMALIZERS = [ # for each feature type (moment)
np.array([1, 5e2, 5e5]),
np.array([0.5, 2.5e2, 2.5e5]),
]


class NormalizeOperator(Operator):
"""Apply Jacobian/RHS normalization before solver execution."""

def __init__(
self,
*,
fragment: Any | None = None,
use_gpu: bool = False,
use_hard_coded_normalizers: bool = True,
) -> None:
super().__init__(fragment, name=self.__class__.__name__)
self._jacobian_cache: NDArray[np.float32] | None = None
self._use_gpu = use_gpu
self._max_rhs: NDArray[np.float32] | None = None

self._use_hard_coded_normalizers = use_hard_coded_normalizers
self._hard_coded_row_normalizers_cache: NDArray[np.float32] | None = None
self._hard_coded_normalized_jacobian_cache: NDArray[np.float32] | None = None

def setup(self, spec: OperatorSpec) -> None:
spec.input("batch")
spec.output("normalized")

def compute(
self,
op_input: InputContext,
op_output: OutputContext,
context: ExecutionContext,
) -> None:
batch = op_input.receive("batch")
if not isinstance(batch, BuildRHSOutput):
raise TypeError(f"NormalizeOperator expected BuildRHSOutput, got {type(batch)}")

cuda_stream = op_input.receive_cuda_stream("batch")

with cp.cuda.ExternalStream(cuda_stream):
result = self._normalize_batch(batch)
if result is None:
logger.info("Skipping normalization for frame because max_rhs is all zeros")
return

systems, num_absorbers = result

op_output.emit(
NormalizedSolveBatch(
systems=tuple(systems),
idxs_significant_voxels=batch.idxs_significant_voxels,
num_full_voxels=batch.num_full_voxels,
num_absorbers=num_absorbers,
wavelengths=batch.wavelengths,
voxel_metadata=batch.voxel_metadata,
),
"normalized",
)

def _get_hard_coded_row_normalizers(
self,
num_rows: int,
num_features: int,
num_wavelengths: int,
) -> np.ndarray:
if self._hard_coded_row_normalizers_cache is not None:
return self._hard_coded_row_normalizers_cache

row_normalizers = cp.full((num_rows, num_wavelengths), cp.nan)
for wavelength_idx in range(num_wavelengths):
for idx_feature in range(num_features):
row_normalizers[idx_feature::num_features, wavelength_idx] = HARD_CODED_NORMALIZERS[
wavelength_idx
][idx_feature]

assert not cp.any(cp.isnan(row_normalizers))
self._hard_coded_row_normalizers_cache = row_normalizers
return row_normalizers

def _normalize_batch(self, batch: BuildRHSOutput) -> Tuple[List[WavelengthSystem], int] | None:
num_cols = batch.data_jacobians.shape[-1]
num_significant = int(batch.idxs_significant_voxels.size)
num_absorbers, remainder = divmod(num_cols, num_significant)
assert not remainder

# normalize rows
rhs = cp.asarray(batch.data_rhs, dtype=cp.float32)
num_wavelengths = batch.data_rhs.shape[-1]
row_normalizers = self._get_hard_coded_row_normalizers(
batch.data_jacobians.shape[0], batch.num_features, num_wavelengths
)

jacobian_template = self._get_template_jacobians(batch)
if (
self._use_hard_coded_normalizers
and self._hard_coded_normalized_jacobian_cache is not None
):
jacobians = self._hard_coded_normalized_jacobian_cache
else:
jacobians = jacobian_template.copy()
jacobians /= row_normalizers[:, :, None]

rhs /= row_normalizers

if self._use_hard_coded_normalizers and self._hard_coded_normalized_jacobian_cache is None:
self._hard_coded_normalized_jacobian_cache = jacobians

systems: List[WavelengthSystem] = []
for idx_wavelength in range(num_wavelengths):
background_payload = cp.asarray(
batch.model_optical_properties[:, idx_wavelength],
dtype=cp.float32,
)
systems.append(
WavelengthSystem(
jacobian=jacobians[:, idx_wavelength, :],
rhs=rhs[:, idx_wavelength],
background=background_payload,
)
)

return systems, num_absorbers

def _get_template_jacobians(
self,
batch: BuildRHSOutput,
) -> NDArray[np.float32]:
"""
Retrieve or build the normalized Jacobian template for the given batch.
"""
if self._jacobian_cache is not None:
return self._jacobian_cache

template = cp.asarray(batch.data_jacobians, dtype=cp.float32).copy()
background = cp.asarray(batch.model_optical_properties, dtype=cp.float32)

background_T = cp.swapaxes(background, 0, 1)
template *= background_T[None, :, :]

self._jacobian_cache = template
return template

⚠️ Potential issue | 🟠 Major

Add unit tests for NormalizeOp.

As per coding guidelines, Python operators must include unit tests. This operator performs critical normalization that affects solver accuracy and should be thoroughly tested.

Based on coding guidelines, would you like me to help generate unit tests covering:

  • Hard-coded normalization path
  • Template Jacobian normalization
  • Caching behavior
  • Edge cases (zero RHS, empty batches)
  • Per-wavelength system construction

As per coding guidelines.

🧰 Tools
🪛 Ruff (0.14.10)

54-54: Unused method argument: context

(ARG002)


58-58: Avoid specifying long messages outside the exception class

(TRY003)

🤖 Prompt for AI Agents
In
@applications/bci_visualization/operators/reconstruction/normalize_operator.py
around lines 1 - 163, Add unit tests for NormalizeOperator exercising its
normalization logic and caching: write tests that instantiate NormalizeOperator
(use use_gpu=False) and feed BuildRHSOutput-like fixtures to _normalize_batch or
through compute to validate (1) hard-coded normalizer path using
HARD_CODED_NORMALIZERS and _get_hard_coded_row_normalizers yields expected
scaled jacobian and rhs; (2) template jacobian multiplication in
_get_template_jacobians applies background_T correctly; (3) caching behavior for
_jacobian_cache and _hard_coded_normalized_jacobian_cache (call twice and assert
caches are reused/unchanged); (4) edge cases: zero RHS returns None or is
skipped and empty/zero-sized batches handled without exception; and (5)
per-wavelength WavelengthSystem construction creates one WavelengthSystem per
wavelength with jacobian, rhs, background shapes matching expectations and
num_absorbers computed from data_jacobians. Use CPU numpy/cupy interoperability
by constructing batch.data_jacobians, data_rhs, model_optical_properties,
idxs_significant_voxels consistent with BuildRHSOutput, and assert types/values
in returned NormalizedSolveBatch and WavelengthSystem objects.

Comment on lines +1 to +156
"""
SPDX-FileCopyrightText: Copyright (c) 2026 Kernel.
SPDX-License-Identifier: Apache-2.0
"""

import logging
from pathlib import Path
from typing import Dict, Iterator, List, NamedTuple, cast

import h5py
import numpy as np
from streams.base_nirs import ChannelInfo

from .base_nirs import BaseNirsStream

logger = logging.getLogger(__name__)

NUM_MOMENTS = 3
NUM_WAVELENGTHS = 2


class SNIRFChannel(NamedTuple):
"""
Represents a single channel in a SNIRF file.
"""

moment: int
wavelength: int
source_module: int
source_number: int
detector_module: int
detector_number: int


class SNIRFStream(BaseNirsStream):
"""
Streams data from a SNIRF file.
See more about the spec at https://github.com/fNIRS/snirf
"""

def __init__(self, snirf_file: Path | str) -> None:
self._snirf_file_path = Path(snirf_file)
if not self._snirf_file_path.exists():
raise FileNotFoundError(f"SNIRF file '{snirf_file}' does not exist")

def start(self) -> None:
self._snirf_file = h5py.File(self._snirf_file_path, "r")

self._channels = self._get_channels()
self._unique_channels = [
ch for ch in self._channels if ch.moment == 0 and ch.wavelength == 0
]
print("Got {} unique channels".format(len(self._unique_channels)))

def get_channels(self) -> ChannelInfo:
return ChannelInfo(
source_module=np.array([ch.source_module for ch in self._unique_channels]),
source_number=np.array([ch.source_number for ch in self._unique_channels]),
detector_module=np.array([ch.detector_module for ch in self._unique_channels]),
detector_number=np.array([ch.detector_number for ch in self._unique_channels]),
)

def _get_channels(self) -> List[SNIRFChannel]:
source_pos_3d: List[np.ndarray] = self._snirf_file["nirs"]["probe"]["sourcePos3D"][()] # type: ignore
detector_pos_3d: List[np.ndarray] = self._snirf_file["nirs"]["probe"]["detectorPos3D"][()] # type: ignore

source_labels: List[bytes] = self._snirf_file["nirs"]["probe"]["sourceLabels"][()] # type: ignore
detector_labels: List[bytes] = self._snirf_file["nirs"]["probe"]["detectorLabels"][()] # type: ignore

source_pos_3d_map = {}
for sourceIdx, sourceLabel in enumerate(source_labels):
m, s = sourceLabel.decode().split("S")
source_pos_3d_map[(int(m.replace("M", "")), int(s))] = source_pos_3d[sourceIdx]

detector_pos_3d_map = {}
for detectorIdx, detectorLabel in enumerate(detector_labels):
m, d = detectorLabel.decode().split("D")
detector_pos_3d_map[(int(m.replace("M", "")), int(d))] = detector_pos_3d[detectorIdx]

moments = self._snirf_file["nirs"]["probe"]["momentOrders"][()] # type: ignore
data1 = cast(h5py.Dataset, self._snirf_file["nirs"]["data1"]) # type: ignore
channel_keys = [key for key in data1 if key.startswith("measurementList")]
# Sort channel keys numerically (e.g., measurementList1, measurementList2, ..., measurementList10)
# to match the column order in dataTimeSeries
channel_keys.sort(key=lambda x: int(x.replace("measurementList", "")))
channels: List[SNIRFChannel] = []
for channel_key in channel_keys:
channel = cast(h5py.Dataset, data1[channel_key])
source_module, source = (
source_labels[channel["sourceIndex"][()] - 1].decode().replace("M", "").split("S")
)
detector_module, detector = (
detector_labels[channel["detectorIndex"][()] - 1]
.decode()
.replace("M", "")
.split("D")
)
channels.append(
SNIRFChannel(
moment=int(moments[channel["dataTypeIndex"][()] - 1]), # type: ignore
wavelength=int(channel["wavelengthIndex"][()] - 1),
source_module=int(source_module),
source_number=int(source),
detector_module=int(detector_module),
detector_number=int(detector),
)
)

return channels

def stream_nirs(self) -> Iterator[np.ndarray]:
data1 = cast(h5py.Dataset, self._snirf_file["nirs"]["data1"]) # type: ignore
times: np.ndarray = data1["time"][()]
data: np.ndarray = data1["dataTimeSeries"][()]

unique_channel_lut = {
(ch.source_module, ch.source_number, ch.detector_module, ch.detector_number): idx
for idx, ch in enumerate(self._unique_channels)
}
channel_idxs: Dict[int, Dict[int, Dict[str, List[int]]]] = {}
for moment in range(NUM_MOMENTS):
channel_idxs[moment] = {}
for wavelength in range(NUM_WAVELENGTHS):
channel_order = [
(
idx,
unique_channel_lut.get(
(
ch.source_module,
ch.source_number,
ch.detector_module,
ch.detector_number,
),
-1,
),
)
for idx, ch in enumerate(self._channels)
if ch.moment == moment and ch.wavelength == wavelength
]
channel_idxs[moment][wavelength] = {
"snirf_channel_idxs": [idx for idx, _ in channel_order],
"unique_channel_idxs": [uniq_idx for _, uniq_idx in channel_order],
}

print("Streaming {} samples from SNIRF".format(len(data)))
for ts, sample in zip(times, data):
# sample is shape (n_channels,)
# send (n_moments, n_unique_channels, n_wavelengths)
to_send = np.full((NUM_MOMENTS, len(self._unique_channels), NUM_WAVELENGTHS), np.nan)
for moment in range(NUM_MOMENTS):
for wavelength in range(NUM_WAVELENGTHS):
snirf_channel_idxs = channel_idxs[moment][wavelength]["snirf_channel_idxs"]
unique_channel_idxs = channel_idxs[moment][wavelength]["unique_channel_idxs"]
to_send[moment, unique_channel_idxs, wavelength] = sample[snirf_channel_idxs]

yield to_send

⚠️ Potential issue | 🟠 Major

Add unit tests for SNIRFStream.

As per coding guidelines, Python operators and modules must include unit tests. No test file is present for this stream implementation.

Based on coding guidelines, unit tests are required. Would you like me to help generate a test template that covers:

  • File existence validation
  • Channel parsing
  • Stream iteration
  • Error handling for malformed SNIRF files
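The file-existence case, for instance, is a two-line test (import path assumed):

import pytest

from streams.snirf import SNIRFStream


def test_missing_snirf_file_raises(tmp_path):
    with pytest.raises(FileNotFoundError):
        SNIRFStream(tmp_path / "missing.snirf")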
🧰 Tools
🪛 Ruff (0.14.10)

44-44: Avoid specifying long messages outside the exception class

(TRY003)


146-146: Loop control variable ts not used within loop body

Rename unused ts to _ts

(B007)


146-146: zip() without an explicit strict= parameter

Add explicit value for parameter strict=

(B905)

🤖 Prompt for AI Agents
In @applications/bci_visualization/streams/snirf.py around lines 1 - 156, Add a
pytest test module that exercises SNIRFStream: verify FileNotFoundError when
constructing with a missing path, create a minimal in-memory/temp HDF5 SNIRF
file (matching keys used in _get_channels: nirs/probe/sourcePos3D,
detectorPos3D, sourceLabels, detectorLabels, momentOrders and nirs/data1 with
measurementList* groups containing sourceIndex, detectorIndex, dataTypeIndex,
wavelengthIndex, plus dataTimeSeries and time) and assert start() populates
unique channels (via get_channels()) and stream_nirs() yields correctly-shaped
arrays; also add a test that deliberately corrupts/misformats a required dataset
to assert start()/stream_nirs() raises an appropriate exception; use pytest
fixtures (tmp_path) and h5py to build files and reference SNIRFStream,
_get_channels, start, get_channels, stream_nirs in test names and assertions so
coverage tools trace these symbols.

Comment on lines +46 to +54
    def start(self) -> None:
        self._snirf_file = h5py.File(self._snirf_file_path, "r")

        self._channels = self._get_channels()
        self._unique_channels = [
            ch for ch in self._channels if ch.moment == 0 and ch.wavelength == 0
        ]
        print("Got {} unique channels".format(len(self._unique_channels)))


⚠️ Potential issue | 🟠 Major

Add resource cleanup and prefer logging over print statements.

The HDF5 file opened at line 47 is never explicitly closed, which can lead to resource leaks. Additionally, line 53 uses print() instead of the configured logger, which is inconsistent with logging practices.

🔧 Proposed fix

Add a stop() or close() method to properly close the file:

def stop(self) -> None:
    """Close the SNIRF file and release resources."""
    if hasattr(self, '_snirf_file') and self._snirf_file is not None:
        self._snirf_file.close()
        self._snirf_file = None

Replace print() with logger:

-        print("Got {} unique channels".format(len(self._unique_channels)))
+        logger.info("Got %d unique channels", len(self._unique_channels))
🤖 Prompt for AI Agents
In @applications/bci_visualization/streams/snirf.py around lines 46 - 54, The
start() method opens an HDF5 file into self._snirf_file and prints a message but
never closes the file and uses print() instead of the logger; add a stop() or
close() method that checks for self._snirf_file (and/or hasattr(self,
'_snirf_file')), calls its close() and sets it to None to release resources, and
replace the print("Got {} unique channels".format(...)) with the configured
logger (e.g., self.logger.info or a module logger) so that start() logs the
channel count instead of printing to stdout.

Comment on lines 1 to 125
"""
SPDX-FileCopyrightText: Copyright (c) 2026 Kernel.
SPDX-License-Identifier: Apache-2.0
"""

import logging
import cupy as cp

logger = logging.getLogger(__name__)

# Cache holds CuPy arrays when running GPU-only. (Keyed by wavelength index.)
_HESSIAN_CACHE: dict[int, object] = {}

MAX_REASONABLE_COND_RATIO = 10


def solve_regularized_system(
data_jacobians,
data_rhs,
wavelength_idx: int,
reg: float,
) -> object:
"""
Parameters
----------
data_jacobians : NDArray[np.float32]
Jacobian matrix of shape (features * channels, reconstruction_elements * 2).
data_rhs : NDArray[np.float32]
Right-hand side data of shape (features * channels).
reg : float
Regularization parameter λ.
Returns
-------
NDArray[np.float32]
Solution array of shape (reconstruction_elements * 2).
"""
# add sample dimension
data_rhs = cp.asarray(data_rhs).reshape(1, -1)

# Form Hessian and get pre-computed matrix properties
hessian_reg = _build_regularized_system(
data_jacobians,
wavelength_idx,
reg,
)

# Dual formulation: solve smaller system, then back-substitute
alpha = _solve_square_system(hessian_reg, data_rhs.T)
solution = data_jacobians.T @ alpha
return solution.T.squeeze() # remove sample dimension


def _build_regularized_system(
data_jacobians,
wavelength_idx: int,
reg: float,
) -> object:
"""Build regularized system matrix.
Parameters
----------
data_jacobians : NDArray[np.float32]
Jacobian matrix.
wavelength_idx : int
Wavelength index for caching.
reg : float
Regularization parameter.
Returns
-------
NDArray[np.float32]
Regularized system matrix
"""
global _HESSIAN_CACHE
data_hessian_reg = _HESSIAN_CACHE.get(wavelength_idx)
if data_hessian_reg is not None:
logger.debug("Reusing cached Hessian")
return data_hessian_reg

# Smaller SPD system: (J J^T + λI) for underdetermined case
data_hessian = data_jacobians @ data_jacobians.T

data_hessian_reg = data_hessian + reg * cp.sqrt(cp.linalg.norm(data_hessian)) * cp.eye(
data_hessian.shape[0], dtype=data_jacobians.dtype
)

_HESSIAN_CACHE[wavelength_idx] = data_hessian_reg
logger.debug("Cached Hessian for reuse")

return data_hessian_reg


def _solve_square_system(
A,
b,
) -> object:
"""
Parameters
----------
A : NDArray[np.float32]
Square coefficient matrix of the linear system (typically a Hessian).
b : NDArray[np.float32]
Right-hand side vector or matrix.
Returns
-------
NDArray[np.float32]
Solution to the linear system Ax = b.
"""

# Validate input
assert (A.ndim == 2) and (A.shape[0] == A.shape[1])
assert b.ndim in {1, 2} and b.shape[0] == A.shape[0]
assert cp.all(cp.isfinite(A))
assert cp.all(cp.isfinite(b))

# Ensure symmetry for numerical stability
A = 0.5 * (A + A.T)

# Regular inverse
result = cp.linalg.solve(A, b)
assert cp.all(cp.isfinite(result))
return result

⚠️ Potential issue | 🟠 Major

Add unit tests for regularized solver.

No unit tests are present for this critical numerical solver component. Tests should verify correctness, numerical stability, caching behavior, and error handling.

Would you like me to help generate unit tests covering:

  • Solve correctness with known solutions
  • Cache reuse verification
  • Handling of ill-conditioned systems
  • Input validation error paths
  • Regularization parameter effects
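A sketch of the first item, assuming a CUDA-capable test environment since the module is CuPy-only (tolerances are guesses):

import cupy as cp

from reg_inv import _HESSIAN_CACHE, solve_regularized_system


def test_consistent_system_is_reproduced():
    _HESSIAN_CACHE.clear()  # cache is keyed by wavelength_idx; isolate the test
    rng = cp.random.default_rng(0)
    jacobian = rng.standard_normal((8, 32), dtype=cp.float32)
    x_true = rng.standard_normal((32,), dtype=cp.float32)
    rhs = jacobian @ x_true
    x = solve_regularized_system(jacobian, rhs, wavelength_idx=0, reg=1e-6)
    # With tiny regularization the consistent data should be fit almost exactly
    residual = float(cp.linalg.norm(jacobian @ x - rhs) / cp.linalg.norm(rhs))
    assert residual < 1e-2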
🧰 Tools
🪛 GitHub Actions: Check linting

[error] 1-1: Imports are incorrectly sorted and/or formatted.

logger = logging.getLogger(__name__)

# Cache holds CuPy arrays when running GPU-only. (Keyed by wavelength index.)
_HESSIAN_CACHE: dict[int, object] = {}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n applications/bci_visualization/utils/reconstruction/reg_inv.py

Repository: nvidia-holoscan/holohub

Length of output: 4090


🏁 Script executed:

rg "from.*reg_inv|import.*reg_inv" applications/bci_visualization/ -n

Repository: nvidia-holoscan/holohub

Length of output: 210


🏁 Script executed:

cat -n applications/bci_visualization/operators/reconstruction/solver_operator.py

Repository: nvidia-holoscan/holohub

Length of output: 3630


🏁 Script executed:

find applications/bci_visualization -name "*.py" -type f | grep -E "(app|main)" | head -20

Repository: nvidia-holoscan/holohub

Length of output: 1347


🏁 Script executed:

cat -n applications/bci_visualization/bci_visualization.py

Repository: nvidia-holoscan/holohub

Length of output: 11513


🏁 Script executed:

rg "Lock|threading|mutex|lock" applications/bci_visualization/utils/reconstruction/ -i

Repository: nvidia-holoscan/holohub

Length of output: 49


Protect global Hessian cache from concurrent access in multi-threaded environment.

The _HESSIAN_CACHE global dictionary is accessed without synchronization. With Holoscan's EventBasedScheduler running 5 worker threads, multiple operators can process different batches concurrently, causing race conditions when accessing the cache. For example, two threads may simultaneously call _build_regularized_system() with the same wavelength_idx, both read a cache miss, compute the Hessian, and overwrite each other's results.

Use threading.Lock() to protect cache read-modify-write operations in _build_regularized_system():

Example fix
import threading

_HESSIAN_CACHE: dict[int, object] = {}
_HESSIAN_CACHE_LOCK = threading.Lock()

def _build_regularized_system(data_jacobians, wavelength_idx: int, reg: float) -> object:
    global _HESSIAN_CACHE
    with _HESSIAN_CACHE_LOCK:
        data_hessian_reg = _HESSIAN_CACHE.get(wavelength_idx)
        if data_hessian_reg is not None:
            logger.debug("Reusing cached Hessian")
            return data_hessian_reg
        
        # Compute and cache
        data_hessian = data_jacobians @ data_jacobians.T
        data_hessian_reg = data_hessian + reg * cp.sqrt(cp.linalg.norm(data_hessian)) * cp.eye(...)
        _HESSIAN_CACHE[wavelength_idx] = data_hessian_reg
        logger.debug("Cached Hessian for reuse")
    
    return data_hessian_reg
🤖 Prompt for AI Agents
In @applications/bci_visualization/utils/reconstruction/reg_inv.py at line 12,
The global _HESSIAN_CACHE is not synchronized; import threading, add a
module-level _HESSIAN_CACHE_LOCK = threading.Lock(), and in
_build_regularized_system() wrap the cache read-check, compute, and write
sequence inside with _HESSIAN_CACHE_LOCK: so you check
_HESSIAN_CACHE.get(wavelength_idx) while holding the lock, return if present,
otherwise compute the Hessian and store it into _HESSIAN_CACHE[wavelength_idx]
before releasing the lock; keep the actual heavy GPU computation outside the
lock only if you restructure to a double-checked pattern (check under lock, if
miss release lock, compute, then re-acquire lock to check-and-set) to avoid long
lock holds.
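Sketched out, that double-checked variant looks like the following (_compute_hessian stands in for the matrix build shown above):

import threading

_HESSIAN_CACHE: dict[int, object] = {}
_HESSIAN_CACHE_LOCK = threading.Lock()


def _build_regularized_system(data_jacobians, wavelength_idx, reg):
    with _HESSIAN_CACHE_LOCK:
        cached = _HESSIAN_CACHE.get(wavelength_idx)
    if cached is not None:
        return cached

    # Heavy GPU work happens outside the lock
    hessian_reg = _compute_hessian(data_jacobians, reg)  # placeholder helper

    with _HESSIAN_CACHE_LOCK:
        # Another thread may have finished first; keep whichever result landed first
        return _HESSIAN_CACHE.setdefault(wavelength_idx, hessian_reg)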

Comment on lines +113 to +117
    assert (A.ndim == 2) and (A.shape[0] == A.shape[1])
    assert b.ndim in {1, 2} and b.shape[0] == A.shape[0]
    assert cp.all(cp.isfinite(A))
    assert cp.all(cp.isfinite(b))

⚠️ Potential issue | 🟠 Major

Replace assertions with explicit validation and error messages.

Using assertions for input validation (lines 113-116) is problematic because assertions can be disabled with Python's -O flag, bypassing validation in production.

🔧 Proposed fix
-    assert (A.ndim == 2) and (A.shape[0] == A.shape[1])
-    assert b.ndim in {1, 2} and b.shape[0] == A.shape[0]
-    assert cp.all(cp.isfinite(A))
-    assert cp.all(cp.isfinite(b))
+    if A.ndim != 2 or A.shape[0] != A.shape[1]:
+        raise ValueError(f"Matrix A must be square, got shape {A.shape}")
+    if b.ndim not in {1, 2} or b.shape[0] != A.shape[0]:
+        raise ValueError(f"Vector b shape {b.shape} incompatible with A shape {A.shape}")
+    if not cp.all(cp.isfinite(A)):
+        raise ValueError("Matrix A contains non-finite values")
+    if not cp.all(cp.isfinite(b)):
+        raise ValueError("Vector b contains non-finite values")

Apply the same pattern to line 123.

🤖 Prompt for AI Agents
In @applications/bci_visualization/utils/reconstruction/reg_inv.py around lines
113 - 116, Replace the assert statements that validate A and b with explicit
checks that raise informative exceptions (e.g., ValueError) including the
offending shapes/values: verify A is a 2D square array ((A.ndim == 2) and
(A.shape[0] == A.shape[1])) and raise if not; verify b has ndim 1 or 2 and
b.shape[0] == A.shape[0] and raise if not; verify cp.all(cp.isfinite(A)) and
cp.all(cp.isfinite(b)) and raise if not, including which array failed and
summary info (shapes, any non-finite count). Apply the same replacement pattern
to the validation at the later check referenced on line 123 so that no input
validation relies on Python assertions.

@mimiliaogo mimiliaogo force-pushed the mimil/bci-kernel-viz branch from 88286b5 to 467cabf Compare January 8, 2026 08:03
@greptile-apps greptile-apps bot left a comment

Greptile Overview

Greptile Summary

Introduces a complete BCI visualization application that processes streaming fNIRS data from the Kernel Flow 2 headset through GPU-accelerated reconstruction to real-time 3D volume rendering. The reconstruction pipeline transforms time-of-flight moments into hemoglobin concentration voxels using regularized inverse solvers, while visualization enhancements to VolumeRendererOp enable async GPU streaming with proper CUDA stream handling and camera pose caching. Event-based multithreading prevents deadlocks between reconstruction and rendering stages through backpressure management.

Confidence Score: 2/5

  • Contains one critical synchronization bug that could cause GPU errors, plus several edge cases needing verification
  • The ColorBufferPassthroughOp receives CUDA streams but fails to propagate them downstream to HolovizOp, which will cause GPU synchronization issues and potential race conditions in the multithreaded pipeline. Additional concerns include unverified initialization behavior (cached_camera_pose_ may not default to identity), baseline skipping logic that could block downstream operators, and statistics initialization that rejects legitimate zero values. While the volume renderer improvements are solid, the CUDA stream propagation bug is a blocker that will cause runtime issues.
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp (missing CUDA stream propagation), operators/volume_renderer/volume_renderer.cpp (verify Matrix4x4 default constructor initializes to identity), applications/bci_visualization/operators/reconstruction/build_rhs_operator.py (verify first frame skip doesn't block pipeline)

Important Files Changed

File Analysis

Filename Score Overview
operators/volume_renderer/volume_renderer.cpp 4/5 Adds CUDA stream handling, camera pose caching, memory leak fix, and deadlock prevention for multithreaded execution
operators/volume_renderer/dataset.cpp 5/5 Converts synchronous cudaMemcpy3D to async version with CUDA stream parameter for better GPU pipeline efficiency
applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp 1/5 Passthrough operator with POP queue policy to drop stale frames; missing CUDA stream propagation causes sync issues
applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py 3/5 Converts streaming voxel data to dense 3D volume with adaptive normalization, GPU resampling, and mask handling
applications/bci_visualization/operators/reconstruction/build_rhs_operator.py 4/5 Builds RHS/Jacobian from moments with lazy GPU upload, baseline differencing, and CUDA stream creation at pipeline entry

Sequence Diagram

sequenceDiagram
    participant Stream as StreamOperator
    participant BuildRHS as BuildRHSOperator
    participant Normalize as NormalizeOperator
    participant Solver as RegularizedSolver
    participant Convert as ConvertToVoxels
    participant VoxelVol as VoxelStreamToVolume
    participant VolRender as VolumeRenderer
    participant Passthrough as ColorBufferPassthrough
    participant Holoviz as HolovizOp

    Note over Stream,Convert: Reconstruction Pipeline (GPU Thread)
    Stream->>BuildRHS: moments + channels
    Note right of BuildRHS: Create CUDA stream<br/>Upload Jacobian to GPU<br/>Baseline differencing
    BuildRHS->>Normalize: RHS + Jacobian (GPU)
    Normalize->>Solver: Normalized system (GPU)
    Note right of Solver: Regularized inverse<br/>Cached Hessian
    Solver->>Convert: mua/musp (GPU)
    Convert->>VoxelVol: HbO voxels + affine

    Note over VoxelVol,Holoviz: Visualization Pipeline (GPU Thread)
    VoxelVol->>VolRender: volume + mask + spacing
    Note right of VolRender: Async GPU copy<br/>Pose caching<br/>Memory leak fix
    VolRender->>Passthrough: color buffer + CUDA stream
    Note right of Passthrough: POP policy drops<br/>stale frames
    Passthrough->>Holoviz: color buffer
    Holoviz->>VolRender: camera pose feedback
Loading

Comment on lines +268 to +272
        if (
            (self.global_min is None or self.global_max is None)
            and current_min != 0
            and current_max != 0
        ):

Statistics initialization condition could fail if first frame has legitimately zero min/max values. Consider initializing unconditionally on first frame or checking frame_count instead.

Suggested change
-        if (
-            (self.global_min is None or self.global_max is None)
-            and current_min != 0
-            and current_max != 0
-        ):
+        # Initialize on first frame
+        if self.global_min is None or self.global_max is None:
+            self.global_min = current_min
+            self.global_max = current_max

Comment on lines +113 to +116
        realtime_moments = self._apply_baseline(realtime_moments)
        if realtime_moments is None:
            logger.info("Skipping RHS build for first frame (baseline capture)")
            return

First frame is skipped for baseline capture but no output is emitted, which could cause downstream operators to block waiting for data. Verify the event-based scheduler handles this case and that downstream operators won't deadlock waiting for the first emission. Does the event-based scheduler handle operators that skip emitting output on first compute?

@mimiliaogo mimiliaogo force-pushed the mimil/bci-kernel-viz branch from 467cabf to d08d384 Compare January 8, 2026 08:09
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 18

🤖 Fix all issues with AI agents
In @applications/bci_visualization/bci_visualization.py:
- Around line 1-34: The imports at the top of bci_visualization.py are not
isort-sorted; reorder them into standard groups (standard library, third-party,
local) and alphabetize within each group to satisfy linting: keep built-ins
(argparse, os, pathlib.Path) first, then holoscan imports
(holoscan.core.Application, ConditionType; holoscan.operators.HolovizOp;
holoscan.resources.CudaStreamPool, UnboundedAllocator;
holoscan.schedulers.EventBasedScheduler), then third-party/local package imports
(holohub.color_buffer_passthrough.ColorBufferPassthroughOp,
holohub.volume_renderer.VolumeRendererOp), followed by your operators and
streams (operators.reconstruction.*: BuildRHSOperator, ConvertToVoxelsOperator,
NormalizeOperator, RegularizedSolverOperator; operators.stream.StreamOperator;
operators.voxel_stream_to_volume.VoxelStreamToVolumeOp;
streams.base_nirs.BaseNirsStream; streams.snirf.SNIRFStream), and finally
utils.reconstruction.assets.get_assets; ensure no duplicate or unused imports
remain and adjust relative ordering to match isort conventions.

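For reference, one isort-compatible ordering implied by the prompt above (the grouping assumes isort's default profile with `operators`, `streams`, and `utils` treated as first-party; all names come from the prompt itself):

```python
import argparse
import os
from pathlib import Path

from holohub.color_buffer_passthrough import ColorBufferPassthroughOp
from holohub.volume_renderer import VolumeRendererOp
from holoscan.core import Application, ConditionType
from holoscan.operators import HolovizOp
from holoscan.resources import CudaStreamPool, UnboundedAllocator
from holoscan.schedulers import EventBasedScheduler

from operators.reconstruction import (
    BuildRHSOperator,
    ConvertToVoxelsOperator,
    NormalizeOperator,
    RegularizedSolverOperator,
)
from operators.stream import StreamOperator
from operators.voxel_stream_to_volume import VoxelStreamToVolumeOp
from streams.base_nirs import BaseNirsStream
from streams.snirf import SNIRFStream
from utils.reconstruction.assets import get_assets
```
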
In
@applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp:
- Around line 30-38: In ColorBufferPassthroughOp::compute the variable
cuda_streams (retrieved via input.receive_cuda_streams("color_buffer_in")) is
unused; either remove that call if CUDA streams are not needed, or propagate the
streams with the output by emitting a tuple containing color_message.value() and
cuda_streams (e.g., change output.emit to emit the tuple to "color_buffer_out"),
or if the framework handles streams implicitly add a one-line comment above the
receive call explaining streams are intentionally ignored; update code in
ColorBufferPassthroughOp::compute accordingly and remove unused-variable
warnings.

In
@applications/bci_visualization/operators/reconstruction/normalize_operator.py:
- Around line 27-48: Add a new pytest file test_normalize_operator.py alongside
NormalizeOperator that imports NormalizeOperator and relevant fixtures from
conftest.py; write tests that (1) instantiate NormalizeOperator with various
constructor args (use_gpu True/False, use_hard_coded_normalizers True/False) to
verify attributes like _use_gpu and _use_hard_coded_normalizers are set, (2)
call setup(spec) using the OperatorSpec fixture to assert input("batch") and
output("normalized") ports are registered, (3) exercise error handling by
passing invalid inputs to compute() (or the operator's processing entrypoint)
and asserting appropriate exceptions/messages, (4) validate compute logic with
representative batch payloads including normal cases and edge cases (empty
batch, all-zero jacobian/RHS) and assert expected normalized outputs or caches
(_jacobian_cache, _max_rhs, _hard_coded_row_normalizers_cache,
_hard_coded_normalized_jacobian_cache) are populated, and (5) parametrize tests
to cover both GPU and CPU paths and reuse shared fixtures from conftest.py for
OperatorSpec, sample batches, and any mock device contexts.

In @applications/bci_visualization/operators/stream.py:
- Around line 42-50: Rename the unused parameters in compute to _op_input and
_context to mark them as intentionally unused, and replace the explicit
StopIteration instantiation with a bare raise StopIteration (i.e., remove the
string argument) to satisfy the static analysis simplification; update the
function signature (compute(self, _op_input: InputContext, op_output:
OutputContext, _context: ExecutionContext)) and the exception line accordingly
in the compute method.
- Around line 23-50: Add a new pytest file test_stream.py next to StreamOperator
that mocks a BaseNirsStream to exercise StreamOperator: test initialization with
valid and invalid (e.g., None or wrong type) stream arguments, verify setup()
calls spec.output("samples") using a fake OperatorSpec, test start() calls
stream.start() and assigns _channels from get_channels() and _iter from
stream_nirs(), test compute() emits SampleOutput via op_output.emit when the
iterator yields a sample and raises/propagates StopIteration when
next(self._iter, None) returns None, and add edge-case tests for empty streams
and malformed sample data; use mocks/stubs for spec, op_output, and
ExecutionContext and reference StreamOperator, setup, start, compute, and
SampleOutput in your tests.

In @applications/bci_visualization/streams/snirf.py:
- Around line 63-109: The _get_channels method lacks validation and will raise
uncaught exceptions for malformed SNIRF files; add explicit existence checks for
required datasets/groups (nirs/probe/sourcePos3D, detectorPos3D, sourceLabels,
detectorLabels, probe/momentOrders, nirs/data1), validate label formats before
splitting (reject/raise descriptive ValueError if a label does not match the
expected "MxSy" or "MxDy" pattern), check index bounds when indexing into
source_pos_3d/detector_pos_3d and when using
sourceIndex/detectorIndex/wavelengthIndex/dataTypeIndex from each
measurementList, and extract small helpers (e.g., _parse_source_label,
_parse_detector_label and _build_pos_map) to reduce complexity; ensure all
raised errors include the offending label/key and context (e.g., channel_key or
dataset name) and fail fast with clear messages rather than allowing raw
exceptions to propagate (see the label-parsing sketch after these snirf.py items).
- Around line 145-156: Replace the print call with logger.info("Streaming {}
samples from SNIRF".format(len(data))) to use the module logger; rename the
unused loop variable ts to _ts in the for loop (for _ts, sample in ...) to
indicate it is intentionally ignored; and make the zip strict by changing
zip(times, data) to zip(times, data, strict=True) so mismatched lengths raise an
error early.
- Around line 46-53: The start() method currently uses print("Got {} unique
channels".format(len(self._unique_channels))) which breaks logging consistency;
replace that print call with the configured logger (e.g., logger.info("Got %d
unique channels", len(self._unique_channels)) or logger.info(f"Got
{len(self._unique_channels)} unique channels")) inside start(), ensuring the
module-level or class logger is used (and add/import the logger if it isn’t
already present) and keep the message and len(self._unique_channels) reference
intact.

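As referenced in the first snirf.py item, a minimal sketch of the suggested label-parsing helpers; the regexes assume the "MxSy"/"MxDy" grammar described above, and the helper names are the suggested ones, not existing code:

```python
import re

_SOURCE_LABEL_RE = re.compile(r"^M(\d+)S(\d+)$")
_DETECTOR_LABEL_RE = re.compile(r"^M(\d+)D(\d+)$")


def _parse_source_label(label: str) -> tuple[int, int]:
    """Return (moment, source) indices from an 'MxSy' label, failing fast."""
    match = _SOURCE_LABEL_RE.match(label)
    if match is None:
        raise ValueError(f"Malformed source label {label!r}: expected 'MxSy'")
    return int(match.group(1)), int(match.group(2))


def _parse_detector_label(label: str) -> tuple[int, int]:
    """Return (moment, detector) indices from an 'MxDy' label, failing fast."""
    match = _DETECTOR_LABEL_RE.match(label)
    if match is None:
        raise ValueError(f"Malformed detector label {label!r}: expected 'MxDy'")
    return int(match.group(1)), int(match.group(2))
```
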
In @operators/volume_renderer/dataset.cpp:
- Line 113: The declaration "std::unique_ptr<clara::viz::IBlob::AccessGuard>
access_gpu = cur_data_array->blob_->Access(cuda_stream);" exceeds 100
characters; break it into multiple lines to satisfy linting by splitting the
type and variable from the initializer (for example, put the template type and
variable on one line and the assignment/Access(...) call on the next). Update
the statement that creates access_gpu (referencing
std::unique_ptr<clara::viz::IBlob::AccessGuard>, access_gpu,
cur_data_array->blob_->Access, and cuda_stream) so the line lengths are under
the limit while preserving the same semantics and ownership.

In @operators/volume_renderer/volume_renderer.cpp:
- Line 289: The function signature for VolumeRendererOp::Impl::receive_volume is
over 100 characters; break the signature into multiple lines to satisfy linting:
place the return type and class qualifier on the first line, then put the
function name and each parameter on its own line (or wrap parameters across
lines) keeping the closing parenthesis and opening brace on their own line;
ensure the parameter names InputContext& input, ExecutionContext& context, and
Dataset::Types type are preserved and indentation follows project style.
- Around line 549-556: Remove the trailing whitespace on the line that declares
new_mask (the line calling impl_->receive_volume for
Dataset::Types::Segmentation); ensure the line "bool new_mask =
impl_->receive_volume(input, context, Dataset::Types::Segmentation);" has no
trailing spaces or tabs so the linter no longer flags it.
- Around line 306-313: Long lines and trailing whitespace in the volume handling
code cause lint failures; shorten the long expressions and remove trailing
spaces. Break the long static_cast/get call into multiple subexpressions (e.g.,
assign volume.value() to a local auto entity variable, then call
entity.get<nvidia::gxf::Tensor>("volume") into maybe_tensor) and keep each line
under 100 characters, and ensure no trailing whitespace after the
dataset_.ResetVolume(type); and the throw/runtime lines; update the variables
referenced (dataset_.ResetVolume(type), volume.value(), maybe_tensor,
entity.get<nvidia::gxf::Tensor>("volume")) accordingly.
- Line 340: The single-line call to dataset_.SetVolume(...) exceeds 100
characters; break the argument list across multiple lines to satisfy linting by
placing the function name and opening parenthesis on one line and each argument
(type, spacing, permute_axis, flip_axes, element_range, volume_tensor,
cuda_stream) on its own indented line with trailing commas, and close the call
on its own line; alternatively extract a long argument into a temporary variable
before calling dataset_.SetVolume to reduce line length (refer to
dataset_.SetVolume).
🧹 Nitpick comments (18)
applications/bci_visualization/README.md (1)

80-97: Specify language for fenced code block.

The code block starting at line 80 is missing a language identifier, which affects syntax highlighting and documentation rendering.

📝 Suggested fix
````diff
-```
+```text
 data/bci_visualization/
 ├── anatomy_labels_high_res.nii.gz      # Brain segmentation
 ├── config.json                          # Volume renderer configuration
````
applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py (1)

31-34: Simplify redundant type conversion.

The expression int(round(float(value))) contains a redundant int() call. In Python 3, round() already returns an integer when called with a single argument.

♻️ Proposed simplification
```diff
         def _parse_wavelength(value: str) -> int:
             # Some datasets store wavelength as scientific notation (e.g. "6.0000000e+02").
             # Parse as float then round to nearest integer nm.
-            return int(round(float(value)))
+            return round(float(value))
```
applications/bci_visualization/utils/reconstruction/hbo.py (2)

31-34: Simplify redundant type conversion.

The expression int(round(float(value))) contains a redundant int() call. In Python 3, round() already returns an integer when called with a single argument.

♻️ Proposed simplification
```diff
         def _parse_wavelength(value: str) -> int:
             # Some datasets store wavelength as scientific notation (e.g. "6.0000000e+02").
             # Parse as float then round to nearest integer nm.
-            return int(round(float(value)))
+            return round(float(value))
```

62-67: Unused constructor parameter.

The use_gpu parameter is accepted but never used in the class. The implementation unconditionally uses CuPy for GPU operations, making this parameter misleading.

Consider either:

  1. Removing the unused parameter, or
  2. Implementing CPU/GPU branching if the flexibility is needed
♻️ Option 1: Remove unused parameter
```diff
     def __init__(
-        self, coefficients: Dict[int, ExtinctionCoefficient], use_gpu: bool = False
+        self, coefficients: Dict[int, ExtinctionCoefficient]
     ) -> None:
         self._coefficients = coefficients
         self._cached_coefficients: cp.ndarray | None = None
-        self._use_gpu = use_gpu
```
applications/bci_visualization/streams/snirf.py (1)

12-14: Minor: Inconsistent import style for base_nirs module.

Line 12 uses an absolute import (from streams.base_nirs), while line 14 uses a relative import (from .base_nirs). Consider using relative imports consistently for modules within the same package.

♻️ Proposed fix
```diff
-from streams.base_nirs import ChannelInfo
+from .base_nirs import ChannelInfo
```
applications/bci_visualization/operators/reconstruction/normalize_operator.py (3)

27-28: Consider renaming to NormalizeOp for consistency.

Per coding guidelines, operator class names should use TitleCase with 'Op' suffix (e.g., NormalizeOp). This applies to other operators in the reconstruction package as well.


40-40: Remove unused _max_rhs field.

This field is initialized but never used anywhere in the class. If it's intended for future use, consider adding a TODO comment.

Proposed fix
```diff
         self._use_gpu = use_gpu
-        self._max_rhs: NDArray[np.float32] | None = None
 
         self._use_hard_coded_normalizers = use_hard_coded_normalizers
```

98-98: Consider replacing assert with explicit validation for production robustness.

Asserts can be disabled with Python's -O flag. For critical invariant checks, consider using explicit if/raise patterns.

applications/bci_visualization/operators/reconstruction/__init__.py (1)

12-21: Consider sorting __all__ alphabetically for consistency.

Static analysis flags unsorted __all__. Sorting improves readability and reduces merge conflicts.

Proposed fix
```diff
 __all__ = [
     "BuildRHSOperator",
+    "BuildRHSOutput",
     "ConvertToVoxelsOperator",
     "NormalizeOperator",
+    "NormalizedSolveBatch",
     "RegularizedSolverOperator",
-    "BuildRHSOutput",
-    "NormalizedSolveBatch",
     "SolverResult",
     "VoxelMetadata",
 ]
```
applications/bci_visualization/operators/reconstruction/solver_operator.py (1)

29-34: Remove unused use_gpu parameter and _use_gpu field.

The use_gpu parameter is stored but never used. The solver always uses CuPy (GPU) regardless of this flag.

Proposed fix
```diff
     def __init__(
         self,
         *,
         reg: float = REG_DEFAULT,
-        use_gpu: bool = False,
         fragment: Any | None = None,
     ) -> None:
         super().__init__(fragment, name=self.__class__.__name__)
         self._reg = reg
-        self._use_gpu = use_gpu
```
applications/bci_visualization/utils/reconstruction/reg_inv.py (3)

15-15: Remove unused MAX_REASONABLE_COND_RATIO constant.

This constant is defined but never referenced anywhere in the code.


18-23: Improve return type annotation.

The return type object is too generic. Consider using cp.ndarray or a type alias for better IDE support and static analysis.

Proposed fix
```diff
+from typing import Any
+
+# Type alias for CuPy arrays (cp.ndarray is the actual type)
+CuPyArray = Any  # or use cp.ndarray if cupy stubs are available
+
 def solve_regularized_system(
     data_jacobians,
     data_rhs,
     wavelength_idx: int,
     reg: float,
-) -> object:
+) -> CuPyArray:
```

76-76: Unnecessary global statement.

The global keyword is only needed when reassigning a module-level variable. Since you're mutating the dict (not reassigning), this line can be removed.

Proposed fix
```diff
-    global _HESSIAN_CACHE
     data_hessian_reg = _HESSIAN_CACHE.get(wavelength_idx)
```
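
Putting the two reg_inv.py points together, a sketch of a cached regularized solve; the normal-equations form here is an assumption for illustration, since only the cache lookup line is quoted above:

```python
import cupy as cp

_HESSIAN_CACHE: dict[int, cp.ndarray] = {}


def solve_regularized_system(data_jacobian, data_rhs, wavelength_idx: int, reg: float):
    # Mutating the module-level dict does not rebind the name, so no
    # `global` statement is required.
    hessian = _HESSIAN_CACHE.get(wavelength_idx)
    if hessian is None:
        # Regularized normal equations: H = J^T J + reg * I
        hessian = data_jacobian.T @ data_jacobian
        hessian += reg * cp.eye(hessian.shape[0], dtype=hessian.dtype)
        _HESSIAN_CACHE[wavelength_idx] = hessian
    return cp.linalg.solve(hessian, data_jacobian.T @ data_rhs)
```
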
applications/bci_visualization/operators/reconstruction/build_rhs_operator.py (3)

22-23: Class naming convention: consider using Op suffix.

Per coding guidelines, operator class names should use TitleCase with 'Op' suffix (e.g., BuildRHSOp). Currently using BuildRHSOperator.

This is a minor stylistic deviation; the current name is still clear and descriptive.


131-131: Use exception instead of assert for production validation.

Using assert for validating jacobian feature dimension match can be disabled with -O flag. For production code, consider raising a ValueError instead.

♻️ Proposed fix
```diff
-        assert self._mega_jacobians_cpu.shape[1] == num_features
+        if self._mega_jacobians_cpu.shape[1] != num_features:
+            raise ValueError(
+                f"Jacobian features dimension ({self._mega_jacobians_cpu.shape[1]}) "
+                f"does not match realtime moments ({num_features})"
+            )
```

1-5: Missing unit tests for operator.

Per coding guidelines, Python operators must include unit tests in the same directory using the pytest framework. Consider adding a test_build_rhs_operator.py file with tests covering initialization, port setup, error handling (e.g., missing channel mappings), compute logic, and edge cases (e.g., first-frame baseline).

Based on learnings from coding guidelines.

Would you like me to help generate a skeleton test file for this operator?

applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py (2)

83-86: Consider using logging module instead of print() statements.

Multiple print() calls are used for debugging/status output. For production code, using Python's logging module (as done in build_rhs_operator.py) provides better control over log levels and output destination.

♻️ Proposed fix

Add at the top of the file:

```python
import logging

logger = logging.getLogger(__name__)
```

Then replace print statements, e.g.:

```diff
-            print(
-                f"VoxelStreamToVolume: Loaded mask from {self.mask_nifti_path}, "
-                f"shape: {self.mask_voxel_raw.shape}, values: {np.unique(self.mask_voxel_raw)}"
-            )
+            logger.info(
+                "Loaded mask from %s, shape: %s, values: %s",
+                self.mask_nifti_path,
+                self.mask_voxel_raw.shape,
+                np.unique(self.mask_voxel_raw),
+            )
```

1-6: Missing unit tests for operator.

Per coding guidelines, Python operators must include unit tests in the same directory using the pytest framework. Consider adding a test_voxel_stream_to_volume.py file.

Based on learnings from coding guidelines.

Would you like me to help generate a skeleton test file for this operator?

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 88286b5 and 467cabf.

📒 Files selected for processing (35)
  • applications/bci_visualization/CMakeLists.txt
  • applications/bci_visualization/Dockerfile
  • applications/bci_visualization/README.md
  • applications/bci_visualization/bci_visualization.py
  • applications/bci_visualization/bci_visualization.yaml
  • applications/bci_visualization/operators/CMakeLists.txt
  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.hpp
  • applications/bci_visualization/operators/color_buffer_passthrough/python/CMakeLists.txt
  • applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/reconstruction/__init__.py
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
  • applications/bci_visualization/operators/reconstruction/solver_operator.py
  • applications/bci_visualization/operators/stream.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/__init__.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/streams/__init__.py
  • applications/bci_visualization/streams/snirf.py
  • applications/bci_visualization/utils/__init__.py
  • applications/bci_visualization/utils/reconstruction/__init__.py
  • applications/bci_visualization/utils/reconstruction/assets.py
  • applications/bci_visualization/utils/reconstruction/hbo.py
  • applications/bci_visualization/utils/reconstruction/reg_inv.py
  • operators/volume_renderer/CMakeLists.txt
  • operators/volume_renderer/dataset.cpp
  • operators/volume_renderer/dataset.hpp
  • operators/volume_renderer/python/CMakeLists.txt
  • operators/volume_renderer/python/volume_renderer.cpp
  • operators/volume_renderer/python/volume_renderer_pydoc.hpp
  • operators/volume_renderer/video_buffer_blob.hpp
  • operators/volume_renderer/volume_renderer.cpp
  • operators/volume_renderer/volume_renderer.hpp
✅ Files skipped from review due to trivial changes (7)
  • operators/volume_renderer/python/volume_renderer_pydoc.hpp
  • operators/volume_renderer/CMakeLists.txt
  • applications/bci_visualization/utils/__init__.py
  • operators/volume_renderer/volume_renderer.hpp
  • applications/bci_visualization/streams/__init__.py
  • operators/volume_renderer/video_buffer_blob.hpp
  • operators/volume_renderer/python/CMakeLists.txt
🚧 Files skipped from review as they are similar to previous changes (8)
  • applications/bci_visualization/bci_visualization.yaml
  • applications/bci_visualization/CMakeLists.txt
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.hpp
  • applications/bci_visualization/Dockerfile
  • applications/bci_visualization/operators/CMakeLists.txt
  • applications/bci_visualization/utils/reconstruction/__init__.py
  • operators/volume_renderer/dataset.hpp
  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
🧰 Additional context used
📓 Path-based instructions (7)
**/operators/**/*.{py,cpp,hpp}

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Class names for operators should use TitleCase with 'Op' suffix (e.g., 'AdaptiveThresholdingOp')

Files:

  • operators/volume_renderer/dataset.cpp
  • applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp
  • operators/volume_renderer/python/volume_renderer.cpp
  • operators/volume_renderer/volume_renderer.cpp
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/__init__.py
  • applications/bci_visualization/operators/reconstruction/solver_operator.py
  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/reconstruction/__init__.py
  • applications/bci_visualization/operators/stream.py
**/operators/**/*.{py,cpp}

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Operator filename should match the directory name with appropriate extension (e.g., 'adaptive_thresholding.py')

Files:

  • operators/volume_renderer/dataset.cpp
  • applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp
  • operators/volume_renderer/python/volume_renderer.cpp
  • operators/volume_renderer/volume_renderer.cpp
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/__init__.py
  • applications/bci_visualization/operators/reconstruction/solver_operator.py
  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/reconstruction/__init__.py
  • applications/bci_visualization/operators/stream.py
**/*.{py,cpp,hpp}

📄 CodeRabbit inference engine (CONTRIBUTING.md)

**/*.{py,cpp,hpp}: Code must adhere to Holoscan SDK coding standards including style compliance, descriptive naming, and minimal abbreviations
Code must include appropriate error handling and validation for all operations
All code must pass linting checks using './holohub lint' before submission
All code contributions must use clear, English descriptive names for functions and variables
Code must include inline comments for complex logic and clear documentation of functionality

Files:

  • operators/volume_renderer/dataset.cpp
  • applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp
  • operators/volume_renderer/python/volume_renderer.cpp
  • operators/volume_renderer/volume_renderer.cpp
  • applications/bci_visualization/streams/snirf.py
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
  • applications/bci_visualization/utils/reconstruction/hbo.py
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/bci_visualization.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/__init__.py
  • applications/bci_visualization/operators/reconstruction/solver_operator.py
  • applications/bci_visualization/utils/reconstruction/reg_inv.py
  • applications/bci_visualization/utils/reconstruction/assets.py
  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/reconstruction/__init__.py
  • applications/bci_visualization/operators/stream.py
**/operators/**/*.cpp

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Operators with C++ implementations must include CMakeLists.txt for build system integration

Files:

  • operators/volume_renderer/dataset.cpp
  • applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp
  • operators/volume_renderer/python/volume_renderer.cpp
  • operators/volume_renderer/volume_renderer.cpp
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp
**/operators/**/CMakeLists.txt

📄 CodeRabbit inference engine (CONTRIBUTING.md)

CMakeLists.txt for operators must use 'add_holohub_operator()' with optional 'DEPENDS EXTENSIONS' for dependent extensions

Files:

  • applications/bci_visualization/operators/color_buffer_passthrough/python/CMakeLists.txt
**/applications/**/CMakeLists.txt

📄 CodeRabbit inference engine (CONTRIBUTING.md)

**/applications/**/CMakeLists.txt: CMakeLists.txt for applications must use 'add_holohub_application()' with optional 'DEPENDS OPERATORS' for dependent operators
Applications should include a testing section in CMakeLists.txt for functional testing using CTest

Files:

  • applications/bci_visualization/operators/color_buffer_passthrough/python/CMakeLists.txt
**/operators/**/*.py

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Python operators must include unit tests in the same directory using pytest framework

Files:

  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/__init__.py
  • applications/bci_visualization/operators/reconstruction/solver_operator.py
  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/reconstruction/__init__.py
  • applications/bci_visualization/operators/stream.py
🧠 Learnings (18)
📚 Learning: 2025-11-17T06:32:45.908Z
Learnt from: AndreasHeumann
Repo: nvidia-holoscan/holohub PR: 1220
File: applications/pipeline_visualization/cpp/create_tensor.cpp:64-87
Timestamp: 2025-11-17T06:32:45.908Z
Learning: In CUDA, when `cudaMemcpyAsync` is used with pageable (non-pinned) host memory, the operation is synchronous with respect to the host thread because the runtime must prevent memory modification during transfer. Only pinned (page-locked) memory allows truly asynchronous behavior. Therefore, no explicit `cudaStreamSynchronize` is required after `cudaMemcpyAsync` when the destination is pageable host memory.

Applied to files:

  • operators/volume_renderer/dataset.cpp
📚 Learning: 2025-10-22T16:53:45.393Z
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: operators/video_streaming/streaming_client_enhanced/python/CMakeLists.txt:16-24
Timestamp: 2025-10-22T16:53:45.393Z
Learning: The pybind11_add_holohub_module CMake macro in cmake/pybind11_add_holohub_module.cmake encapsulates all pybind11 setup internally, including finding pybind11, linking against holoscan::pybind11 conditionally, and linking the C++ operator target. Operator Python bindings in holohub should only call this macro without additional pybind11 setup.

Applied to files:

  • applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp
  • operators/volume_renderer/python/volume_renderer.cpp
  • applications/bci_visualization/operators/color_buffer_passthrough/python/CMakeLists.txt
📚 Learning: 2025-11-19T04:58:30.227Z
Learnt from: finalelement
Repo: nvidia-holoscan/holohub PR: 1237
File: applications/surgical_scene_recon/training/utils/sh_utils.py:1-23
Timestamp: 2025-11-19T04:58:30.227Z
Learning: When reviewing files that contain third-party code with original copyright notices (e.g., from PlenOctree, other open-source projects), do not suggest modifying the original copyright years or attribution. The original copyright notice must be preserved exactly as required by redistribution licenses (BSD, MIT, Apache, etc.). If substantial modifications are made by the current project, a separate copyright line can be added, but the original must remain unchanged.

Applied to files:

  • applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp
  • applications/bci_visualization/streams/snirf.py
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/bci_visualization.py
  • applications/bci_visualization/operators/reconstruction/solver_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/reconstruction/__init__.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/test_*.py : Python test files must use 'pytest' framework and include tests for initialization, port setup, error handling, compute logic, and edge cases

Applied to files:

  • applications/bci_visualization/streams/snirf.py
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/stream.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/*.py : Python operators must include unit tests in the same directory using pytest framework

Applied to files:

  • applications/bci_visualization/streams/snirf.py
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/stream.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/*.{py,cpp,hpp} : Code must include appropriate error handling and validation for all operations

Applied to files:

  • applications/bci_visualization/streams/snirf.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/CMakeLists.txt : CMakeLists.txt for operators must use 'add_holohub_operator()' with optional 'DEPENDS EXTENSIONS' for dependent extensions

Applied to files:

  • applications/bci_visualization/operators/color_buffer_passthrough/python/CMakeLists.txt
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/applications/**/CMakeLists.txt : CMakeLists.txt for applications must use 'add_holohub_application()' with optional 'DEPENDS OPERATORS' for dependent operators

Applied to files:

  • applications/bci_visualization/operators/color_buffer_passthrough/python/CMakeLists.txt
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/*.cpp : Operators with C++ implementations must include CMakeLists.txt for build system integration

Applied to files:

  • applications/bci_visualization/operators/color_buffer_passthrough/python/CMakeLists.txt
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/workflows/**/CMakeLists.txt : CMakeLists.txt for workflows must use 'add_holohub_application()' with optional 'DEPENDS OPERATORS' for dependent operators

Applied to files:

  • applications/bci_visualization/operators/color_buffer_passthrough/python/CMakeLists.txt
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/pkg/CMakeLists.txt : CMakeLists.txt for packages must use 'add_holohub_package()' function with optional APPLICATIONS and OPERATORS dependencies

Applied to files:

  • applications/bci_visualization/operators/color_buffer_passthrough/python/CMakeLists.txt
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/test_*.py : Unit test files for Python operators should use 'test_' prefix followed by the operator name (e.g., 'test_adaptive_thresholding.py')

Applied to files:

  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/stream.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/test_*.py : Python operator test files should reuse common fixtures from 'conftest.py'

Applied to files:

  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/stream.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/*.{py,cpp} : Operator filename should match the directory name with appropriate extension (e.g., 'adaptive_thresholding.py')

Applied to files:

  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/ : Directory names for operators should use snake_case (e.g., 'adaptive_thresholding')

Applied to files:

  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Contributions must use consistent directory structures based on their type (operators, applications, workflows, extensions, tutorials, packages)

Applied to files:

  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/*.{py,cpp,hpp} : Class names for operators should use TitleCase with 'Op' suffix (e.g., 'AdaptiveThresholdingOp')

Applied to files:

  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/(@(operators|applications|workflows))/**/metadata.json : Operators, applications, and workflows should use the '<holohub_data_dir>' placeholder in metadata.json 'run' command for referencing data directory

Applied to files:

  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/bci_visualization.py
🧬 Code graph analysis (9)
applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp (1)
applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp (2)
  • setup (21-28)
  • setup (21-21)
applications/bci_visualization/streams/snirf.py (1)
applications/bci_visualization/streams/base_nirs.py (5)
  • ChannelInfo (13-20)
  • BaseNirsStream (23-33)
  • start (24-25)
  • get_channels (28-29)
  • stream_nirs (32-33)
applications/bci_visualization/operators/reconstruction/build_rhs_operator.py (3)
applications/bci_visualization/utils/reconstruction/assets.py (1)
  • Assets (22-43)
applications/bci_visualization/operators/stream.py (1)
  • SampleOutput (18-20)
applications/bci_visualization/operators/reconstruction/types.py (2)
  • BuildRHSOutput (23-31)
  • VoxelMetadata (16-19)
applications/bci_visualization/operators/voxel_stream_to_volume/__init__.py (1)
applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py (1)
  • VoxelStreamToVolumeOp (16-361)
applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py (2)
applications/bci_visualization/utils/reconstruction/hbo.py (3)
  • ExtinctionCoefficient (15-58)
  • HbO (61-141)
  • convert_mua_to_hb (98-141)
applications/bci_visualization/operators/reconstruction/types.py (2)
  • SolverResult (52-58)
  • VoxelMetadata (16-19)
applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py (1)
applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py (2)
  • setup (113-116)
  • compute (118-147)
applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp (1)
applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.hpp (2)
  • spec (34-34)
  • input (35-35)
applications/bci_visualization/operators/reconstruction/__init__.py (1)
applications/bci_visualization/operators/reconstruction/types.py (4)
  • BuildRHSOutput (23-31)
  • NormalizedSolveBatch (42-48)
  • SolverResult (52-58)
  • VoxelMetadata (16-19)
applications/bci_visualization/operators/stream.py (2)
applications/bci_visualization/streams/base_nirs.py (4)
  • ChannelInfo (13-20)
  • start (24-25)
  • get_channels (28-29)
  • stream_nirs (32-33)
applications/bci_visualization/streams/snirf.py (3)
  • start (46-53)
  • get_channels (55-61)
  • stream_nirs (111-156)
🪛 Cppcheck (2.19.0)
applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp

[error] 32-32: There is an unknown macro here somewhere. Configuration is required. If HOLOSCAN_OPERATOR_FORWARD_ARGS is a macro then please configure it.

(unknownMacro)

applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp

[error] 32-32: There is an unknown macro here somewhere. Configuration is required. If HOLOSCAN_OPERATOR_FORWARD_ARGS is a macro then please configure it.

(unknownMacro)

🪛 GitHub Actions: Check linting
operators/volume_renderer/dataset.cpp

[error] 113-113: Lines should be <= 100 characters long [whitespace/line_length]

operators/volume_renderer/volume_renderer.cpp

[error] 289-289: Lines should be <= 100 characters long [whitespace/line_length]


[error] 306-306: Lines should be <= 100 characters long [whitespace/line_length]


[error] 308-308: Line ends in whitespace. Consider deleting these extra spaces. [whitespace/end_of_line]


[error] 309-309: Lines should be <= 100 characters long [whitespace/line_length]


[error] 313-313: Line ends in whitespace. Consider deleting these extra spaces. [whitespace/end_of_line]


[error] 340-340: Lines should be <= 100 characters long [whitespace/line_length]


[error] 551-551: Line ends in whitespace. Consider deleting these extra spaces. [whitespace/end_of_line]


[error] 858-858: Lines should be <= 100 characters long [whitespace/line_length]

applications/bci_visualization/bci_visualization.py

[error] 1-1: Imports are incorrectly sorted and/or formatted.

🪛 LanguageTool
applications/bci_visualization/README.md

[uncategorized] ~33-~33: Do not mix variants of the same word (‘travelled’ and ‘traveled’) within a single text.
Context: ...r times correspond to photons that have travelled farther into the tissue. In a given sec...

(EN_EXACT_COHERENCY_RULE)


[style] ~44-~44: Consider a more concise word here.
Context: ... from the perspective of the detector). In order to have a more anatomical representation o...

(IN_ORDER_TO_PREMIUM)


[grammar] ~51-~51: Use a hyphen to join words.
Context: ...son Thor Developer kit. Any Holoscan SDK supported platform should work. To run...

(QB_NEW_EN_HYPHEN)

🪛 markdownlint-cli2 (0.18.1)
applications/bci_visualization/README.md

80-80: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🪛 Ruff (0.14.10)
applications/bci_visualization/streams/snirf.py

44-44: Avoid specifying long messages outside the exception class

(TRY003)


146-146: Loop control variable ts not used within loop body

Rename unused ts to _ts

(B007)


146-146: zip() without an explicit strict= parameter

Add explicit value for parameter strict=

(B905)

applications/bci_visualization/operators/reconstruction/normalize_operator.py

54-54: Unused method argument: context

(ARG002)


58-58: Avoid specifying long messages outside the exception class

(TRY003)

applications/bci_visualization/utils/reconstruction/hbo.py

34-34: Value being cast to int is already an integer

Remove unnecessary int call

(RUF046)


90-90: Value being rounded is already an integer

Remove unnecessary round call

(RUF057)


92-94: Avoid specifying long messages outside the exception class

(TRY003)

applications/bci_visualization/operators/reconstruction/build_rhs_operator.py

86-89: Avoid specifying long messages outside the exception class

(TRY003)


92-92: Avoid specifying long messages outside the exception class

(TRY003)

applications/bci_visualization/operators/reconstruction/solver_operator.py

44-44: Unused method argument: context

(ARG002)

applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py

78-78: Avoid specifying long messages outside the exception class

(TRY003)


122-122: Unused method argument: context

(ARG002)

applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py

74-74: Avoid specifying long messages outside the exception class

(TRY003)


88-90: Avoid specifying long messages outside the exception class

(TRY003)


109-109: Unused method argument: context

(ARG002)


116-118: Avoid specifying long messages outside the exception class

(TRY003)


120-122: Avoid specifying long messages outside the exception class

(TRY003)


136-136: Avoid specifying long messages outside the exception class

(TRY003)


238-240: Avoid specifying long messages outside the exception class

(TRY003)

applications/bci_visualization/operators/reconstruction/__init__.py

12-21: __all__ is not sorted

Apply an isort-style sorting to __all__

(RUF022)

applications/bci_visualization/operators/stream.py

43-43: Unused method argument: op_input

(ARG002)


43-43: Unused method argument: context

(ARG002)


48-48: Avoid specifying long messages outside the exception class

(TRY003)

🔇 Additional comments (25)
applications/bci_visualization/operators/voxel_stream_to_volume/__init__.py (1)

1-8: LGTM!

The module structure is clean and follows Python packaging conventions. The operator class name VoxelStreamToVolumeOp correctly follows the TitleCase with 'Op' suffix guideline.

applications/bci_visualization/utils/reconstruction/assets.py (1)

1-139: LGTM!

The asset loading implementation is well-structured with:

  • Clear dataclass definition with comprehensive documentation
  • Proper use of pathlib for file operations
  • Appropriate singleton pattern for caching expensive assets
  • Clean type annotations throughout

The code follows Python best practices and is easy to understand.
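
For context, the singleton-caching pattern can be as small as the following sketch; the `Assets` fields and loader here are stand-ins, not the module's real API:

```python
from dataclasses import dataclass
from functools import lru_cache


@dataclass(frozen=True)
class Assets:  # stand-in for the real dataclass in assets.py
    jacobian_path: str  # hypothetical field


@lru_cache(maxsize=1)
def get_assets() -> Assets:
    # Heavy one-time loads happen on the first call; every later call
    # returns the same cached instance.
    return Assets(jacobian_path="data/bci_visualization/jacobian.npy")  # hypothetical path
```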

applications/bci_visualization/streams/snirf.py (4)

1-4: LGTM! Copyright properly attributed.

The copyright attribution to Kernel is appropriate given the collaboration mentioned in the PR objectives, and the Apache-2.0 license is consistent with the project.


22-32: LGTM! Well-structured NamedTuple.

The SNIRFChannel structure clearly captures all relevant channel metadata with descriptive field names.


41-44: LGTM! Proper file validation.

The initialization correctly validates that the SNIRF file exists before proceeding. The error message is descriptive and helpful.


55-61: LGTM! Clean aggregation logic.

The method correctly extracts channel information from unique channels and constructs the ChannelInfo structure.

applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp (1)

21-28: LGTM! Appropriate queue policy for real-time streaming.

The kPop queue policy correctly drops stale frames when the queue is full, which is appropriate for real-time visualization where displaying the latest frame is more important than processing every frame.

applications/bci_visualization/operators/color_buffer_passthrough/python/CMakeLists.txt (1)

16-22: LGTM! Correct usage of pybind11_add_holohub_module.

The CMake configuration correctly uses the pybind11_add_holohub_module macro with appropriate parameters for binding the ColorBufferPassthroughOp to Python.

operators/volume_renderer/python/volume_renderer.cpp (2)

33-33: LGTM! Added necessary include for type registration.

The inclusion of emitter_receiver_registry.hpp is required for the new register_types function.


125-129: LGTM! Proper type registration for array types.

The register_types function correctly registers std::array<uint32_t, 3> and std::array<bool, 3> with the emitter/receiver registry, enabling these types to be passed between operators in the Holoscan pipeline. These are likely used for volume dimensions and boolean flags respectively.

applications/bci_visualization/operators/stream.py (3)

18-20: LGTM! Clear data structure for sample output.

The SampleOutput NamedTuple appropriately bundles the data array with channel metadata for downstream operators.
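
A minimal sketch of such a bundle (field names are assumptions; the real definition lives in operators/stream.py):

```python
from typing import NamedTuple

import numpy as np


class SampleOutput(NamedTuple):
    sample: np.ndarray  # one frame of NIRS moment data
    channels: tuple     # static per-channel metadata (ChannelInfo in the app)
```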


23-35: LGTM! Proper operator initialization and setup.

The initialization correctly stores the stream reference, and the setup method appropriately declares the output port.


37-40: LGTM! Correct streaming initialization.

The start method properly initializes the stream, retrieves channel information, and creates the iterator for subsequent sample emission.

operators/volume_renderer/dataset.cpp (1)

144-147: CUDA stream propagation looks correct.

The cudaMemcpy3DAsync with the provided stream enables proper async execution. The access_gpu guard scope ensures the destination memory remains valid during the copy operation.

applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp (2)

39-47: LGTM - Standard pybind11 trampoline for Holoscan operators.

The constructor correctly initializes the operator with fragment, name, and spec. The explicit (void)args silences unused parameter warnings appropriately.


50-60: LGTM - Python module binding is correctly structured.

The module exposes the operator class with appropriate constructor signature and setup method. The shared_ptr holder ensures proper Python/C++ lifetime management.

applications/bci_visualization/operators/reconstruction/solver_operator.py (1)

84-85: Validate that num_absorbers == 2 before indexing.

The code assumes exactly 2 absorbers (mua at index 0, musp at index 1). If batch.num_absorbers differs, this will produce incorrect results or fail.

Proposed validation
```diff
         # Reshape result to separate absorbers into mua/musp
+        if batch.num_absorbers != 2:
+            raise ValueError(f"Expected 2 absorbers (mua, musp), got {batch.num_absorbers}")
         reshaped = result.reshape(
             (-1, batch.num_absorbers, num_wavelengths),
             order="F",
         )
```
applications/bci_visualization/bci_visualization.py (3)

147-155: Well-documented deadlock prevention pattern.

The comments clearly explain the purpose of setting ConditionType.NONE and using the passthrough operator with kPop queue policy. This is important documentation for future maintainers.
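
For readers new to the pattern, a sketch of a non-blocking feedback port in Holoscan Python; the operator and port names are illustrative, not the app's exact code:

```python
from holoscan.core import ConditionType, Operator, OperatorSpec


class FeedbackAwareOp(Operator):  # hypothetical name
    def setup(self, spec: OperatorSpec):
        # No message-available condition on the feedback port: compute() can
        # fire before the first camera pose arrives, breaking the cycle that
        # would otherwise deadlock the render loop.
        spec.input("camera_pose").condition(ConditionType.NONE)
        spec.output("color_buffer")
```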


284-284: LGTM - Scheduler configuration is appropriate.

Using EventBasedScheduler with 5 worker threads and stop_on_deadlock=True is suitable for this multi-stage pipeline.


263-278: The data path in metadata.json is properly configured. The run command correctly uses the <holohub_data_dir> placeholder for both --renderer_config and --mask_path arguments, and the holoscan_sdk section includes the required minimum_required_version and tested_versions fields.

operators/volume_renderer/volume_renderer.cpp (2)

284-287: LGTM: Camera pose caching is a good addition.

The cached camera pose initialization and reuse logic correctly maintains stability when no new camera pose input is received. The identity matrix default is appropriate.


306-307: Good fix: Resetting volume before setting prevents memory growth.

Calling ResetVolume(type) before SetVolume addresses unbounded memory growth from accumulating volumes.

applications/bci_visualization/operators/reconstruction/build_rhs_operator.py (1)

113-116: First-frame skip: verify downstream operators handle missing input gracefully.

On the first frame, _apply_baseline returns None and the operator returns early without emitting any output. Ensure downstream operators (e.g., NormalizeOperator, RegularizedSolverOperator) handle the absence of input on the first frame.

applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py (2)

16-16: Class naming follows convention - LGTM.

The class VoxelStreamToVolumeOp correctly uses the TitleCase with 'Op' suffix convention.


337-359: GPU resampling implementation looks correct.

The affine transformation logic correctly computes the mapping matrix from target to source coordinates, and the CuPy-based resampling is appropriate for GPU-accelerated volume processing.
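
As a sketch of the general technique (not the operator's exact code), target-to-source affine resampling with CuPy can be expressed via `cupyx.scipy.ndimage.affine_transform`:

```python
import cupy as cp
from cupyx.scipy import ndimage as cupy_ndimage


def resample_volume(source, target_to_source, output_shape):
    """Resample a volume on the GPU given a 4x4 target->source affine."""
    # affine_transform maps output coordinates back into the input volume,
    # so it takes the target-to-source transform split into matrix + offset.
    matrix = target_to_source[:3, :3]
    offset = target_to_source[:3, 3]
    return cupy_ndimage.affine_transform(
        source, matrix, offset=offset, output_shape=output_shape, order=1
    )
```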

Comment on lines +91 to +189
```python
class ConvertToVoxelsOperator(Operator):
    """Expand trimmed solver outputs back to the full voxel grid."""

    def __init__(
        self,
        *,
        coefficients: Dict[int, ExtinctionCoefficient],
        ijk: NDArray[np.float32],
        xyz: NDArray[np.float32],
        fragment: Any | None = None,
        use_gpu: bool = False,
    ) -> None:
        self._hbo = HbO(coefficients, use_gpu=use_gpu)
        self._affine = np.round(_compute_affine(xyz, ijk), 6)
        self._affine_sent: bool = False
        self._cached_affine: NDArray[np.float32] | None = None
        self._use_gpu = use_gpu
        self._cum_hbo: NDArray[np.float32] | None = None
        self._cum_hbr: NDArray[np.float32] | None = None

        super().__init__(fragment, name=self.__class__.__name__)

    def setup(self, spec: OperatorSpec) -> None:
        spec.input("result")
        spec.output("affine_4x4")
        spec.output("hb_voxel_data")

    def compute(
        self,
        op_input: InputContext,
        op_output: OutputContext,
        context: ExecutionContext,
    ) -> None:
        result: SolverResult = op_input.receive("result")
        cuda_stream = op_input.receive_cuda_stream("result")

        with cp.cuda.ExternalStream(cuda_stream):
            data_mua_full = _convert_to_full_voxels(
                result.data_mua,
                result.num_full_voxels,
                result.idxs_significant_voxels,
            )

            data_hbo, data_hbr = self._hbo.convert_mua_to_hb(
                data_mua_full,
                result.wavelengths,
                result.idxs_significant_voxels,
            )

            self._cum_hbo = data_hbo if self._cum_hbo is None else self._cum_hbo + data_hbo
            self._cum_hbr = data_hbr if self._cum_hbr is None else self._cum_hbr + data_hbr

            layout = self._compute_voxel_layout(result.voxel_metadata)
            hb_volume = self._voxelize_hbo(self._cum_hbo, layout)

            self._emit_affine_once(op_output)
            op_output.emit(hb_volume, "hb_voxel_data")

    def _emit_affine_once(self, op_output: OutputContext) -> None:
        if self._affine_sent:
            return

        op_output.emit(self._affine, "affine_4x4")
        self._affine_sent = True

    def _voxelize_hbo(
        self,
        data_hbo: NDArray[np.float32],
        layout: Tuple[NDArray[np.int_], Tuple[int, int, int], NDArray[np.int_]],
    ) -> NDArray[np.float32]:
        scatter_coords, normalized_shape, _ijk_int = layout
        scatter_coords = scatter_coords.astype(np.int32, copy=False)  # for indexing

        num_voxels = data_hbo.shape[0]
        assert num_voxels == scatter_coords.shape[0]

        # scatter ijk to full voxel grid
        volume_small: NDArray[np.float32] = cp.zeros(normalized_shape, dtype=data_hbo.dtype)
        x_idx, y_idx, z_idx = scatter_coords.T
        volume_small[x_idx, y_idx, z_idx] = data_hbo

        return volume_small

    def _compute_voxel_layout(
        self,
        metadata: VoxelMetadata,
    ) -> Tuple[NDArray[np.int_], Tuple[int, int, int], NDArray[np.int_]]:
        """
        Compute normalized voxel coordinates and grid shape from metadata.
        """
        ijk = cp.asarray(metadata.ijk)
        assert ijk.ndim == 2 and ijk.shape[1] == 3

        ijk_int = cp.rint(ijk)
        min_idx = ijk_int.min(axis=0)
        normalized = ijk_int - min_idx
        shape = tuple(int(axis_max) + 1 for axis_max in normalized.max(axis=0))
        assert all(dim > 0 for dim in shape)
        return normalized, cast(Tuple[int, int, int], shape), ijk_int
```

🛠️ Refactor suggestion | 🟠 Major

Add unit tests for this operator.

Python operators must include unit tests in the same directory using the pytest framework. No test file is present for ConvertToVoxelsOperator.

Based on learnings, as per coding guidelines for operators in **/operators/**/*.py.

Would you like me to generate a test template for this operator?

🧰 Tools
🪛 Ruff (0.14.10)

122-122: Unused method argument: context

(ARG002)

Comment on lines +27 to +48
```python
class NormalizeOperator(Operator):
    """Apply Jacobian/RHS normalization before solver execution."""

    def __init__(
        self,
        *,
        fragment: Any | None = None,
        use_gpu: bool = False,
        use_hard_coded_normalizers: bool = True,
    ) -> None:
        super().__init__(fragment, name=self.__class__.__name__)
        self._jacobian_cache: NDArray[np.float32] | None = None
        self._use_gpu = use_gpu
        self._max_rhs: NDArray[np.float32] | None = None

        self._use_hard_coded_normalizers = use_hard_coded_normalizers
        self._hard_coded_row_normalizers_cache: NDArray[np.float32] | None = None
        self._hard_coded_normalized_jacobian_cache: NDArray[np.float32] | None = None

    def setup(self, spec: OperatorSpec) -> None:
        spec.input("batch")
        spec.output("normalized")
```

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```bash
fd -t f 'test.*normalize' applications/bci_visualization/operators/reconstruction/
```

Repository: nvidia-holoscan/holohub

Length of output: 49


🏁 Script executed:

```bash
# Also check parent directories in case tests are organized differently
fd -t f 'test.*normalize' applications/bci_visualization/operators/
```

Repository: nvidia-holoscan/holohub

Length of output: 49


🏁 Script executed:

```bash
# Check if the operators/reconstruction directory exists and list its contents
ls -la applications/bci_visualization/operators/reconstruction/
```

Repository: nvidia-holoscan/holohub

Length of output: 586


🏁 Script executed:

find applications/bci_visualization/operators/ -name "conftest.py" -o -name "test_*.py" -o -name "*_test.py"

Repository: nvidia-holoscan/holohub

Length of output: 49


🏁 Script executed:

# Also check what's in the parent operators directory
ls -la applications/bci_visualization/operators/

Repository: nvidia-holoscan/holohub

Length of output: 571


Add unit tests for NormalizeOperator using the pytest framework.

Python operators must include unit tests in the same directory using the pytest framework. Create test_normalize_operator.py in applications/bci_visualization/operators/reconstruction/ with tests covering initialization, port setup, error handling, compute logic, and edge cases. Reuse common fixtures from conftest.py where applicable.

🤖 Prompt for AI Agents
In
@applications/bci_visualization/operators/reconstruction/normalize_operator.py
around lines 27 - 48, Add a new pytest file test_normalize_operator.py alongside
NormalizeOperator that imports NormalizeOperator and relevant fixtures from
conftest.py; write tests that (1) instantiate NormalizeOperator with various
constructor args (use_gpu True/False, use_hard_coded_normalizers True/False) to
verify attributes like _use_gpu and _use_hard_coded_normalizers are set, (2)
call setup(spec) using the OperatorSpec fixture to assert input("batch") and
output("normalized") ports are registered, (3) exercise error handling by
passing invalid inputs to compute() (or the operator's processing entrypoint)
and asserting appropriate exceptions/messages, (4) validate compute logic with
representative batch payloads including normal cases and edge cases (empty
batch, all-zero jacobian/RHS) and assert expected normalized outputs or caches
(_jacobian_cache, _max_rhs, _hard_coded_row_normalizers_cache,
_hard_coded_normalized_jacobian_cache) are populated, and (5) parametrize tests
to cover both GPU and CPU paths and reuse shared fixtures from conftest.py for
OperatorSpec, sample batches, and any mock device contexts.
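
A minimal sketch of what such a test file could look like. The import path and the FakeSpec stand-in are assumptions for illustration, not part of the PR; adjust them to the actual package layout and any shared conftest.py fixtures.

```python
# test_normalize_operator.py - minimal sketch; the import path below is an
# assumption, adjust it to the actual package layout.
import pytest
from holoscan.core import Fragment

from normalize_operator import NormalizeOperator


class FakeSpec:
    """Stand-in for OperatorSpec that records port registrations."""

    def __init__(self):
        self.inputs = []
        self.outputs = []

    def input(self, name):
        self.inputs.append(name)

    def output(self, name):
        self.outputs.append(name)


@pytest.mark.parametrize("use_gpu", [False, True])
@pytest.mark.parametrize("use_hard_coded_normalizers", [False, True])
def test_init_stores_flags(use_gpu, use_hard_coded_normalizers):
    op = NormalizeOperator(
        fragment=Fragment(),
        use_gpu=use_gpu,
        use_hard_coded_normalizers=use_hard_coded_normalizers,
    )
    assert op._use_gpu is use_gpu
    assert op._use_hard_coded_normalizers is use_hard_coded_normalizers


def test_setup_registers_ports():
    op = NormalizeOperator(fragment=Fragment())
    spec = FakeSpec()
    op.setup(spec)
    assert spec.inputs == ["batch"]
    assert spec.outputs == ["normalized"]
```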

Comment on lines +59 to +62
        result = cp.zeros(
            (num_cols_expected, num_wavelengths),
            dtype=batch.systems[0].jacobian.dtype,
        )

⚠️ Potential issue | 🟡 Minor

Add validation for empty batch.systems.

Accessing batch.systems[0] on line 61 will raise IndexError if systems is empty.

Proposed fix
+        if not batch.systems:
+            raise ValueError("batch.systems cannot be empty")
+
         # GPU-only: always use CuPy.
         result = cp.zeros(
             (num_cols_expected, num_wavelengths),
             dtype=batch.systems[0].jacobian.dtype,
         )

Comment on lines +42 to +50
    def compute(
        self, op_input: InputContext, op_output: OutputContext, context: ExecutionContext
    ) -> None:

        sample = next(self._iter, None)
        if sample is None:
            raise StopIteration("No more samples available in the stream.")

        op_output.emit(SampleOutput(sample, self._channels), "samples")

🛠️ Refactor suggestion | 🟠 Major

Address unused parameters and simplify exception.

Two improvements:

  1. Lines 42-43: The parameters op_input and context are unused. Prefix them with underscores to indicate they're intentionally unused: _op_input and _context.

  2. Line 48: Simplify the exception instantiation per static analysis. The message is straightforward enough to include inline.

♻️ Proposed fix
     def compute(
-        self, op_input: InputContext, op_output: OutputContext, context: ExecutionContext
+        self, _op_input: InputContext, op_output: OutputContext, _context: ExecutionContext
     ) -> None:

         sample = next(self._iter, None)
         if sample is None:
-            raise StopIteration("No more samples available in the stream.")
+            raise StopIteration

         op_output.emit(SampleOutput(sample, self._channels), "samples")
🧰 Tools
🪛 Ruff (0.14.10)

43-43: Unused method argument: op_input

(ARG002)


43-43: Unused method argument: context

(ARG002)


48-48: Avoid specifying long messages outside the exception class

(TRY003)

🤖 Prompt for AI Agents
In @applications/bci_visualization/operators/stream.py around lines 42 - 50,
Rename the unused parameters in compute to _op_input and _context to mark them
as intentionally unused, and replace the explicit StopIteration instantiation
with a bare raise StopIteration (i.e., remove the string argument) to satisfy
the static analysis simplification; update the function signature (compute(self,
_op_input: InputContext, op_output: OutputContext, _context: ExecutionContext))
and the exception line accordingly in the compute method.

Comment on lines 549 to 561
  bool new_volume = impl_->receive_volume(input, context, Dataset::Types::Density);
  bool new_mask = impl_->receive_volume(input, context, Dataset::Types::Segmentation);

  // there are datasets without mask volume, if we receive a density volume
  // only, reset the mask volume
  if (new_volume && !new_mask) {
    impl_->dataset_.ResetVolume(Dataset::Types::Segmentation);
  } else {
    new_volume = true;
  }

⚠️ Potential issue | 🟡 Minor

Fix trailing whitespace on line 551.

The linter flagged trailing whitespace on line 551.

🔧 Proposed fix
   bool new_volume = impl_->receive_volume(input, context, Dataset::Types::Density);
   bool new_mask = impl_->receive_volume(input, context, Dataset::Types::Segmentation);
-  
+
   // there are datasets without mask volume, if we receive a density volume
   // only, reset the mask volume
   if (new_volume && !new_mask) {
🧰 Tools
🪛 GitHub Actions: Check linting

[error] 551-551: Line ends in whitespace. Consider deleting these extra spaces. [whitespace/end_of_line]

🤖 Prompt for AI Agents
In @operators/volume_renderer/volume_renderer.cpp around lines 549 - 556, Remove
the trailing whitespace on the line that declares new_mask (the line calling
impl_->receive_volume for Dataset::Types::Segmentation); ensure the line "bool
new_mask = impl_->receive_volume(input, context, Dataset::Types::Segmentation);"
has no trailing spaces or tabs so the linter no longer flags it.

@greptile-apps greptile-apps bot left a comment

Greptile Overview

Greptile Summary

Introduces a complete BCI visualization application that processes fNIRS data through GPU-accelerated reconstruction to real-time 3D volume rendering. The pipeline streams SNIRF format data through operators that build Jacobian matrices, normalize and solve regularized inverse problems, convert results to voxel space, and render HbO activity volumes. Volume renderer enhancements include CUDA stream integration for async GPU operations, camera pose caching to maintain view state between frames, and a memory leak fix that resets old volumes before loading new ones.

Confidence Score: 3/5

  • Generally safe with minor edge case handling issues in reconstruction operators
  • The PR introduces substantial new functionality with good architecture and CUDA stream handling. However, there are three logic issues that could cause runtime failures: (1) statistics initialization in VoxelStreamToVolumeOp can fail to initialize when min/max are exactly zero, leading to indefinite zero outputs, (2) potential TypeError when updating uninitialized statistics, and (3) log operation on zero/negative NIRS moments will raise domain errors. These issues affect the reconstruction pipeline's robustness but don't represent security vulnerabilities. The volume renderer changes are well-implemented with proper memory management and stream synchronization.
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py (statistics initialization logic), applications/bci_visualization/operators/reconstruction/build_rhs_operator.py (log domain validation)

Important Files Changed

File Analysis

Filename Score Overview
operators/volume_renderer/volume_renderer.cpp 4/5 Adds CUDA stream handling, camera pose caching, memory leak fix, and conditional rendering logic for streaming volumes
applications/bci_visualization/operators/reconstruction/build_rhs_operator.py 3/5 Converts fNIRS moments to Jacobian/RHS batches with GPU caching and CUDA stream propagation; skips first frame for baseline
applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py 3/5 Converts HbO voxel data to 3D volumes with adaptive normalization, resampling, and mask integration for rendering

Sequence Diagram

sequenceDiagram
    participant Stream as StreamOperator
    participant BuildRHS as BuildRHSOperator
    participant Normalize as NormalizeOperator
    participant Solver as RegularizedSolver
    participant ToVoxels as ConvertToVoxels
    participant ToVolume as VoxelStreamToVolume
    participant Renderer as VolumeRenderer
    participant Holoviz as HolovizOp

    Stream->>BuildRHS: fNIRS moments
    Note over BuildRHS: Allocate CUDA stream<br/>Convert to Jacobian/RHS<br/>Skip first frame (baseline)
    BuildRHS->>Normalize: Jacobian & RHS batch
    Note over Normalize: Apply row normalization<br/>Cache normalized Jacobian
    Normalize->>Solver: Normalized systems
    Note over Solver: Solve per wavelength<br/>GPU-accelerated inverse
    Solver->>ToVoxels: mua/musp results
    Note over ToVoxels: Convert to HbO/HbR<br/>Emit affine once
    ToVoxels->>ToVolume: HbO voxel data + affine
    Note over ToVolume: Adaptive normalization<br/>Resample to mask space
    ToVolume->>Renderer: Dense volume + mask
    Note over Renderer: Async CUDA rendering<br/>Cache camera pose
    Renderer->>Holoviz: Color buffer
    Holoviz->>Renderer: Camera pose updates

Comment on lines +268 to +272
        if (
            (self.global_min is None or self.global_max is None)
            and current_min != 0
            and current_max != 0
        ):

The initialization logic skips frames where current_min == 0 or current_max == 0, but this condition may be too restrictive. If the first frame legitimately has a min or max of exactly 0 (e.g., all-zero activations), statistics will never initialize, and _normalize_and_process_activated_voxels will return zeros indefinitely (line 307). Consider initializing even when min/max are zero, or use a more specific check like cp.all(hb_voxel == 0) to detect truly empty frames.
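
One possible shape for that initialization, as a sketch. RunningStats, the stats_alpha default, and the method framing are illustrative stand-ins for the operator's state, not part of the PR; only the empty-frame check and direct seeding are the suggested change.

```python
import cupy as cp


class RunningStats:
    """Sketch of EMA min/max tracking with explicit seeding on first use."""

    def __init__(self, stats_alpha: float = 0.05):  # placeholder default
        self.stats_alpha = stats_alpha
        self.global_min = None
        self.global_max = None
        self.frame_count = 0

    def update(self, hb_voxel: cp.ndarray) -> None:
        self.frame_count += 1
        # Skip truly empty frames rather than any frame whose min/max is 0.
        if bool(cp.all(hb_voxel == 0)):
            return

        current_min = float(hb_voxel.min())
        current_max = float(hb_voxel.max())

        if self.global_min is None or self.global_max is None:
            # Seed directly on the first informative frame, even if a bound is 0.
            self.global_min = current_min
            self.global_max = current_max
            return

        # Warm-up uses a larger alpha so early frames converge faster.
        alpha = self.stats_alpha if self.frame_count > 10 else 0.3
        self.global_min = (1 - alpha) * self.global_min + alpha * current_min
        self.global_max = (1 - alpha) * self.global_max + alpha * current_max
```

Seeding before the EMA update also removes the None-arithmetic hazard flagged in the next comment.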

        alpha = self.stats_alpha if self.frame_count > 10 else 0.3

        # Update running statistics
        self.global_min = (1 - alpha) * self.global_min + alpha * current_min

If self.global_min or self.global_max is None (statistics not yet initialized), this line will raise a TypeError when attempting arithmetic with None. The logic assumes initialization always succeeds on the first non-zero frame, but the early return at line 279 means this update is skipped during initialization. Verify that self.global_min and self.global_max are always set before reaching this line, or add a guard condition.


        # take log of moment 0 to convert to optical density
        # shape is (moments, channels, wavelengths)
        cp.log(realtime_moments[0], out=realtime_moments[0])

In-place log operation on realtime_moments[0] will fail if any elements are negative or zero, raising domain errors. NIRS moment data can contain zeros or negative values due to noise or measurement issues. Consider adding validation or replacing zeros/negatives before the log operation (e.g., cp.maximum(realtime_moments[0], epsilon)) to prevent runtime errors.
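
A guarded variant, as a sketch (EPSILON is a placeholder value, not from the PR; realtime_moments comes from the snippet above):

```python
import cupy as cp

EPSILON = 1e-12  # placeholder floor; choose based on the sensor's noise floor

# Clamp non-positive moment-0 values before the in-place log so cp.log
# never sees zeros or negatives (which would yield -inf/NaN).
cp.maximum(realtime_moments[0], EPSILON, out=realtime_moments[0])
cp.log(realtime_moments[0], out=realtime_moments[0])
```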

Comment on lines +113 to +116
        realtime_moments = self._apply_baseline(realtime_moments)
        if realtime_moments is None:
            logger.info("Skipping RHS build for first frame (baseline capture)")
            return

When _apply_baseline returns None (first frame), the operator returns early without calling op_output.emit, but the CUDA stream was already allocated at line 104. This leaves the stream allocated but unused. While the stream pool will eventually reclaim it, consider whether downstream operators expecting output will handle the missing frame gracefully, or if a sentinel value should be emitted instead.

default_data_path = os.path.join(os.getcwd(), "data/bci_visualization")
kernel_data = Path(os.environ.get("HOLOSCAN_INPUT_PATH", default_data_path))

stream = SNIRFStream(kernel_data / "data.snirf")

The hardcoded path kernel_data / "data.snirf" assumes this file exists at runtime. If the file is missing, the application will fail during SNIRFStream initialization (line 43-44 in snirf.py). Consider adding validation after constructing the path or providing a more descriptive error message at startup to guide users on data setup requirements.
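
An early, descriptive check along those lines, as a sketch (the error text is illustrative; paths and SNIRFStream mirror the snippet above):

```python
import os
from pathlib import Path

default_data_path = os.path.join(os.getcwd(), "data/bci_visualization")
kernel_data = Path(os.environ.get("HOLOSCAN_INPUT_PATH", default_data_path))

snirf_path = kernel_data / "data.snirf"
if not snirf_path.is_file():
    raise FileNotFoundError(
        f"Expected SNIRF input at {snirf_path}. Set HOLOSCAN_INPUT_PATH to a "
        "directory containing data.snirf (see the README for download steps)."
    )
stream = SNIRFStream(snirf_path)
```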

Signed-off-by: Mimi Liao <[email protected]>
@mimiliaogo mimiliaogo force-pushed the mimil/bci-kernel-viz branch from d08d384 to 322cf03 Compare January 8, 2026 08:19
@greptile-apps greptile-apps bot left a comment

Greptile Overview

Greptile Summary

This PR introduces a comprehensive BCI visualization application that processes streaming fNIRS data from Kernel Flow 2 hardware through a GPU-accelerated reconstruction pipeline and renders 3D brain activity in real-time using volume rendering.

Key Changes

New BCI Visualization Application

  • Complete end-to-end pipeline: SNIRF data streaming → inverse problem reconstruction → 3D volume rendering
  • Event-based multi-threaded scheduler separating reconstruction and visualization for optimal performance
  • GPU-accelerated operators using CuPy for reconstruction (Jacobian, normalization, regularized solver)
  • Adaptive normalization with exponential moving average for stable visualization

Volume Renderer Enhancements

  • Added streaming support for dynamic volume data
  • Implemented camera pose caching to maintain view when no input received
  • Fixed memory leak by adding ResetVolume() to clear old volumes
  • Enhanced CUDA stream handling for proper GPU synchronization
  • Added support for mask volumes with tissue segmentation

New Operators

  • ColorBufferPassthroughOp: Queue management with POP policy to prevent backpressure/deadlocks
  • VoxelStreamToVolumeOp: Converts streaming voxel data to dense 3D volumes with resampling
  • Reconstruction operators: BuildRHS, Normalize, Solver, ConvertToVoxels for fNIRS processing

Issues Found

Critical Issues

  • CUDA stream synchronization broken in ColorBufferPassthroughOp - streams received but not propagated
  • Non-deterministic affine computation using random sampling could fail intermittently
  • Relative path fallback may fail when running from different directories

Validation Issues

  • Multiple uses of Python assert statements for runtime validation (disabled with -O flag)
  • Should use explicit if checks with ValueError for production code

Minor Issues

  • Documentation typo: wget -0 should be wget -O
  • Hardcoded sentinel values and magic numbers in several places
  • Late validation of required parameters (mask path checked in start() vs __init__)

Architecture

The application uses a dual-pipeline architecture:

  1. Reconstruction Pipeline: Streams SNIRF → builds Jacobian/RHS → normalizes → solves inverse problem → converts to voxels
  2. Visualization Pipeline: Resamples voxels → renders with ClaraViz → displays via Holoviz with camera interaction

Both pipelines run on separate threads coordinated by EventBasedScheduler for responsive interaction and smooth rendering.

Confidence Score: 3/5

  • This PR has functional issues that should be addressed before merge, particularly CUDA stream synchronization
  • Score reflects critical CUDA synchronization bug in ColorBufferPassthroughOp that could cause race conditions, plus widespread use of assertions for validation instead of proper error handling. The non-deterministic affine computation is concerning for production reliability. While the architecture is well-designed and the code is generally high quality, these issues need fixing to ensure stable operation.
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp requires immediate attention for CUDA stream propagation. The reconstruction operators need assertion fixes for production safety.

Important Files Changed

File Analysis

Filename Score Overview
applications/bci_visualization/bci_visualization.py 4/5 Main application entry point. Sets up reconstruction and visualization pipelines with event-based scheduler. Issue: relative path fallback may fail in different working directories.
applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp 3/5 Passthrough operator for queue policy management. Critical issue: receives CUDA streams but doesn't propagate them, breaking pipeline synchronization.
applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py 3/5 Converts solver output to voxel grid with HbO/HbR conversion. Issues: non-deterministic affine computation using random sampling, multiple assertions for validation.
applications/bci_visualization/operators/reconstruction/build_rhs_operator.py 4/5 Builds Jacobian/RHS matrices for reconstruction. Properly manages CUDA streams and GPU memory. Issue: uses assertions for validation instead of explicit checks.
applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py 4/5 Converts voxel stream to volume format with adaptive normalization and resampling. Well-designed statistics tracking. Minor: mask validation happens late, hardcoded sentinel values.
operators/volume_renderer/volume_renderer.cpp 4/5 Enhanced with streaming support, camera pose caching, and CUDA stream handling. Good: memory leak fix with ResetVolume. Properly manages bidirectional camera pose flow.
applications/bci_visualization/streams/snirf.py 4/5 SNIRF file format parser for fNIRS data replay. Well-structured channel mapping. Issue: np.nan assigned to array may cause dtype issues.
applications/bci_visualization/README.md 5/5 Comprehensive documentation with background on fNIRS, pipeline overview, and configuration details. Issue: typo in wget command (-0 should be -O).

Sequence Diagram

sequenceDiagram
    participant User
    participant BciVisualizationApp
    participant StreamOp as StreamOperator
    participant BuildRHS as BuildRHSOperator
    participant Normalize as NormalizeOperator
    participant Solver as RegularizedSolverOperator
    participant ConvertVoxels as ConvertToVoxelsOperator
    participant VoxelToVolume as VoxelStreamToVolumeOp
    participant VolumeRenderer as VolumeRendererOp
    participant Passthrough as ColorBufferPassthroughOp
    participant Holoviz as HolovizOp

    User->>BciVisualizationApp: Start application
    BciVisualizationApp->>StreamOp: Initialize with SNIRFStream
    BciVisualizationApp->>VolumeRenderer: Load config & initialize
    BciVisualizationApp->>VoxelToVolume: Load anatomy mask

    Note over BciVisualizationApp: Reconstruction Pipeline (Thread 1)
    loop For each frame
        StreamOp->>StreamOp: Read SNIRF data
        StreamOp->>BuildRHS: moments data
        BuildRHS->>BuildRHS: Create CUDA stream
        BuildRHS->>BuildRHS: Apply baseline (diff with prev frame)
        BuildRHS->>BuildRHS: Build Jacobian & RHS matrices
        BuildRHS->>Normalize: batch + CUDA stream
        Normalize->>Normalize: Normalize Jacobian/RHS
        Normalize->>Solver: normalized batch
        Solver->>Solver: Solve regularized system (Tikhonov)
        Solver->>ConvertVoxels: solver result (mua/musp)
        ConvertVoxels->>ConvertVoxels: Convert mua to HbO/HbR
        ConvertVoxels->>ConvertVoxels: Voxelize to 3D grid
        ConvertVoxels->>VoxelToVolume: affine_4x4 (once) + hb_voxel_data
    end

    Note over BciVisualizationApp: Visualization Pipeline (Thread 2)
    loop For each rendered frame
        VoxelToVolume->>VoxelToVolume: Update running statistics
        VoxelToVolume->>VoxelToVolume: Normalize with adaptive scaling
        VoxelToVolume->>VoxelToVolume: Resample to mask dimensions
        VoxelToVolume->>VolumeRenderer: volume + spacing + mask
        VolumeRenderer->>VolumeRenderer: Upload volumes to ClaraViz
        VolumeRenderer->>VolumeRenderer: Ray-cast volume rendering
        Holoviz->>VolumeRenderer: camera_pose (user interaction)
        VolumeRenderer->>VolumeRenderer: Apply camera transform
        VolumeRenderer->>Passthrough: color_buffer + CUDA event/stream
        Passthrough->>Holoviz: color_buffer (POP policy)
        Holoviz->>User: Display 3D visualization
    end

  auto color_message = input.receive<holoscan::gxf::Entity>("color_buffer_in");
  if (!color_message) { throw std::runtime_error("Failed to receive color buffer message"); }

  auto cuda_streams = input.receive_cuda_streams("color_buffer_in");

CUDA streams received but never propagated to output

The operator receives CUDA streams on line 35 but doesn't forward them to the output message. This could break CUDA stream synchronization in the pipeline.

Comment on lines +73 to +74
        if not self.mask_nifti_path:
            raise ValueError("VoxelStreamToVolume: No mask NIfTI path provided")

Mask path validation happens too late

Check happens in start() after initialization. If mask_nifti_path is None, error occurs at runtime rather than at construction. Consider validating in __init__ for earlier failure detection.
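
A sketch of the same check moved to construction time (the class body is an illustrative subset of the real operator, and the constructor signature is abbreviated):

```python
from holoscan.core import Operator


class VoxelStreamToVolumeOp(Operator):  # illustrative subset of the real operator
    def __init__(self, fragment, *, mask_nifti_path=None, **kwargs):
        # Fail at construction instead of deferring the check to start().
        if not mask_nifti_path:
            raise ValueError("VoxelStreamToVolume: No mask NIfTI path provided")
        self.mask_nifti_path = mask_nifti_path
        super().__init__(fragment, **kwargs)
```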


Comment on lines +69 to +78
    while ctr < 100:
        ctr += 1
        inds = rng.choice(np.arange(len(ijk)), size=n, replace=False)
        ins = ijk[np.array(inds), :]  # <- points
        out = xyz[np.array(inds), :]  # <- mapped to
        B = np.vstack([np.transpose(ins), np.ones(n, dtype=np.float32)])
        if np.linalg.det(B) == 0:
            continue
    if np.linalg.det(B) == 0:
        raise RuntimeError("Cannot compute affine, algorithm failed after 100 attempts")

Non-deterministic affine computation with potential failure

The function uses random sampling (line 71) to compute affine matrix. If determinant is zero after 100 attempts, it raises RuntimeError. This is non-deterministic and could fail intermittently. Consider using a deterministic algorithm or increasing robustness.
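
A deterministic alternative, as a sketch: fit the affine over all point pairs with a least-squares solve, which removes both the random sampling and the retry loop (the function name and error message are illustrative, not from the PR):

```python
import numpy as np


def compute_affine_lstsq(ijk: np.ndarray, xyz: np.ndarray) -> np.ndarray:
    """Fit the 4x4 affine mapping homogeneous ijk coordinates to xyz."""
    n_points = ijk.shape[0]
    # Homogeneous source coordinates, shape (n_points, 4).
    ins_h = np.hstack([ijk, np.ones((n_points, 1), dtype=ijk.dtype)])
    # Solve ins_h @ M == xyz in the least-squares sense; M has shape (4, 3).
    M, _residuals, rank, _singular_values = np.linalg.lstsq(ins_h, xyz, rcond=None)
    if rank < 4:
        raise RuntimeError("ijk points are degenerate; affine is underdetermined")
    # The top three rows are M.T; append the homogeneous row.
    return np.vstack([M.T, np.array([0.0, 0.0, 0.0, 1.0])])
```

Fitting every point pair also averages out noise instead of trusting four sampled points.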

Comment on lines +264 to +265
default_data_path = os.path.join(os.getcwd(), "data/bci_visualization")
kernel_data = Path(os.environ.get("HOLOSCAN_INPUT_PATH", default_data_path))

Environment variable fallback to relative path may fail

HOLOSCAN_INPUT_PATH defaults to relative data/bci_visualization. If script is run from different working directory, path resolution will fail. Consider using __file__ for absolute path resolution.

        for ts, sample in zip(times, data):
            # sample is shape (n_channels,)
            # send (n_moments, n_unique_channels, n_wavelengths)
            to_send = np.full((NUM_MOMENTS, len(self._unique_channels), NUM_WAVELENGTHS), np.nan)

Potential data type issue with np.nan assignment

np.full infers a float64 dtype from the np.nan fill value here; if to_send is later cast to an integer dtype, the NaN values become undefined. Consider passing an explicit float dtype or using a sentinel value like -1.
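
For example (a sketch; float32 is a guess at the intended precision, and the names come from the snippet above):

```python
import numpy as np

# An explicit float dtype keeps the NaN fill well-defined instead of
# relying on np.full inferring float64 from the fill value.
to_send = np.full(
    (NUM_MOMENTS, len(self._unique_channels), NUM_WAVELENGTHS),
    np.nan,
    dtype=np.float32,
)
```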

Comment on lines +98 to +106
        assert not cp.any(cp.isnan(row_normalizers))
        self._hard_coded_row_normalizers_cache = row_normalizers
        return row_normalizers

    def _normalize_batch(self, batch: BuildRHSOutput) -> Tuple[List[WavelengthSystem], int] | None:
        num_cols = batch.data_jacobians.shape[-1]
        num_significant = int(batch.idxs_significant_voxels.size)
        num_absorbers, remainder = divmod(num_cols, num_significant)
        assert not remainder
assert not remainder

Assertions used for production checks

Lines 98 and 106 use assert for data validation. Replace with explicit checks raising ValueError to ensure validation isn't bypassed.
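
Converted to explicit checks, the two validations could look like this sketch (error messages are illustrative; variable names come from the snippet above):

```python
import cupy as cp

# Replaces the NaN assert on the row normalizers.
if cp.any(cp.isnan(row_normalizers)):
    raise ValueError("row_normalizers contains NaN values")

# Replaces the divisibility assert in _normalize_batch.
num_absorbers, remainder = divmod(num_cols, num_significant)
if remainder:
    raise ValueError(
        f"Jacobian column count {num_cols} is not divisible by "
        f"{num_significant} significant voxels"
    )
```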

        sample_mua[:, idxs_significant_voxels] = data_mua[idxs_significant_voxels, :].T
        sample_hb = cp.linalg.solve(self._cached_coefficients, sample_mua)

        assert sample_hb.shape == (len(wavelengths), num_voxels)

Assertion on shape validation

Line 138 uses assert to validate shape. Use explicit check with ValueError for runtime safety.


    M = [[(-1) ** i * D * entry(R, i) for i in range(n)] for R in np.transpose(out)]

    affine = np.concatenate((M, np.array([0, 0, 0, 1]).reshape(1, -1)), axis=0)
    assert affine.shape == (4, 4)

Assertion for affine shape check

Line 87 uses assert to verify affine matrix shape. Should use explicit validation with exception.


Comment on lines +233 to +240
        if 4 in [
            rl_axis,
            is_axis,
            pa_axis,
        ]:  # 4 is a sentinel to indicate any axis that was not set
            raise ValueError(
                f"Could not determine all axes from orientation: {''.join(orientation_codes)}"
            )

Hardcoded sentinel value 4 for axis validation

Lines 233-240 use hardcoded 4 as sentinel for unset axes. Consider using None or a named constant for better readability and maintainability.


Comment on lines +58 to +59
wget -0 data/examples/data.snirf "https://s3.amazonaws.com/openneuro.org/ds006545/sub-bed8fefe/ses-1/nirs/sub-bed8fefe_ses-1_task-audio_nirs.snirf?versionId=sYFJNjlNNlf8xVOMsIde5hpWZE2clsiu"
```

Typo in wget command flag

Line 58: -0 should be -O (capital O, not zero) to specify the output file in the wget command.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 13

🤖 Fix all issues with AI agents
In @applications/bci_visualization/bci_visualization.py:
- Around line 1-34: The import statements in bci_visualization.py are not
ordered per isort conventions; reorder them into standard library (argparse, os,
Path), then third-party/installed packages (holoscan.core, holoscan.operators,
holoscan.resources, holoscan.schedulers), then local/project modules grouped
logically (operators.*: BuildRHSOperator, ConvertToVoxelsOperator,
NormalizeOperator, RegularizedSolverOperator, StreamOperator,
VoxelStreamToVolumeOp; streams.*: BaseNirsStream, SNIRFStream;
utils.reconstruction.assets:get_assets;
holohub.color_buffer_passthrough:ColorBufferPassthroughOp;
holohub.volume_renderer:VolumeRendererOp), removing blank-line misplacements so
imports are in contiguous groups; after reordering run ./holohub lint to
auto-fix remaining ordering issues.

In
@applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp:
- Around line 30-38: Remove the unused cuda_streams retrieval in
ColorBufferPassthroughOp::compute: delete the
input.receive_cuda_streams("color_buffer_in") call and the cuda_streams local
variable since it is never used; leave the rest of compute (receiving
color_message and emitting it) unchanged to retain passthrough behavior.

In
@applications/bci_visualization/operators/reconstruction/build_rhs_operator.py:
- Line 131: Replace the runtime assert in build_rhs_operator.py with explicit
validation: check that self._mega_jacobians_cpu.shape[1] == num_features and if
not raise a clear exception (e.g., ValueError) including the actual shape and
expected num_features, mentioning the variable names (self._mega_jacobians_cpu,
num_features) and context (e.g., in the method that constructs RHS operator) so
the failure is informative and not bypassed by Python -O.
- Around line 1-187: The PR is missing pytest unit tests for BuildRHSOperator;
add a new test module test_build_rhs_operator.py that instantiates
BuildRHSOperator with mock Assets (valid and invalid) to verify proper init,
exercise setup() to confirm input/output ports, call _apply_baseline to assert
first-frame returns None and subsequent frames return correct diffs, test
_get_channel_indices with a valid optode list and with malformed/missing keys to
assert ValueError, validate _zero_out_invalids turns NaN/inf to zeros on a CuPy
array, and run compute() with a minimal SampleOutput using a mocked
ExecutionContext to ensure CUDA stream allocation/propagation and that emitted
BuildRHSOutput contains expected fields (data_jacobians, data_rhs,
model_optical_properties, idxs_significant_voxels, voxel counts, wavelengths);
use pytest fixtures and monkeypatching to avoid real GPU dependencies where
possible (e.g., patch cp.cuda.ExternalStream/context or replace heavy assets
with tiny arrays) and add edge-case tests for empty channels and non-finite
values.

In
@applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py:
- Around line 1-189: Add a new pytest test module named
test_convert_to_voxels_operator.py that covers ConvertToVoxelsOperator:
instantiate the operator with a small synthetic extinction coefficients dict and
minimal ijk/xyz arrays to verify initialization and that setup() registers the
"result", "affine_4x4", and "hb_voxel_data" ports; write tests for
_convert_to_full_voxels by passing a trimmed array and idxs_significant_voxels
and asserting full reconstruction; test HbO/HbR conversion by injecting a mocked
or lightweight HbO.convert_mua_to_hb (or use real HbO with deterministic
coefficients) to validate output shapes and cumulative addition behavior of
_cum_hbo/_cum_hbr after multiple compute() calls; test _compute_affine for
normal and singular cases by supplying ijk/xyz that produce a valid 4x4 affine
and a case that forces the RuntimeError when B is singular; and test
_compute_voxel_layout and _voxelize_hbo with varied metadata (including empty
voxel lists) to assert returned normalized coords, shape, and that voxelization
places values at expected indices; use small deterministic data, monkeypatch
cp.cuda.ExternalStream or run compute() without GPU by setting use_gpu=False,
and assert _emit_affine_once emits the affine only once by capturing
op_output.emit calls.

In
@applications/bci_visualization/operators/reconstruction/normalize_operator.py:
- Around line 70-80: The compute method currently emits NormalizedSolveBatch via
op_output.emit but fails to propagate the CUDA stream received from
op_input.receive_cuda_stream("batch"); capture the stream (e.g., stream =
op_input.receive_cuda_stream("batch")) and call
op_output.set_cuda_stream(stream) before emitting so downstream operators (e.g.,
RegularizedSolverOperator) execute on the same CUDA stream; ensure you set the
stream on op_output immediately prior to the op_output.emit of the
NormalizedSolveBatch.

In @applications/bci_visualization/operators/reconstruction/solver_operator.py:
- Around line 49-51: The CUDA stream used in the with
cp.cuda.ExternalStream(cuda_stream): block is not propagated to downstream
output, so after computing result = self._solve_batch(batch) and before
op_output.emit(result, "result") call, invoke
op_output.set_cuda_stream(cuda_stream, "result") to attach the stream to the
"result" output; ensure this happens inside the same ExternalStream context so
GPU work ordering is preserved for self._solve_batch and the emitted tensor.

In @applications/bci_visualization/operators/stream.py:
- Around line 23-50: Add pytest unit tests for StreamOperator in a new
test_stream.py beside the operator; cover initialization and that setup
registers the "samples" output port, lifecycle behavior by mocking a
BaseNirsStream to verify start() is called, get_channels() value is used, and
stream_nirs() yields expected samples, emission via op_output (use a fake
OutputContext or spy) to assert SampleOutput(sample, channels) is emitted, and
exhaustion behavior where next(self._iter, None) returns None causes
StopIteration; reference StreamOperator, BaseNirsStream.start,
BaseNirsStream.get_channels, BaseNirsStream.stream_nirs, and SampleOutput in
test names and assertions and use pytest fixtures/mocks to simulate the stream.

In
@applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py:
- Around line 289-295: The comment above the logging block is inaccurate: it
says "every 100 frames" but the code uses self.frame_count % 10 == 0; update
either the comment to reflect every 10 frames or change the modulus to 100 to
match the comment. Locate the logging block referencing self.frame_count,
self.global_min, self.global_max, current_min, and current_max in
VoxelStreamToVolume and make the comment and the condition consistent (e.g.,
change the comment to "Log statistics every 10 frames" if keeping % 10, or
change % 10 to % 100 if you want 100-frame logging).

In @applications/bci_visualization/utils/reconstruction/hbo.py:
- Around line 136-140: The shape assertion is wrong because
cp.linalg.solve(self._cached_coefficients, sample_mua) returns (k, num_voxels)
where k is the number of columns in self._cached_coefficients (Hb components),
not len(wavelengths); update the assertion to check sample_hb.shape ==
(self._cached_coefficients.shape[1], num_voxels) (or explicitly (2, num_voxels)
if the code assumes two components), and ensure subsequent indexing of sample_hb
(data_hbo = sample_hb[0], data_hbr = sample_hb[1]) remains consistent with that
corrected shape check.

In @applications/bci_visualization/utils/reconstruction/reg_inv.py:
- Around line 76-92: The cache currently uses only wavelength_idx which is not
unique across batches; change the lookup and store to use a cache_key that
includes batch-unique Jacobian info (e.g. cache_key = (wavelength_idx,
data_jacobians.shape, int(data_jacobians.data.ptr)) or a hash of data_jacobians)
so _HESSIAN_CACHE.get(...) and _HESSIAN_CACHE[...] use that key instead of
wavelength_idx, or alternatively ensure the cache is cleared between batches;
update references to _HESSIAN_CACHE, wavelength_idx, and data_jacobians
accordingly so cached Hessians cannot be reused across different Jacobians.
🧹 Nitpick comments (20)
operators/volume_renderer/volume_renderer.cpp (1)

284-286: Pose caching logic improves efficiency.

The cached pose mechanism (lines 769-773) avoids recomputing the camera pose when no new input is received, which is a good optimization for streaming applications.

The implementation relies on clara::viz::Matrix4x4 default-initializing to identity (as noted in the comment on line 738). While this appears to be the intended behavior, explicit initialization would make this assumption more robust:

♻️ Optional: Explicitly initialize cached pose to identity

In the Impl constructor or member initializer list, consider explicitly initializing the cached pose to make the assumption clear and prevent potential issues if the default constructor behavior changes:

// In the Impl class definition or constructor
clara::viz::Matrix4x4 cached_camera_pose_{clara::viz::Matrix4x4::Identity()};

This makes the identity initialization explicit rather than relying on default constructor behavior.

Also applies to: 769-773

applications/bci_visualization/README.md (1)

1-231: Optional documentation improvements.

The documentation is comprehensive and well-structured. A few minor optional refinements:

  • Line 33: Consider using consistent spelling ("traveled" vs "travelled")
  • Line 80: Add a language identifier to the fenced code block for syntax highlighting

These are minor stylistic improvements and can be addressed at your discretion.

applications/bci_visualization/operators/stream.py (1)

46-48: Consider extracting exception message to a constant.

For better maintainability, consider extracting the exception message to a module-level constant or using a shorter inline message.

♻️ Proposed refactor
+# Module-level constant
+_STREAM_EXHAUSTED_MSG = "No more samples available in the stream."
+
 class StreamOperator(Operator):
     # ... existing code ...
     
     def compute(
         self, op_input: InputContext, op_output: OutputContext, context: ExecutionContext
     ) -> None:
         sample = next(self._iter, None)
         if sample is None:
-            raise StopIteration("No more samples available in the stream.")
+            raise StopIteration(_STREAM_EXHAUSTED_MSG)
         
         op_output.emit(SampleOutput(sample, self._channels), "samples")
applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp (1)

35-48: Trampoline class discards py::args without processing.

The constructor accepts py::args but immediately discards them with (void)args;. If the intention is to support additional keyword arguments or positional arguments from Python, they should be parsed. If no extra arguments are expected, consider removing py::args from the signature to make the API explicit.

applications/bci_visualization/operators/reconstruction/normalize_operator.py (2)

91-100: Index confusion: iterating features but indexing HARD_CODED_NORMALIZERS by wavelength.

The loop iterates idx_feature over num_features, but HARD_CODED_NORMALIZERS[wavelength_idx] is accessed, which returns a 3-element array indexed by idx_feature. Given HARD_CODED_NORMALIZERS has only 2 elements (one per wavelength), this works correctly for 2 wavelengths. However, the code structure is confusing — the outer loop should be wavelengths and inner loop features, matching the data layout.

Suggested clarification
-        for wavelength_idx in range(num_wavelengths):
-            for idx_feature in range(num_features):
-                row_normalizers[idx_feature::num_features, wavelength_idx] = HARD_CODED_NORMALIZERS[
-                    wavelength_idx
-                ][idx_feature]
+        # HARD_CODED_NORMALIZERS[wavelength_idx][feature_idx] provides normalizer value
+        for wavelength_idx in range(num_wavelengths):
+            for feature_idx in range(num_features):
+                # Assign to every feature_idx-th row (strided by num_features)
+                row_normalizers[feature_idx::num_features, wavelength_idx] = (
+                    HARD_CODED_NORMALIZERS[wavelength_idx][feature_idx]
+                )

27-28: Operator class naming convention.

Per coding guidelines, operator class names should use TitleCase with 'Op' suffix (e.g., NormalizeOp). The current name NormalizeOperator uses 'Operator' suffix instead. Also, Python operators must include unit tests in the same directory using pytest framework.

applications/bci_visualization/operators/reconstruction/solver_operator.py (2)

79-85: Hardcoded assumption that num_absorbers == 2.

The code assumes exactly 2 absorbers (mua at index 0, musp at index 1). If batch.num_absorbers differs, the reshape will produce incorrect slicing. Consider adding a validation assertion.

Proposed validation
+        # Validate expected absorber count (mua, musp)
+        assert batch.num_absorbers == 2, f"Expected 2 absorbers (mua, musp), got {batch.num_absorbers}"
+
         # Reshape result to separate absorbers into mua/musp
         reshaped = result.reshape(
             (-1, batch.num_absorbers, num_wavelengths),
             order="F",
         )

20-21: Operator class naming convention.

Per coding guidelines, operator class names should use 'Op' suffix (e.g., RegularizedSolverOp). Also, unit tests are required for Python operators.

applications/bci_visualization/utils/reconstruction/reg_inv.py (2)

15-15: Unused constant MAX_REASONABLE_COND_RATIO.

This constant is defined but never referenced in the code. Either remove it or implement the intended condition-number check for numerical stability.


18-23: Use specific return types instead of object.

The return type annotations use object which provides no type information. Consider using cp.ndarray or NDArray for better IDE support and static analysis.

Proposed type annotations
 def solve_regularized_system(
     data_jacobians,
     data_rhs,
     wavelength_idx: int,
     reg: float,
-) -> object:
+) -> cp.ndarray:

Apply similar changes to _build_regularized_system and _solve_square_system.

Also applies to: 55-59, 95-98

applications/bci_visualization/bci_visualization.py (2)

54-55: Unused parameter tol.

The tol parameter is stored as self._tol but never used in compose() or passed to any operator. Either remove it or wire it to the appropriate operator.


263-265: Consider using <holohub_data_dir> placeholder for data paths.

The hardcoded default path "data/bci_visualization" may not align with HoloHub conventions. Per learnings, applications should use the <holohub_data_dir> placeholder in metadata.json for referencing the data directory.

applications/bci_visualization/utils/reconstruction/assets.py (2)

99-105: Redundant Path() wrapping.

Variables like mua_path, musp_path, etc. are already pathlib.Path objects (created on lines 91-97). The additional pathlib.Path() wrapping in np.load() calls is unnecessary.

Proposed cleanup
-    _mua = np.load(pathlib.Path(mua_path))
-    _musp = np.load(pathlib.Path(musp_path))
-    _idxs_significant_voxels = np.load(pathlib.Path(idxs_significant_voxels_path))
-    _ijk = np.load(pathlib.Path(ijk_path))
-    _xyz = np.load(pathlib.Path(xyz_path))
-    _wavelengths = np.load(pathlib.Path(wavelengths_path))
-    _resolution = tuple(np.load(pathlib.Path(resolution_path)).tolist())
+    _mua = np.load(mua_path)
+    _musp = np.load(musp_path)
+    _idxs_significant_voxels = np.load(idxs_significant_voxels_path)
+    _ijk = np.load(ijk_path)
+    _xyz = np.load(xyz_path)
+    _wavelengths = np.load(wavelengths_path)
+    _resolution = tuple(np.load(resolution_path).tolist())

123-139: Singleton cache ignores path arguments after first call.

get_assets() caches the result in _assets but does not verify that subsequent calls use the same paths. If called with different paths after the first load, it silently returns stale data. Consider validating path consistency or documenting this behavior.

Proposed validation
 def get_assets(
     jacobian_path: pathlib.Path | str,
     channel_mapping_path: pathlib.Path | str,
     voxel_info_dir: pathlib.Path | str,
     coefficients_path: pathlib.Path | str,
 ) -> Assets:
     global _assets
     if _assets is None:
         # load
         _assets = _load_assets(
             mega_jacobian_path=jacobian_path,
             channel_mapping_path=channel_mapping_path,
             voxel_info_dir=voxel_info_dir,
             coefficients_path=coefficients_path,
         )
+    else:
+        logger.debug("Returning cached assets (paths not re-validated)")

     return _assets
applications/bci_visualization/utils/reconstruction/hbo.py (2)

90-90: Redundant round() call on integer wavelength.

The parameter wavelength is typed as int, so round(wavelength) is unnecessary. The static analyzer correctly flags this.

Proposed fix
-        coefficient = self._coefficients.get(round(wavelength))
+        coefficient = self._coefficients.get(wavelength)

66-67: Unused _use_gpu flag.

The _use_gpu instance variable is stored but never referenced in the class methods. Either remove it or implement GPU/CPU conditional logic.

applications/bci_visualization/streams/snirf.py (2)

53-53: Use logger instead of print() for operational messages.

Line 53 uses print() for logging operational information. Use the configured logger instead for consistency with standard logging practices.

📝 Proposed fix
-        print("Got {} unique channels".format(len(self._unique_channels)))
+        logger.info("Got %d unique channels", len(self._unique_channels))

145-146: Fix unused loop variable and add explicit strict parameter.

The ts (timestamp) variable is not used in the loop body, and the zip() call lacks an explicit strict= parameter to ensure both iterables have the same length.

♻️ Proposed fix
-        print("Streaming {} samples from SNIRF".format(len(data)))
-        for ts, sample in zip(times, data):
+        logger.info("Streaming %d samples from SNIRF", len(data))
+        for _ts, sample in zip(times, data, strict=True):
applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py (1)

63-78: Simplify variable initialization in _compute_affine.

Lines 67-68 initialize out and B outside the loop with empty arrays, but they're only meaningfully assigned inside the loop. This initialization is unnecessary and could be confusing.

♻️ Proposed refactor
     rng = np.random.default_rng(0)

     n = 4
     ctr = 0
-    out: NDArray[np.float32] = np.array([])  # bind outside loop
-    B: NDArray[np.float32] = np.array([])  # bind outside loop
+    B: NDArray[np.float32]
+    out: NDArray[np.float32]
+    
     while ctr < 100:
         ctr += 1
         inds = rng.choice(np.arange(len(ijk)), size=n, replace=False)
         ins = ijk[np.array(inds), :]  # <- points
         out = xyz[np.array(inds), :]  # <- mapped to
         B = np.vstack([np.transpose(ins), np.ones(n, dtype=np.float32)])
         if np.linalg.det(B) == 0:
             continue
+        break
+        
     if np.linalg.det(B) == 0:
         raise RuntimeError("Cannot compute affine, algorithm failed after 100 attempts")
applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py (1)

233-240: Consider more descriptive error for orientation parsing failure.

The validation uses a sentinel value of 4 to detect unset axes. While functional, this approach could be clearer. Consider tracking which specific axes failed to parse.

💡 Alternative approach
-        # Validation
-        if 4 in [
-            rl_axis,
-            is_axis,
-            pa_axis,
-        ]:  # 4 is a sentinel to indicate any axis that was not set
-            raise ValueError(
-                f"Could not determine all axes from orientation: {''.join(orientation_codes)}"
-            )
+        # Validation: check all axes were assigned
+        unset_axes = []
+        if rl_axis == 4:
+            unset_axes.append("RL")
+        if is_axis == 4:
+            unset_axes.append("IS")
+        if pa_axis == 4:
+            unset_axes.append("PA")
+        if unset_axes:
+            raise ValueError(
+                f"Could not determine axes {unset_axes} from orientation: "
+                f"{''.join(orientation_codes)}"
+            )
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 467cabf and 322cf03.

📒 Files selected for processing (36)
  • applications/CMakeLists.txt
  • applications/bci_visualization/CMakeLists.txt
  • applications/bci_visualization/Dockerfile
  • applications/bci_visualization/README.md
  • applications/bci_visualization/bci_visualization.py
  • applications/bci_visualization/bci_visualization.yaml
  • applications/bci_visualization/operators/CMakeLists.txt
  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.hpp
  • applications/bci_visualization/operators/color_buffer_passthrough/python/CMakeLists.txt
  • applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/reconstruction/__init__.py
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
  • applications/bci_visualization/operators/reconstruction/solver_operator.py
  • applications/bci_visualization/operators/stream.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/__init__.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/streams/__init__.py
  • applications/bci_visualization/streams/snirf.py
  • applications/bci_visualization/utils/__init__.py
  • applications/bci_visualization/utils/reconstruction/__init__.py
  • applications/bci_visualization/utils/reconstruction/assets.py
  • applications/bci_visualization/utils/reconstruction/hbo.py
  • applications/bci_visualization/utils/reconstruction/reg_inv.py
  • operators/volume_renderer/CMakeLists.txt
  • operators/volume_renderer/dataset.cpp
  • operators/volume_renderer/dataset.hpp
  • operators/volume_renderer/python/CMakeLists.txt
  • operators/volume_renderer/python/volume_renderer.cpp
  • operators/volume_renderer/python/volume_renderer_pydoc.hpp
  • operators/volume_renderer/video_buffer_blob.hpp
  • operators/volume_renderer/volume_renderer.cpp
  • operators/volume_renderer/volume_renderer.hpp
✅ Files skipped from review due to trivial changes (3)
  • operators/volume_renderer/volume_renderer.hpp
  • operators/volume_renderer/video_buffer_blob.hpp
  • applications/bci_visualization/utils/__init__.py
🚧 Files skipped from review as they are similar to previous changes (8)
  • operators/volume_renderer/python/volume_renderer.cpp
  • operators/volume_renderer/dataset.hpp
  • applications/bci_visualization/Dockerfile
  • operators/volume_renderer/python/volume_renderer_pydoc.hpp
  • applications/bci_visualization/operators/CMakeLists.txt
  • applications/bci_visualization/operators/color_buffer_passthrough/python/CMakeLists.txt
  • applications/bci_visualization/CMakeLists.txt
  • operators/volume_renderer/CMakeLists.txt
🧰 Additional context used
📓 Path-based instructions (7)
**/operators/**/CMakeLists.txt

📄 CodeRabbit inference engine (CONTRIBUTING.md)

CMakeLists.txt for operators must use 'add_holohub_operator()' with optional 'DEPENDS EXTENSIONS' for dependent extensions

Files:

  • operators/volume_renderer/python/CMakeLists.txt
  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
**/applications/**/CMakeLists.txt

📄 CodeRabbit inference engine (CONTRIBUTING.md)

**/applications/**/CMakeLists.txt: CMakeLists.txt for applications must use 'add_holohub_application()' with optional 'DEPENDS OPERATORS' for dependent operators
Applications should include a testing section in CMakeLists.txt for functional testing using CTest

Files:

  • applications/CMakeLists.txt
  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
**/operators/**/*.{py,cpp,hpp}

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Class names for operators should use TitleCase with 'Op' suffix (e.g., 'AdaptiveThresholdingOp')

Files:

  • operators/volume_renderer/dataset.cpp
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.hpp
  • operators/volume_renderer/volume_renderer.cpp
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/reconstruction/solver_operator.py
  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/__init__.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
  • applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/reconstruction/__init__.py
  • applications/bci_visualization/operators/stream.py
**/operators/**/*.{py,cpp}

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Operator filename should match the directory name with appropriate extension (e.g., 'adaptive_thresholding.py')

Files:

  • operators/volume_renderer/dataset.cpp
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • operators/volume_renderer/volume_renderer.cpp
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/reconstruction/solver_operator.py
  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/__init__.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
  • applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/reconstruction/__init__.py
  • applications/bci_visualization/operators/stream.py
**/*.{py,cpp,hpp}

📄 CodeRabbit inference engine (CONTRIBUTING.md)

**/*.{py,cpp,hpp}: Code must adhere to Holoscan SDK coding standards including style compliance, descriptive naming, and minimal abbreviations
Code must include appropriate error handling and validation for all operations
All code must pass linting checks using './holohub lint' before submission
All code contributions must use clear, English descriptive names for functions and variables
Code must include inline comments for complex logic and clear documentation of functionality

Files:

  • operators/volume_renderer/dataset.cpp
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.hpp
  • applications/bci_visualization/streams/snirf.py
  • operators/volume_renderer/volume_renderer.cpp
  • applications/bci_visualization/utils/reconstruction/assets.py
  • applications/bci_visualization/streams/__init__.py
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/reconstruction/solver_operator.py
  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/__init__.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/bci_visualization.py
  • applications/bci_visualization/utils/reconstruction/hbo.py
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
  • applications/bci_visualization/utils/reconstruction/reg_inv.py
  • applications/bci_visualization/utils/reconstruction/__init__.py
  • applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/reconstruction/__init__.py
  • applications/bci_visualization/operators/stream.py
**/operators/**/*.cpp

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Operators with C++ implementations must include CMakeLists.txt for build system integration

Files:

  • operators/volume_renderer/dataset.cpp
  • operators/volume_renderer/volume_renderer.cpp
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp
**/operators/**/*.py

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Python operators must include unit tests in the same directory using pytest framework

Files:

  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/reconstruction/solver_operator.py
  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/__init__.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
  • applications/bci_visualization/operators/reconstruction/__init__.py
  • applications/bci_visualization/operators/stream.py
🧠 Learnings (23)
📚 Learning: 2025-11-19T04:58:30.227Z
Learnt from: finalelement
Repo: nvidia-holoscan/holohub PR: 1237
File: applications/surgical_scene_recon/training/utils/sh_utils.py:1-23
Timestamp: 2025-11-19T04:58:30.227Z
Learning: When reviewing files that contain third-party code with original copyright notices (e.g., from PlenOctree, other open-source projects), do not suggest modifying the original copyright years or attribution. The original copyright notice must be preserved exactly as required by redistribution licenses (BSD, MIT, Apache, etc.). If substantial modifications are made by the current project, a separate copyright line can be added, but the original must remain unchanged.

Applied to files:

  • operators/volume_renderer/python/CMakeLists.txt
  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.hpp
  • applications/bci_visualization/streams/snirf.py
  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
  • applications/bci_visualization/operators/reconstruction/solver_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/bci_visualization.py
  • applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp
  • applications/bci_visualization/operators/reconstruction/__init__.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/applications/**/CMakeLists.txt : CMakeLists.txt for applications must use 'add_holohub_application()' with optional 'DEPENDS OPERATORS' for dependent operators

Applied to files:

  • applications/CMakeLists.txt
  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/pkg/CMakeLists.txt : CMakeLists.txt for packages must use 'add_holohub_package()' function with optional APPLICATIONS and OPERATORS dependencies

Applied to files:

  • applications/CMakeLists.txt
  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/workflows/**/CMakeLists.txt : CMakeLists.txt for workflows must use 'add_holohub_application()' with optional 'DEPENDS OPERATORS' for dependent operators

Applied to files:

  • applications/CMakeLists.txt
  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/CMakeLists.txt : CMakeLists.txt for operators must use 'add_holohub_operator()' with optional 'DEPENDS EXTENSIONS' for dependent extensions

Applied to files:

  • applications/CMakeLists.txt
  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/pkg/**/CMakeLists.txt : CMakeLists.txt for packages must use 'holohub_configure_deb()' with required NAME, DESCRIPTION, VERSION, VENDOR, CONTACT and optional SECTION, PRIORITY parameters

Applied to files:

  • applications/CMakeLists.txt
  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/(@(applications|workflows))/**/CMakeLists.txt : Applications and workflows must include CMakeLists.txt for build system integration

Applied to files:

  • applications/CMakeLists.txt
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/gxf_extensions/**/CMakeLists.txt : CMakeLists.txt for GXF extensions must use 'add_holohub_extension()' function

Applied to files:

  • applications/CMakeLists.txt
📚 Learning: 2025-11-17T06:32:45.908Z
Learnt from: AndreasHeumann
Repo: nvidia-holoscan/holohub PR: 1220
File: applications/pipeline_visualization/cpp/create_tensor.cpp:64-87
Timestamp: 2025-11-17T06:32:45.908Z
Learning: In CUDA, when `cudaMemcpyAsync` is used with pageable (non-pinned) host memory, the operation is synchronous with respect to the host thread because the runtime must prevent memory modification during transfer. Only pinned (page-locked) memory allows truly asynchronous behavior. Therefore, no explicit `cudaStreamSynchronize` is required after `cudaMemcpyAsync` when the destination is pageable host memory.

Applied to files:

  • operators/volume_renderer/dataset.cpp
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/*.py : Python operators must include unit tests in the same directory using pytest framework

Applied to files:

  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/streams/snirf.py
  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
  • applications/bci_visualization/operators/stream.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/test_*.py : Unit test files for Python operators should use 'test_' prefix followed by the operator name (e.g., 'test_adaptive_thresholding.py')

Applied to files:

  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
  • applications/bci_visualization/operators/stream.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/test_*.py : Python operator test files should reuse common fixtures from 'conftest.py'

Applied to files:

  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
  • applications/bci_visualization/operators/stream.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/*.{py,cpp} : Operator filename should match the directory name with appropriate extension (e.g., 'adaptive_thresholding.py')

Applied to files:

  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/test_*.py : Python test files must use 'pytest' framework and include tests for initialization, port setup, error handling, compute logic, and edge cases

Applied to files:

  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/streams/snirf.py
  • applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
  • applications/bci_visualization/operators/reconstruction/normalize_operator.py
  • applications/bci_visualization/operators/stream.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/*.cpp : Operators with C++ implementations must include CMakeLists.txt for build system integration

Applied to files:

  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/ : Directory names for operators should use snake_case (e.g., 'adaptive_thresholding')

Applied to files:

  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Contributions must use consistent directory structures based on their type (operators, applications, workflows, extensions, tutorials, packages)

Applied to files:

  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/operators/**/*.{py,cpp,hpp} : Class names for operators should use TitleCase with 'Op' suffix (e.g., 'AdaptiveThresholdingOp')

Applied to files:

  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/(@(operators|applications|workflows))/**/metadata.json : Operators, applications, and workflows should use the '<holohub_data_dir>' placeholder in metadata.json 'run' command for referencing data directory

Applied to files:

  • applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
  • applications/bci_visualization/bci_visualization.py
📚 Learning: 2025-10-20T22:19:33.520Z
Learnt from: chengronglai
Repo: nvidia-holoscan/holohub PR: 1136
File: benchmarks/realtime_threads_benchmarking/realtime_thread_benchmark.cpp:245-251
Timestamp: 2025-10-20T22:19:33.520Z
Learning: In the Holoscan framework, operators added via add_operator() can execute without explicitly attaching a scheduling condition - they will still be scheduled and their compute() method will be invoked by the runtime.

Applied to files:

  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.hpp
📚 Learning: 2025-12-18T16:19:45.117Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-12-18T16:19:45.117Z
Learning: Applies to **/*.{py,cpp,hpp} : Code must include appropriate error handling and validation for all operations

Applied to files:

  • applications/bci_visualization/streams/snirf.py
📚 Learning: 2025-10-22T16:33:55.411Z
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: applications/video_streaming/video_streaming_client/python/streaming_client_demo_replayer.yaml:27-36
Timestamp: 2025-10-22T16:33:55.411Z
Learning: In the video_streaming bidirectional client applications (applications/video_streaming/video_streaming_client), the pipeline has two separate data paths: (1) Outgoing: source → format_converter → streaming_client INPUT (sends to server), and (2) Incoming: streaming_client OUTPUT → holoviz (receives from server). The format_converter prepares data for transmission and does NOT feed directly into holoviz visualization.

Applied to files:

  • applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp
📚 Learning: 2025-10-22T16:53:45.393Z
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: operators/video_streaming/streaming_client_enhanced/python/CMakeLists.txt:16-24
Timestamp: 2025-10-22T16:53:45.393Z
Learning: The pybind11_add_holohub_module CMake macro in cmake/pybind11_add_holohub_module.cmake encapsulates all pybind11 setup internally, including finding pybind11, linking against holoscan::pybind11 conditionally, and linking the C++ operator target. Operator Python bindings in holohub should only call this macro without additional pybind11 setup.

Applied to files:

  • applications/bci_visualization/operators/color_buffer_passthrough/CMakeLists.txt
  • applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp
🧬 Code graph analysis (12)
applications/bci_visualization/operators/reconstruction/build_rhs_operator.py (3)
applications/bci_visualization/utils/reconstruction/assets.py (1)
  • Assets (22-43)
applications/bci_visualization/operators/stream.py (1)
  • SampleOutput (18-20)
applications/bci_visualization/operators/reconstruction/types.py (2)
  • BuildRHSOutput (23-31)
  • VoxelMetadata (16-19)
applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.hpp (2)
operators/volume_renderer/volume_renderer.hpp (2)
  • spec (30-30)
  • input (31-31)
operators/volume_renderer/volume_renderer.cpp (1)
  • input (228-228)
applications/bci_visualization/streams/snirf.py (2)
applications/bci_visualization/streams/base_nirs.py (5)
  • ChannelInfo (13-20)
  • BaseNirsStream (23-33)
  • start (24-25)
  • get_channels (28-29)
  • stream_nirs (32-33)
applications/bci_visualization/operators/stream.py (1)
  • start (37-40)
operators/volume_renderer/volume_renderer.cpp (1)
operators/volume_renderer/dataset.hpp (2)
  • type (61-64)
  • type (71-71)
applications/bci_visualization/utils/reconstruction/assets.py (1)
applications/bci_visualization/utils/reconstruction/hbo.py (2)
  • ExtinctionCoefficient (15-58)
  • from_csv (30-49)
applications/bci_visualization/operators/reconstruction/solver_operator.py (3)
applications/bci_visualization/utils/reconstruction/reg_inv.py (1)
  • solve_regularized_system (18-52)
applications/bci_visualization/operators/reconstruction/types.py (2)
  • NormalizedSolveBatch (42-48)
  • SolverResult (52-58)
applications/bci_visualization/operators/reconstruction/build_rhs_operator.py (1)
  • compute (95-187)
applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py (1)
applications/bci_visualization/operators/reconstruction/types.py (2)
  • SolverResult (52-58)
  • VoxelMetadata (16-19)
applications/bci_visualization/operators/voxel_stream_to_volume/__init__.py (1)
applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py (1)
  • VoxelStreamToVolumeOp (16-361)
applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py (1)
applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py (2)
  • setup (113-116)
  • compute (118-147)
applications/bci_visualization/operators/reconstruction/normalize_operator.py (2)
applications/bci_visualization/operators/reconstruction/types.py (3)
  • BuildRHSOutput (23-31)
  • NormalizedSolveBatch (42-48)
  • WavelengthSystem (35-38)
applications/bci_visualization/operators/reconstruction/build_rhs_operator.py (1)
  • compute (95-187)
applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp (1)
applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp (2)
  • setup (21-28)
  • setup (21-21)
applications/bci_visualization/operators/stream.py (2)
applications/bci_visualization/streams/base_nirs.py (4)
  • ChannelInfo (13-20)
  • start (24-25)
  • get_channels (28-29)
  • stream_nirs (32-33)
applications/bci_visualization/streams/snirf.py (3)
  • start (46-53)
  • get_channels (55-61)
  • stream_nirs (111-156)
🪛 Cppcheck (2.19.0)
applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp

[error] 32-32: There is an unknown macro here somewhere. Configuration is required. If HOLOSCAN_OPERATOR_FORWARD_ARGS is a macro then please configure it.

(unknownMacro)

applications/bci_visualization/operators/color_buffer_passthrough/python/color_buffer_passthrough.cpp

[error] 32-32: There is an unknown macro here somewhere. Configuration is required. If HOLOSCAN_OPERATOR_FORWARD_ARGS is a macro then please configure it.

(unknownMacro)

🪛 GitHub Actions: Check linting
applications/bci_visualization/bci_visualization.py

[error] 1-1: Python isort check failed: Imports are incorrectly sorted and/or formatted.

🪛 LanguageTool
applications/bci_visualization/README.md

[uncategorized] ~33-~33: Do not mix variants of the same word (‘travelled’ and ‘traveled’) within a single text.
Context: ...r times correspond to photons that have travelled farther into the tissue. In a given sec...

(EN_EXACT_COHERENCY_RULE)


[style] ~44-~44: Consider a more concise word here.
Context: ... from the perspective of the detector). In order to have a more anatomical representation o...

(IN_ORDER_TO_PREMIUM)


[grammar] ~51-~51: Use a hyphen to join words.
Context: ...son Thor Developer kit. Any Holoscan SDK supported platform should work. To run...

(QB_NEW_EN_HYPHEN)

🪛 markdownlint-cli2 (0.18.1)
applications/bci_visualization/README.md

80-80: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🪛 Ruff (0.14.10)
applications/bci_visualization/operators/reconstruction/build_rhs_operator.py

86-89: Avoid specifying long messages outside the exception class

(TRY003)


92-92: Avoid specifying long messages outside the exception class

(TRY003)

applications/bci_visualization/streams/snirf.py

44-44: Avoid specifying long messages outside the exception class

(TRY003)


146-146: Loop control variable ts not used within loop body

Rename unused ts to _ts

(B007)


146-146: zip() without an explicit strict= parameter

Add explicit value for parameter strict=

(B905)

applications/bci_visualization/operators/reconstruction/solver_operator.py

44-44: Unused method argument: context

(ARG002)

applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py

78-78: Avoid specifying long messages outside the exception class

(TRY003)


122-122: Unused method argument: context

(ARG002)

applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py

74-74: Avoid specifying long messages outside the exception class

(TRY003)


88-90: Avoid specifying long messages outside the exception class

(TRY003)


109-109: Unused method argument: context

(ARG002)


116-118: Avoid specifying long messages outside the exception class

(TRY003)


120-122: Avoid specifying long messages outside the exception class

(TRY003)


136-136: Avoid specifying long messages outside the exception class

(TRY003)


238-240: Avoid specifying long messages outside the exception class

(TRY003)

applications/bci_visualization/utils/reconstruction/hbo.py

34-34: Value being cast to int is already an integer

Remove unnecessary int call

(RUF046)


90-90: Value being rounded is already an integer

Remove unnecessary round call

(RUF057)


92-94: Avoid specifying long messages outside the exception class

(TRY003)

applications/bci_visualization/operators/reconstruction/normalize_operator.py

54-54: Unused method argument: context

(ARG002)


58-58: Avoid specifying long messages outside the exception class

(TRY003)

applications/bci_visualization/operators/reconstruction/__init__.py

12-21: __all__ is not sorted

Apply an isort-style sorting to __all__

(RUF022)

applications/bci_visualization/operators/stream.py

43-43: Unused method argument: op_input

(ARG002)


43-43: Unused method argument: context

(ARG002)


48-48: Avoid specifying long messages outside the exception class

(TRY003)

Comment on lines +1 to +34
"""
SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES.
SPDX-License-Identifier: Apache-2.0
BCI Visualization Application - streams synthetic voxel data and renders as 3D volume.
"""

import argparse
import os
from pathlib import Path

from holoscan.core import Application, ConditionType
from holoscan.operators import HolovizOp
from holoscan.resources import CudaStreamPool, UnboundedAllocator
from holoscan.schedulers import EventBasedScheduler
from operators.reconstruction import (
BuildRHSOperator,
ConvertToVoxelsOperator,
NormalizeOperator,
RegularizedSolverOperator,
)
from operators.stream import StreamOperator
from streams.base_nirs import BaseNirsStream
from streams.snirf import SNIRFStream

# Import reconstruction utilities
from utils.reconstruction.assets import get_assets

from holohub.color_buffer_passthrough import ColorBufferPassthroughOp
from holohub.volume_renderer import VolumeRendererOp

# Import local operators
from operators.voxel_stream_to_volume import VoxelStreamToVolumeOp


⚠️ Potential issue | 🟡 Minor

Fix import ordering to pass linting.

The pipeline failure indicates imports are incorrectly sorted. The imports should follow isort conventions: standard library → third-party → local imports, with appropriate grouping.

Suggested import order
 import argparse
 import os
 from pathlib import Path

 from holoscan.core import Application, ConditionType
 from holoscan.operators import HolovizOp
 from holoscan.resources import CudaStreamPool, UnboundedAllocator
 from holoscan.schedulers import EventBasedScheduler
+
+from holohub.color_buffer_passthrough import ColorBufferPassthroughOp
+from holohub.volume_renderer import VolumeRendererOp
+
 from operators.reconstruction import (
     BuildRHSOperator,
     ConvertToVoxelsOperator,
     NormalizeOperator,
     RegularizedSolverOperator,
 )
 from operators.stream import StreamOperator
+from operators.voxel_stream_to_volume import VoxelStreamToVolumeOp
 from streams.base_nirs import BaseNirsStream
 from streams.snirf import SNIRFStream
-
-# Import reconstruction utilities
 from utils.reconstruction.assets import get_assets
-
-from holohub.color_buffer_passthrough import ColorBufferPassthroughOp
-from holohub.volume_renderer import VolumeRendererOp
-
-# Import local operators
-from operators.voxel_stream_to_volume import VoxelStreamToVolumeOp

Run ./holohub lint to auto-fix import ordering.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"""
SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES.
SPDX-License-Identifier: Apache-2.0
BCI Visualization Application - streams synthetic voxel data and renders as 3D volume.
"""
import argparse
import os
from pathlib import Path
from holoscan.core import Application, ConditionType
from holoscan.operators import HolovizOp
from holoscan.resources import CudaStreamPool, UnboundedAllocator
from holoscan.schedulers import EventBasedScheduler
from operators.reconstruction import (
BuildRHSOperator,
ConvertToVoxelsOperator,
NormalizeOperator,
RegularizedSolverOperator,
)
from operators.stream import StreamOperator
from streams.base_nirs import BaseNirsStream
from streams.snirf import SNIRFStream
# Import reconstruction utilities
from utils.reconstruction.assets import get_assets
from holohub.color_buffer_passthrough import ColorBufferPassthroughOp
from holohub.volume_renderer import VolumeRendererOp
# Import local operators
from operators.voxel_stream_to_volume import VoxelStreamToVolumeOp
"""
SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES.
SPDX-License-Identifier: Apache-2.0
BCI Visualization Application - streams synthetic voxel data and renders as 3D volume.
"""
import argparse
import os
from pathlib import Path
from holoscan.core import Application, ConditionType
from holoscan.operators import HolovizOp
from holoscan.resources import CudaStreamPool, UnboundedAllocator
from holoscan.schedulers import EventBasedScheduler
from holohub.color_buffer_passthrough import ColorBufferPassthroughOp
from holohub.volume_renderer import VolumeRendererOp
from operators.reconstruction import (
BuildRHSOperator,
ConvertToVoxelsOperator,
NormalizeOperator,
RegularizedSolverOperator,
)
from operators.stream import StreamOperator
from operators.voxel_stream_to_volume import VoxelStreamToVolumeOp
from streams.base_nirs import BaseNirsStream
from streams.snirf import SNIRFStream
from utils.reconstruction.assets import get_assets
🧰 Tools
🪛 GitHub Actions: Check linting

[error] 1-1: Python isort check failed: Imports are incorrectly sorted and/or formatted.

🤖 Prompt for AI Agents
In @applications/bci_visualization/bci_visualization.py around lines 1 - 34, The
import statements in bci_visualization.py are not ordered per isort conventions;
reorder them into standard library (argparse, os, Path), then
third-party/installed packages (holoscan.core, holoscan.operators,
holoscan.resources, holoscan.schedulers), then local/project modules grouped
logically (operators.*: BuildRHSOperator, ConvertToVoxelsOperator,
NormalizeOperator, RegularizedSolverOperator, StreamOperator,
VoxelStreamToVolumeOp; streams.*: BaseNirsStream, SNIRFStream;
utils.reconstruction.assets:get_assets;
holohub.color_buffer_passthrough:ColorBufferPassthroughOp;
holohub.volume_renderer:VolumeRendererOp), removing blank-line misplacements so
imports are in contiguous groups; after reordering run ./holohub lint to
auto-fix remaining ordering issues.

Comment on lines +16 to +46
cmake_minimum_required(VERSION 3.20)
project(color_buffer_passthrough LANGUAGES CXX)

find_package(holoscan 2.0 REQUIRED CONFIG
PATHS "/opt/nvidia/holoscan" "/workspace/holoscan-sdk/install")

add_library(color_buffer_passthrough SHARED
cpp/color_buffer_passthrough.cpp
cpp/color_buffer_passthrough.hpp
)

target_link_libraries(color_buffer_passthrough
PRIVATE
holoscan::core
)

target_include_directories(color_buffer_passthrough
PUBLIC
${CMAKE_CURRENT_SOURCE_DIR}/cpp
INTERFACE
${CMAKE_CURRENT_SOURCE_DIR}
)

install(
TARGETS color_buffer_passthrough
EXPORT holoscan-ops
)

if(HOLOHUB_BUILD_PYTHON)
add_subdirectory(python)
endif()

🛠️ Refactor suggestion | 🟠 Major

Use add_holohub_operator() as required by coding guidelines.

The coding guidelines specify that "CMakeLists.txt for operators must use 'add_holohub_operator()' with optional 'DEPENDS EXTENSIONS' for dependent extensions." This file instead calls add_library() directly and hand-configures linking, include directories, and installation.

Please refactor to use the standard add_holohub_operator() macro for consistency with other holohub operators.

♻️ Suggested refactor using add_holohub_operator()
cmake_minimum_required(VERSION 3.20)

add_holohub_operator(color_buffer_passthrough
  SOURCES
    cpp/color_buffer_passthrough.cpp
    cpp/color_buffer_passthrough.hpp
)

if(HOLOHUB_BUILD_PYTHON)
  add_subdirectory(python)
endif()

Note: Verify the exact syntax and available parameters for add_holohub_operator() in the holohub CMake modules.

Comment on lines +30 to +38
void ColorBufferPassthroughOp::compute(InputContext& input, OutputContext& output,
ExecutionContext& context) {
auto color_message = input.receive<holoscan::gxf::Entity>("color_buffer_in");
if (!color_message) { throw std::runtime_error("Failed to receive color buffer message"); }

auto cuda_streams = input.receive_cuda_streams("color_buffer_in");

output.emit(color_message.value(), "color_buffer_out");
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

rg -nP --type=cpp -C3 'receive_cuda_streams' | head -100

Repository: nvidia-holoscan/holohub

Length of output: 1085


🏁 Script executed:

# Also check if there are any linting rules or guidelines about unused variables
fd --type=txt --type=md | xargs grep -l "unused.*variable\|CUDA.*stream" 2>/dev/null | head -20

Repository: nvidia-holoscan/holohub

Length of output: 237


🏁 Script executed:

# Look at the actual file structure to understand context better
fd 'color_buffer_passthrough\.(cpp|hpp)' --type=f

Repository: nvidia-holoscan/holohub

Length of output: 366


🏁 Script executed:

rg -n 'receive_cuda_streams' --type=cpp -B 5 -A 10 2>/dev/null | head -150

Repository: nvidia-holoscan/holohub

Length of output: 2087


🏁 Script executed:

# Find other operators with similar structure to understand the pattern
fd 'operators' --type d | head -10

Repository: nvidia-holoscan/holohub

Length of output: 445


🏁 Script executed:

# Check the header file for context
cat -n applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.hpp

Repository: nvidia-holoscan/holohub

Length of output: 1648


🏁 Script executed:

# Search across all operators for receive_cuda_streams patterns
find . -name "*.cpp" -path "*/operators/*" -type f | xargs grep -l "receive_cuda_streams" 2>/dev/null

Repository: nvidia-holoscan/holohub

Length of output: 167


🏁 Script executed:

# Look at a few examples of how receive_cuda_streams is actually used
rg 'receive_cuda_streams' -A 5 --type=cpp | grep -v '^--$' | head -80

Repository: nvidia-holoscan/holohub

Length of output: 821


Remove unused cuda_streams variable.

Line 35 retrieves CUDA streams but never uses the result. Since this operator only passes data through with queue-policy control (as documented in the header), the receive_cuda_streams call and the unused cuda_streams local can be removed without affecting functionality.

🧰 Tools
🪛 Cppcheck (2.19.0)

[error] 32-32: There is an unknown macro here somewhere. Configuration is required. If HOLOSCAN_OPERATOR_FORWARD_ARGS is a macro then please configure it.

(unknownMacro)

🤖 Prompt for AI Agents
In
@applications/bci_visualization/operators/color_buffer_passthrough/cpp/color_buffer_passthrough.cpp
around lines 30 - 38, Remove the unused cuda_streams retrieval in
ColorBufferPassthroughOp::compute: delete the
call/input.receive_cuda_streams("color_buffer_in") and the cuda_streams local
variable since it is never used; leave the rest of compute (receiving
color_message and emitting it) unchanged to retain passthrough behavior.

Comment on lines +1 to +187
"""
SPDX-FileCopyrightText: Copyright (c) 2026 Kernel.
SPDX-License-Identifier: Apache-2.0
"""

from __future__ import annotations

import logging
from typing import Any, List, Tuple

import cupy as cp
import numpy as np
from holoscan.core import ExecutionContext, InputContext, Operator, OperatorSpec, OutputContext
from utils.reconstruction.assets import Assets

from ..stream import SampleOutput
from .types import BuildRHSOutput, VoxelMetadata

logger = logging.getLogger(__name__)


class BuildRHSOperator(Operator):
"""Convert realtime moments tensors into trimmed Right-Hand Side (RHS)/Jacobian batches."""

def __init__(
self,
*,
assets: Assets,
fragment: Any | None = None,
) -> None:
super().__init__(fragment, name=self.__class__.__name__)

# Keep CPU copies for disk-loaded assets; GPU copies are created lazily on first compute.
self._model_optical_properties_cpu = np.concatenate((assets.mua, assets.musp)).astype(
np.float32, copy=False
)
self._mega_jacobians_cpu = assets.mega_jacobian
self._channel_mapping = assets.channel_mapping
self._idxs_significant_voxels_cpu = assets.idxs_significant_voxels
self._voxel_metadata = VoxelMetadata(
ijk=assets.ijk, xyz=assets.xyz, resolution=assets.resolution
)
self._wavelengths = assets.wavelengths

# GPU caches (CuPy arrays on the propagated CUDA stream)
self._mega_jacobians_gpu = None
self._model_optical_properties_gpu = None
self._idxs_significant_voxels_gpu = None
self._jacobian_cache = None
self._last_frame = None # previous frame (GPU)

def setup(self, spec: OperatorSpec) -> None:
spec.input("moments")
spec.output("batch")

def _apply_baseline(self, realtime_moments):
"""
simple diff against previous frame
"""
if self._last_frame is None:
self._last_frame = realtime_moments.copy()
return None

# diff with last frame and update last frame
diff = realtime_moments - self._last_frame
self._last_frame = realtime_moments.copy()
return diff

def _zero_out_invalids(self, data_rhs) -> None:
invalid_samples = ~cp.isfinite(data_rhs)
if not cp.any(invalid_samples):
return

# NOTE: this is in-place on GPU, async on the current CUDA stream.
cp.nan_to_num(data_rhs, copy=False, nan=0.0, posinf=0.0, neginf=0.0)

def _get_channel_indices(self, optode_order: List[Tuple[int, int, int, int]]) -> List[int]:
"""Map optode tuples to jacobian channel indices (CPU-side dict lookups)."""
indices: List[int] = []
for src_module, src, det_module, det in optode_order:
try:
srcs = self._channel_mapping[str(src_module)]
detectors = srcs[str(src)][str(det_module)]
jacobian_index = detectors[str(det)]
except KeyError as e:
raise ValueError(
"Channel without jacobian mapping "
f"(src_module={src_module}, src={src}, det_module={det_module}, det={det})"
) from e
indices.append(int(jacobian_index))
if not indices:
raise ValueError("Empty channel mapping; no channels resolved to jacobian indices")
return indices

def compute(
self,
op_input: InputContext,
op_output: OutputContext,
context: ExecutionContext,
) -> None:
payload: SampleOutput = op_input.receive("moments")

# Create the CUDA stream at the earliest GPU-producing operator and propagate it downstream.
cuda_stream = context.allocate_cuda_stream("reconstruction_stream")
with cp.cuda.ExternalStream(cuda_stream):
# Host->device copy is enqueued on the current stream (may be sync if host memory isn't pinned).
realtime_moments = cp.asarray(payload.data, dtype=cp.float32)

# take log of moment 0 to convert to optical density
# shape is (moments, channels, wavelengths)
cp.log(realtime_moments[0], out=realtime_moments[0])

realtime_moments = self._apply_baseline(realtime_moments)
if realtime_moments is None:
logger.info("Skipping RHS build for first frame (baseline capture)")
return

flowaxis_optodes: List[Tuple[int, int, int, int]] = [
(
payload.channels.source_module[channel_idx],
payload.channels.source_number[channel_idx],
payload.channels.detector_module[channel_idx],
payload.channels.detector_number[channel_idx],
)
for channel_idx in range(len(payload.channels))
]

# Validate that jacobian features dimension matches realtime moments
# 5D jacobian shape: (channels, features, wavelengths, voxels, simulation_types)
num_features = realtime_moments.shape[0]
assert self._mega_jacobians_cpu.shape[1] == num_features

with cp.cuda.ExternalStream(cuda_stream):
# One-time GPU uploads of large static assets.
if self._mega_jacobians_gpu is None:
self._mega_jacobians_gpu = cp.asarray(self._mega_jacobians_cpu, dtype=cp.float32)
if self._model_optical_properties_gpu is None:
self._model_optical_properties_gpu = cp.asarray(
self._model_optical_properties_cpu, dtype=cp.float32
)
if self._idxs_significant_voxels_gpu is None:
self._idxs_significant_voxels_gpu = cp.asarray(
self._idxs_significant_voxels_cpu, dtype=cp.int64
)

if self._jacobian_cache is None:
channel_indices = self._get_channel_indices(flowaxis_optodes)
channel_indices_gpu = cp.asarray(channel_indices, dtype=cp.int64)

jacobians = self._mega_jacobians_gpu[channel_indices_gpu, :, :, :, :] # 5d

# swap axes so it's features, channels first
jacobians = jacobians.transpose(1, 0, 2, 3, 4)
# reshape to 3d and use Fortran-style ordering
jacobians = cp.reshape(
jacobians,
(
jacobians.shape[0] * jacobians.shape[1],
jacobians.shape[2],
jacobians.shape[3] * jacobians.shape[4],
),
order="F",
)
self._jacobian_cache = jacobians

data_jacobians = self._jacobian_cache
# swap from (moments, channels, wavelengths) to (channels, moments, wavelengths)
# then reshape to (channels x moments, wavelengths)
data_rhs = realtime_moments.transpose(1, 0, 2).reshape(-1, realtime_moments.shape[2])

self._zero_out_invalids(data_rhs)

# Propagate CUDA stream downstream for correct ordering of GPU work.
op_output.set_cuda_stream(cuda_stream, "batch")
op_output.emit(
BuildRHSOutput(
data_jacobians=data_jacobians,
data_rhs=data_rhs,
model_optical_properties=self._model_optical_properties_gpu,
idxs_significant_voxels=self._idxs_significant_voxels_gpu,
num_full_voxels=int(self._voxel_metadata.ijk.shape[0]),
num_features=int(num_features),
wavelengths=tuple(self._wavelengths.tolist()),
voxel_metadata=self._voxel_metadata,
),
"batch",
)

🛠️ Refactor suggestion | 🟠 Major

Missing unit tests for BuildRHSOperator.

This operator implements critical reconstruction logic but lacks unit tests. As per coding guidelines, Python operators must include unit tests using the pytest framework.

Based on coding guidelines, create test_build_rhs_operator.py in the same directory with tests for the following (a minimal sketch appears after the AI prompt below):

  • Operator initialization with valid/invalid assets
  • Port setup verification
  • Baseline application logic (first frame skip, subsequent diffs)
  • Channel index mapping and error cases
  • GPU memory management and CUDA stream handling
  • Edge cases (NaN/inf handling, empty channels)
🧰 Tools
🪛 Ruff (0.14.10)

86-89: Avoid specifying long messages outside the exception class

(TRY003)


92-92: Avoid specifying long messages outside the exception class

(TRY003)

🤖 Prompt for AI Agents
In
@applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
around lines 1 - 187, The PR is missing pytest unit tests for BuildRHSOperator;
add a new test module test_build_rhs_operator.py that instantiates
BuildRHSOperator with mock Assets (valid and invalid) to verify proper init,
exercise setup() to confirm input/output ports, call _apply_baseline to assert
first-frame returns None and subsequent frames return correct diffs, test
_get_channel_indices with a valid optode list and with malformed/missing keys to
assert ValueError, validate _zero_out_invalids turns NaN/inf to zeros on a CuPy
array, and run compute() with a minimal SampleOutput using a mocked
ExecutionContext to ensure CUDA stream allocation/propagation and that emitted
BuildRHSOutput contains expected fields (data_jacobians, data_rhs,
model_optical_properties, idxs_significant_voxels, voxel counts, wavelengths);
use pytest fixtures and monkeypatching to avoid real GPU dependencies where
possible (e.g., patch cp.cuda.ExternalStream/context or replace heavy assets
with tiny arrays) and add edge-case tests for empty channels and non-finite
values.
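
A minimal pytest sketch in that spirit is shown below. It is illustrative only: it bypasses Operator.__init__ via object.__new__ so the pure baseline-diff and channel-mapping logic can run without a Holoscan fragment, real Assets, or a GPU; the import path and the tiny _channel_mapping fixture are assumptions about the app layout, not the PR's actual test scaffolding.

# Hypothetical test_build_rhs_operator.py -- a partial sketch, not the full
# test matrix requested above. Assumes the application directory is on
# sys.path and that CuPy is importable (the module imports it at load time).
import numpy as np
import pytest

from operators.reconstruction.build_rhs_operator import BuildRHSOperator


def _bare_operator():
    # Skip Operator.__init__ (it needs a fragment and real Assets); set only
    # the state the methods under test actually touch.
    op = object.__new__(BuildRHSOperator)
    op._last_frame = None
    op._channel_mapping = {"1": {"2": {"3": {"4": 7}}}}
    return op


def test_apply_baseline_skips_first_frame():
    op = _bare_operator()
    assert op._apply_baseline(np.zeros((2, 3, 2), dtype=np.float32)) is None


def test_apply_baseline_diffs_against_previous_frame():
    op = _bare_operator()
    op._apply_baseline(np.zeros((2, 3, 2), dtype=np.float32))
    diff = op._apply_baseline(np.full((2, 3, 2), 2.0, dtype=np.float32))
    assert np.allclose(diff, 2.0)


def test_get_channel_indices_resolves_and_rejects():
    op = _bare_operator()
    assert op._get_channel_indices([(1, 2, 3, 4)]) == [7]
    with pytest.raises(ValueError):
        op._get_channel_indices([(9, 9, 9, 9)])  # no jacobian mapping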

# Validate that jacobian features dimension matches realtime moments
# 5D jacobian shape: (channels, features, wavelengths, voxels, simulation_types)
num_features = realtime_moments.shape[0]
assert self._mega_jacobians_cpu.shape[1] == num_features

⚠️ Potential issue | 🟡 Minor

Replace assert with proper validation and error message.

Using assert for runtime validation is problematic because assertions can be disabled with Python's -O flag. Use an explicit validation check with a descriptive error message instead.

🔧 Proposed fix
-        assert self._mega_jacobians_cpu.shape[1] == num_features
+        if self._mega_jacobians_cpu.shape[1] != num_features:
+            raise ValueError(
+                f"Jacobian features dimension mismatch: "
+                f"expected {num_features}, got {self._mega_jacobians_cpu.shape[1]}"
+            )
🤖 Prompt for AI Agents
In
@applications/bci_visualization/operators/reconstruction/build_rhs_operator.py
at line 131, Replace the runtime assert in build_rhs_operator.py with explicit
validation: check that self._mega_jacobians_cpu.shape[1] == num_features and if
not raise a clear exception (e.g., ValueError) including the actual shape and
expected num_features, mentioning the variable names (self._mega_jacobians_cpu,
num_features) and context (e.g., in the method that constructs RHS operator) so
the failure is informative and not bypassed by Python -O.

Comment on lines +23 to +50
class StreamOperator(Operator):
def __init__(
self,
stream: BaseNirsStream,
*,
fragment: Any | None = None,
) -> None:
super().__init__(fragment, name=self.__class__.__name__)
self._stream = stream
self._channels: ChannelInfo

def setup(self, spec: OperatorSpec) -> None:
spec.output("samples")

def start(self) -> None:
self._stream.start()
self._channels = self._stream.get_channels()
self._iter = self._stream.stream_nirs()

def compute(
self, op_input: InputContext, op_output: OutputContext, context: ExecutionContext
) -> None:

sample = next(self._iter, None)
if sample is None:
raise StopIteration("No more samples available in the stream.")

op_output.emit(SampleOutput(sample, self._channels), "samples")

🛠️ Refactor suggestion | 🟠 Major

Missing unit tests required by coding guidelines.

Python operators must include unit tests in the same directory using the pytest framework. Please add a test file (e.g., test_stream.py) that covers the following (a minimal sketch appears after the AI prompt below):

  • Operator initialization and port setup
  • Stream lifecycle (start, iteration, exhaustion)
  • SampleOutput emission
  • StopIteration handling when stream is exhausted

Based on coding guidelines, this is required for all Python operators.

Would you like me to generate a template test file for this operator?

🧰 Tools
🪛 Ruff (0.14.10)

43-43: Unused method argument: op_input

(ARG002)


43-43: Unused method argument: context

(ARG002)


48-48: Avoid specifying long messages outside the exception class

(TRY003)

🤖 Prompt for AI Agents
In @applications/bci_visualization/operators/stream.py around lines 23 - 50, Add
pytest unit tests for StreamOperator in a new test_stream.py beside the
operator; cover initialization and that setup registers the "samples" output
port, lifecycle behavior by mocking a BaseNirsStream to verify start() is
called, get_channels() value is used, and stream_nirs() yields expected samples,
emission via op_output (use a fake OutputContext or spy) to assert
SampleOutput(sample, channels) is emitted, and exhaustion behavior where
next(self._iter, None) returns None causes StopIteration; reference
StreamOperator, BaseNirsStream.start, BaseNirsStream.get_channels,
BaseNirsStream.stream_nirs, and SampleOutput in test names and assertions and
use pytest fixtures/mocks to simulate the stream.
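
As a starting point, one possible shape for that template is sketched below; the fakes, the construction via object.__new__ (to sidestep the fragment requirement), and the import path are assumptions, not the PR's actual layout.

# Hypothetical test_stream.py -- sketch only; assumes the application
# directory is on sys.path so operators.stream imports cleanly.
import pytest

from operators.stream import SampleOutput, StreamOperator


class FakeStream:
    """Stub standing in for a BaseNirsStream implementation."""

    def start(self):
        self.started = True

    def get_channels(self):
        return "channels"  # stand-in for a ChannelInfo instance

    def stream_nirs(self):
        yield "sample-0"  # single-sample stream


class RecordingOutput:
    def __init__(self):
        self.emitted = []

    def emit(self, payload, port):
        self.emitted.append((payload, port))


def test_compute_emits_sample_then_raises_on_exhaustion():
    op = object.__new__(StreamOperator)  # skip Operator.__init__ (no fragment)
    op._stream = FakeStream()
    op.start()  # starts the fake stream and primes the iterator

    out = RecordingOutput()
    op.compute(None, out, None)
    payload, port = out.emitted[0]
    assert port == "samples"
    assert isinstance(payload, SampleOutput)

    with pytest.raises(StopIteration):
        op.compute(None, out, None)  # stream exhausted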

Comment on lines +1 to +361
"""
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES.
SPDX-License-Identifier: Apache-2.0
VoxelStreamToVolume operator: converts streaming voxel data to dense 3D volume.
"""

import cupy as cp
import cupyx.scipy.ndimage
import nibabel as nib
import numpy as np
from holoscan.core import ConditionType, Operator, OperatorSpec
from nibabel.orientations import aff2axcodes


class VoxelStreamToVolumeOp(Operator):
"""
Convert streaming HbO/HbR voxel data [I, J, K, 2] into a 3D volume tensor for VolumeRendererOp.
Inputs:
- affine_4x4: np.ndarray shape (4, 4) (processed once if provided)
- hb_voxel_data: np.ndarray shape (I, J, K, n_channels) where last dim is channels [HbO, HbR] (HbO: 0, HbR: 1)
Outputs:
- volume: holoscan.gxf.Entity containing a tensor named "volume" with shape (Z,Y,X)
- spacing: np.ndarray shape (3,) derived from affine
- permute_axis: np.ndarray shape (3,) derived from affine
- flip_axes: np.ndarray shape (3,) derived from affine
"""

def __init__(self, fragment, *args, **kwargs):
# Anatomy mask NIfTI file
self.mask_nifti_path = kwargs.pop("mask_nifti_path", None)

# Exponential moving average factor for running statistics (0 < alpha <= 1)
# Higher alpha = faster adaptation, lower alpha = more stable
self.stats_alpha = kwargs.pop("stats_alpha", 0.1)

# Visualization scale factor for amplifying activations
# Needed because global min/max includes the whole brain (larger values),
# but we only visualize white/gray matter (smaller activations).
# Higher scale = more sensitive visualization of small activations
self.visualization_scale = kwargs.pop("visualization_scale", 10)

# Density range, must be same as the VolumeRendererOp's density range
self.density_min = kwargs.pop("density_min", -100)
self.density_max = kwargs.pop("density_max", 100)

super().__init__(fragment, *args, **kwargs)

# Internal state
self.affine = None

# Metadata, set from the first frame, reused for subsequent frames
self.dims = None # np.array([X, Y, Z], dtype=np.uint32)
self.out_spacing = None # np.ndarray float32 (3,)
self.permute_axis = None # np.ndarray uint32 (3,)
self.flip_axes = None # np.ndarray bool (3,)
self.roi_mask = None # np.ndarray bool (I, J, K)

# Raw incoming mask (I, J, K) for pass-through emission (loaded from file if provided)
self.mask_voxel_raw = None
self.mask_volume_gpu = None
self.mask_affine = None
self.mask_shape = None

# Running statistics for adaptive normalization (initialized from first frame)
self.global_min = None
self.global_max = None
self.frame_count = 0

def start(self):
if not self.mask_nifti_path:
raise ValueError("VoxelStreamToVolume: No mask NIfTI path provided")

try:
img = nib.load(self.mask_nifti_path)
mask_3d = img.get_fdata()
# Segmentation volumes must be unsigned 8-bit integer
self.mask_voxel_raw = np.asarray(mask_3d, dtype=np.uint8)
self.mask_affine = img.affine
self.mask_shape = mask_3d.shape
print(
f"VoxelStreamToVolume: Loaded mask from {self.mask_nifti_path}, "
f"shape: {self.mask_voxel_raw.shape}, values: {np.unique(self.mask_voxel_raw)}"
)
except Exception as e:
raise RuntimeError(
f"VoxelStreamToVolume: Failed to load mask NIfTI '{self.mask_nifti_path}': {e}"
) from e

def setup(self, spec: OperatorSpec):
spec.input("affine_4x4").condition(
ConditionType.NONE
) # (4, 4), only emit at the first frame
spec.input("hb_voxel_data") # (I, J, K)

spec.output("volume")
spec.output("spacing")
spec.output("permute_axis")
spec.output("flip_axes")

# brain anatomy mask
spec.output("mask_volume").condition(ConditionType.NONE)
spec.output("mask_spacing").condition(ConditionType.NONE)
spec.output("mask_permute_axis").condition(ConditionType.NONE)
spec.output("mask_flip_axes").condition(ConditionType.NONE)

def compute(self, op_input, op_output, context):
# Receive Hb voxel data (cupy array)
hb_voxel = op_input.receive("hb_voxel_data") # (I, J, K)
cuda_stream = op_input.receive_cuda_stream("hb_voxel_data")

# Check voxel data is valid
if not isinstance(hb_voxel, cp.ndarray):
raise TypeError(
f"VoxelStreamToVolume: Invalid voxel data type: {type(hb_voxel)}, expected cupy array"
)
if hb_voxel.ndim != 3:
raise ValueError(
f"VoxelStreamToVolume: Invalid voxel data shape: {hb_voxel.shape}, expected 3D"
)

# Receive affine matrix only at the first frame
affine = op_input.receive("affine_4x4")
if affine is not None:
self.affine = np.array(affine, dtype=np.float32).reshape(4, 4)
# Derive spacing/orientation from affine - use mask's affine as we will resample data to mask's size
self.out_spacing, self.permute_axis, self.flip_axes = (
self._derive_orientation_from_affine(self.mask_affine)
)
print("VoxelStreamToVolume: Received affine matrix")

# Check if affine has been set at least once
if self.affine is None:
raise ValueError("VoxelStreamToVolume: No affine matrix received")

with cp.cuda.ExternalStream(cuda_stream):
# Update running statistics from incoming data
self._update_running_statistics(hb_voxel)

# Note: +-1 to add a buffer avoiding edge case in ClaraViz boundaries.
hb_voxel_normalized = self._normalize_and_process_activated_voxels(
hb_voxel,
normalize_min_value=self.density_min + 1,
normalize_max_value=self.density_max - 1,
)

# Resample to mask's size
volume_gpu = self._cupy_resample(
hb_voxel_normalized, self.affine, self.mask_affine, self.mask_shape
)

volume_gpu = cp.transpose(volume_gpu, (2, 1, 0))
volume_gpu = cp.ascontiguousarray(volume_gpu, dtype=cp.float32)

# If we have a mask, emit oriented mask every frame for the renderer
if self.mask_volume_gpu is None:
with cp.cuda.ExternalStream(cuda_stream):
self.mask_volume_gpu = cp.asarray(self.mask_voxel_raw, dtype=cp.uint8)
self.mask_volume_gpu = cp.transpose(self.mask_volume_gpu, (2, 1, 0))
self.mask_volume_gpu = cp.ascontiguousarray(self.mask_volume_gpu)

# Emit mask outputs
op_output.emit({"volume": self.mask_volume_gpu}, "mask_volume")
op_output.emit(self.out_spacing, "mask_spacing", "std::array<float, 3>")
op_output.emit(self.permute_axis, "mask_permute_axis", "std::array<uint32_t, 3>")
op_output.emit(self.flip_axes, "mask_flip_axes", "std::array<bool, 3>")

# Emit density outputs
op_output.emit({"volume": volume_gpu}, "volume")
op_output.emit(self.out_spacing, "spacing", "std::array<float, 3>")
op_output.emit(self.permute_axis, "permute_axis", "std::array<uint32_t, 3>")
op_output.emit(self.flip_axes, "flip_axes", "std::array<bool, 3>")

def _derive_orientation_from_affine(self, affine_4x4: np.ndarray):
"""
Derive spacing, axis permutation, and flips from affine.
spacing: voxel sizes along data axes (I,J,K) mapped to [X,Y,Z] ordering
permute_axis: for each data axis (I,J,K), index of world axis (X=0,Y=1,Z=2)
flip_axes: whether the axis is flipped (negative orientation)
"""

R = affine_4x4[:3, :3].astype(np.float32)
# spacing along data axes (length of each column)
spacing_ijk = np.linalg.norm(R, axis=0).astype(np.float32)
# Avoid zeros
spacing_ijk[spacing_ijk == 0] = 1.0

# 1. Get the Orientation String from Nibabel
# nibabel returns where the axis points TO (e.g., 'RAS')
orientation_codes = aff2axcodes(affine_4x4)
print(f"Detected Orientation: {''.join(orientation_codes)}")

# 2. Parse orientation codes to determine axis assignment and flips
# Nibabel convention: codes indicate the direction each axis points TO
# Flip is needed when axis points in the negative direction (L, P, I)
rl_axis = 4
is_axis = 4
pa_axis = 4

rl_flip = False
is_flip = False
pa_flip = False

# Iterate through the codes (0=x, 1=y, 2=z in data array)
for axis, code in enumerate(orientation_codes):
# --- Right-Left Axis ---
if code in ["R", "r"]:
rl_axis = axis
rl_flip = False # Points right (positive direction)
elif code in ["L", "l"]:
rl_axis = axis
rl_flip = True # Points left (negative direction, needs flip)

# --- Inferior-Superior Axis ---
elif code in ["S", "s"]:
is_axis = axis
is_flip = False # Points superior (positive direction)
elif code in ["I", "i"]:
is_axis = axis
is_flip = True # Points inferior (negative direction, needs flip)

# --- Posterior-Anterior Axis ---
elif code in ["A", "a"]:
pa_axis = axis
pa_flip = False # Points anterior (positive direction)
elif code in ["P", "p"]:
pa_axis = axis
pa_flip = True # Points posterior (negative direction, needs flip)

# Validation
if 4 in [
rl_axis,
is_axis,
pa_axis,
]: # 4 is a sentinel to indicate any axis that was not set
raise ValueError(
f"Could not determine all axes from orientation: {''.join(orientation_codes)}"
)

# 3. Construct the final parameters
permute = [rl_axis, is_axis, pa_axis]
flips = [rl_flip, is_flip, pa_flip]

# spacing returned in [X, Y, Z] order by mapping data spacings
spacing_xyz = np.zeros(3, dtype=np.float32)
for a in range(3):
spacing_xyz[permute[a]] = spacing_ijk[a]

return spacing_xyz, permute, flips

def _update_running_statistics(self, hb_voxel: cp.ndarray):
"""
Update running min/max statistics using exponential moving average.
Initializes from first frame with valid data.
Args:
hb_voxel: Current voxel data (cupy array)
"""
self.frame_count += 1

# Compute current frame statistics from all voxels
current_min = float(cp.min(hb_voxel))
current_max = float(cp.max(hb_voxel))

# Initialize on first frame
if (
(self.global_min is None or self.global_max is None)
and current_min != 0
and current_max != 0
):
self.global_min = current_min
self.global_max = current_max
print(
f"VoxelStreamToVolume: Initialized statistics from first frame - "
f"min={self.global_min:.6f}, max={self.global_max:.6f}"
)
return

# Use exponential moving average for smooth adaptation
# For first few frames, use larger alpha for faster convergence
alpha = self.stats_alpha if self.frame_count > 10 else 0.3

# Update running statistics
self.global_min = (1 - alpha) * self.global_min + alpha * current_min
self.global_max = (1 - alpha) * self.global_max + alpha * current_max

# Log statistics every 100 frames for debugging
if self.frame_count % 10 == 0:
print(
f"VoxelStreamToVolume: Frame {self.frame_count} - "
f"Running stats: min={self.global_min:.6f}, max={self.global_max:.6f} "
f"(current: min={current_min:.6f}, max={current_max:.6f})"
)

def _normalize_and_process_activated_voxels(
self, hb_voxel: np.ndarray, normalize_min_value: float, normalize_max_value: float
):
"""
Normalize the volume to [normalize_min_value, normalize_max_value] while preserving 0 as baseline.
Applies visualization scale factor to amplify small activations in white/gray matter.
"""
# If statistics not initialized yet, return zeros (waiting for first valid frame)
if self.global_min is None or self.global_max is None:
print("VoxelStreamToVolume: Waiting for statistics initialization...")
return cp.zeros_like(hb_voxel, dtype=cp.float32)

# Step 1/2: Normalize while preserving 0 as baseline.
hb = hb_voxel.astype(cp.float32, copy=False)
hb_voxel_normalized = cp.zeros_like(hb, dtype=cp.float32)

if self.global_max > 0:
# Apply visualization scale to amplify small activations
# Global max includes whole brain, but we visualize white/gray matter (smaller values)
pos_scale = (
float(normalize_max_value) / float(self.global_max)
) * self.visualization_scale
pos_mask = hb >= 0
hb_voxel_normalized[pos_mask] = hb[pos_mask] * pos_scale

if self.global_min < 0:
# Apply visualization scale to amplify small deactivations
neg_scale = (
float(abs(normalize_min_value)) / float(abs(self.global_min))
) * self.visualization_scale
neg_mask = hb < 0
hb_voxel_normalized[neg_mask] = hb[neg_mask] * neg_scale

# Step 3: Clip in-place to ensure values stay in range.
cp.clip(
hb_voxel_normalized, normalize_min_value, normalize_max_value, out=hb_voxel_normalized
)

return hb_voxel_normalized

def _cupy_resample(self, data_gpu, src_affine, target_affine, target_shape):
# 1. Calculate the transform matrix (Target -> Source)
inv_src_affine = np.linalg.inv(src_affine)
mapping_matrix = inv_src_affine @ target_affine

# Extract the rotation/scaling (3x3) and translation parts
# SciPy/CuPy affine_transform expects: input_coords = matrix @ output_coords + offset
matrix = mapping_matrix[:3, :3]
offset = mapping_matrix[:3, 3]

        # 2. Ensure data and transform matrices are on the GPU (no-op if already there)
data_gpu = cp.asarray(data_gpu, dtype=cp.float32)
matrix_gpu = cp.asarray(matrix)
offset_gpu = cp.asarray(offset)

# 3. Resample
resampled_gpu = cupyx.scipy.ndimage.affine_transform(
data_gpu,
matrix=matrix_gpu,
offset=offset_gpu,
output_shape=target_shape,
order=1,
)

return resampled_gpu
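
For readers unfamiliar with the convention used in `_cupy_resample` above: `affine_transform` maps output coordinates back into input coordinates via `input_coords = matrix @ output_coords + offset`. A minimal standalone sketch with synthetic data (not part of this PR) shows the effect:

```
import cupy as cp
import cupyx.scipy.ndimage

# 4x4x4 volume whose value encodes the first index.
data = cp.broadcast_to(cp.arange(4, dtype=cp.float32)[:, None, None], (4, 4, 4)).copy()

# Identity rotation with a +1 voxel offset along axis 0:
# input_coords = matrix @ output_coords + offset, so output[i] samples input[i + 1].
out = cupyx.scipy.ndimage.affine_transform(
    data,
    matrix=cp.eye(3, dtype=cp.float32),
    offset=cp.asarray([1.0, 0.0, 0.0], dtype=cp.float32),
    output_shape=(4, 4, 4),
    order=1,
)
print(float(out[0, 0, 0]))  # 1.0 -- output voxel 0 sampled input voxel 1
print(float(out[3, 0, 0]))  # 0.0 -- input voxel 4 is out of bounds (default cval=0)
```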

🛠️ Refactor suggestion | 🟠 Major

Missing unit tests for VoxelStreamToVolumeOp.

This operator implements complex voxel-to-volume conversion with orientation handling, normalization, and resampling logic, but lacks unit tests. As per coding guidelines, Python operators must include unit tests using the pytest framework; a minimal sketch follows the list below.

Based on coding guidelines, create test_voxel_stream_to_volume.py in the same directory with tests for:

  • Operator initialization and configuration parameters
  • Port setup verification
  • Mask loading and validation (valid/invalid paths, malformed files)
  • Affine derivation for various orientation codes (RAS, LPS, etc.)
  • Running statistics updates and EMA behavior
  • Normalization with zero-baseline preservation
  • GPU resampling correctness
  • First-frame vs subsequent-frame behavior
  • Error handling for missing affine, invalid voxel shapes
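
A minimal sketch of such a test file (the import path, the helper name `_derive_spacing_permute_flips`, and the expected flip values are assumptions based on this diff; only two of the suggested cases are shown):

```
# test_voxel_stream_to_volume.py -- illustrative sketch only; the import path and
# the helper's exact name/signature are assumptions based on this diff.
import numpy as np
import pytest

from voxel_stream_to_volume import VoxelStreamToVolumeOp  # assumed import path


def test_derive_axes_for_ras_orientation():
    # Call the helper unbound; it appears not to use self.
    spacing_xyz, permute, flips = VoxelStreamToVolumeOp._derive_spacing_permute_flips(
        None, ["R", "A", "S"], np.array([1.0, 2.0, 3.0], dtype=np.float32)
    )
    assert permute == [0, 2, 1]  # R -> RL axis, S -> IS axis, A -> PA axis
    assert flips == [False, False, False]


def test_unknown_orientation_code_raises():
    # "X" matches no axis branch, so one axis stays at the sentinel value 4.
    with pytest.raises(ValueError):
        VoxelStreamToVolumeOp._derive_spacing_permute_flips(
            None, ["R", "A", "X"], np.ones(3, dtype=np.float32)
        )
```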
🧰 Tools
🪛 Ruff (0.14.10)

74-74: Avoid specifying long messages outside the exception class

(TRY003)


88-90: Avoid specifying long messages outside the exception class

(TRY003)


109-109: Unused method argument: context

(ARG002)


116-118: Avoid specifying long messages outside the exception class

(TRY003)


120-122: Avoid specifying long messages outside the exception class

(TRY003)


136-136: Avoid specifying long messages outside the exception class

(TRY003)


238-240: Avoid specifying long messages outside the exception class

(TRY003)

Comment on lines +289 to +295
# Log statistics every 100 frames for debugging
if self.frame_count % 10 == 0:
print(
f"VoxelStreamToVolume: Frame {self.frame_count} - "
f"Running stats: min={self.global_min:.6f}, max={self.global_max:.6f} "
f"(current: min={current_min:.6f}, max={current_max:.6f})"
)

⚠️ Potential issue | 🟡 Minor

Fix inconsistent comment for logging frequency.

The comment on line 289 states "Log statistics every 100 frames" but the code checks self.frame_count % 10 == 0, which logs every 10 frames.

🔧 Proposed fix
-        # Log statistics every 100 frames for debugging
+        # Log statistics every 10 frames for debugging
         if self.frame_count % 10 == 0:
🤖 Prompt for AI Agents
In
@applications/bci_visualization/operators/voxel_stream_to_volume/voxel_stream_to_volume.py
around lines 289-295, the comment above the logging block is inaccurate: it
says "every 100 frames" but the code uses self.frame_count % 10 == 0; update
either the comment to reflect every 10 frames or change the modulus to 100 to
match the comment. Locate the logging block referencing self.frame_count,
self.global_min, self.global_max, current_min, and current_max in
VoxelStreamToVolume and make the comment and the condition consistent (e.g.,
change the comment to "Log statistics every 10 frames" if keeping % 10, or
change % 10 to % 100 if you want 100-frame logging).

Comment on lines +136 to +140
sample_hb = cp.linalg.solve(self._cached_coefficients, sample_mua)

assert sample_hb.shape == (len(wavelengths), num_voxels)
data_hbo = sample_hb[0]
data_hbr = sample_hb[1]

⚠️ Potential issue | 🟠 Major

Shape assertion may be incorrect for varying wavelength counts.

Line 138 asserts sample_hb.shape == (len(wavelengths), num_voxels), but cp.linalg.solve(A, B) where A is (n, n) and B is (n, m) returns shape (n, m). Here, self._cached_coefficients has shape (len(wavelengths), 2), so solving produces shape (2, num_voxels) — representing HbO and HbR, not wavelengths. The assertion appears incorrect.

Proposed fix
-        assert sample_hb.shape == (len(wavelengths), num_voxels)
+        # solve() returns (2, num_voxels) for [HbO, HbR]
+        assert sample_hb.shape == (2, num_voxels), f"Expected (2, {num_voxels}), got {sample_hb.shape}"
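
As a quick illustration of the shape semantics (shown with numpy, which `cp.linalg.solve` mirrors; note the coefficient matrix is square only when exactly two wavelengths are used, which is why the original assertion happens to pass):

```
import numpy as np

n_wavelengths, num_voxels = 2, 5
coeffs = np.random.rand(n_wavelengths, 2)   # extinction coefficients: (wavelengths, [HbO, HbR])
mua = np.random.rand(n_wavelengths, num_voxels)

hb = np.linalg.solve(coeffs, mua)
print(hb.shape)  # (2, 5): rows are the [HbO, HbR] components, not wavelengths
```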
🤖 Prompt for AI Agents
In @applications/bci_visualization/utils/reconstruction/hbo.py around lines 136
- 140, the shape assertion is wrong because
cp.linalg.solve(self._cached_coefficients, sample_mua) returns (k, num_voxels)
where k is the number of columns in self._cached_coefficients (Hb components),
not len(wavelengths); update the assertion to check sample_hb.shape ==
(self._cached_coefficients.shape[1], num_voxels) (or explicitly (2, num_voxels)
if the code assumes two components), and ensure subsequent indexing of sample_hb
(data_hbo = sample_hb[0], data_hbr = sample_hb[1]) remains consistent with that
corrected shape check.

Comment on lines +76 to +92
global _HESSIAN_CACHE
data_hessian_reg = _HESSIAN_CACHE.get(wavelength_idx)
if data_hessian_reg is not None:
logger.debug("Reusing cached Hessian")
return data_hessian_reg

# Smaller SPD system: (J J^T + λI) for underdetermined case
data_hessian = data_jacobians @ data_jacobians.T

data_hessian_reg = data_hessian + reg * cp.sqrt(cp.linalg.norm(data_hessian)) * cp.eye(
data_hessian.shape[0], dtype=data_jacobians.dtype
)

_HESSIAN_CACHE[wavelength_idx] = data_hessian_reg
logger.debug("Cached Hessian for reuse")

return data_hessian_reg

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Scripts executed against nvidia-holoscan/holohub:

rg -n 'data_jacobians' applications/bci_visualization/ --type=py -C 2

# Examine the function signature and context
head -100 applications/bci_visualization/utils/reconstruction/reg_inv.py | cat -n
sed -n '50,120p' applications/bci_visualization/utils/reconstruction/reg_inv.py | cat -n

# Search for _HESSIAN_CACHE usage to see if it's cleared or validated anywhere
rg '_HESSIAN_CACHE' applications/bci_visualization/ --type=py -B 2 -A 2

# Check how solve_regularized_system is called
rg 'solve_regularized_system' applications/bci_visualization/ --type=py -B 3 -A 3

# Check how NormalizedSolveBatch is constructed and passed
sed -n '1,120p' applications/bci_visualization/operators/reconstruction/solver_operator.py | cat -n

# Check for cache clearing or model parameter changes during pipeline execution
rg 'clear|reset|_model_optical_properties|optical_properties' applications/bci_visualization/operators/reconstruction/ --type=py | head -30


Cache key is insufficient—batch-relative wavelength indices will cause stale Hessian reuse across batches with different Jacobians.

The cache uses only wavelength_idx as the key, which is enumerated per batch (0, 1, 2, ...). When multiple batches are processed with different optical models or configurations, identical wavelength indices across batches will incorrectly reuse cached Hessians computed from prior batches' Jacobians. This produces incorrect reconstruction results.

Include the Jacobian shape or a hash of data_jacobians in the cache key to ensure correctness:

Example fix
cache_key = (wavelength_idx, data_jacobians.shape, hash(data_jacobians.data.ptr))
data_hessian_reg = _HESSIAN_CACHE.get(cache_key)

Alternatively, clear the cache between batches if Jacobians are guaranteed to be identical within a processing session.
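
A minimal sketch of the keyed variant (the function name and signature are illustrative, and keying on the device pointer assumes the Jacobian buffer is not freed and reallocated within a session):

```
import cupy as cp

_HESSIAN_CACHE = {}

def regularized_hessian(wavelength_idx, data_jacobians, reg):
    # Miss the cache whenever the Jacobian buffer or its shape changes.
    cache_key = (wavelength_idx, data_jacobians.shape, int(data_jacobians.data.ptr))
    cached = _HESSIAN_CACHE.get(cache_key)
    if cached is not None:
        return cached

    data_hessian = data_jacobians @ data_jacobians.T
    data_hessian_reg = data_hessian + reg * cp.sqrt(cp.linalg.norm(data_hessian)) * cp.eye(
        data_hessian.shape[0], dtype=data_jacobians.dtype
    )
    _HESSIAN_CACHE[cache_key] = data_hessian_reg
    return data_hessian_reg
```

If buffers can be reallocated mid-session, explicitly clearing `_HESSIAN_CACHE` at batch or session boundaries is the safer choice.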

🤖 Prompt for AI Agents
In @applications/bci_visualization/utils/reconstruction/reg_inv.py around lines
76-92, the cache currently uses only wavelength_idx, which is not unique across
batches; change the lookup and store to use a cache_key that includes
batch-unique Jacobian info (e.g. cache_key = (wavelength_idx,
data_jacobians.shape, int(data_jacobians.data.ptr)) or a hash of data_jacobians)
so _HESSIAN_CACHE.get(...) and _HESSIAN_CACHE[...] use that key instead of
wavelength_idx, or alternatively ensure the cache is cleared between batches;
update references to _HESSIAN_CACHE, wavelength_idx, and data_jacobians
accordingly so cached Hessians cannot be reused across different Jacobians.

After downloading and extracting the dataset, your `data/bci_visualization` folder should have this structure:

```
data/bci_visualization/
...
```
Contributor Author

@techops-kernel Can you help review whether the description below looks correct?

Contributor Author

TODO: upload a higher-resolution GIF


@dataclass(frozen=True)
class Assets:
# float32 array (channels, features, wavelengths, voxels, simulation_types)
Contributor Author

@techops-kernel Could we add docstrings clarifying the semantics and shapes of these fields?

@tbirdso tbirdso left a comment

Reviewed updated README

Comment on lines +53 to +55
To run the application you need a streaming Kernel Flow data source. This can be either:
- Kernel Flow hardware and SDK
- Downloaded `.snirf` files for use with the included data replayer. Example data can be found on [OpenNeuro](https://openneuro.org/datasets/ds006545) and copied locally to be run through the replayer.

Nitpick: these lines are not rendering as bullet points in the GitHub markdown preview, likely because a blank line is missing before the list.

Suggested change
To run the application you need a streaming Kernel Flow data source. This can be either:
- Kernel Flow hardware and SDK
- Downloaded `.snirf` files for use with the included data replayer. Example data can be found on [OpenNeuro](https://openneuro.org/datasets/ds006545) and copied locally to be run through the replayer.
To run the application you need a streaming Kernel Flow data source. This can be either:

- Kernel Flow hardware and SDK
- Downloaded `.snirf` files for use with the included data replayer. Example data can be found on [OpenNeuro](https://openneuro.org/datasets/ds006545) and copied locally to be run through the replayer.

Comment on lines +63 to +65
### 1. Download Required Data

Download the example dataset from [Google Drive](https://drive.google.com/drive/folders/1RpQ6UzjIZAr90FdW9VIbtTFYR6-up7w2) and extract it to `data/bci_visualization` in your holohub directory. The dataset includes:

Could we update to automatically download and cache sample data as part of the build step? See Endoscopy Tool Tracking CMakeLists.txt for an example

We can keep this as-is for now and automate it in a subsequent testing update.

1. **Base layer** (white, 50% opacity): Shows overall brain structure (gray + white matter) throughout the full range [0, 1].
2. **Blue overlay** (100% opacity): Highlights low values [0, 0.4] representing decreased hemoglobin.
3. **Red overlay** (100% opacity): Highlights high values [0.6, 1] representing increased hemoglobin.
4. **Neutral range** [0.4, 0.6]: Only shows the white base layer (no significant change).
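
For concreteness, the layering described above can be summarized as plain data; this is an illustrative sketch, not the renderer's actual configuration schema:

```
# Hypothetical summary of the transfer-function layers described above.
transfer_function_layers = [
    {"layer": "base", "range": (0.0, 1.0), "color": "white", "opacity": 0.5},
    {"layer": "deactivation", "range": (0.0, 0.4), "color": "blue", "opacity": 1.0},
    {"layer": "activation", "range": (0.6, 1.0), "color": "red", "opacity": 1.0},
    # Values in (0.4, 0.6) fall through to the base layer only.
]
```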

In addition to inlined hyperlinks, could we please add a References section with papers and reference links for further reading?

Should we cite the related NeurIPS 2025 demo?

@bhashemian
Member

Hi @techops-kernel @gabelerner-kernel,
The dataset for this BCI Visualization application is currently hosted on Google Drive, which isn't ideal for our use case. Google Drive handles large datasets poorly: its API is not well suited to programmatic bulk downloads, and downloading via the web interface splits the data into multiple zip files that must be manually reassembled.

It would be great if we could host the dataset elsewhere. I suggest using a platform like Hugging Face, but any reliable host with an accessible API for large data downloads would work well. Thanks

@bhashemian bhashemian moved this to In Progress in Holohub Jan 9, 2026