3 changes: 3 additions & 0 deletions applications/CMakeLists.txt
@@ -25,6 +25,9 @@ add_holohub_application(async_buffer_deadline)
add_holohub_application(basic_networking_ping DEPENDS
OPERATORS basic_network)

add_holohub_application(bci_visualization DEPENDS
OPERATORS volume_renderer)

add_holohub_application(body_pose_estimation DEPENDS
OPERATORS OPTIONAL dds_video_subscriber dds_video_publisher)

22 changes: 22 additions & 0 deletions applications/bci_visualization/CMakeLists.txt
@@ -0,0 +1,22 @@
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Contributor

⚠️ Potential issue | 🔴 Critical

Update copyright year.

The copyright header must include the current year (2026), as per the pipeline failure logs from GitHub Actions: Check Compliance ("[error] 1-1: Copyright header incomplete: current year not included in the header").

Proposed fix
-# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.

# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

cmake_minimum_required(VERSION 3.20)
project(bci_visualization LANGUAGES NONE)

find_package(holoscan 2.0 REQUIRED CONFIG
PATHS "/opt/nvidia/holoscan" "/workspace/holoscan-sdk/install")

add_subdirectory(operators)
32 changes: 32 additions & 0 deletions applications/bci_visualization/Dockerfile
@@ -0,0 +1,32 @@
# syntax=docker/dockerfile:1

# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


ARG BASE_IMAGE
FROM ${BASE_IMAGE} AS base

ARG DEBIAN_FRONTEND=noninteractive

# Install curl for downloading data files
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*

ENV HOLOSCAN_INPUT_PATH=/workspace/holohub/data/bci_visualization

# Install Python dependencies
COPY applications/bci_visualization/requirements.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt --no-cache-dir

Contributor

style: Trailing whitespace should be removed.

Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!

232 changes: 232 additions & 0 deletions applications/bci_visualization/README.md
@@ -0,0 +1,232 @@
# Kernel Flow BCI Real-Time Reconstruction and Visualization

<p align="center">
<img src="docs/brain_activity_example.gif" alt="Example output for BCI Visualization" width="400"><br>
<em>Example 3D visualization</em>
</p>

## Overview

This Holohub application demonstrates how to perform real-time source reconstruction and visualization of streaming functional brain data from the Kernel Flow 2 system. The application was developed and tested on an NVIDIA Jetson Thor paired with a Kernel Flow 2 headset. To lower the barrier to entry, we also provide recorded datasets and a data replayer, enabling developers to build and experiment with visualization and classification pipelines within the Holoscan framework without requiring access to the hardware.

This example processes streaming [moments from the distribution of time-of-flight histograms](https://doi.org/10.1117/1.NPh.10.1.013504). These moments can originate either from the [Kernel Flow SDK](https://docs.kernel.com/docs/kernel-sdk-install) when connected to the Kernel hardware, or from the included [shared near-infrared spectroscopy format (SNIRF)](https://github.com/fNIRS/snirf) replayer. The moments are then combined with the sensors' spatial geometry and an average anatomical head model to produce source-reconstructed outputs similar to what was [published in previous work](https://direct.mit.edu/imag/article/doi/10.1162/imag_a_00475/127769/A-Compact-Time-Domain-Diffuse-Optical-Tomography).
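
For orientation, SNIRF files are HDF5 containers, so a simple frame-by-frame replayer can be sketched with `h5py`. The snippet below is only an illustration, not the application's replayer operator, and it assumes the conventional `/nirs/data1/dataTimeSeries` layout; group names and the ordering of the moment channels vary between recordings.

```python
# Illustrative SNIRF replay loop (assumed /nirs/data1 layout; not the actual operator).
import time

import h5py
import numpy as np


def replay_snirf(path, rate_hz=4.75):
    """Yield one frame of per-channel measurements at roughly the native frame rate."""
    with h5py.File(path, "r") as f:
        data = np.asarray(f["nirs/data1/dataTimeSeries"])  # shape: (n_frames, n_channels)
    for frame in data:
        yield frame                  # one vector of per-channel moments
        time.sleep(1.0 / rate_hz)    # pace playback at ~4.75 Hz


# Example usage:
# for frame in replay_snirf("data/bci_visualization/data.snirf"):
#     process(frame)
```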

To visualize the reconstructed 3D volumes, this application utilizes both the [VolumeRendererOp](../../holohub/volume_renderer.py) operator and HolovizOp for real-time 3D rendering and interactive visualization.

For optimal efficiency and smooth user experience, we employ an event-based scheduler that decouples the reconstruction and visualization pipelines. This allows each stage to run on separate threads, resulting in higher rendering quality and more responsive interaction.
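
As a minimal sketch of this setup (operator wiring omitted, names illustrative), a Holoscan application can swap the default scheduler for an event-based one with multiple worker threads so the two pipelines do not block each other:

```python
# Sketch: run a Holoscan app with an event-based, multi-threaded scheduler.
from holoscan.core import Application
from holoscan.schedulers import EventBasedScheduler


class BciVisualizationApp(Application):
    def compose(self):
        # The reconstruction and visualization operators would be created and
        # connected here with add_flow(); omitted in this sketch.
        pass


if __name__ == "__main__":
    app = BciVisualizationApp()
    # Two worker threads let a slow reconstruction frame proceed without
    # stalling interactive rendering, and vice versa.
    app.scheduler(EventBasedScheduler(app, name="event_scheduler", worker_thread_number=2))
    app.run()
```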

## Background

Kernel Flow is a multimodal non-invasive brain measurement system. It combines the relatively high spatial resolution of time-domain functional near-infrared spectroscopy (TD-fNIRS) with the fast temporal resolution of electroencephalography (EEG) into a compact and scalable form factor that enables a new class of non-invasive Brain-Computer Interface (BCI) applications.

The differentiating technologies underlying the performance of the Kernel Flow system are its time-resolved detectors and high-speed laser drivers. Short (~100 ps) pulses of near-infrared laser light (690 nm & 905 nm) are emitted into the user's head with a repetition rate of 20 MHz. The photons in these laser pulses scatter through the scalp, skull, and cerebrospinal fluid before reaching the brain and then scattering back out. When the photons emerge from the scalp, we use single-photon-sensitive detectors to timestamp exactly how much time each photon took to traverse the head. The time a photon takes to reach the detector is proportional to the path length it traveled and reflects the average depth it was able to reach.

This simulation shows the relationship between photon scattering paths (black lines) and the measured time of flight (blue sections).

<p align="center">
<img src="docs/photon_simulation.gif" alt="Monte Carlo simulation of photon scattering events." width="400"><br>
<em>The relationship between photon path lengths and measured time</em>
</p>

As you can see, later times correspond to photons that have travelled farther into the tissue. In a given second, we are timestamping over 10 billion individual photons, which generates an enormous amount of data. After compression, the data production rate of Kernel Flow is ~1GB/min.

As the photons scatter through the tissue, many of them are absorbed by cells and molecules along the way. The wavelengths we use are particularly sensitive to hemoglobin in its two states, oxyhemoglobin and deoxyhemoglobin, which allows us to follow the locations in the brain that are demanding and consuming oxygen, an indirect measure of neuronal activity. These same biophysical principles are behind the pulse oximeters found in smartwatches and finger-clip sensors! For more detailed information about the biophysics, [see this review article](https://www.mdpi.com/2076-3417/9/8/1612).

With the Kernel Flow headset, we have combined 120 laser sources and 240 of our custom sensors to collect over 3000 measurement paths that criss-cross the head at a frame rate of 4.75 Hz. When visualized, these paths look like this:

<p align="center">
<img src="docs/flow_channel_map.png" alt="Flow's 3000+ channels" width="400"><br>
<em>The 3000+ measurements that are made with a Kernel Flow</em>
</p>

We call each of these measurement paths a "channel" and the measurement is made in "sensor space" (i.e. from the perspective of the detector). In order to have a more anatomical representation of the data, it is common to transform the
sensor-space data into source-space (i.e. where the changes in hemoglobin concentrations likely occurred in the brain, based on what was observed at the sensor) by solving an inverse problem, commonly called source reconstruction. This inverse problem requires complex modeling that is computationally expensive but highly parallelizable.
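
To make the computation concrete, a Tikhonov-regularized least-squares reconstruction can be written in a few lines of NumPy. This is an illustration of the math, not the application's solver; the Jacobian `J` maps voxel-space absorption changes to channel-space measurements, and the regularization weight `lam` is a hypothetical value.

```python
# Illustrative Tikhonov-regularized reconstruction:
#   minimize ||J x - y||^2 + lam * ||x||^2
# J: (channels x voxels) sensitivity (Jacobian) matrix
# y: one frame of channel-space measurements
# x: voxel-space solution
import numpy as np


def reconstruct(J, y, lam=1e-2):
    # For a wide J (far more voxels than channels), solve the small dual system
    # (J J^T + lam I) w = y and back-project: x = J^T w.
    n_channels = J.shape[0]
    A = J @ J.T + lam * np.eye(n_channels)   # (channels x channels)
    w = np.linalg.solve(A, y)
    return J.T @ w
```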

In this Holohub application, we demonstrate a real-time source reconstruction pipeline that runs on a Jetson Thor at the native framerate of the Kernel Flow system (4.75 Hz) and visualizes the data in 3D using Holoviz. We did this by X, Y,
and Z (@Gabe or @Mimi to add high-level).
Contributor

Incomplete sentence with placeholder text (@Gabe or @Mimi to add high-level) needs to be completed before merging.

Contributor

Incomplete sentence: "We did this by X, Y, and Z" needs actual content or should be removed.

## Requirements

This application was developed to run on an NVIDIA Jetson Thor Developer Kit, but any Holoscan SDK-supported platform should work.

To run the application you need a streaming Kernel Flow data source. This can be either:
- Kernel Flow hardware and SDK
- Downloaded `.snirf` files for use with the included data replayer. Example data can be found on [OpenNeuro](https://openneuro.org/datasets/ds006545) and copied locally to be run through the replayer.
Comment on lines +53 to +55
Contributor

Nitpick: not rendering as bullet points in the GitHub markdown preview, maybe a spacing issue?


```bash
wget -O data/examples/data.snirf "https://s3.amazonaws.com/openneuro.org/ds006545/sub-bed8fefe/ses-1/nirs/sub-bed8fefe_ses-1_task-audio_nirs.snirf?versionId=sYFJNjlNNlf8xVOMsIde5hpWZE2clsiu"
```

Comment on lines +58 to +59
Contributor

⚠️ Potential issue | 🟡 Minor

Typo in wget command flag: the flag -0 should be -O (uppercase letter O, not zero) for specifying the output filename.

Proposed fix
-   wget -0 data/examples/data.snirf "https://s3.amazonaws.com/openneuro.org/ds006545/sub-bed8fefe/ses-1/nirs/sub-bed8fefe_ses-1_task-audio_nirs.snirf?versionId=sYFJNjlNNlf8xVOMsIde5hpWZE2clsiu"
+   wget -O data/examples/data.snirf "https://s3.amazonaws.com/openneuro.org/ds006545/sub-bed8fefe/ses-1/nirs/sub-bed8fefe_ses-1_task-audio_nirs.snirf?versionId=sYFJNjlNNlf8xVOMsIde5hpWZE2clsiu"


## Quick Start

### 1. Download Required Data

Download the example dataset from [Google Drive](https://drive.google.com/drive/folders/1RpQ6UzjIZAr90FdW9VIbtTFYR6-up7w2) and extract it to `data/bci_visualization` in your holohub directory. The dataset includes:
Comment on lines +63 to +65
Contributor

Could we update to automatically download and cache sample data as part of the build step? See the Endoscopy Tool Tracking CMakeLists.txt for an example.

Can keep as-is for now and automate to support testing in a subsequent testing update.

- **SNIRF data file** (`data.snirf`): Recorded brain activity measurements
- **Anatomy masks** (`anatomy_labels_*.nii.gz`): Brain tissue segmentation (skin, skull, CSF, gray matter, white matter)
- **Reconstruction matrices**: Pre-computed Jacobian and voxel information
- **Volume renderer config** (`config.json`): 3D visualization settings

### 2. Run the Application
```bash
./holohub run bci_visualization
```

### Expected Data Folder Structure

After downloading and extracting the dataset, your `data/bci_visualization` folder should have this structure:

Contributor Author

@techops-kernel Can you help review if the below description looks correct?

```
data/bci_visualization/
├── anatomy_labels_high_res.nii.gz # Brain segmentation
├── config.json # Volume renderer configuration
├── data.snirf # SNIRF format brain activity data
├── extinction_coefficients_mua.csv # Absorption coefficients for HbO/HbR
├── flow_channel_map.json # Sensor-source channel mapping
├── flow_mega_jacobian.npy # Pre-computed sensitivity matrix (channels → voxels)
└── voxel_info/ # Voxel geometry and optical properties
├── affine.npy # 4x4 affine transformation matrix
├── idxs_significant_voxels.npy # Indices of voxels with sufficient sensitivity
├── ijk.npy # Voxel coordinates in volume space
├── mua.npy # Absorption coefficient per voxel
├── musp.npy # Reduced scattering coefficient per voxel
├── resolution.npy # Voxel resolution (mm)
├── wavelengths.npy # Measurement wavelengths (690nm, 905nm)
└── xyz.npy # Voxel coordinates in anatomical space (mm)
```
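
If you want to sanity-check the download before running the application, the arrays can be inspected directly with NumPy. The file names below match the tree above; the printed shapes depend on the recording and are only illustrative.

```python
# Quick sanity check of the downloaded dataset.
from pathlib import Path

import numpy as np

root = Path("data/bci_visualization")
jacobian = np.load(root / "flow_mega_jacobian.npy")
ijk = np.load(root / "voxel_info" / "ijk.npy")
wavelengths = np.load(root / "voxel_info" / "wavelengths.npy")

print("Jacobian (channels x voxels):", jacobian.shape)
print("Voxel coordinates (ijk):", ijk.shape)
print("Wavelengths (nm):", wavelengths)
```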

## Pipeline Overview

The application consists of two main pipelines running on separate threads:

### Reconstruction Pipeline
Transforms sensor-space measurements into 3D brain activity maps:

```mermaid
graph LR
A[SNIRF Stream] --> B[Stream Operator]
B --> C[Build RHS]
C --> D[Normalize]
D --> E[Regularized Solver]
E --> F[Convert to Voxels]
F --> G[Voxel to Volume]

style A fill:#e1f5ff
style G fill:#ffe1f5
```

**Key Steps:**
1. **Stream Operator**: Reads SNIRF data and emits time-of-flight moments
2. **Build RHS**: Constructs the right-hand side of the inverse problem using channel mapping and Jacobian
3. **Normalize**: Normalizes measurements for numerical stability
4. **Regularized Solver**: Solves the ill-posed inverse problem with Tikhonov regularization
5. **Convert to Voxels**: Maps solution to 3D voxel coordinates with HbO/HbR conversion
6. **Voxel to Volume**: Resamples to match the anatomy mask and applies adaptive normalization (sketched below)
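
Steps 5 and 6 can be pictured with a small NumPy sketch. This is illustrative only (array names are hypothetical, and the real operator also resamples against the anatomy mask): the per-voxel solution is scattered into a dense 3D grid at its `ijk` coordinates, then normalized so that 0.5 means "no change", which is what the blue/red transfer-function ranges described below expect.

```python
# Illustrative voxel-to-volume step (not the actual operator).
import numpy as np


def voxels_to_volume(values, ijk, volume_shape, scale):
    """values: per-voxel hemoglobin change; ijk: (N, 3) integer voxel coordinates."""
    volume = np.zeros(volume_shape, dtype=np.float32)
    volume[ijk[:, 0], ijk[:, 1], ijk[:, 2]] = values
    # Map [-scale, +scale] to [0, 1] with no change landing at 0.5, so values
    # below 0.4 render blue (deactivation) and above 0.6 render red (activation).
    return np.clip(0.5 + volume / (2.0 * scale), 0.0, 1.0)
```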

### Visualization Pipeline
Renders 3D brain volumes with real-time interaction:

```mermaid
graph LR
G[Voxel to Volume] --> H[Volume Renderer]
H --> I[Color Buffer Passthrough]
I --> J[HolovizOp]
J --> H

style G fill:#ffe1f5
style J fill:#e1ffe1
```

**Key Steps:**
1. **Volume Renderer**: GPU-accelerated ray-casting with ClaraViz (tissue segmentation + activation overlay)
2. **Color Buffer Passthrough**: Queue management with a POP policy to prevent frame stacking (see the sketch below)
3. **HolovizOp**: Interactive 3D display with camera controls (bidirectional camera pose feedback)
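
As an illustration of the passthrough pattern (a sketch, not the application's operator), a minimal Holoscan operator just forwards the latest color buffer from the renderer to Holoviz; the queue policy that drops stale frames is configured on the input port, and the exact policy setting depends on the Holoscan SDK version.

```python
# Sketch of a color-buffer passthrough operator.
from holoscan.core import Operator, OperatorSpec


class ColorBufferPassthroughOp(Operator):
    """Forwards the renderer's color buffer to Holoviz one frame at a time."""

    def setup(self, spec: OperatorSpec):
        # In the application, the input queue uses a POP policy (capacity 1,
        # drop the oldest frame) so stale frames never stack up.
        spec.input("color_buffer_in")
        spec.output("color_buffer_out")

    def compute(self, op_input, op_output, context):
        buffer = op_input.receive("color_buffer_in")
        op_output.emit(buffer, "color_buffer_out")
```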

## Volume Renderer Configuration

The `config.json` file in the data folder configures the ClaraViz volume renderer. For detailed documentation, see the [VolumeRenderer operator documentation](../../operators/volume_renderer/) and [ClaraViz proto definitions](https://github.com/NVIDIA/clara-viz/blob/main/src/protos/nvidia/claraviz/cinematic/v1/render_server.proto).

### Key Configuration Parameters

#### 1. Rendering Quality
```json
{
"timeSlot": 100
}
```
- **`timeSlot`** (milliseconds): Rendering time budget per frame
  - Higher values = better quality
  - Lower values = faster rendering

#### 2. Transfer Functions
The transfer function maps voxel values to colors and opacity. This application uses **three components**.

##### Component 1: Brain Tissue Base (Gray/White Matter)
```json
{
"activeRegions": [3, 4],
"range": { "min": 0, "max": 1 },
"opacity": 0.5,
"opacityProfile": "SQUARE",
"diffuseStart": { "x": 1, "y": 1, "z": 1 },
"diffuseEnd": { "x": 1, "y": 1, "z": 1 }
}
```
- **`activeRegions`**: Tissue types to render
  - `0`: Skin, `1`: Skull, `2`: CSF, `3`: Gray matter, `4`: White matter, `5`: Air
  - Here: `[3, 4]` = gray and white matter only
- **`range`**: `[0, 1]` = full normalized value range
- **`opacity`**: `0.5` = semi-transparent base layer
- **`opacityProfile`**: `"SQUARE"` = constant opacity throughout range
- **`diffuseStart/End`**: `[1, 1, 1]` = white base color

##### Component 2: Negative Activation / Deactivation (Blue)
```json
{
"activeRegions": [3, 4],
"range": { "min": 0, "max": 0.4 },
"opacity": 1.0,
"opacityProfile": "SQUARE",
"diffuseStart": { "x": 0.0, "y": 0.0, "z": 1.0 },
"diffuseEnd": { "x": 0.0, "y": 0.0, "z": 0.5 }
}
```
- **`range`**: `[0, 0.4]` = lower 40% of normalized range (deactivation)
- **`opacity`**: `1.0` = fully opaque
- **`opacityProfile`**: `"SQUARE"` = constant opacity
- **`diffuseStart/End`**: `[0, 0, 1]` → `[0, 0, 0.5]` = bright blue to dark blue gradient

##### Component 3: Positive Activation (Red)
```json
{
"activeRegions": [3, 4],
"range": { "min": 0.6, "max": 1 },
"opacity": 1.0,
"opacityProfile": "SQUARE",
"diffuseStart": { "x": 0.5, "y": 0.0, "z": 0.0 },
"diffuseEnd": { "x": 1.0, "y": 0.0, "z": 0.0 }
}
```
- **`range`**: `[0.6, 1]` = upper 40% of normalized range (activation)
- **`opacity`**: `1.0` = fully opaque
- **`opacityProfile`**: `"SQUARE"` = constant opacity
- **`diffuseStart/End`**: `[0.5, 0, 0]` → `[1, 0, 0]` = dark red to bright red gradient

#### 3. Blending
```json
{
"blendingProfile": "BLENDED_OPACITY"
}
```
- **`blendingProfile`**: How overlapping components combine

### Visualization Strategy

The three-component approach creates a layered visualization:

1. **Base layer** (white, 50% opacity): Shows overall brain structure (gray + white matter) throughout the full range [0, 1]
2. **Blue overlay** (100% opacity): Highlights low values [0, 0.4] representing decreased hemoglobin.
3. **Red overlay** (100% opacity): Highlights high values [0.6, 1] representing increased hemoglobin.
4. **Neutral range** [0.4, 0.6]: Only shows the white base layer (no significant change)
Contributor

In addition to inlined hyperlinks, could we please add a References section with papers and reference links for further reading?

Should we cite the related NeurIPS 2025 demo?
