Summary
download_bulk_data in client.py currently loads the entire HTTP response into memory via response.content. For rosbag files near or exceeding the current 100 MB cap (max_bag_size_mb), this can cause high memory usage.
Current behavior
response = await client.get(bulk_data_uri, timeout=httpx.Timeout(300.0))
# entire file loaded into memory
return response.content, filename
Proposed solution
Use httpx streaming to write directly to disk in chunks:
async with client.stream("GET", bulk_data_uri, timeout=httpx.Timeout(300.0)) as response:
    async for chunk in response.aiter_bytes(chunk_size=8192):
        file.write(chunk)
This would also require refactoring download_bulk_data to accept an output path (or return a streaming iterator), and updating save_bulk_data_file and download_rosbags_for_fault accordingly.
Context
Bag size is capped at 100 MB (max_bag_size_mb in the gateway config), so downloads are manageable in memory today, but not ideal.
Location
src/ros2_medkit_mcp/client.py, line ~1052