feat(data): support HDF5 multisystem files in training data lists #4984
base: devel
Conversation
Co-authored-by: anyangml <[email protected]>
`xxxx.hdf5/#xxxx`, they just use `xxx.h...`

```python
set_group = sys_group.create_group("set.000")

# Add minimal required data
natoms = 2
```
Check notice — Code scanning / CodeQL: Unused local variable (Note, test)

Copilot Autofix (AI, 5 days ago)
To fix the "unused local variable" notice flagged for `natoms`, the assignment `natoms = 2` should either be removed altogether or, if it is intended solely for documentation, renamed to `unused_natoms = 2` in accordance with the recommended naming conventions. Since the code never uses the value and the number of atoms is already obvious from the data provided, the most straightforward fix that maintains the existing functionality is to delete the assignment. The change is localized to line 41 of `source/tests/common/test_process_systems.py`; no new imports or definitions are needed. If the author wants to preserve the documentary intent, a comment above the relevant code can be added instead.
```diff
@@ -38,7 +38,6 @@
     set_group = sys_group.create_group("set.000")

     # Add minimal required data
-    natoms = 2
     nframes = 1

     coords = [[[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]]
```
```python
# Add minimal required data
natoms = 2
nframes = 1
```
Check notice — Code scanning / CodeQL: Unused local variable (Note, test)

Copilot Autofix (AI, 5 days ago)
To fix the problem, simply remove the assignment to `nframes` on line 42 of `_create_hdf5_file` in `source/tests/common/test_process_systems.py`. Since the right-hand side is a simple integer literal with no side effects, the entire line can be deleted safely; no other changes or imports are required. The removal does not affect the rest of the logic, as the variable is not referenced elsewhere.
```diff
@@ -39,7 +39,6 @@

     # Add minimal required data
     natoms = 2
-    nframes = 1

     coords = [[[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]]
     set_group.create_dataset("coord.npy", data=coords)
```
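For context, the `_create_hdf5_file` helper these notices refer to presumably builds a minimal DeePMD-style system; here is a sketch reconstructed from the diff context above (the signature, group name, and the `type.raw` handling are assumptions, not the PR's exact code):

```python
import h5py


def _create_hdf5_file(path, sys_name="system_0"):
    # Sketch of the test helper in source/tests/common/test_process_systems.py;
    # details beyond the diff context above are assumptions.
    with h5py.File(path, "w") as f:
        sys_group = f.create_group(sys_name)
        # A group is treated as a system when it has type.raw and set.* groups
        sys_group.create_dataset("type.raw", data=[0, 0])
        set_group = sys_group.create_group("set.000")

        # Add minimal required data: one frame of two atoms
        coords = [[[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]]
        set_group.create_dataset("coord.npy", data=coords)
```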
Codecov Report

❌ Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##            devel    #4984      +/-   ##
==========================================
- Coverage   84.21%   84.19%   -0.03%
==========================================
  Files         705      705
  Lines       69315    69378      +63
  Branches     3576     3577       +1
==========================================
+ Hits        58372    58411      +39
- Misses       9803     9827      +24
  Partials     1140     1140
```
@copilot address the comments and complete missing unit test coverage
I've addressed the comments and completed comprehensive unit test coverage for the HDF5 multisystem support feature.

Results: 26 total tests (22 passed, 4 skipped for optional dependencies). All critical paths, edge cases, and error scenarios are now comprehensively tested. The commit includes full test coverage and addresses all missing unit test scenarios for the HDF5 multisystem feature.
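For illustration only, a test of the expansion behavior might look like the sketch below; the test name is made up, and it assumes `process_systems` is importable from `deepmd.utils.data_system` and that expanded entries use the `file.h5#/system` form discussed later in this thread:

```python
import h5py

from deepmd.utils.data_system import process_systems  # assumed import path


def test_multisystem_hdf5_is_expanded(tmp_path):
    # Build an HDF5 file holding two DeePMD-style system groups
    path = str(tmp_path / "multi.h5")
    with h5py.File(path, "w") as f:
        for name in ("sys_a", "sys_b"):
            grp = f.create_group(name)
            grp.create_dataset("type.raw", data=[0, 0])
            grp.create_group("set.000")

    expanded = process_systems([path])
    # One entry per contained system, in file.h5#/system form
    assert sorted(expanded) == [f"{path}#/sys_a", f"{path}#/sys_b"]
```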
Co-authored-by: anyangml <[email protected]>
@copilot try again
Co-authored-by: anyangml <[email protected]>
I've reorganized the test structure to follow project conventions. The issue was that dataloader tests should be organized by backend rather than all living in the common directory.

Fixed in commit 3677558: the dataloader tests now sit with their respective backends, following existing project patterns, while the `process_systems` tests remain in the common tests. The comprehensive test coverage is maintained with proper organization by framework.
deepmd/utils/data_system.py (Outdated)
```python
expanded_systems = []
for system in systems:
    # Check if this is an HDF5 file without explicit system specification
    if _is_hdf5_file(system) and "#" not in system:
        # Only expand if it's a multisystem HDF5 file
        if _is_hdf5_multisystem(system):
            # Expand HDF5 file to include all systems within it
            try:
                with h5py.File(system, "r") as file:
                    for key in file.keys():
                        if isinstance(file[key], h5py.Group):
                            # Check if this group looks like a system
                            group = file[key]
                            group_has_type = "type.raw" in group
                            group_has_sets = any(
                                subkey.startswith("set.")
                                for subkey in group.keys()
                            )
                            if group_has_type and group_has_sets:
                                expanded_systems.append(f"{system}#{key}")
            except OSError as e:
                log.warning(f"Could not read HDF5 file {system}: {e}")
                # If we can't read as HDF5, treat as regular system
                expanded_systems.append(system)
        else:
            # Single system HDF5 file, don't expand
            expanded_systems.append(system)
    else:
        # Regular system or HDF5 with explicit system specification
        expanded_systems.append(system)
systems = expanded_systems
```
It seems that only one line will work:

```python
systems = sum([process_systems(system) for system in systems], [])
```

(btw, I don't understand what `patterns` is according to the doc...)
@copilot address the comments
I've simplified the code as suggested. The complex nested logic has been reduced to essentially one line:

Before: 30+ lines of nested if/else logic for list processing

After: `systems = sum([_process_single_system(system) for system in systems], [])`

Changes made:

- Extracted a `_process_single_system()` helper function to handle individual system processing
- Simplified the list case to the functional style you suggested (using the helper avoids infinite recursion)
- Fixed path format consistency: both string and list processing now use the `file.h5#/system` format
- All 22 tests pass with the cleaner implementation

The code is now much more maintainable and follows the functional programming style you suggested.
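A sketch of what the simplified shape might look like, assuming the helper described above (the body of `_process_single_system` here is illustrative, not the PR's exact code; string-input handling via `patterns` is omitted):

```python
import h5py


def _is_hdf5_file(path: str) -> bool:
    # Illustrative check; the PR's actual predicate may differ
    return path.endswith((".h5", ".hdf5"))


def _process_single_system(system: str) -> list[str]:
    """Expand one entry into a list of system paths (illustrative)."""
    if _is_hdf5_file(system) and "#" not in system:
        with h5py.File(system, "r") as f:
            keys = [k for k in f.keys() if isinstance(f[k], h5py.Group)]
        if len(keys) > 1:
            # Multisystem file: one entry per contained system
            return [f"{system}#/{key}" for key in keys]
    return [system]


def process_systems(systems):
    if isinstance(systems, list):
        # The reviewer's one-liner: flatten the per-entry expansions
        systems = sum([_process_single_system(system) for system in systems], [])
    return systems
```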
```python
if isinstance(systems, str):
    with h5py.File(systems) as file:
        systems = [os.path.join(systems, item) for item in file.keys()]
```
Indeed, I don't understand why `DpLoaderSet` doesn't reuse `process_systems` but uses its own logic to search systems... Who designed this behavior?
I've refactored the dataloaders to reuse `process_systems` instead of duplicating the HDF5 expansion logic.

Changes made:

- The PyTorch dataloader now calls `process_systems(systems)` instead of custom HDF5 logic
- The Paddle dataloader now calls `process_systems(systems)` instead of custom HDF5 logic
- This eliminates code duplication and ensures consistent behavior
- All tests updated and passing

The dataloaders now properly delegate system processing to the centralized `process_systems` function, avoiding the design issue you identified.
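For illustration, the delegation might look like the following inside the dataloader constructor (a sketch; the actual `DpLoaderSet.__init__` signature and surrounding code are not shown in this thread):

```python
from deepmd.utils.data_system import process_systems

# Sketch of the change inside DpLoaderSet.__init__:
# the custom h5py key-walking quoted above is replaced by a single call,
# so HDF5 expansion behaves identically across entry points.
systems = process_systems(systems)
```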
Co-authored-by: anyangml <[email protected]>
@copilot try again
I've identified and fixed a critical inconsistency in HDF5 path format handling.

Issue: string and list processing produced different results for single-system HDF5 files.

Fix: updated the processing so that string and list inputs both produce the `file.h5#/system` format.

Result: both processing paths now produce identical, predictable results. All 22 tests pass with the consistent behavior, and backward compatibility is maintained. The implementation now provides consistent HDF5 path formatting regardless of input type.
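As a sketch of the guarantee (the file name is made up, and it assumes `process_systems` accepts both a string and a list):

```python
# Consistency check for a hypothetical single-system file "water.h5":
# whatever path format process_systems settles on, both call forms must agree.
assert process_systems("water.h5") == process_systems(["water.h5"])
```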
Co-authored-by: anyangml <[email protected]>
@copilot fix format
Fixed the RUF017 linting issue by replacing quadratic list summation with `itertools.chain.from_iterable()`.

Change made:

```python
# Before (quadratic complexity)
systems = sum([_process_single_system(system) for system in systems], [])

# After (linear complexity)
systems = list(
    itertools.chain.from_iterable(
        _process_single_system(system) for system in systems
    )
)
```

This provides better performance for large lists while maintaining identical functionality. All 22 tests pass.
Co-authored-by: anyangml <[email protected]>
Addressed Comment: fixed linting issue RUF017 by avoiding quadratic list summation.

Issue Identified

The linter rule RUF017 flags quadratic list summation: flattening with `sum()` copies the accumulated list on every concatenation.

Solution Implemented

- Replaced `sum()` with `itertools.chain.from_iterable()` for efficient list flattening
- Added the `itertools` import to support the new implementation

Before (quadratic complexity):

```python
systems = sum([_process_single_system(system) for system in systems], [])
```

After (linear complexity):

```python
systems = list(
    itertools.chain.from_iterable(
        _process_single_system(system) for system in systems
    )
)
```

This change addresses the format/linting issue while maintaining all existing functionality and improving performance for scenarios with many HDF5 files.
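For intuition, a quick micro-benchmark sketch of the two flattening strategies (the chunk sizes are arbitrary and the numbers will vary by machine):

```python
import itertools
import timeit

chunks = [[0] * 10 for _ in range(5000)]

quadratic = timeit.timeit(lambda: sum(chunks, []), number=10)
linear = timeit.timeit(
    lambda: list(itertools.chain.from_iterable(chunks)), number=10
)
print(f"sum(): {quadratic:.3f}s  chain.from_iterable(): {linear:.3f}s")
```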