Changes from 20 commits (30 commits in total)
- 69aa46b: Add ase_interface with support for optimization and md (JunnHuo, Jul 10, 2025)
- deea169: Keep the from_structures func in base_calculator (JunnHuo, Jul 10, 2025)
- 37ed942: Add predict and tasks modules; update ASE interface and property pred… (JunnHuo, Jul 15, 2025)
- 1c75f1d: Add example usage (JunnHuo, Jul 15, 2025)
- 06c3c3e: WIP: save local changes (JunnHuo, Aug 29, 2025)
- ee55f16: modify err (JunnHuo, Aug 29, 2025)
- 0226096: Remove tests from tracking (JunnHuo, Aug 29, 2025)
- c32fabd: Save local changes before rebase (JunnHuo, Sep 1, 2025)
- 6023fe5: Sync missing files from upstream/develop (JunnHuo, Sep 1, 2025)
- 0d3354f: modify train config (JunnHuo, Sep 4, 2025)
- 9a2dc78: revise train config (2nd version) (JunnHuo, Sep 5, 2025)
- f8b5710: revise train config (3rd version) (JunnHuo, Sep 7, 2025)
- 24d6b70: Update training config (4th version) (JunnHuo, Sep 8, 2025)
- a34fef1: Update training config (5th version) (JunnHuo, Sep 8, 2025)
- 59614f5: Update training config (JunnHuo, Sep 8, 2025)
- 6eeafb6: small fix (JunnHuo, Sep 18, 2025)
- ab05190: small fix: correct structure generation (JunnHuo, Sep 18, 2025)
- 52f050a: save local changes before merge upstream (JunnHuo, Sep 28, 2025)
- d841f0d: merge upstream develop (JunnHuo, Sep 28, 2025)
- 77d164f: change configs and README (JunnHuo, Oct 15, 2025)
- e8ce0be: resolve config bug in predict.py (JunnHuo, Oct 16, 2025)
- 3ddeb0c: keep previous content (JunnHuo, Oct 17, 2025)
- a83c8c4: keep previous content (JunnHuo, Oct 17, 2025)
- e2ea426: keep previous content (JunnHuo, Oct 17, 2025)
- 578522a: keep previous content (JunnHuo, Oct 17, 2025)
- b556ff1: fix: correct commit message for ppmatSim README (JunnHuo, Oct 20, 2025)
- 623ffc0: revert (JunnHuo, Oct 24, 2025)
- 70bca7f: remove experiments directory for PR review (JunnHuo, Oct 24, 2025)
- d713a86: correct (JunnHuo, Oct 24, 2025)
- 743d006: correct (JunnHuo, Oct 24, 2025)
8 changes: 4 additions & 4 deletions README.md
Collaborator review comment: Please revert this change here.
@@ -31,10 +31,10 @@
🔥 **2025.07.01**: The **Suzhou Laboratory** has established a novel framework based on PaddleMaterials, combining an active learning workflow with conditional-diffusion-based structure generation, thereby achieving unprecedented expansion of two-dimensional material databases. For more information, please refer to [ML2DDB](./research/ML2DDB/README.md).

## 📑 Task
- [MLIP-Machine Learning Interatomic Potential](interatomic_potentials/README.md)
- [PP-Property Prediction](property_prediction/README.md)
- [SG-Structure Generation](structure_generation/README.md)
- [SE-Spectrum Elucidation](spectrum_elucidation/README.md)
- [MLIP-Machine Learning Interatomic Potential](experiments/interatomic_potentials/README.md)
- [PP-Property Prediction](experiments/property_prediction/README.md)
- [SG-Structure Generation](experiments/structure_generation/README.md)
- [SE-Spectrum Elucidation](experiments/spectrum_elucidation/README.md)

## 🔧 Installation

18 changes: 9 additions & 9 deletions docs/multi_device.md
Collaborator review comment: Please revert this part as well.
@@ -6,14 +6,14 @@ Paddle ecosystem relies on the contributions of developers and users. We warmly

| Task Type | Model Name | NVIDIA | KUNLUNXIN | HYGON | Tecorigin | MetaX |
|-----------|------------|------------|-----------|-------|-----------|-----------|
| MLIP(Machine Learning Interatomic Potential) | [CHGNet](../interatomic_potentials/configs/chgnet/README.md) | ✅ | | | | |
| MLIP(Machine Learning Interatomic Potential) | [MatterSim](../interatomic_potentials/configs/mattersim/README.md) | ✅ | | | | |
| PP(Property Prediction) | [MEGNet](../property_prediction/configs/megnet/README.md) | ✅ | | | | |
| PP(Property Prediction) | [DimeNet++](../property_prediction/configs/dimenet++/README.md) | ✅ | | | | |
| PP(Property Prediction) | [ComFormer](../property_prediction/configs/comformer/README.md) | ✅ | | | | |
| SG(Structure Generation) | [DiffCSP](../structure_generation/configs/diffcsp/README.md) | ✅ | | | | |
| SG(Structure Generation) | [MatterGen](../structure_generation/configs/mattergen/README.md) | ✅ | | | | |
| SE(Spectrum Elucidation) | [DiffNMR](../spectrum_elucidation/configs/diffnmr/README.md) | ✅ | | | | |
| MLIP(Machine Learning Interatomic Potential) | [CHGNet](../experiments/interatomic_potentials/configs/chgnet/README.md) | ✅ | | | | |
| MLIP(Machine Learning Interatomic Potential) | [MatterSim](../experiments/interatomic_potentials/configs/mattersim/README.md) | ✅ | | | | |
| PP(Property Prediction) | [MEGNet](../experiments/property_prediction/configs/megnet/README.md) | ✅ | | | | |
| PP(Property Prediction) | [DimeNet++](../experiments/property_prediction/configs/dimenet++/README.md) | ✅ | | | | |
| PP(Property Prediction) | [ComFormer](../experiments/property_prediction/configs/comformer/README.md) | ✅ | | | | |
| SG(Structure Generation) | [DiffCSP](../experiments/structure_generation/configs/diffcsp/README.md) | ✅ | | | | |
| SG(Structure Generation) | [MatterGen](../experiments/structure_generation/configs/mattergen/README.md) | ✅ | | | | |
| SE(Spectrum Elucidation) | [DiffNMR](../experiments/spectrum_elucidation/configs/diffnmr/README.md) | ✅ | | | | |


## 2. How to Contribute
@@ -46,4 +46,4 @@ e. Machine environment details used for validating model accuracy, including but
## 3. More Referenced Documents
* [PaddleUserGuide(ch)](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/guides/index_cn.html)
* [PaddleSupportedHardware(ch)](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/hardware_support/index_cn.html)
* [PaddleCustomDevice](https://github.com/PaddlePaddle/PaddleCustomDevice)
* [PaddleCustomDevice](https://github.com/PaddlePaddle/PaddleCustomDevice)
@@ -6,7 +6,7 @@ Machine-learning interatomic potentials (MLIP) bridge the gap between quantum-le

## 2.Models Matrix

| **Supported Functions** | **[CHGNet](./configs/chgnet/README.md)** | **[MatterSim](./configs/mattersim//README.md)** |
| **Supported Functions** | **[CHGNet](./configs/task_chgnet/README.md)** | **[MatterSim](./configs/task_mattersim//README.md)** |
| ----------------------------------- | ---------------------------------------- | ----------------------------------------------- |
| **Forward Prediction** | | |
|  Energy | ✅ | ✅ |
@@ -30,6 +30,6 @@ Machine-learning interatomic potentials (MLIP) bridge the gap between quantum-le
|  ASE | ✅ | ✅ |
| **Dataset** | | |
|  MPtrj | ✅ | 🚧 |
| **ML2DDB🌟** | ✅ | - |
| **ML2DDB🌟** | ✅ | - |

**Notice**: 🌟 indicates original research work published from the PaddleMaterials toolkit
55 changes: 55 additions & 0 deletions experiments/interatomic_potentials/configs/Dataset/alex_mp20.yaml
@@ -0,0 +1,55 @@
# =========================================================
# Base dataset parameters (shared by train/val/test)
# =========================================================
dataset_base_params: &dataset_base_params
  build_structure_cfg:
    format: cif_str
    primitive: True
    niggli: True
    canocial: False
    num_cpus: 10
  transforms:
    - __class_name__: LatticePolarDecomposition
      __init_params__: {}

# =========================================================
# Sampler configuration (reusable aliases)
# =========================================================
train_sampler: &train_sampler
  __class_name__: BatchSampler
  __init_params__:
    shuffle: True
    drop_last: False
    batch_size: 64  # 16 for 4 GPUs, total batch size = 16 * 4 = 64

val_sampler: &val_sampler
  __class_name__: BatchSampler
  __init_params__:
    shuffle: False
    drop_last: False
    batch_size: 32

# =========================================================
# Train / Validation / Test dataset configuration
# =========================================================
train:
  dataset:
    __class_name__: AlexMP20MatterGenDataset
    __init_params__:
      <<: *dataset_base_params
      path: "${Run.data_dir}/alex_mp_20/train.csv"
      cache_path: "${Run.data_dir}/alex_mp_20_chemical_system_cache/train"
  loader:
    num_workers: 0
    use_shared_memory: False
  sampler: *train_sampler

val:
  dataset:
    __class_name__: AlexMP20MatterGenDataset
    __init_params__:
      <<: *dataset_base_params
      path: "${Run.data_dir}/alex_mp_20/val.csv"
      cache_path: "${Run.data_dir}/alex_mp_20_chemical_system_cache/val"
  sampler: *val_sampler
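The `&dataset_base_params` anchor and the `<<:` merge key used above are standard YAML features (the `${...}` placeholders, by contrast, are resolved by the framework's own config interpolation, not by YAML itself). A minimal sketch of how a merge key expands, using PyYAML and hypothetical keys:

```python
import yaml  # PyYAML

# Hypothetical miniature of the config pattern above: `&base` defines an
# anchor, and `<<: *base` merges its mapping into each consuming section.
snippet = """
base: &base
  primitive: True
  niggli: True
train:
  <<: *base
  path: train.csv
"""

cfg = yaml.safe_load(snippet)
# The merge key copies the anchored fields, so shared parameters are
# written once and each section only adds or overrides what differs.
assert cfg["train"] == {"primitive": True, "niggli": True, "path": "train.csv"}
```

This is why each dataset section above only needs to supply its own `path` and `cache_path` on top of the shared base.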
@@ -0,0 +1,71 @@
# =========================================================
# Base dataset parameters (shared by train/val/test)
# =========================================================
dataset_base_params: &dataset_base_params
  energy_key: ${Experiment.energy_key}
  force_key: ${Experiment.force_key}
  stress_key: ${Experiment.stress_key}
  build_structure_cfg:
    format: ase_atoms
    primitive: False
    niggli: False
    num_cpus: 10
  build_graph_cfg: ${Graph_converter}
  filter_unvalid: False

# =========================================================
# Sampler configuration (reusable aliases)
# =========================================================
train_sampler: &train_sampler
  __class_name__: BatchSampler
  __init_params__:
    shuffle: True
    drop_last: True
    batch_size: 2

val_sampler: &val_sampler
  __class_name__: BatchSampler
  __init_params__:
    shuffle: False
    drop_last: False
    batch_size: 16

test_sampler: &test_sampler
  __class_name__: BatchSampler
  __init_params__:
    shuffle: False
    drop_last: False
    batch_size: 16

# =========================================================
# Train / Validation / Test dataset configuration
# =========================================================
train:
  dataset:
    __class_name__: HighLevelWaterDataset
    __init_params__:
      <<: *dataset_base_params
      path: "${Run.data_dir}/high_level_water/high_level_water.xyz"
    num_workers: 0
    use_shared_memory: False
  sampler: *train_sampler

val:
  dataset:
    __class_name__: HighLevelWaterDataset
    __init_params__:
      <<: *dataset_base_params
      path: "${Run.data_dir}/high_level_water/high_level_water.xyz"
    num_workers: 0
    use_shared_memory: False
  sampler: *val_sampler

test:
  dataset:
    __class_name__: HighLevelWaterDataset
    __init_params__:
      <<: *dataset_base_params
      path: "${Run.data_dir}/high_level_water/high_level_water.xyz"
    num_workers: 0
    use_shared_memory: False
  sampler: *test_sampler
48 changes: 48 additions & 0 deletions experiments/interatomic_potentials/configs/Dataset/jarvis.yaml
@@ -0,0 +1,48 @@
# =========================================================
# Dataset configuration
# =========================================================
dataset:
  __class_name__: JarvisDataset
  __init_params__:
    path: "${Run.data_dir}/jarvis"
    jarvis_data_name: ${Experiment.jarvis_data_name}
    property_names: ${Global.label_names}
    build_structure_cfg:
      format: jarvis
      num_cpus: 10
    build_graph_cfg: ${Graph_converter}
    cache_path: "${Run.data_dir}/jarvis"
  num_workers: 4
  use_shared_memory: False

# =========================================================
# Dataset splitting ratios
# =========================================================
split_dataset_ratio:
  train: 0.8
  val: 0.1
  test: 0.1

# =========================================================
# Data samplers for batching
# =========================================================
train_sampler:
  __class_name__: BatchSampler
  __init_params__:
    shuffle: True
    drop_last: False
    batch_size: 128

val_sampler:
  __class_name__: BatchSampler
  __init_params__:
    shuffle: False
    drop_last: False
    batch_size: 128

test_sampler:
  __class_name__: BatchSampler
  __init_params__:
    shuffle: False
    drop_last: False
    batch_size: 128
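A sketch of what the 0.8/0.1/0.1 `split_dataset_ratio` amounts to. This is a hypothetical helper for illustration; the toolkit's own splitting logic may differ in shuffling and rounding details:

```python
import random

def split_indices(n, ratios=(0.8, 0.1, 0.1), seed=42):
    """Shuffle indices once, then slice them by cumulative ratio."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # fixed seed keeps the split reproducible
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_indices(1000)
# For 1000 samples the 0.8/0.1/0.1 ratios give 800/100/100 indices,
# and every index lands in exactly one subset.
assert (len(train_idx), len(val_idx), len(test_idx)) == (800, 100, 100)
```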
61 changes: 61 additions & 0 deletions experiments/interatomic_potentials/configs/Dataset/mp20.yaml
@@ -0,0 +1,61 @@
# =========================================================
# Base dataset parameters (shared by train/val/test)
# =========================================================
dataset_base_params: &dataset_base_params
  build_structure_cfg:
    format: cif_str
    num_cpus: 10

# =========================================================
# Sampler configuration (reusable aliases)
# =========================================================
train_sampler: &train_sampler
  __class_name__: BatchSampler
  __init_params__:
    shuffle: True
    drop_last: False
    batch_size: 256

val_sampler: &val_sampler
  __class_name__: BatchSampler
  __init_params__:
    shuffle: False
    drop_last: False
    batch_size: 128

test_sampler: &test_sampler
  __class_name__: DistributedBatchSampler
  __init_params__:
    shuffle: False
    drop_last: False
    batch_size: 128

# =========================================================
# Train / Validation / Test dataset configuration
# =========================================================
train:
  dataset:
    __class_name__: MP20Dataset
    __init_params__:
      <<: *dataset_base_params
      path: "${Run.data_dir}/mp_20/train.csv"
  loader:
    num_workers: 0
    use_shared_memory: False
  sampler: *train_sampler

val:
  dataset:
    __class_name__: MP20Dataset
    __init_params__:
      <<: *dataset_base_params
      path: "${Run.data_dir}/mp_20/val.csv"
  sampler: *val_sampler

test:
  dataset:
    __class_name__: MP20Dataset
    __init_params__:
      <<: *dataset_base_params
      path: "${Run.data_dir}/mp_20/test.csv"
  sampler: *test_sampler
65 changes: 65 additions & 0 deletions experiments/interatomic_potentials/configs/Dataset/mp2018.yaml
@@ -0,0 +1,65 @@
# =========================================================
# Base dataset parameters (shared by train/val/test)
# =========================================================
dataset_base_params: &dataset_base_params
  property_names: ${Global.label_names}
  build_structure_cfg:
    format: cif_str
    num_cpus: 10
  build_graph_cfg: ${Graph_converter}

# =========================================================
# Sampler configuration (reusable aliases)
# =========================================================
train_sampler: &train_sampler
  __class_name__: BatchSampler
  __init_params__:
    shuffle: True
    drop_last: False
    batch_size: 16  # 16 for 4 GPUs, total batch size = 16 * 4 = 64

val_sampler: &val_sampler
  __class_name__: BatchSampler
  __init_params__:
    shuffle: False
    drop_last: False
    batch_size: 32

test_sampler: &test_sampler
  __class_name__: BatchSampler
  __init_params__:
    shuffle: False
    drop_last: False
    batch_size: 64

# =========================================================
# Train / Validation / Test dataset configuration
# =========================================================
train:
  dataset:
    __class_name__: MP2018Dataset
    __init_params__:
      <<: *dataset_base_params
      path: "${Run.data_dir}/mp2018_train_60k/mp.2018.6.1_train.json"
      cache_path: "${Run.data_dir}/mp2018_train_60k/mp.2018.6.1_train"
    num_workers: 4
    use_shared_memory: False
  sampler: *train_sampler

val:
  dataset:
    __class_name__: MP2018Dataset
    __init_params__:
      <<: *dataset_base_params
      path: "${Run.data_dir}/mp2018_train_60k/mp.2018.6.1_val.json"
      cache_path: "${Run.data_dir}/mp2018_train_60k/mp.2018.6.1_val"
  sampler: *val_sampler

test:
  dataset:
    __class_name__: MP2018Dataset
    __init_params__:
      <<: *dataset_base_params
      path: "${Run.data_dir}/mp2018_train_60k/mp.2018.6.1_test.json"
      cache_path: "${Run.data_dir}/mp2018_train_60k/mp.2018.6.1_test"
  sampler: *test_sampler
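Several sampler comments above use per-device arithmetic (16 per GPU on 4 GPUs gives an effective batch of 64). That bookkeeping, as a hypothetical helper for illustration:

```python
def effective_batch_size(per_device: int, num_devices: int, grad_accum: int = 1) -> int:
    # Global batch size when each of `num_devices` workers draws
    # `per_device` samples per step, optionally with gradient accumulation.
    return per_device * num_devices * grad_accum

# Matches the `batch_size: 16  # 16 for 4 GPUs` comment above.
assert effective_batch_size(16, 4) == 64
```

When adapting a config to a different GPU count, the per-device `batch_size` is the knob to adjust if the effective batch size should stay fixed.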