deploy: 341a16b
zulissimeta committed Apr 15, 2024
1 parent 18cc064 commit 8c3179d
Showing 29 changed files with 2,430 additions and 2,214 deletions.
667 changes: 330 additions & 337 deletions _downloads/5fdddbed2260616231dbf7b0d94bb665/train.txt

Large diffs are not rendered by default.

18 changes: 9 additions & 9 deletions _downloads/819e10305ddd6839cd7da05935b17060/mass-inference.txt
@@ -1,17 +1,17 @@
-2024-04-15 17:18:22 (INFO): Project root: /home/runner/work/ocp/ocp
+2024-04-15 21:17:17 (INFO): Project root: /home/runner/work/ocp/ocp
 /opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/torch/cuda/amp/grad_scaler.py:126: UserWarning: torch.cuda.amp.GradScaler is enabled, but CUDA is not available. Disabling.
   warnings.warn(
-2024-04-15 17:18:24 (WARNING): Detected old config, converting to new format. Consider updating to avoid potential incompatibilities.
-2024-04-15 17:18:24 (INFO): amp: true
+2024-04-15 21:17:19 (WARNING): Detected old config, converting to new format. Consider updating to avoid potential incompatibilities.
+2024-04-15 21:17:19 (INFO): amp: true
 cmd:
-  checkpoint_dir: ./checkpoints/2024-04-15-17-18-56
-  commit: d5c3826
+  checkpoint_dir: ./checkpoints/2024-04-15-21-17-52
+  commit: 341a16b
   identifier: ''
-  logs_dir: ./logs/tensorboard/2024-04-15-17-18-56
+  logs_dir: ./logs/tensorboard/2024-04-15-21-17-52
   print_every: 10
-  results_dir: ./results/2024-04-15-17-18-56
+  results_dir: ./results/2024-04-15-21-17-52
   seed: 0
-  timestamp_id: 2024-04-15-17-18-56
+  timestamp_id: 2024-04-15-21-17-52
 dataset:
   a2g_args:
     r_energy: false
@@ -122,7 +122,7 @@ test_dataset:
trainer: ocp
val_dataset: null

-2024-04-15 17:18:24 (INFO): Loading dataset: ase_db
+2024-04-15 21:17:19 (INFO): Loading dataset: ase_db
Traceback (most recent call last):
File "/home/runner/work/ocp/ocp/main.py", line 89, in <module>
Runner()(config)
2 changes: 1 addition & 1 deletion _sources/core/fine-tuning/fine-tuning-oxides.md
@@ -209,7 +209,7 @@ yml = generate_yml_config(checkpoint_path, 'config.yml',
 update={'gpus': 1,
 'task.dataset': 'ase_db',
 'optim.eval_every': 1,
-'optim.max_epochs': 50,
+'optim.max_epochs': 10,
 'optim.batch_size': 4,
 'logger':'tensorboard', # don't use wandb!
 # Train data
4 changes: 1 addition & 3 deletions _sources/core/inference.md
@@ -23,7 +23,7 @@ Boes, J. R., Groenenboom, M. C., Keith, J. A., & Kitchin, J. R. (2016). Neural n
You can retrieve the dataset below. In this notebook we learn how to do "mass inference" without an ASE calculator. You do this by creating a config.yml file and running the `main.py` command-line utility.

```{code-cell} ipython3
-! [ ! -f data.db ] && wget https://figshare.com/ndownloader/files/11948267 -O data.db
+! wget https://figshare.com/ndownloader/files/11948267 -O data.db
```
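The new version of the cell above downloads `data.db` unconditionally, while the earlier version guarded the `wget` behind a file-existence test. A minimal Python sketch of that guard (`fetch_once` is a hypothetical helper for illustration, not part of ocpmodels):

```python
# Sketch of an idempotent download, mirroring the removed shell guard
# `[ ! -f data.db ] && wget ...`: skip the fetch when the file exists.
import os
import urllib.request

def fetch_once(url: str, dest: str) -> bool:
    """Download `url` to `dest` unless `dest` already exists.

    Returns True only when a download was actually performed.
    """
    if os.path.exists(dest):
        return False  # nothing to do; keep the existing file
    urllib.request.urlretrieve(url, dest)
    return True
```

The guard matters in notebooks that re-run often: it avoids re-downloading a multi-megabyte database on every execution.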


@@ -53,7 +53,6 @@ You have to choose a checkpoint to start with. The newer checkpoints may require
```{code-cell} ipython3
from ocpmodels.models.model_registry import available_pretrained_models
print(available_pretrained_models)
```

```{code-cell} ipython3
@@ -67,7 +66,6 @@ checkpoint_path
We have to update our configuration yml file with the dataset. It is necessary to specify the train and test set for some reason.

```{code-cell} ipython3
from ocpmodels.common.tutorial_utils import generate_yml_config
yml = generate_yml_config(checkpoint_path, 'config.yml',
delete=['cmd', 'logger', 'task', 'model_attributes',
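The `update` argument to `generate_yml_config` above addresses nested config entries with dotted keys such as `'optim.eval_every'`. As a rough sketch of that dotted-key expansion (an assumption about the behavior for illustration; the actual implementation lives in `ocpmodels.common.tutorial_utils`):

```python
# Illustrative only: expand dotted keys ('optim.eval_every') into nested
# config sections, the way the `update` dict appears to address them.
def apply_dotted_updates(config: dict, updates: dict) -> dict:
    for dotted_key, value in updates.items():
        node = config
        *parents, leaf = dotted_key.split(".")
        for part in parents:
            node = node.setdefault(part, {})  # create nested sections on demand
        node[leaf] = value
    return config

cfg = apply_dotted_updates({}, {"gpus": 1, "optim.eval_every": 1, "optim.batch_size": 4})
# cfg -> {'gpus': 1, 'optim': {'eval_every': 1, 'batch_size': 4}}
```

This flat-key style keeps the call site compact while still targeting deeply nested YAML settings.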
2 changes: 1 addition & 1 deletion _sources/tutorials/advanced/fine-tuning-in-python.md
@@ -81,7 +81,7 @@ yml = generate_yml_config(checkpoint_path, 'config.yml',
 update={'gpus': 1,
 'task.dataset': 'ase_db',
 'optim.eval_every': 1,
-'optim.max_epochs': 5,
+'optim.max_epochs': 10,
 'optim.batch_size': 4,
 'logger': 'tensorboard', # don't use wandb unless you already are logged in
 # Train data
6 changes: 3 additions & 3 deletions core/fine-tuning/fine-tuning-oxides.html
@@ -769,7 +769,7 @@ <h1>Fine tuning a model<a class="headerlink" href="#fine-tuning-a-model" title="
   warnings.warn(
 </pre></div>
 </div>
-<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>Elapsed time 68.6 seconds.
+<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>Elapsed time 67.7 seconds.
 </pre></div>
 </div>
<img alt="../../_images/92bd7f94dd548c8cfc2744eb5890cd23fada1ff98e8dc907657e2eb109af0402.png" src="../../_images/92bd7f94dd548c8cfc2744eb5890cd23fada1ff98e8dc907657e2eb109af0402.png" />
@@ -921,7 +921,7 @@ <h2>Setting up the configuration yaml file<a class="headerlink" href="#setting-u
 <span class="n">update</span><span class="o">=</span><span class="p">{</span><span class="s1">&#39;gpus&#39;</span><span class="p">:</span> <span class="mi">1</span><span class="p">,</span>
 <span class="s1">&#39;task.dataset&#39;</span><span class="p">:</span> <span class="s1">&#39;ase_db&#39;</span><span class="p">,</span>
 <span class="s1">&#39;optim.eval_every&#39;</span><span class="p">:</span> <span class="mi">1</span><span class="p">,</span>
-<span class="s1">&#39;optim.max_epochs&#39;</span><span class="p">:</span> <span class="mi">50</span><span class="p">,</span>
+<span class="s1">&#39;optim.max_epochs&#39;</span><span class="p">:</span> <span class="mi">10</span><span class="p">,</span>
 <span class="s1">&#39;optim.batch_size&#39;</span><span class="p">:</span> <span class="mi">4</span><span class="p">,</span>
 <span class="s1">&#39;logger&#39;</span><span class="p">:</span><span class="s1">&#39;tensorboard&#39;</span><span class="p">,</span> <span class="c1"># don&#39;t use wandb!</span>
 <span class="c1"># Train data</span>
@@ -1075,7 +1075,7 @@ <h2>Setting up the configuration yaml file<a class="headerlink" href="#setting-u
 load_balancing: atoms
 loss_energy: mae
 lr_initial: 0.0005
-max_epochs: 50
+max_epochs: 10
 mode: min
 num_workers: 2
 optimizer: AdamW
24 changes: 12 additions & 12 deletions core/gotchas.html
@@ -929,7 +929,7 @@ <h1>I get wildly different energies from the different models<a class="headerlin
   warnings.warn(
 </pre></div>
 </div>
-<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>1.6805717945098877
+<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>1.6857712268829346
 </pre></div>
 </div>
</div>
@@ -1433,7 +1433,7 @@ <h1>To tag or not?<a class="headerlink" href="#to-tag-or-not" title="Link to thi
   warnings.warn(
 </pre></div>
 </div>
-<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>-0.4297374486923218
+<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>-0.42973706126213074
 </pre></div>
 </div>
</div>
@@ -1483,17 +1483,17 @@ <h1>Stochastic simulation results<a class="headerlink" href="#stochastic-simulat
warnings.warn(
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>1.2139867544174194 1.6526724649070138e-06
1.213986873626709
1.2139854431152344
1.2139849662780762
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>1.2139865159988403 1.6765759937047162e-06
1.2139840126037598
1.2139897346496582
1.213986873626709
1.213989019393921
1.2139887809753418
1.213986873626709
1.2139856815338135
1.2139849662780762
1.2139854431152344
1.2139873504638672
1.2139854431152344
1.2139885425567627
1.2139840126037598
1.2139875888824463
</pre></div>
</div>
</div>
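The repeated energies in the diff above agree to roughly 1e-6 eV across runs. A quick way to quantify that kind of run-to-run scatter (the values below are illustrative stand-ins, not the model's actual output):

```python
# Quantify run-to-run scatter of repeated predictions; the values are
# made-up stand-ins for the ~1.21398... energies printed above.
import numpy as np

energies = np.array([1.2139840, 1.2139897, 1.2139869, 1.2139890,
                     1.2139888, 1.2139869, 1.2139857, 1.2139874,
                     1.2139854, 1.2139885])
print(energies.mean(), energies.std())  # spread here is ~1e-6, like above
```

Scatter at this scale is consistent with nondeterministic float32 reduction order rather than any physically meaningful difference.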
@@ -1536,7 +1536,7 @@ <h1>The forces don’t sum to zero<a class="headerlink" href="#the-forces-don-t-
   warnings.warn(
 </pre></div>
 </div>
-<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>array([ 0.00848247, 0.01409575, -0.05882883], dtype=float32)
+<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>array([ 0.00847937, 0.01409653, -0.05882907], dtype=float32)
 </pre></div>
 </div>
</div>
@@ -1549,7 +1549,7 @@ <h1>The forces don’t sum to zero<a class="headerlink" href="#the-forces-don-t-
 </div>
 </div>
 <div class="cell_output docutils container">
-<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>array([1.10827386e-07, 4.62168828e-08, 2.38418579e-07], dtype=float32)
+<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>array([ 7.7532604e-08, -7.8813173e-08, 0.0000000e+00], dtype=float32)
 </pre></div>
 </div>
 </div>
</div>
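The cell above sums the per-atom forces and inspects the residual net force, which should vanish up to float32 rounding. Sketched in NumPy with made-up forces (not the model's predictions):

```python
# Sketch of the "forces don't sum to zero" check: sum predicted per-atom
# forces over atoms and look at the residual net force on the system.
import numpy as np

forces = np.array([[ 0.010, -0.020,  0.030],
                   [-0.004,  0.015, -0.031],
                   [-0.006,  0.005,  0.001]], dtype=np.float32)
net_force = forces.sum(axis=0)  # ideally ~0, up to float32 rounding
print(net_force)
```

Residuals of order 1e-7, as in the output above, indicate accumulated floating-point error rather than a physics problem in the model.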

0 comments on commit 8c3179d
