To ensure all submodules are loaded, run:
$ git submodule init
$ git submodule update
These steps are designed to be performed by the mtiller/book-builder
image.
To run this on an M1 Mac, run:
$ docker run -it --platform=linux/amd64 -v `pwd`:/opt/MBE/ModelicaBook mtiller/book-builder
or
$ docker run -it --platform=linux/amd64 -v `pwd`:/opt/MBE/ModelicaBook mtiller/flat-book-builder
...if you get lots of warnings about long file names.
Dependencies: text/specs.py, text/spec-hash, and Python
Image: mtiller/book-builder, or python:2.7.12 + pip install jinja2
Artifacts: text/results/Makefile, text/results/json/*.json, text/results/*.mos, and text/plots/*.py
Job: make specs
In this step, we collect information about the different "cases" we need to
present in the book. The actual cases are outlined in text/specs.py. This is
a largely declarative listing of the use cases.

Execution of this script produces the following files:

- The text/results/Makefile, which defines how to build all simulations and their associated results.
- A JSON-encoded representation of each case in text/results/json/<CaseId>-case.json.
- A script to build and simulate each individual case in text/results/<CaseId>.mos.
- A Python script to generate the plots for each case in text/plots/<CaseId>.py.
NB: The hash of the text/specs.py file is stored in the file text/spec-hash
and is used in the Makefile to determine whether this step can be skipped
because the generated files would be identical to those of a previous run.
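Conceptually, the skip check works like the sketch below. This is only an illustration of the mechanism, not the actual Makefile rule; the function names are hypothetical, while the file paths match the ones described above.

```python
# Illustrative sketch of the skip-if-unchanged check (not the actual
# Makefile rule; function names are hypothetical).
import hashlib
import os

def file_hash(path):
    """Return the SHA-1 hex digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()

def specs_changed(spec_file="text/specs.py", hash_file="text/spec-hash"):
    """True if spec_file's hash differs from the stored hash (or none is stored)."""
    current = file_hash(spec_file)
    if os.path.exists(hash_file):
        with open(hash_file) as f:
            if f.read().strip() == current:
                return False  # unchanged: regenerated files would be identical
    # Changed (or first run): record the new hash so the next run can skip.
    with open(hash_file, "w") as f:
        f.write(current)
    return True
```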
Dependencies: Artifacts from running text/specs.py, text/result-hash, and omc
Image: mtiller/book-builder (works on an M1 if you run it over and over) or perhaps openmodelica/openmodelica:v1.17.0-minimal (but not on an M1)
Artifacts: text/results/{executables,*_info.json,*_init.xml,*_res.mat}
Job: make results
This step uses the OpenModelica compiler to build the following files for each case defined in step 1:

- An executable for each case in text/results/<CaseId>.
- An "info" file generated by OpenModelica about each case in text/results/<CaseId>_info.json.
- An initialization file in text/results/<CaseId>_init.xml.
- A simulation result in text/results/<CaseId>_res.mat.

The executables and init files are stored in exes.tar.gz (which is also used
by the simulation server API).
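The archiving could look roughly like the following sketch. The real job may simply invoke tar; the helper name and the extension-based filter for picking out executables are assumptions made for illustration.

```python
# Hypothetical sketch of packing the executables and init files into
# exes.tar.gz (the real job may invoke tar directly; the extensionless-file
# heuristic for finding executables is an assumption).
import glob
import os
import tarfile

def pack_exes(results_dir="text/results", archive="text/results/exes.tar.gz"):
    """Bundle case executables and *_init.xml files for the simulation server."""
    with tarfile.open(archive, "w:gz") as tar:
        for path in sorted(glob.glob(os.path.join(results_dir, "*_init.xml"))):
            tar.add(path, arcname=os.path.basename(path))
        for path in sorted(glob.glob(os.path.join(results_dir, "*"))):
            # Treat extensionless regular files as the compiled case executables.
            if os.path.isfile(path) and "." not in os.path.basename(path):
                tar.add(path, arcname=os.path.basename(path))
    return archive
```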
NB: The hash of the ModelicaByExample directory, combined with the hash of
the text/specs.py file, is stored in the file text/results-hash and is used
in the text/Makefile to determine whether this step can be skipped because
the generated files would be identical to those of a previous run.
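Hashing a whole directory tree and combining it with the specs hash might be done along these lines. This is an illustrative sketch only, not the actual implementation used by text/Makefile; the function names are hypothetical.

```python
# Illustrative sketch of combining a directory-tree hash with the specs.py
# hash (not the actual text/Makefile implementation; names are hypothetical).
import hashlib
import os

def tree_hash(root):
    """Hash every file under root (relative paths and contents) into one digest."""
    h = hashlib.sha1()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()  # make traversal order deterministic
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            h.update(os.path.relpath(path, root).encode("utf-8"))
            with open(path, "rb") as f:
                h.update(f.read())
    return h.hexdigest()

def results_hash(model_dir, spec_file):
    """Combine the model library hash with the specs file hash."""
    h = hashlib.sha1()
    h.update(tree_hash(model_dir).encode("utf-8"))
    with open(spec_file, "rb") as f:
        h.update(hashlib.sha1(f.read()).hexdigest().encode("utf-8"))
    return h.hexdigest()
```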
Dependencies: text/results/*.mos and text/plots/*.py (guess)
Image: mtiller/book-builder, or sphinxdoc/sphinx + pip install matplotlib
Artifacts: text/build/json
Job: make json, or just sphinx-build -b json -d build/doctrees -q source build/json

This generates the JSON output for the book, including the HTML embedded in the JSON. These files are required for the next step, which translates the JSON data into the book site.
Dependencies: text/results/*.mos and text/plots/*.py (guess)
Image: mtiller/book-builder, or sphinxdoc/sphinx + pip install matplotlib
Artifacts: text/build/latex
Job: make latex (?) or just sphinx-build -b latex -d build/doctrees -q source build/latex
I'm using Now to do the site deployment. It's much simpler than all that
mucking about with AWS S3, IAM, permissions, keys, etc.
It can also be published as a simple Docker image. All that is required is to generate the files from Next and then wrap them in an NGINX container.
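Such an image could be built from a Dockerfile along these lines. This is a hypothetical sketch; the `out/` directory is Next's default static-export location, which is an assumption about this project's configuration.

```Dockerfile
# Hypothetical sketch: serve the statically exported Next site with NGINX.
# "out/" is Next's default static-export directory (an assumption here).
FROM nginx:alpine
COPY out/ /usr/share/nginx/html/
```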
Dependencies: text/results/exes.tar.gz
Image: node
and docker
Artifacts: Docker image
Job: (see .gitlab-ci.yaml)