
Generation 4 QLKNN: Integrated Modelling testing

@Karel-van-de-Plassche released this 21 Dec 10:45 · 1 commit to master since this release

This update improves both the filtering of the dataset and the training process itself. The networks trained with this pipeline were found to be of sufficient quality to be included in integrated modelling frameworks; gen4 QLKNN-10D is currently being tested in JETTO and RAPTOR.

Major changes:

  • Increased the filter version number to 10 in 70c5c29, after fixing a bug in the filtering. Points should now only be discarded if the septot criterion is violated for the heat fluxes: particle fluxes can be negative and can therefore cancel each other out, so a separate particle flux larger than the total flux is physical and should not be discarded (see the filter sketch after this list).
  • Added models to work with the new 'rotdiv' networks. These are networks trained on an extra dataset that includes rotation; they can then be combined with the original (9D) networks to form the canonical QLKNN-10D (see the combination sketch after this list).
  • Added a new cost function that penalizes 'popback', i.e. unstable (non-zero) flux predictions by the network in the stable region; a sketch of such a loss term follows this list. For more information, see the EPS 2018 and TTF 2018 posters.
  • Added support for loading and running Keras models.
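
The corrected septot check might look like the sketch below. It assumes the data lives in a pandas DataFrame with one column per flux; the column names (`efeITG_GB`, `efe_GB`) and the `max_ratio` threshold are illustrative, not the pipeline's actual values.

```python
import pandas as pd

def septot_filter(data, flux_pairs, max_ratio=1.5):
    """Keep only points where each separate heat flux stays below the total.

    Applied to heat fluxes only: particle fluxes can be negative and cancel
    each other out, so a separate particle flux exceeding the total flux is
    physical and should not be discarded.
    """
    keep = pd.Series(True, index=data.index)
    for sep, tot in flux_pairs:
        keep &= data[sep].abs() <= max_ratio * data[tot].abs()
    return data[keep]

# Illustrative usage; column names and threshold are hypothetical
df = pd.DataFrame({'efeITG_GB': [0.5, 9.0], 'efe_GB': [1.0, 2.0]})
print(septot_filter(df, [('efeITG_GB', 'efe_GB')]))  # second point is dropped
```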
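
Combining the two network families could look roughly as follows, under the assumption that a rotdiv network predicts the ratio of the flux with rotation to the flux without it; the function and variable names here are hypothetical, not the repository's API.

```python
import numpy as np

def qlknn_10d_predict(x9, x_rot, net_9d, net_rotdiv):
    """Combine a 9D network with a 'rotdiv' network into a 10D prediction.

    Assumption: the rotdiv network outputs the ratio
    flux(with rotation) / flux(without rotation), so multiplying it onto
    the 9D flux reintroduces the rotation dependence.
    """
    flux_no_rot = net_9d.predict(x9)           # flux at zero rotation
    x10 = np.concatenate([x9, x_rot], axis=1)  # append the rotation input
    ratio = net_rotdiv.predict(x10)            # rotation correction factor
    return flux_no_rot * ratio
```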
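
A loss term of this kind might be written as in the minimal sketch below; the stability weight `lam` and the `offset` are hypothetical knobs, and the actual cost function in this release may differ (see the posters referenced above).

```python
import tensorflow as tf

def loss_with_popback_penalty(y_true, y_pred, lam=0.1, offset=0.0):
    """Squared error plus a penalty for predicted flux on stable points.

    On stable points (true flux == 0) any predicted flux above `offset`
    is penalized, pushing the network towards a clean zero-flux region.
    """
    stable = tf.cast(tf.equal(y_true, 0.0), y_pred.dtype)
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    popback = tf.reduce_mean(stable * tf.nn.relu(y_pred - offset))
    return mse + lam * popback
```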

Misc:

  • Pandas was causing slowdowns in training; replaced it with pure NumPy where needed (see the NumPy sketch after this list).
  • Optimized RAM usage during training; further optimization is still possible.
  • Simplified the NNDB structure, making querying easier.
  • Split dataset-specific and general dataset handling into separate scripts.
  • Hypercube folding and conversion to pandas can now be done out-of-core (OOC); filtering cannot yet be done out-of-core.
  • Added multi-epoch TensorFlow performance tracing (see the tracing sketch after this list).
  • Added scripts for simple hyperparameter scans, using plain Python instead of the Luigi framework + NNDB (a minimal scan sketch follows the list).
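
The kind of swap involved in the pandas-to-NumPy change, sketched on a hypothetical normalization step: `DataFrame.values` hands back the underlying array, so the hot path avoids pandas' index-alignment overhead.

```python
import pandas as pd

df = pd.DataFrame({'efe_GB': [0.0, 1.2, 3.4], 'efi_GB': [0.0, 0.8, 2.1]})

# Pandas version: convenient, but index alignment adds overhead in hot loops
scaled_pd = (df - df.mean()) / df.std()

# NumPy version: grab the raw array once and stay in NumPy during training
arr = df.values
scaled_np = (arr - arr.mean(axis=0)) / arr.std(axis=0, ddof=1)  # ddof=1 matches pandas
```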
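
Multi-epoch tracing with the TF 1.x graph API current at the time of this release could be set up roughly as follows; the trivial graph stands in for the actual training op, and the file naming is illustrative.

```python
import tensorflow as tf
from tensorflow.python.client import timeline

# Trivial stand-in graph; in the real pipeline this would be the training op
x = tf.Variable(1.0)
train_op = x.assign(x * 0.9)

run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(3):
        sess.run(train_op, options=run_options, run_metadata=run_metadata)
        # Dump one Chrome-trace file per epoch (inspect in chrome://tracing)
        trace = timeline.Timeline(run_metadata.step_stats)
        with open('timeline_epoch_{}.json'.format(epoch), 'w') as f:
            f.write(trace.generate_chrome_trace_format())
```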
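
A plain-Python grid scan in this spirit, with hypothetical hyperparameter names and the training call stubbed out:

```python
import itertools

# Hypothetical scan grid; the real scripts and parameter names may differ
grid = {
    'hidden_neurons': [32, 64, 128],
    'learning_rate': [1e-3, 1e-4],
    'cost_stable_positive_scale': [0.0, 0.1],
}

for values in itertools.product(*grid.values()):
    settings = dict(zip(grid, values))
    print('Training with', settings)
    # train_network(settings)  # call into the pipeline's training entry point
```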