Oasis Release v2.4.3
Docker Images (Platform)
- coreoasis/api_server:2.4.3
- coreoasis/model_worker:2.4.3
- coreoasis/model_worker:2.4.3-debian
- coreoasis/piwind_worker:2.4.3
Docker Images (User Interface)
Components
Changelogs
OasisPlatform Changelog - 2.4.3
- #1189 - Add internal worker to Helm charts
- #1193 - Update base deb image to p3.12
- #781 - Add kubernetes CI/CD testing
- #1181 - Data files and analyses settings issues in 2.4.x release
- #1174 - Missing output when V2 input gen fails
- #1201 - Update name in workflow so Job type is clear in GitHub UI
- #969 - Autoscale V1 model deployments
OasisLMF Changelog - 2.4.3
- #1667 - add keys in error for loc_id not modelled
- #1669 - Pytools Optimisations
- #1672 - eltpy sum loss and sum loss square need to be float64 to avoid too much precision error
- #1613 - Support output of results grouped as Property Damage (PD)
- #1671 - Vulnerability blending optimisation
- #1685 - Fixed health check exception on api connect
Release Notes
OasisPlatform Notes
Fix needed for debian image build - (PR #1193)
- Update base deb image to p3.12 (same as default model_worker image)
CI testing for Minikube - (PR #1194)
- Added Helm deployment testing
- Added an Oasis api run test for the Helm charts
Fix User-data file loading - (PR #1198)
- Fix so that the user-data dir is correctly loaded with data files when any are attached to an analysis
Store failed output from failed V2 runs - (PR #1199)
- The partial output will be stored in either input_files or output_files when the sub-tasks pre-analysis-hook, write-input-files or generate-losses-output raise an error
- Fixed an issue where subtask_retry_log could be called without valid references to a failed sub-task
Autoscaling v1 - (PR #1183)
- Adds functionality to the autoscaler to work with v1 models
- Adds validation for scaling on v1
OasisLMF Notes
Fix error when no peril is modelled from peril covered - (PR #1667)
When checking that all loc_id were processed after the lookup, an error was raised when an exposure only had perils covered that were not modelled.
This change adds the loc_id to the keys error file with the message:
perils_covered + " have no perils modelled"
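A minimal sketch of the behaviour (illustrative only: the peril codes and the modelled_perils set are hypothetical, and the real lookup writes records to the keys errors file rather than returning a dict):

```python
# Illustrative only (not the lookup code): locations whose covered perils are all
# unmodelled are reported in the keys errors output with an explanatory message.
modelled_perils = {"WTC", "WSS"}          # perils handled by the model (hypothetical)

def keys_error_record(loc_id, perils_covered):
    """Return an error record for a location with no modelled peril, else None."""
    if not set(perils_covered) & modelled_perils:
        return {"loc_id": loc_id,
                "message": ";".join(perils_covered) + " have no perils modelled"}
    return None

print(keys_error_record(12, ["QEQ"]))     # -> error record, peril not modelled
print(keys_error_record(13, ["WTC"]))     # -> None, location is modelled
```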
feature/pytools-optimisation - (PR #1668)
Optimisations to pytools functions to improve runtime or memory.
Increase eltpy precision to avoid too much rounding error in standard deviation - (PR #1672)
Standard deviation is calculated using the difference between two squared sums. This exacerbates the relative error, so more precision is needed on intermediate values to keep the rounding error within a more predictable range.
For example, in our PiWind example one of the absolute errors was 369 monetary units (12786.774414 vs 13155.693359).
May impact eltpy standard deviation results.
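A minimal numpy sketch of the cancellation issue (illustrative only, not the eltpy code; the loss values are made up): with float32 accumulators the variance derived from the two sums loses most of its significant digits, while float64 stays close to the reference.

```python
# Sketch of why the two loss sums need float64 accumulators.
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical period losses with a mean that is large relative to their spread,
# which is where the cancellation in sum(x^2) - sum(x)^2 / n is worst.
losses = rng.normal(loc=1.0e6, scale=500.0, size=100_000)

def std_from_sums(values, dtype):
    """Sample std dev from sum(x^2) and sum(x), accumulated in the given dtype."""
    x = values.astype(dtype)
    n = x.size
    sum_loss = x.sum(dtype=dtype)
    sum_loss_sq = (x * x).sum(dtype=dtype)
    variance = (sum_loss_sq - sum_loss * sum_loss / n) / (n - 1)
    return float(np.sqrt(max(variance, 0.0)))   # guard against negative round-off

print("float32 accumulators:", std_from_sums(losses, np.float32))
print("float64 accumulators:", std_from_sums(losses, np.float64))
print("reference (np.std)  :", losses.std(ddof=1))
```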
Support group by is_property_damage - (PR #1673)
Adds the notion of a calculated summary column to allow users to group summary ids by is_property_damage.
is_property_damage is supported in:
- model run: add is_property_damage to the summary description of the analysis settings file, e.g. "oed_fields": ["is_property_damage"] (see the sketch after this list)
- exposure run: use the CLI parameter --extra-summary-cols is_property_damage
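A minimal sketch of the model-run configuration (illustrative only: the enclosing "gul_summaries", "id" and "eltcalc" keys follow the usual analysis settings layout but should be checked against your model; only "oed_fields": ["is_property_damage"] is taken from this release note):

```python
# Illustrative fragment of an analysis settings summary block grouped by
# is_property_damage. Keys other than "oed_fields" are assumed/typical.
import json

summary_fragment = {
    "gul_summaries": [
        {
            "id": 1,
            "oed_fields": ["is_property_damage"],  # group summary ids by the PD flag
            "eltcalc": True,
        }
    ]
}
print(json.dumps(summary_fragment, indent=2))
```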
reorganize gulmc - (PR #1676)
Reorganizes gulmc to make the code simpler to understand and to improve the order of each step of the calculation. Several issues found in gulmc are also being fixed; it now:
- takes the intensity adjustment into account before the effective damageability cdf is calculated
- applies the intensity adjustment on aggregate vulnerability
- uses equi-weight as the default for an aggregate vulnerability if no vuln_id is present in the weight file for that aggregate (see the sketch after this section)
- fixes an issue with aggregate vulnerability sampled loss when some vuln ids have no probability of loss
- fixes the gulmc hazard cdf when both hazard and vulnerability correlation are present
- improves the performance of get_corr_rval using a precomputed factor
This change improves the performance of gulmc, in particular for correlated runs:
- it cuts ~25% of total time on PiWind 10K locations
- gulmc with correlation on is almost 2x faster (54s -> 29s)
- gulmc with correlation off is around 20% faster (33s -> 27s)
This PR has an impact on losses for several reasons:
- the aggregate vulnerability fix will reduce the loss of an aggregate vulnerability when one of its sub vulnerabilities has no damage
- due to the issue addressed by the gulmc hazard cdf fix, the hazard cdf was previously equal to the vuln cdf when both had correlation, impacting losses
- the intensity adjustment is now applied to the effective damageability cdf, impacting the mean loss and all type 1 losses in ord_output
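A minimal sketch of the equi-weight fallback mentioned above (illustrative only, not the gulmc code; the function name and the shape of weight_lookup are hypothetical):

```python
# Illustrative only: default to equal weights for an aggregate vulnerability
# when none of its vulnerability ids appear in the weight file.
import numpy as np

def aggregate_weights(vuln_ids, weight_lookup):
    """Return normalised weights for the vulnerability ids of one aggregate."""
    weights = np.array([weight_lookup.get(v, 0.0) for v in vuln_ids], dtype=np.float64)
    total = weights.sum()
    if total == 0.0:
        # No vuln_id of this aggregate is in the weight file: fall back to equi-weight.
        return np.full(len(vuln_ids), 1.0 / len(vuln_ids))
    return weights / total

print(aggregate_weights([101, 102, 103], {}))                      # -> equal weights
print(aggregate_weights([101, 102, 103], {101: 2.0, 103: 1.0}))    # -> normalised weights
```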
Fixed health check exception on API connect - (PR #1685)
Minor bug fix for the API client: only prompt for a password and re-authenticate on HTTP 401.
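A minimal sketch of the retry-on-401 behaviour (illustrative only, using the requests library; the get_new_token helper and URL handling are hypothetical, not the OasisLMF API client interface):

```python
# Illustrative sketch of "only re-authenticate on HTTP 401" (not the client code).
import requests

def get_with_reauth(session, url, get_new_token):
    """Issue a GET; refresh credentials only when the server answers 401."""
    response = session.get(url)
    if response.status_code == 401:
        # Token expired or invalid: re-authenticate once, then retry the request.
        session.headers["Authorization"] = f"Bearer {get_new_token()}"
        response = session.get(url)
    # Any other failure (connection error, 5xx, ...) is raised without re-prompting.
    response.raise_for_status()
    return response
```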