Epi Scenario 2: Improving Forecasts Through Model Updates #72

Open · djinnome opened this issue Mar 25, 2024 · 4 comments

Scenario 2: Improving Forecasts Through Model Updates

Estimated % of time: Baseline 50%; Workbench 40%

It is the end of 2022, and you are supporting a decision maker who is preparing for a winter Covid wave. The winter Covid wave caused by the original Omicron variant just a year earlier (end of 2021 and early 2022) was, at the US national level, the largest of the pandemic so far. Fearing another similar winter wave, the decision maker asks you to do a retrospective analysis of the prior winter. In particular, they want you to develop the most accurate model you can of the original Omicron wave, explore various interventions in the model, and assess their effects. For your retrospective analysis, consider the time period of December 1st, 2021, to March 1st, 2022, with the first month (December 1st – 31st, 2021) as the training period and the remaining time as the test period.

 

Starting Model: Begin with the following SIRHD model structure (Figure 1) and set of differential equations. For workbench modelers, a version of this may already exist in the workbench; if not, create it. For baseline modelers, see accompanying code in supplementary materials. The general form/structure of the model is below.

 

djinnome commented Mar 25, 2024

Note: Although many compartmental models include an Exposed compartment, we are omitting it from this scenario for simplicity. $p_{\text{state}_1 \rightarrow \text{state}_2}$ represents the probability of moving between the indicated states. $r_{\text{state}_1 \rightarrow \text{state}_2}$ represents the rate at which a process occurs (i.e., 1/average time to move between states, e.g., 1/incubation period). For this starting model, use the following values as initial guesses for parameter values:

·    $\beta=0.18$ new infections per infected person/day

·    $r_{I\rightarrow R}=0.07$/day

·    $r_{I\rightarrow H}=0.07$/day

·    $r_{H\rightarrow R}=0.07$/day

·    $r_{H\rightarrow D}=0.3$/day

·    $p_{I\rightarrow R}=0.9$

·    $p_{I\rightarrow H}=0.1$

·    $p_{H\rightarrow R}=0.87$

·    $p_{H\rightarrow D}=0.13$

·      To compensate for the fact that we don’t have an Exposed compartment in this model, we lower the total population N to 150e6 people, rather than use the actual total population of the United States. This is meant to approximate the situation where some individuals were exercising caution during the winter of 2021-2022, and were not exposed to Covid-19.

 

For initial conditions, please pull values from the gold-standard cases and deaths data from the Covid-19 ForecastHub, and HHS hospitalization data from https://healthdata.gov/Hospital/COVID-19-Reported-Patient-Impact-and-Hospital-Capa/g62h-syeh.

[Figure 1: SIRHD model structure and differential equations]
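Since the equations in Figure 1 may not render for all readers, here is a minimal sketch of how the SIRHD system above could be implemented, assuming a standard mass-action infection term and the branching probabilities and rates defined in the note. This is an assumed form for orientation only; the authoritative equations are those in Figure 1 and the supplementary baseline code.

```python
# Minimal SIRHD sketch (assumed form -- defer to Figure 1 and the supplementary
# baseline code). S: susceptible, I: infected, R: recovered, H: hospitalized,
# D: dead. Total population N is constant.
import numpy as np
from scipy.integrate import solve_ivp

N = 150e6  # reduced total population, per the note above

params = dict(
    beta=0.18,                                  # new infections per infected person/day
    r_IR=0.07, r_IH=0.07, r_HR=0.07, r_HD=0.3,  # rates, 1/day
    p_IR=0.9, p_IH=0.1, p_HR=0.87, p_HD=0.13,   # branching probabilities
)

def sirhd(t, y, beta, r_IR, r_IH, r_HR, r_HD, p_IR, p_IH, p_HR, p_HD):
    S, I, R, H, D = y
    infection = beta * S * I / N
    dS = -infection
    dI = infection - p_IR * r_IR * I - p_IH * r_IH * I
    dH = p_IH * r_IH * I - p_HR * r_HR * H - p_HD * r_HD * H
    dR = p_IR * r_IR * I + p_HR * r_HR * H
    dD = p_HD * r_HD * H
    return [dS, dI, dR, dH, dD]

# Placeholder initial conditions -- in practice, pull these from the
# ForecastHub gold standard and HHS hospitalization data for Dec 1, 2021.
I0, R0, H0, D0 = 1.0e6, 0.0, 5.0e4, 8.0e5
y0 = [N - I0 - R0 - H0 - D0, I0, R0, H0, D0]

# Simulate the full Dec 1, 2021 - Mar 1, 2022 analysis period (90 days).
sol = solve_ivp(sirhd, (0, 90), y0, args=tuple(params.values()),
                t_eval=np.arange(91))
```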

djinnome commented Mar 25, 2024

1.     Model Calibration: Using the given parameter values as initial guesses, calibrate the starting model with data from the first month of the retrospective analysis: December 1st, 2021, through December 31st, 2021. You may decide which parameter values you are confident about and don’t need to calibrate, and set the min/max ranges for the ones you would like to calibrate. Include plots of your calibrated model outputs compared to actual data for this time period.
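For baseline modelers, one plausible calibration approach is nonlinear least squares over whichever parameters you choose to fit. The sketch below reuses `sirhd`, `params`, and `y0` from the snippet above and treats `obs_H` and `obs_D` as hypothetical arrays of daily December 2021 observations; it is not the prescribed baseline method.

```python
# Hedged calibration sketch: fit beta and r_IH (an arbitrary choice of
# parameters to calibrate) against December 2021 hospitalization and death
# data. Assumes `sirhd`, `params`, and `y0` from the earlier snippet; `obs_H`
# and `obs_D` are hypothetical one-value-per-day observation arrays.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def residuals(theta, y0, obs_H, obs_D):
    beta, r_IH = theta
    p = dict(params, beta=beta, r_IH=r_IH)  # keeps the original key order
    days = len(obs_H)
    sol = solve_ivp(sirhd, (0, days - 1), y0,
                    args=tuple(p.values()), t_eval=np.arange(days))
    S, I, R, H, D = sol.y
    # Scale each data stream so neither dominates the fit.
    return np.concatenate([(H - obs_H) / obs_H.max(),
                           (D - obs_D) / obs_D.max()])

fit = least_squares(residuals, x0=[0.18, 0.07],
                    bounds=([0.05, 0.01], [0.5, 0.2]),  # assumed min/max ranges
                    args=(y0, obs_H, obs_D))
beta_hat, r_IH_hat = fit.x
```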

 

2.     Single Model Forecast:

a.     Using your calibrated model, forecast cases, hospitalizations, and deaths for the test period (January 1st, 2022 – March 1st, 2022).

b.     Plot your forecast against actual observational data from this time period, and calculate Absolute Error.

c.     How does your forecast’s Absolute Error over the first 4 weeks of this period compare against forecasts from other compartmental models in the Covid-19 ForecastHub during the same period? Compare specifically against the UCLA-SuEIR and BPagano models. You can find forecast data and error scores for these two models in the supplementary materials. All model forecasts in the ForecastHub are located here: https://github.com/reichlab/covid19-forecast-hub/tree/master/data-processed
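Here Absolute Error is read as the absolute difference between forecast and observed values at each time point, summarized over the 4-week horizon; a minimal sketch with toy data:

```python
# Absolute Error sketch: |forecast - observed| per day over the first 4 weeks.
import numpy as np

def absolute_error(forecast, observed, horizon_days=28):
    f = np.asarray(forecast, dtype=float)[:horizon_days]
    o = np.asarray(observed, dtype=float)[:horizon_days]
    return np.abs(f - o)

# Toy example -- replace with your forecast and the ForecastHub truth data.
ae = absolute_error([100, 120, 130], [110, 115, 140])
print("Mean Absolute Error:", ae.mean())
```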

 

3.   Ensemble Forecast: You hypothesize that the $\beta$ parameter should actually be time-varying, reflecting interaction and transmission processes that change considerably throughout the course of the analysis period. A crude way to account for a time-varying parameter is to create an ensemble from different configurations of the same model that sufficiently explores the range of the time-varying parameter.

a.   Create 3 different configurations of the model from Q1, each with a different constant value of $\beta$. Combine these configurations into an ensemble. You can parameterize each configuration, or the combined ensemble model, using any approach you’d like, including weighting coefficients that change over time; one way to combine configurations is sketched after this list.

b.     Forecast cases, hospitalizations, and deaths for the test period (January 1st, 2022 – March 1st, 2022).

c.     For each outcome (cases, hospitalizations, deaths), plot your forecast against actual observational data from this time period, and calculate Absolute Error.

d.     How does your forecast’s Absolute Error metric over the first 4 weeks of this time period compare against one of the ForecastHub ensembles (e.g. ‘COVIDhub-4_week_ensemble’)? You can find forecast data and error scores for this ensemble in the supplementary materials. All forecast data from the ForecastHub ensembles are here: https://github.com/reichlab/covid19-forecast-hub/tree/master/data-processed

e.     How does your forecast performance compare against the results of Q2?
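One plausible way to combine the three configurations is a weighted average of their trajectories. The sketch below reuses names from the earlier snippets; the beta values and weights are illustrative assumptions only (time-varying weights would follow the same pattern).

```python
# Hedged ensemble sketch: run the Q1 model under three constant beta values
# and average the trajectories with weights. Assumes `sirhd`, `params`, and
# `y0` from the earlier snippets; betas and weights are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

betas = [0.12, 0.18, 0.24]             # assumed low / central / high values
weights = np.array([0.25, 0.5, 0.25])  # could instead vary over time

days = np.arange(60)  # test period Jan 1 - Mar 1, 2022 (~60 days)
runs = []
for b in betas:
    p = dict(params, beta=b)
    sol = solve_ivp(sirhd, (0, int(days[-1])), y0,
                    args=tuple(p.values()), t_eval=days)
    runs.append(sol.y)  # shape: (5 compartments, len(days))

# Weighted average across the three configurations.
ensemble = np.tensordot(weights, np.stack(runs), axes=1)
S, I, R, H, D = ensemble
```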

 

4.     Model Update: Now update your model to include vaccination. Ensure that this is done in a way that can support interventions around vaccination (e.g. incorporating a vaccination policy or requirement that increases the rate of vaccination). For this question, only consider one vaccine type and assume one dose of this vaccine is all that’s required to achieve ‘fully vaccinated’ status. You will consider multiple doses in a later question.

 

5.     Find Parameters: Your updated model from Q4 should have additional variables and new parameters. What is the updated parameter table that you will be using? As with Scenario 1, you may include multiple rows for the same parameter (e.g. perhaps you find different values from different reputable sources), with a ‘summary’ row indicating the final value or range of values you decide to use. If there are required parameters for your model that you can’t find sources for in the literature, you may find data to calibrate your model with, or make reasonable assumptions about sensible values (with rationale). You may use any sources, including the following references on vaccine efficacy for the Moderna, Pfizer, and J&J vaccines.

·      Estimates of decline of vaccine effectiveness over time https://www.science.org/doi/10.1126/science.abm0620

·      CDC Vaccine Efficacy Data https://covid.cdc.gov/covid-data-tracker/#vaccine-effectiveness

·      Vaccination data sources https://data.cdc.gov/Vaccinations/COVID-19-Vaccinations-in-the-United-States-Jurisdi/unsk-b7fc

| Parameter | Parameter Definition | Parameter Units | Parameter Value or Range | Uncertainty Characterization | Sources | Modeler Assessment on Source Quality |
| --- | --- | --- | --- | --- | --- | --- |

 

6.     Model Checks: Implement common-sense checks on the model structure and parameter space to ensure the updated model and parameterization make physical sense. Explain the checks that were implemented (a sketch of the first two checks follows this list). For example, under the assumption that the total population is constant for the time period considered:

a.     Demonstrate that population is conserved across all compartments

b.     Ensure that the total unvaccinated population over all states in the model can never increase over time, and the total vaccinated population over all states in the model can never decrease over time.

c.     What other common-sense checks did you implement? Are there other checks you would have liked to implement but found too difficult to do so?
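A minimal sketch of checks (a) and (b), assuming simulation output is available as a dict of per-day compartment arrays; the naming convention here is illustrative, not a prescribed API:

```python
# Hedged model-check sketch. `traj` is a hypothetical dict mapping compartment
# names to per-day numpy arrays from a simulation of the vaccinated model;
# a "_v" suffix is assumed to mark vaccinated compartments.
import numpy as np

def check_population_conserved(traj, N, rtol=1e-6):
    total = sum(traj.values())  # elementwise sum across all compartments
    assert np.allclose(total, N, rtol=rtol), "population not conserved"

def check_monotone_vaccination(traj, tol=1e-9):
    unvax = sum(v for k, v in traj.items() if not k.endswith("_v"))
    vax = sum(v for k, v in traj.items() if k.endswith("_v"))
    assert np.all(np.diff(unvax) <= tol), "unvaccinated total increased"
    assert np.all(np.diff(vax) >= -tol), "vaccinated total decreased"
```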

 

7.     (Optional) Single Model Forecast: Using your updated model, forecast cases, hospitalizations, and deaths for the test period (January 1st, 2022 – March 1st, 2022).

a.     Plot your forecast against actual observational data from this time period, and calculate Absolute Error.

b.     How does your forecast’s Absolute Error over the first 4 weeks of this period compare against forecasts from other compartmental models in the Covid-19 ForecastHub during the same period? Compare specifically against the UCLA-SuEIR and BPagano models.

c.     How does your forecast performance compare with the one in Q2? If the forecast performance has improved or gotten worse, why do you think this is?

 

8.     Model Update: During this time period, access to at-home testing was vastly expanded through the distribution of free antigen tests and requirements for insurance to cover at-home tests for free. Update your model from Q4 to incorporate testing by modifying the $\beta$ parameter between a susceptible and infected person by the factor $(1-\text{test\_access}\cdot(1-\text{test\_dec\_transmission}))$, where test_dec_transmission is defined as the net decrease testing has on transmission, and test_access is the percentage of the general population who is likely to take a test after suspected exposure, due to increased accessibility, lowered costs, etc. For this question, assume testing has the effect of decreasing transmission between susceptible and infected populations by 25%, due to infected people choosing not to gather or interact with others based on test outcomes confirming their infection status. Assume test_access increases linearly from 25% of the total population at the start of the retrospective analysis period to 50% by the end of the period.
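A sketch of this factor, assuming time t is measured in days from December 1st, 2021 across the 90-day analysis period:

```python
# Hedged sketch of the testing modification to beta. test_access rises
# linearly from 25% to 50% over the 90-day period; the net decrease testing
# has on transmission (test_dec_transmission) is 25%, per the question.
TEST_DEC_TRANSMISSION = 0.25
PERIOD_DAYS = 90  # Dec 1, 2021 - Mar 1, 2022

def test_access(t):
    """Fraction of the population with test access at day t."""
    return 0.25 + (0.50 - 0.25) * min(max(t, 0), PERIOD_DAYS) / PERIOD_DAYS

def beta_effective(beta, t):
    return beta * (1 - test_access(t) * (1 - TEST_DEC_TRANSMISSION))

# e.g. at t=0: beta_effective(0.18, 0) == 0.18 * (1 - 0.25 * 0.75)
```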

 

9.     Model Stratification: The decision maker you’re supporting is exploring targeted vaccination campaigns to boost vaccination rates for specific subpopulations. To support these questions, you decide to further extend the model from Q8, by considering several demographic subgroups, as well as vaccination dosage. Stratify the model by the following dimensions:

·      Vaccination dosage (1 or 2 doses administered)

·      Age group

·      Sex

·      Race or Ethnicity

To inform initial conditions and rates of vaccination, efficacy of vaccines, etc., consider the subset of vaccination datasets from the starter kit listed in ‘Scenario2_VaccinationDatasets.xlsx’ (in the supplementary materials). Where initial conditions are not available for a specific subgroup, make a reasonable estimate based on percentages from Census sources (e.g. https://www.census.gov/quickfacts/fact/table/US/PST045223). Where parameters for specific subgroups are unavailable, generalize based on the ones that are available. Choose the number of age and race/ethnicity groups based on the data that is available.

 

10.  Model Checks: Implement common-sense checks on the model structure and parameter space to ensure the updated model and parameterization from Q9 are structurally sound and make physical sense. Explain the checks that were implemented. For example, under the assumption that the total population is constant for the time period considered:

a.     Demonstrate that population is conserved across all disease compartments, and within each demographic group (age, sex, race/ethnicity).

b.     Ensure that the total unvaccinated population and the unvaccinated population within each age group can never increase over time, and the total vaccinated population and the vaccinated population within each age group can never decrease over time.

c.     What other common-sense checks did you implement? Are there others you would have liked to implement but found too difficult?

 

11.  Single (Stratified) Model Forecast: Using your updated model from Q9, forecast cases, hospitalizations, and deaths for the test period (January 1st, 2022 – March 1st, 2022).

a.     Plot your forecast against actual observational data from this time period, and calculate Absolute Error. Use observational data aggregated to the general population as well as granular data for individual demographic groups. Plot outcomes for individual demographic groups, as well as the total population.

b.     How does your forecast’s Absolute Error over the first 4 weeks of this period compare against forecasts from other compartmental models in the Covid-19 ForecastHub during the same period? Compare specifically against the UCLA-SuEIR and BPagano models.

c.     How does your forecast performance compare with the one in Q2? If the forecast performance has improved or worsened, why do you think this is?

 

12.  Interventions: Now that you have a model that can support targeted interventions, the decision maker you support asks you to explore what would have happened during the retrospective analysis period, had these interventions been implemented at that time.

a.     With respect to your forecast from Q11, which demographic group had the worst outcomes during the retrospective period, and therefore should be targeted with interventions such as vaccine campaigns, or increased community outreach to make testing more widely available and encouraged?

b.     Implement an intervention that targets testing-related parameters (e.g. programs to increase access to tests, distribution of free tests, etc.) at the start of the forecast period, and redo the forecast from Q11. For a 1% increase in a test-related parameter (that has a net positive impact), what is the impact of the intervention on the forecast trajectory for the affected demographic group identified in Q12a, as well as for the overall population? (One way to compute this impact is sketched after this question.)

c.     Implement another intervention that targets vaccination rate(s) at the start of the forecast period, and redo the forecast from Q11. For a 1% increase in vaccination rate, what is the impact of the intervention on the forecast trajectory for the affected demographic group identified in Q12a, as well as for the overall population?
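One plausible way to measure each 1% impact is to rerun the forecast with the perturbed parameter and difference the trajectories against the unperturbed run. In the sketch below, `run_forecast` is a hypothetical stand-in for your Q11 forecasting pipeline, and the parameter names are illustrative.

```python
# Hedged intervention sketch. `run_forecast(params)` is a hypothetical helper
# that returns a dict mapping group names (including "total") to trajectory
# arrays from the Q11 model; parameter names are illustrative.
import copy

def intervention_impact(base_params, param_name, rel_increase=0.01):
    perturbed = copy.deepcopy(base_params)
    perturbed[param_name] *= (1 + rel_increase)  # e.g. +1% vaccination rate
    baseline = run_forecast(base_params)
    intervened = run_forecast(perturbed)
    # Impact = change in each group's trajectory vs. the no-intervention run.
    return {group: intervened[group] - baseline[group] for group in baseline}
```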

 

djinnome commented:

Scenario 2 Summary Table

| Question | Inputs | Tasks | Outputs |
| --- | --- | --- | --- |
| Q1 | · Model code OR model in workbench<br>· Parameter value initial guesses<br>· Training dataset for given date range | Calibrate model with the data | · Calibrated model<br>· Plot of calibrated model results and training data<br>· Time to calibrate model |
| Q2, Q7, Q11 | · Model from Q1 (or Q6 or Q9)<br>· Test dataset<br>· Forecast data from ForecastHub models | · Create forecast of cases, hospitalizations, and deaths<br>· Plot forecast against test dataset<br>· Calculate Absolute Error<br>· Get Absolute Error for ForecastHub models | · Plot of forecast against test data<br>· Absolute Error metric for your forecast and comparison with Absolute Error from ForecastHub models<br>· Time to generate forecast |
| Q3 | Calibrated model from Q1 | · Create ensemble forecast from 3 different configurations of Q1 model<br>· Plot forecast against test dataset<br>· Calculate Absolute Error metrics<br>· Calculate Absolute Error for ForecastHub ensemble model | · Plot of ensemble forecast against test data<br>· Absolute Error metric for your ensemble model and comparison with Absolute Error for ForecastHub ensemble<br>· Time to generate ensemble forecast |
| Q4 | Calibrated model from Q1 | Update model to include vaccination | · Updated model<br>· Time to make model updates |
| Q5 | Updated model from Q4 that includes vaccination | Find new or updated parameters and fill out table | · Completed parameter table<br>· Time to complete table |
| Q6 | · Updated model from Q4<br>· Parameters from Q5 | · Set parameters in model<br>· Implement model checks | · Results from model checks<br>· Time to execute model checks |
| Q8 | Parameterized model from Q6 | Update model to include testing | · Updated model<br>· Time to make model updates |
| Q9 | Updated model from Q8 | · Stratify model<br>· Find initial conditions and new parameters from suggested datasets | · Stratified model<br>· Time to stratify model<br>· Time to parameterize updated model |
| Q10 | Stratified model from Q9 | Implement model checks | · Results from model checks<br>· Time to execute model checks |
| Q12 | · Model from Q10<br>· Forecast from Q11 | · Identify target demographic group that had the worst outcomes<br>· Implement intervention targeting testing-related parameters and redo forecast<br>· Implement intervention targeting vaccination rates and redo forecast | · Simulations with each intervention separately, and comparison with Q11 forecast<br>· Time to implement interventions<br>· Time to simulate updated forecasts with interventions |

 

djinnome commented:

Decision-maker Panel Questions

1.     What is your confidence that the modeling team developed an appropriate model and associated parameter space to sufficiently explore the scenario/problem? Select a score on a 7-point scale.

1.     Very Low

2.     Low

3.     Somewhat Low

4.     Neutral

5.     Somewhat High

6.     High

7.     Very High

 

Explanation: The scenario involves updating or modifying a model, and decision makers will evaluate whether this was done in a sensible way and whether the final model can support all the questions asked in the scenario.

 

The decision-maker confidence score should be supported by the answers to the following questions:

·      Did modelers clearly explain the changes being made and key differences between the original and updated models? Did the modifications/extensions the modelers made make sense and were they reasonable to you?

·      Are you confident that the starting model was updated in ways that make sense? Is the final model structurally sound?

·      As the model was updated, was the parameter space being explored reasonable and broad/complete enough to support the questions required by the scenario?

 

2.     What is your confidence in understanding the model results and the tradeoffs between potential interventions? Select a score on a 7-point scale.

1.     Very Low

2.     Low

3.     Somewhat Low

4.     Neutral

5.     Somewhat High

6.     High

7.     Very High

 

Explanation: Determine your confidence in your ability to do the following, based on the information presented to you by the modelers: assess model performance, assess effectiveness of all interventions considered in the scenario, and understand how uncertainty factors into all of this.

 

This score should be supported by the answers to the following questions:

·      Did modelers communicate the impacts of interventions on trajectories? Was the effectiveness of interventions communicated?

·      Did models help you to understand what would have happened had a different course of action been taken in the past?

·      Where relevant to the question, was it clear how to interpret uncertainty in the results? Were the key drivers of uncertainty in the results communicated?



 
