Why do mathematical models over-project COVID-19 epidemics?
During the COVID-19 pandemic, mathematical modeling was widely adopted as a research tool for public health, and modeling results have played an important role in informing public health responses to COVID-19. Over two years of greatly intensified modeling activity, it has also become apparent that mathematical models tend to over-project the size of an epidemic wave, often predicting that more than 60% of the susceptible population would be infected during a single wave. Many reasons for the over-projection have been proposed, from a lack of data and information to the assumption that no public health interventions were incorporated in the projections, but counter-arguments abound. This has not only eroded the public's confidence in modeling in general, but also caused much confusion and frustration among modelers, leading to claims that standard epidemic models are well suited to studying underlying dynamics but simply have no predictive power.
Are there mathematical insights that can explain the observed over-projections? Indeed, I will show that the well-known final-size formula, which links the basic reproduction number R0 to the final size of an epidemic, predicts that over 60% of the population will be infected during an epidemic with R0 around 2. The question then becomes: how can standard epidemic models be improved to produce accurate long-term predictions for real-world epidemics?
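As a minimal illustration of the point above, the classical final-size relation for a simple SIR-type epidemic in a fully susceptible population is z = 1 - exp(-R0 * z), where z is the fraction ultimately infected. The sketch below (the helper name `final_size` is my own, not from the talk) solves this equation by fixed-point iteration, assuming a homogeneous, well-mixed population with no interventions:

```python
import math

def final_size(r0, tol=1e-10, max_iter=1000):
    """Solve the final-size equation z = 1 - exp(-R0 * z) by fixed-point
    iteration. Here z is the attack rate: the fraction of an initially
    fully susceptible population infected over the whole epidemic.
    Assumes a homogeneous SIR model with no behavioral change."""
    z = 0.5  # any starting guess in (0, 1] works for R0 > 1
    for _ in range(max_iter):
        z_new = 1.0 - math.exp(-r0 * z)
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z

for r0 in (1.5, 2.0, 2.5, 3.0):
    print(f"R0 = {r0}: final size = {final_size(r0):.1%}")
```

For R0 = 2 this gives an attack rate of roughly 80%, comfortably above the 60% figure cited above, which is why unmodified standard models project such large waves.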
I will touch on several pitfalls in the modeling process that can affect the accuracy and reliability of model projections, including mismatching model outputs to data, mistaking model calibration for model validation, and unidentifiability issues in model parameter estimation. In the last part of the talk, I will demonstrate that incorporating human behavioral changes into epidemic models can improve the accuracy and reliability of model projections.