Public Policy

Lack of Data Hampers COVID Predictions, But Models Still Matter

April 16, 2020

The question everyone in the world wants answered is how far the new coronavirus will spread and when the pandemic will begin to ebb. To know that, epidemiologists, public health authorities and policymakers rely on models. Biomedical mathematician Lester Caudill, who is currently teaching a class focused on COVID-19 and modeling, explains the limitations of models and how to better understand them.

What are infectious disease models?

Mathematical models of how infections spread are simplified versions of reality. They are designed to mimic the main features of real-world disease spread well enough to make predictions that can, at least in part, be trusted to inform decisions. The COVID-19 model predictions reported in the media come from mathematical models that have been converted into computer simulations. For example, a model might use a variety of real-world data to predict a date (or range of dates) for a city’s peak number of cases.
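To make this concrete, here is a minimal sketch of one classic simplified model of disease spread, the SIR (Susceptible–Infected–Recovered) compartment model. This is a generic textbook illustration, not any of the specific COVID-19 models the article discusses, and every number below (population size, transmission and recovery rates) is invented for demonstration.

```python
# A minimal SIR (Susceptible-Infected-Recovered) model, stepped forward
# one day at a time with simple Euler updates. All parameter values are
# invented for illustration; nothing here is fitted to a real disease.
def simulate_sir(population, beta, gamma, days, initial_infected=1.0):
    """beta: transmissions per infected person per day; gamma: recovery rate per day."""
    s, i, r = population - initial_infected, initial_infected, 0.0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

history = simulate_sir(population=1_000_000, beta=0.3, gamma=0.1, days=200)
peak_day = max(range(len(history)), key=lambda d: history[d][1])
print(f"Predicted peak: day {peak_day}, "
      f"about {history[peak_day][1]:,.0f} people infected at once")
```

Even this toy version shows the workflow the article describes: choose assumptions (here, the rates beta and gamma), run the simulation, and read off a predicted peak date.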

Why is modeling the spread of COVID-19 challenging?

In order for a model’s predictions to be trustworthy, the model must accurately reflect how the infection progresses in real life. To do this, modelers typically use data from prior outbreaks of the same infection, both to create their model and to check that its predictions match what is already known to be true.

This article by Lester Caudill originally appeared at The Conversation, a Social Science Space partner site, under the title “Lack of data makes predicting COVID-19’s spread difficult but models are still vital.”

This works well for infections like influenza, because scientists have decades of data that help them understand how flu outbreaks progress through different types of communities. Influenza models are used each year to make decisions regarding vaccine formulations and other flu-season preparations.

By contrast, modeling the current COVID-19 outbreak is much more challenging, simply because researchers know very little about the disease. What are all the different ways it can be transferred between people? How long does it live on doorknobs or Amazon boxes? How much time passes from the moment the virus enters a person’s body until that person is able to transmit it to someone else? Answers to these and many other questions are important inputs to a reliable model of COVID-19 infections. Yet no one knows them yet, because the world is in the midst of this disease’s first-ever appearance.

Why do different models have different predictions?

The best modelers can do is assume some things about COVID-19, and create models that are based on these assumptions. Some current COVID-19 models assume that the virus behaves like influenza, so they use influenza data in their models. Other COVID-19 models assume that the virus behaves like SARS-CoV, the virus that caused the SARS epidemic in 2003.

Other models may make other assumptions about COVID-19, but they must all assume something to make up for information they need that simply does not yet exist. These different assumptions are likely to lead to very different COVID-19 model predictions.

How can people make sense of the different – sometimes conflicting – model predictions?

This question gets at perhaps the most important thing to know about mathematical model predictions: They are only useful if you understand the assumptions the model is based on.

Ideally, model predictions like, “We expect 80,000 COVID-related deaths in the U.S.” would read more like, “Assuming that COVID-19 behaves similarly to SARS, we expect 80,000 COVID-related deaths in the U.S.” This helps place the model’s prediction into context, and helps remind everyone that model predictions are not necessarily glimpses into an inevitable future.

An oft-cited model from the Institute for Health Metrics and Evaluation at the University of Washington offers a wide range of projections for deaths from COVID-19, varying with underlying assumptions such as the effect of social distancing or widespread testing. (Image: Institute for Health Metrics and Evaluation at the University of Washington)

It may also be useful to use predictions from different models to establish reasonable ranges, rather than exact numbers. For instance, a model that assumes COVID-19 behaves like influenza might predict 50,000 deaths in the U.S., while the SARS-based model above predicts 80,000. Rather than trying to decide which prediction to believe – an impossible task – it may be more useful to conclude that there will be between 50,000 and 80,000 deaths in the U.S.
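How strongly assumptions drive predictions apart can be illustrated with the textbook SIR “final size” relation, which links an assumed basic reproduction number R0 to the fraction of a population eventually infected. The two R0 values below are invented stand-ins for “flu-like” versus “SARS-like” assumption sets, not real estimates for any of these diseases.

```python
import math

# Classic SIR final-size relation: the attack rate z (fraction of the
# population eventually infected) satisfies z = 1 - exp(-R0 * z).
# Solve it by fixed-point iteration.
def final_size(r0, iterations=200):
    z = 0.5  # initial guess for the attack rate
    for _ in range(iterations):
        z = 1.0 - math.exp(-r0 * z)
    return z

# Invented assumption sets -- not real parameter estimates.
for label, r0 in [("flu-like assumptions", 1.3), ("SARS-like assumptions", 2.5)]:
    print(f"{label} (R0 = {r0}): ~{final_size(r0):.0%} eventually infected")
```

Two plausible-sounding assumption sets fed into the same equation produce very different predictions, which is exactly why a prediction is only meaningful alongside its assumptions.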

Why do the same models seem to predict different outcomes today than they did yesterday?

As COVID-19 data becomes available – and there are many good people working tirelessly to gather data and make it available – modelers are incorporating it so that, each day, their models are based a little more on actual COVID-19 information, and a little less on assumptions about the disease. You can see this process unfold in the news, where the major predictive COVID-19 models provide almost daily revisions to their prior estimates of case numbers and deaths.

Can a model that’s (probably) not accurate at predicting the future still be useful?

While models of infections can provide insights into what the future might hold, they are far more valuable when they help answer, “How can policies alter that future?”

For instance, a baseline model for predicting the future number of COVID-19 cases might be adapted to incorporate the effects of, say, a stay-at-home order. By running model simulations with the order and comparing them to simulations without it, public health authorities may learn something about how effective the order is expected to be. That can be especially useful when comparing the associated costs, not only in terms of disease burden, but in economic terms as well.

Going one step further, this same model could be used to predict the consequences of ending the order on, say, June 10 – the current target date for the stay-at-home order in Virginia – and compare them to model predictions for ending the order on, say, May 31 or June 30. Here, as in many other settings, models prove to be most useful when they’re used to generate different scenarios which are compared to each other. This is different from comparing model predictions to reality.
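As a toy illustration of this kind of scenario comparison (again with invented rates, dates, and population, not the actual Virginia projections), a simple SIR sketch can be run with a reduced transmission rate while an order is in effect, lifting the order on different days:

```python
# Toy scenario comparison: an SIR model in which transmission is lower
# while a stay-at-home order is in effect, lifted on different days.
# All rates, dates, and the population size are invented for illustration.
def total_infections(order_end_day, days=200, population=1_000_000,
                     beta_open=0.3, beta_order=0.08, gamma=0.1):
    s, i = population - 10.0, 10.0
    total = 10.0
    for day in range(days):
        beta = beta_order if day < order_end_day else beta_open
        new_infections = beta * s * i / population  # uses current s and i
        s -= new_infections
        i += new_infections - gamma * i
        total += new_infections
    return total

for end_day in (0, 30, 60, 90):  # 0 means no order at all
    label = "no order" if end_day == 0 else f"order lifted on day {end_day}"
    print(f"{label}: ~{total_infections(end_day):,.0f} infections in 200 days")
```

The individual numbers matter little; the comparison between scenarios is the point, which is how the article suggests such models are most useful.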

Lester Caudill is professor of mathematics at the University of Richmond. He is interested in biological and medical applications of mathematical modeling. His current research is focused on using mathematical and computational models to simulate infection spread in hospital wards, with particular focus on the rise and spread of antibiotic resistance in human pathogens. The long-term goal is to create a simulation tool that can be used to test different strategies for minimizing the risk of antibiotic-resistant infections.
