Where We Go From Here: Health Misinformation on Social Media

Falsehoods have been shown to spread faster and farther than accurate information,1 and research suggests that misinformation can have negative effects in the real world, such as amplifying controversy about vaccines2 and propagating unproven cancer treatments.3 Health misinformation on social media, therefore, urgently requires greater action from those working in public health research and practice. We define “health misinformation” as any health-related claim of fact that is false based on current scientific consensus. Many other types of information pose a challenge for health communication, including contradictory or conflicting findings, changing evidence, and information that involves a high degree of uncertainty; however, these issues are outside the scope of this editorial, which focuses on information that is patently false.

Responding to misinformation is challenging for many reasons. For example, psychological factors, including emotions and cognitive biases, may render ineffective straightforward efforts to counter misinformation with accurate information. This may be why interventions, such as recommending articles with corrective information, have shown mixed efficacy.4 Another issue concerns the difficulty of identifying and reaching those who are exposed to misinformation. The diversity and volume of social media facilitate the creation and maintenance of information silos by making it easy for users to self-curate their feeds and find similar content through automated algorithms. These features reduce the likelihood that individuals who are part of a group in which misinformation is circulating will be exposed to content that contradicts the prevailing view of their network.

As accumulating evidence indicates, in domains as varied as childhood vaccines and COVID-19, widespread health misinformation can have potentially devastating consequences, and responses need to be timely, strategic, and evidence based. We outline five understudied research areas that need to be addressed to improve policy and practice in response to health misinformation (Figure 1).

[Figure 1 (AJPH.2020.305905f1.jpg): five research areas for addressing health misinformation on social media.]

ENHANCE SURVEILLANCE

Much of the research conducted to date has relied on cross-sectional content analysis of social media data,5 and although these types of studies are important, the field needs to move toward a more comprehensive understanding of the social media misinformation environment. For example, many studies have focused on Twitter, but other popular platforms, such as WeChat, Tumblr, Reddit, and Pinterest, remain understudied. Additionally, research is needed to better understand nontextual content, including images, memes, and videos found on platforms such as Instagram, TikTok, and YouTube, taking advantage of computer-assisted visual analysis.

Surveillance efforts also need to account for the complex processes affecting diffusion by systematically exploring the spatial, temporal, network, and cross-platform dynamics of misinformation spread. This knowledge would help us identify critical platform, content, and network characteristics that enable or impede the dissemination of misinformation.

UNDERSTAND PSYCHOLOGICAL DRIVERS

We also need to draw on theoretical frameworks from political science, psychology, communication, and other social sciences to examine the role of emotion, cognition, and identity in relation to misinformation and use this knowledge to inform interventions. For example, the human tendency toward confirmation bias may render debunking efforts ineffective, as corrective information may be viewed as inconsistent with a preferred narrative and therefore ignored or denied. In situations in which a strong confirmation bias exists, interventions based on value affirmation might be more effective.

Similarly, although the impact of emotion on misinformation processing has been studied in the context of politics,6 less is known about the role emotions play when it comes to health misinformation—even though health topics can generate strong emotions, including fear and anxiety. A deeper understanding of the psycho-socio-emotional drivers of misinformation acceptance and sharing, and how they differ across various domains (e.g., political vs health information), will be crucial for designing successful interventions.

ASSESS CONSEQUENCES OF MISINFORMATION

Little evidence is available regarding the extent to which misinformation exposure online affects health-related behaviors, attitudes, knowledge, and outcomes at the individual or population level, or how exposure to misinformation intersects with existing health disparities.

Moreover, misinformation may have additional consequences that—although difficult to observe—are equally insidious. For example, misinformation could create the impression that no consensus exists on a topic or that official sources of information are not credible, which could generate feelings of apathy, confusion, and mistrust. This could then lead individuals to disengage from health information seeking, avoid health care, or make decisions that are detrimental to their health. Although there are challenges to linking online activity with offline behavior, theoretically informed empirical research is needed to elucidate the full extent of the real-world consequences of misinformation exposure.

FOCUS ON THE MOST VULNERABLE

Research indicates that most people are susceptible to misinformation in some contexts and that typical sociodemographic predictors of health disparities may not govern vulnerability to misinformation. For instance, highly educated individuals may be equally vulnerable to misinformation when it comes to topics that are central to their identity.7 Identifying factors that may increase susceptibility to misinformation (e.g., conspiracy mindset, lack of access to evidence-based health information) would enable better targeting of resources and better tailoring of strategies.

Once we identify who is most vulnerable, methods for strategically intervening with these groups will be needed. For example, interventions could use sources of information that are deemed credible by a particular vulnerable community to increase the likelihood of message acceptance. Research is also needed to understand whether interventions should target the most influential individuals in these vulnerable groups or focus on those who might be less integrated into the group and still amenable to change.

DEVELOP AND TEST EFFECTIVE RESPONSES

An approach centered on simply providing evidence-based health messages or broadly debunking misinformation will likely be insufficient. Interdisciplinary research is needed to develop additional strategies and identify the optimal timing, manner, and forum for responding to misinformation. Although a reactive response may be effective in one situation, a proactive response (such as inoculation) may be vital in a different context. It is also possible that the best response is no response at all, for example, if acknowledging the falsehood in a correction would only “give it oxygen” and further its spread.

Additionally, targeted approaches for reaching misinformed individuals with corrective information are needed. Public health practitioners and health care providers could attempt to identify and penetrate online information silos where misinformation is rampant to offer evidence-based information, direct users to credible sources, or provide countermessaging.

Finally, system-level preventive efforts are also needed, such as legislation requiring social media platforms to remove potentially harmful misinformation or incentives to increase their adoption of practices that make it more difficult for users to find and share misinformation. Increasing the public’s health, science, and media literacy to decrease vulnerability to misinformation is another important prevention strategy. Such efforts could raise awareness of the techniques (e.g., cherry-picking data) used by agents of misinformation and increase the public’s understanding of the inherent uncertainty and complexity of health and science information to induce a healthy skepticism toward claims that are overly simplistic or sensational.

USING RESEARCH TO INFORM POLICY AND PRACTICE

The research priorities we have outlined should inform and improve policy and practice aimed at addressing health misinformation on social media, such as content moderation standards used by platforms and rumor mitigation efforts undertaken by public health agencies. As these policies and interventions are implemented, further research will be needed to evaluate their impact.

CONFLICTS OF INTEREST

The authors have no conflicts of interest to disclose.

Footnotes

See also Chou and Gaysynsky, p. S270, and Walsh-Buhi, p. S292.