Limited Role of Bots in Spreading Vaccine-Critical Information Among Active Twitter Users in the United States: 2017-2019

University of Sydney (Dunn, Dalmazzo, Leask, Dey); Macquarie University (Surian, Steffens, Dyda, Coiera); Swinburne University of Technology (Rezazadegan); Boston Children's Hospital (Mandl)
"By focusing investigations only on counting what bots, trolls, and malicious users post without looking at what people potentially see and engage with, there is the risk of unnecessarily amplifying that content and could make it seem much more important than it really is." - Adam G. Dunn, PhD
The Twitter information ecosystem is a mix of human and nonhuman actors (e.g., software agents, or "bots") posting or engaging with information for a range of purposes, including the malicious spread of misinformation. This study sought to examine the role that bots play in spreading vaccine information on Twitter by measuring exposure and engagement among active users from the United States (US).
The researchers sampled 53,188 US Twitter users and examined whom they followed and retweeted across 21 million vaccine-related tweets (January 12, 2017, through December 3, 2019). They found that the median number of potential exposures to vaccine-related tweets per user was 757, of which a median of 27 were vaccine-critical and a median of 0 originated from bots. Furthermore, 36.7% of users retweeted vaccine-related content, 4.5% retweeted vaccine-critical content, and 2.1% retweeted vaccine content from bots. Compared with other users, the 5.8% of users for whom vaccine-critical tweets made up the majority of potential exposures more often retweeted vaccine content (62.9%), vaccine-critical content (35.0%), and content from bots (8.8%). This finding means that "Exposure to and engagement with vaccine-critical content tend to be most heavily concentrated in a relatively small subgroup of users who are more engaged with the topic overall."
In short, only a small proportion of the vaccine-critical information that reaches active US Twitter users comes from bots. The researchers therefore suggest that the resources social media platforms and policymakers currently invest in controlling bots and trolls might be more effectively spent on interventions to educate users and improve media literacy. Such education interventions may help to create a protective barrier around small anti-vaccine groups and stop misinformation from spreading. Associate Professor Dunn said in an interview: "I think the best tools that social media platforms have for stopping misinformation are those that can empower their users to spot it and add friction to passing it along. For public health organisations and researchers, the tools we need are those that can prioritise resources by signalling when the benefits of tackling misinformation outweigh the risks of unintentionally amplifying it by engaging with it."
In addition to those implications for practice, the research implications include a call for further studies to better understand the gaps between what can be observed about people online and the decisions they make about vaccination offline. The authors conclude: "Researchers studying health information consumption should consider measuring exposure and sharing in representative populations to better understand the potential effect of what is being posted. Rather than focusing efforts on bots, social media platforms, policymakers, and public health agencies should continue to focus on the known factors influencing vaccination-related behaviors."
American Journal of Public Health (AJPH), 2020, Vol. 110, No. S3 (Supplement 3). https://doi.org/10.2105/AJPH.2020.305902. Sourced from "Influence of bots on spreading vaccine information not as big as you think," October 2, 2020; accessed October 14, 2020. Image credit: Public Domain Pictures