Pros and Cons of Chatbot Counselling for Vaccine Hesitancy
Author: Anastasiya Nurzhynska, June 30 2020

Vaccine hesitancy is listed as a top threat to global health (World Health Organization (WHO), 2019). Despite changes in social media consumption, healthcare providers are still perceived as trusted messengers. Yet, although they are identified as the most trustworthy source of information on vaccination, many health workers are not proactive or vocal enough in promoting vaccination because of limited resources and weak counselling capacities. In this context, it is important to understand whether the trusted-messenger role played by doctors could also be played by chatbots. Notably, the number of customer service interactions moderated by chatbots is expected to increase: their conversational abilities are improving quickly (Zhang et al., 2018), and public interest is growing (Markoff & Mozur, 2015; Romeo, 2016). It is therefore also important to understand whether disclosures made to chatbots differ from other types of disclosure. Yet few studies describe chatbot applications that target vaccination-related behaviour.
Healthcare professionals and vaccine trust
Belief in vaccine safety is tied to trust in the scientific community and health professions (Wellcome Global Monitor, 2019). Individuals who report high levels of trust in doctors and nurses also rate vaccine safety higher.
Several studies link vaccination decisions to provider recommendations (Pandolfi et al., 2012; Wiley & Leask, 2013; Reiter, Gilkey, & Brewer, 2013; McRee et al., 2013). Providers must have the resources to devote clinical time to building rapport and making recommendations (Mollema et al., 2012; McCarthy et al., 2012). Provider recommendations may involve personal views on safety, benefits, and alternatives, which vary by practitioner (Knorr-Cetina, 1999), and not all doctors seek vaccination themselves (Socan et al., 2013; Wicker et al., 2012). United States (US) pediatricians reported that devoting additional time to vaccine discussions was helpful, but found the benefit difficult to quantify (Cooper et al., 2008). Several studies recommend that clinicians combine current vaccine evidence with communication that targets patients' anxieties (Casper & Carpenter, 2008; Waller et al., 2006). Yet some healthcare professionals lack vaccine awareness or communicative confidence (Betsch & Wicker, 2012). Furthermore, "decision fatigue" is quite common among doctors. This "human factor" results in a lower rate of orders, e.g. for cancer screenings or flu vaccinations (Futurity, 2019). When the effect of doctor workload on patient-related decisions was assessed in a study of vaccine orders, doctors were found to order 20% fewer influenza vaccines during the second half of the working day (Futurity, 2019).
On the one hand, then, the healthcare provider is the most trusted source of information on vaccination; on the other, these personnel have limited time and weak counselling skills. One possible solution is to use technologies such as a medical chatbot.
The internet and vaccine hesitancy
The internet has become the major source of pediatric health information for parents. Patients regard information from public search engines as more reliable than that found on official healthcare websites. Concurrently, social media sites place greater emphasis on worst-case consequences, including poor safety, ineffectiveness, and even death. Negative articles, comments, and testimonials also seem to be more popular and influential than positive ones (Keelan et al., 2007; Keelan et al., 2010), owing to the emotional expressiveness with which people share information (Wolfe et al., 2002). Parents who obtain information about vaccination through the internet have been found to be more hesitant toward immunization, or even to refuse to vaccinate their child, and to lack confidence in the safety of vaccines. Moreover, just 10 minutes' exposure to negative vaccine information was shown to increase perceived vaccination dangers and reduce intentions to immunize (Betsch et al., 2010).
More than one in 10 news websites accessed by Americans feature health misinformation, such as false claims about the risks of vaccines (NewsGuard, 2019).
At the same time, there is a lack of online presence and reliable information from health professionals. A US study analyzing the accuracy of health-related blogs found that 32% had not been updated since 2014, 30% did not address immunization, and two provided misleading, inaccurate, and immunization-disclaiming information that may negatively influence parents (Bryan et al., 2018).
In this situation, the chatbot can potentially fill the gap in evidence-based information on vaccination online, providing advice and information.
Chatbots and new technologies in health counselling
Chatbot research is a new field. Few studies have addressed the influence of chatbots on perceptions of vaccine efficacy or safety (Laranjo, 2018). The use of conversational agents is an emerging area of investigation, though studies have mostly been based on quasi-experimental research designs and seldom assess efficacy or safety (Laranjo, 2018). Even so, 40% of customers report no preference between a real human and a chatbot, as long as they receive counselling (HubSpot, 2017). Over 85% of customer service interactions are expected to be chatbot-moderated by 2020, and machine learning is expected to drive rapid development (Risdon, 2017). The medical chatbot market is projected to be worth US$1.23 billion by 2025, and five companies claim diagnostic ability (Sienkiewicz, 2019).
Studies of interventions designed to address vaccine hesitancy suggest that effective messages should integrate qualitative counseling delivered by health providers, socially accepted and presumptive norms, and different types of interventions focused on the intention-behaviour gap. Chatbots may potentially consolidate these interventions. They are in use for influenza and human papillomavirus (HPV) vaccine education, and international studies report similar efficacy (Hsu, 2019). They have proven effective in China as reminder tools (Chen et al., 2016), provided workable immunisation solutions in hard-to-reach parts of Bangladesh (Kolff, 2018), and promoted measles, mumps, and rubella (MMR) vaccine education in Italy (Fadda et al., 2018). Smartphone apps that employ chatbot systems are popular with US parents, patients, pharmacists, and other healthcare providers (Bednarczyk et al., 2017).
For example, ELIZA, a computer program that simulated the work of a psychotherapist, was the first successful attempt to replace the human physician; its development dates back to 1966 (Techlabs, 2018). Since then, other chatbots like Meet Molly, Eva, Ginger, Replika, Florence, and Izzy have come into use. They are able to manage appointments, provide information, and guide patients through symptoms. Yet, it is unclear whether a chatbot is suitable for counseling a vaccine-hesitant patient.
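To illustrate how such information-providing chatbots work at their simplest, here is a minimal sketch of a rule-based, keyword-matching responder in the ELIZA tradition. The keywords, answers, and function name are invented for illustration and do not reproduce the content or design of any of the systems named above.

```python
# Minimal sketch of a rule-based FAQ chatbot using keyword matching.
# Illustrative only: keywords and answers are invented examples.

RULES = [
    ({"side", "effect", "safe", "safety"},
     "Common side effects are usually mild, such as soreness or a low fever. "
     "Please consult your doctor for advice specific to you."),
    ({"schedule", "when", "due"},
     "Vaccination schedules vary by age and country. "
     "Your local health provider can confirm the dates that apply to you."),
    ({"appointment", "book"},
     "I can point you to your nearest clinic's booking page."),
]

FALLBACK = ("I'm not able to answer that. A healthcare professional "
            "is the best source for personal medical advice.")


def reply(message: str) -> str:
    """Return the first canned answer whose keywords appear in the message."""
    words = set(message.lower().split())
    for keywords, answer in RULES:
        if words & keywords:  # any keyword present in the user's message
            return answer
    return FALLBACK


if __name__ == "__main__":
    print(reply("Is the flu vaccine safe for my child?"))
```

Real deployed systems add natural language understanding, escalation to human staff, and verified medical content, but the core pattern of mapping user intents to vetted answers is the same.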
Several social-media-based chatbots are currently used to educate parents about vaccination. One of them is an artificial intelligence (AI) chatbot launched by Taiwan's Centers for Disease Control (CDC) in 2017; integrated into the social messaging app LINE, it communicates information and responds to questions about flu immunization (Focus Taiwan, 2017). Another, a Facebook Messenger chatbot, was launched by the American Cancer Society (ACS) to communicate with users about the importance of HPV immunization (MySocietySource, 2018).
In 2019, IRD Global, in Singapore, announced a plan to develop a vaccination-related educational chatbot powered by AI and natural language processing to improve vaccine coverage in Pakistan. The bot will provide a wide range of information concerning immunization locations, schedules, and due dates, and will be designed to address post-vaccination concerns and assist health workers (Global Grand Challenges, 2019). The Conversation Coach chatbot is also worth mentioning: it trains people to discuss vaccines with skeptical partners or counterparts by simulating a real conversation. Such tools appear useful in preparing people to discuss sensitive issues and in promoting vaccination awareness (Spiegel, 2019).
One of the few research studies on the use of chatbots as tools for dealing with vaccine hesitancy examined the language that people use on social media to express their thoughts about HPV immunization, applying natural language processing (NLP) techniques (McGregor & Whicker, 2018).
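As an illustration of the kind of NLP pipeline such studies rely on, the sketch below classifies vaccine-related posts as hesitant or supportive with a simple bag-of-words model in Python (scikit-learn). The toy posts, labels, and model choice are assumptions for illustration only and do not reproduce the cited study's data or methods.

```python
# Sketch: classify vaccine-related posts with TF-IDF features and a
# linear classifier. Toy data invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "the hpv vaccine protected my daughter, so glad we did it",
    "grateful our clinic offers the hpv shot for free",
    "doctors recommend the vaccine and the evidence supports it",
    "i don't trust what's in these shots, too many unknowns",
    "heard scary stories about side effects, we are skipping it",
    "why would i inject my child with something so new",
]
labels = ["supportive", "supportive", "supportive",
          "hesitant", "hesitant", "hesitant"]

# TF-IDF turns each post into word-weight features for the classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

print(model.predict(["worried about side effects of the hpv vaccine"]))
```

In practice such studies use far larger labelled corpora and richer language models, but the principle of mapping free text to sentiment or hesitancy categories is the same.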
In addition, experiments involving health advice chatbots have explored the theoretical, methodological, and practical implications of chatbot interventions. These experiments tested how advice about a delicate personal problem was experienced when delivered in writing versus through interaction with a chatbot. The messages were also framed with three types of empathic expression: sympathy, cognitive empathy, and affective empathy. The data indicated that participants preferred sympathetic and empathetic messages to unemotional advice, which supports the 'Computers as Social Actors' (CASA) paradigm (Liu & Sundar, 2018).
Between 2011 and 2018, the University of Chicago in the US ran the Data Science for Social Good project. The project, which aimed to predict the likelihood of immunization by the end of the first-grade school year among Croatian children, involved the Croatian Institute of Public Health and researchers from France, Portugal, and the US. Machine-learning algorithms were trained on the electronic health records of 48,000 first-grade schoolchildren. The study found that chatbots were effective in educating hesitant parents (Hsu, 2019).
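The underlying approach, training a classifier on health-record features to flag children at risk of missing vaccinations, can be sketched roughly as follows. The features, synthetic data, and model choice below are assumptions for illustration; the cited project's actual variables and algorithms are not described here.

```python
# Sketch: fit a classifier on synthetic record-like features to predict
# risk of missed immunization. All data and features are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical features: prior missed appointments, distance to clinic (km),
# number of siblings, and whether earlier doses were received on time.
X = np.column_stack([
    rng.poisson(1.0, n),
    rng.uniform(0, 30, n),
    rng.integers(0, 5, n),
    rng.integers(0, 2, n),
])

# Synthetic label: more missed appointments and late earlier doses
# raise the simulated risk of not being immunized by first grade.
risk = 0.15 + 0.1 * X[:, 0] + 0.2 * (1 - X[:, 3])
y = (rng.uniform(size=n) < np.clip(risk, 0, 1)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

A model like this does not persuade anyone by itself; its value is in helping health systems direct outreach (whether by nurses or by chatbots) to the families most likely to need it.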
Chatbots have also been used for issuing reminders. In Sichuan Province, China, a smartphone app designed to increase vaccination rates for children was tested. A cluster randomized controlled trial indicated a larger increase in immunization rates in the experimental group (17% vs 10%), although the difference was not statistically significant (p = 0.164). Village general practitioners (GPs) also mentioned convenience and time efficiency as benefits of using the EPI (Expanded Programme on Immunization) app to manage child immunization (Chen et al., 2016).
Several descriptive studies highlight successful chatbot use. ReadyVax, a mobile smartphone app, proved popular in 102 countries (52% of downloads were based in the US) among parents, adult patients, pharmacists, and healthcare providers (Bednarczyk et al., 2017). The EPI app intervention designed for Chinese village GPs was associated with higher immunization rates (Chen et al., 2016). A phone app used by providers in Bangladesh became a workable solution for dealing with immunization issues in hard-to-reach areas (Kolff, 2018). Use of a pan-Canadian immunization app influenced patients' conscious decision to vaccinate on time by 32% (HRIPortal, 2018). Another app, called MorbiQuiz, increased vaccination knowledge and empowerment for MMR vaccinations in Italy (Fadda et al., 2018).
Pros and Cons
Healthcare stakeholders associate chatbot use with psychological, behavioural, health, and administrative benefits. Physicians view chatbots as "value for money" but express concern about their ability to recognise patients' emotions and respond appropriately (Palanica, 2019).
Human-chatbot disclosure may even be preferable to human-human interaction. People often avoid disclosing to others out of a fear of negative evaluation; because chatbots do not think or form judgments on their own, people may feel more comfortable disclosing to a chatbot than to a person, which could alter the nature of disclosure and its outcomes (Lucas et al., 2014). On the other hand, people assume that chatbots are worse at emotional tasks than humans (Madhavan, Wiegmann, & Lacson, 2006), which may negatively affect emotional disclosure with chatbots.
The CASA framework predicts that people instinctively perceive, react to, and interact with computers as they do with other people, without consciously intending to do so (Reeves & Nass, 1996). This tendency is so pervasive that it is a foundational component of theoretical thinking about interactions between humans and computerized agents, to the extent that it is thought to be "unlikely that one will be able to establish rules for human-agent/robot-interaction which radically depart from what humans know from and use in their everyday interactions" (Krämer, von der Pütten, & Eimler, 2012). This framework suggests that disclosure processes and outcomes will be similar, regardless of whether the partner is a person or a chatbot.
Numerous studies have found that people form perceptions of computerized agents and of humans in the same way, even though people consciously know that computers are machines without human personalities. One study (Ho et al., 2018) found that chatbots and humans were equally effective at creating emotional, relational, and psychological benefits.
For instance, people perceive a computerized agent to be as inspired, strong, or afraid as another person is (von der Pütten, Krämer, Gratch, & Kang, 2010). This occurs not just when the partner is actually a computer, but also when the partner is believed to be a computer (von der Pütten et al., 2010). The tendency for individuals to judge and react to computers as they do to other people has been observed across different kinds of computerized agents, from embodied conversational agents to robots and text-only chatbots (Eyssel & Hegel, 2012).
Identifying parental views on immunization might help healthcare providers communicate with parents appropriately, and chatbots can be applied to analyze personality through several methods. The Meaning Extraction Method (MEM) is a text-analytic method commonly used by social psychologists (Chung and Pennebaker, 2008). MEM reliably surfaces the themes through which people think about themselves or about specific problems. Mitra et al. (2016) used MEM to characterize these themes across groups with different perceptions of vaccines.
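A much-simplified sketch of MEM's core steps, building a word-by-document matrix from open-ended text and factoring it to surface recurring themes, is shown below. The example texts are invented, and real MEM applications (Chung and Pennebaker, 2008) involve far larger corpora and more careful word screening.

```python
# Simplified sketch of the Meaning Extraction Method: binary word usage
# per document, factored with PCA so co-occurring words form "themes".
# Example texts are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import PCA

texts = [
    "worried about vaccine safety and long term side effects",
    "side effects scare me, safety data feels incomplete",
    "our doctor explained the schedule and answered every question",
    "trust my doctor, the schedule was easy to follow",
]

# Binary indicators of whether each content word appears in each text.
vectorizer = CountVectorizer(binary=True, stop_words="english")
X = vectorizer.fit_transform(texts).toarray()

# Principal components group words that tend to co-occur, i.e. themes.
pca = PCA(n_components=2)
pca.fit(X)

words = vectorizer.get_feature_names_out()
for i, component in enumerate(pca.components_):
    top = [words[j] for j in component.argsort()[::-1][:4]]
    print(f"Theme {i + 1}:", ", ".join(top))
```

With real open-ended responses from parents, the extracted themes (for example, safety worries versus trust in a provider) can help tailor how a chatbot or clinician frames its answers.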
Although studies have found chatbots successful as a new counselling technology for patients, including vaccine-hesitant ones, weaknesses of chatbot-based interventions have also been identified. It has been suggested that vaccination is an emotional topic that chatbots treat with inappropriate rationality (Opoku-Agyemang, 2018). Chatbots also risk being erroneously considered a replacement for medical advice from a trained professional (Yadav et al., 2019). The primary concerns include a lack of accuracy, security, empathy, and technological maturity (Nadarzynski et al., 2019).
One of the main criticisms of chatbots is that they are not capable of empathy: they cannot recognize users' emotional states and tailor responses to reflect those emotions, because the AI technology they rely on is grounded in rationality. One suggestion is to adopt less rational models in order to understand how to apply AI in high-stakes scenarios such as immunization (Opoku-Agyemang, 2018). Behavioural science techniques can potentially help resolve this issue.
Another potential concern is societal trust. A chatbot may strengthen the anti-vaccination movement and invite accusations of propaganda.
Finally, the ethical aspects of bot usage must be explored, in particular around possible medical mistakes and responsibility for inappropriate advice. Chatbot content should be carefully verified by experts. There is also a risk that people perceive the chatbot's advice as a final medical recommendation or draw wrong conclusions about their health condition (Yadav et al., 2019). It is important that the chatbot carries a disclaimer with a clear scope of services, and that a real doctor's advice remains available in addition to the chatbot. Health chatbots should be a supplementary service rather than a replacement for professional health expertise.
References
Adams, Terrence. "AI-powered social bots." arXiv preprint arXiv:1706.05143 (2017).
Bednarczyk, Robert A., Paula M. Frew, Daniel A. Salmon, Ellen Whitney, and Saad B. Omer. "ReadyVax: A new mobile vaccine information app." Human vaccines & Immunotherapeutics 13, no. 5 (2017): 1149-1154.
Boyd, Ryan L., Steven R. Wilson, James W. Pennebaker, Michal Kosinski, David J. Stillwell, and Rada Mihalcea. "Values in words: Using language to evaluate and understand personal values." In Ninth International AAAI Conference on Web and Social Media. 2015.
Bryan, Mersine A., Hailey Gunningham, and Megan A. Moreno. "Content and accuracy of vaccine information on pediatrician blogs." Vaccine 36, no. 5 (2018): 765-770.
CDC launches LINE 'chatbot' to provide flu vaccination info. (2017). Focus Taiwan.
Chen, Li, Xiaozhen Du, Lin Zhang, Michelle Helena van Velthoven, Qiong Wu, Ruikan Yang, Ying Cao et al. "Effectiveness of a smartphone app on improving immunization of children in rural Sichuan Province, China: a cluster randomized controlled trial." BMC Public Health 16, no. 1 (2016): 909.
Chung, Cindy K., and James W. Pennebaker. "Revealing dimensions of thinking in open-ended self-descriptions: An automated meaning extraction method for natural language." Journal of Research in Personality 42, no. 1 (2008): 96-132.
Deepika Yadav, Prerna Malik, Kirti Dabas, and Pushpendra Singh. 2019. FeedPal: Understanding Opportunities for Chatbots in Breastfeeding Education of Women in India. Proceedings of the ACM on Human-Computer Interaction 3, CSCW, Article 170 (November 2019).
Doctors order fewer cancer screenings after 5 p.m. (2019). Futurity.
Fadda, Marta, Elisa Galimberti, Maddalena Fiordelli, and Peter Johannes Schulz. "Evaluation of a Mobile Phone-Based Intervention to Increase Parents’ Knowledge About the Measles-Mumps-Rubella Vaccination and Their Psychological Empowerment: Mixed-Method Approach." JMIR mHealth and uHealth 6, no. 3 (2018): e59.
Ho, Annabell, Jeff Hancock, and Adam S. Miner. "Psychological, Relational, and Emotional Effects of Self-Disclosure After Conversations With a Chatbot." Journal of Communication 68, no. 4 (2018): 712-733. doi: 10.1093/joc/jqy026
Hsu, Jeremy. (2019). Machine Learning Predicts Kids at Risk of Not Getting Vaccinated. IEEE Spectrum.
Innovative chatbot technology implemented to increase HPV vaccine awareness. (2018). MySocietySource [no longer available].
Keelan, Jennifer, Vera Pavri-Garcia, George Tomlinson, and Kumanan Wilson. "YouTube as a source of information on immunization: a content analysis." JAMA 298, no. 21 (2007): 2482-2484.
Keelan, Jennifer, Vera Pavri, Ravin Balakrishnan, and Kumanan Wilson. "An analysis of the Human Papilloma Virus vaccine debate on MySpace blogs." Vaccine 28, no. 6 (2010): 1535-1540.
Kolff, Chelsea A., Vanessa P. Scott, and Melissa S. Stockwell. "The use of technology to promote vaccination: A social ecological model based framework." Human Vaccines & Immunotherapeutics 14, no. 7 (2018): 1636-1646.
Kramer, Adam DI, and Cindy K. Chung. "Dimensions of self-expression in facebook status updates." In Fifth International AAAI Conference on Weblogs and Social Media. 2011.
Laranjo, Liliana, Adam G. Dunn, Huong Ly Tong, Ahmet Baki Kocaballi, Jessica Chen, Rabia Bashir, Didi Surian et al. "Conversational agents in healthcare: a systematic review." Journal of the American Medical Informatics Association 25, no. 9 (2018): 1248-1258.
Liu, Bingjie, and S. Shyam Sundar. "Should machines express sympathy and empathy? Experiments with a health advice chatbot." Cyberpsychology, Behavior, and Social Networking 21, no. 10 (2018): 625-636.
McGregor, Kyle Aaron, and Margaret E. Whicker. "Natural language processing approaches to understand HPV vaccination sentiment." Journal of Adolescent Health 62, no. 2 (2018): S27-S28.
Mitra, Tanushree, Scott Counts, and James W. Pennebaker. "Understanding anti-vaccination attitudes in social media." In Tenth International AAAI Conference on Web and Social Media. 2016.
Nadarzynski, T., Miles, O., Cowie, A., & Ridge, D. (2019). "Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study." Digital Health.
Opoku-Agyemang, K. (2018). "The Potential for Human-Computer Interaction and Behavioral Science." Behavioral Scientist.
Palanica, Adam, Peter Flaschner, Anirudh Thommandram, Michael Li, and Yan Fossat. "Physicians' perceptions of chatbots in health care: cross-sectional web-based survey." Journal of Medical Internet Research 21, no. 4 (2019): e12887.
Risdon, C. (2017). "Scaling Nudges with Machine Learning." Behavioral Scientist.
Say Hello to Bablibot (Babybot): A Vaccines Chatbot (2019). Global Grand Challenges.
Spiegel, B. (2019). Our Conversation Coach Can Help. VeryWell Health.
Techlabs, M. (2018). "Is Conversational AI the future of Healthcare?" Chatbots Magazine.
Wegner, Daniel M. "Précis of the illusion of conscious will." Behavioral and Brain Sciences 27, no. 5 (2004): 649-659.
Standardizing digital vaccination records in Canada: team behind CANImmunize launches Canadian Vaccine Catalogue (2018). HRIPortal.
Sienkiewicz, A. (2019). Chatbot trends and stats in 2019. Tidio Blog.
Wellcome Global Monitor (2019). How does the world feel about science and health?
Wolfe, Robert M., Lisa K. Sharp, and Martin S. Lipsky. "Content and design attributes of antivaccination web sites." JAMA 287, no. 24 (2002): 3245-3248.
Image credit: Photo by Eleventh Wave on Unsplash.com