New Delhi: The artificial intelligence (AI) chatbot ChatGPT could help increase vaccine uptake by debunking myths about the safety of jabs, according to a study.
In the study, published in the journal Human Vaccines and Immunotherapeutics, the researchers put the 50 most frequently asked Covid-19 vaccine questions to the chatbot. These included queries based on myths and fake stories, such as the vaccine causing Long Covid.
The researchers, from the University of Santiago de Compostela in Spain, found that ChatGPT scored nine out of 10 on average for accuracy. The rest of the time, its answers were correct but left gaps in the information provided.
The AI tool is a “reliable source of non-technical information to the public,” especially for people without specialist scientific knowledge, they said.
However, the findings do highlight some concerns about the technology, such as ChatGPT changing its answers in certain situations.
“Overall, ChatGPT constructs a narrative in line with the available scientific evidence, debunking myths circulating on social media,” said study lead author Antonio Salas, a professor at the University of Santiago de Compostela.
“Thereby, it potentially facilitates an increase in vaccine uptake. ChatGPT can detect counterfeit questions related to vaccines and vaccination. The language this AI uses is not too technical and therefore easily understandable to the public but without losing scientific rigor,” Salas said.
The researchers acknowledge that the present-day version of ChatGPT cannot substitute an expert or scientific evidence, but the results suggest it could be a reliable source of information to the public.
In 2019, the World Health Organisation (WHO) listed vaccine hesitancy among the top 10 threats to global health.
During the pandemic, misinformation spread via social media contributed to public mistrust of Covid-19 vaccination.
The researchers tested ChatGPT’s ability to get the facts right and share accurate information around Covid vaccine safety in line with current scientific evidence.
ChatGPT enables people to have human-like conversations and interactions with a virtual assistant. The technology is very user-friendly which makes it accessible to a wide population, the researchers noted.
However, many governments are concerned about the potential for ChatGPT to be used fraudulently in educational settings such as universities, they said.
The study was designed to challenge the chatbot by asking it the questions most frequently received by the WHO collaborating centre at the university.
The queries covered three themes. The first was misconceptions about safety, such as the vaccine causing Long Covid. Next came false contraindications – medical situations where the jab is in fact safe to use, such as in breastfeeding women.
The questions also related to true contraindications – health conditions where the vaccine should not be used – and cases where doctors must take precautions, e.g. in a patient with heart muscle inflammation (myocarditis).
The experts analysed the responses and rated them for veracity and precision against current scientific evidence and recommendations from the WHO and other international agencies.
The authors say this was important because algorithms created by social media platforms and internet search engines are often tailored to an individual’s usual preferences. This may lead to ‘biased or wrong answers’, they added.
Results showed that most of the questions were answered correctly, with an average score of nine out of 10, which is defined as ‘excellent’ or ‘good’. Across the three question themes, 85.5 per cent of responses were accurate on average, while the remaining 14.5 per cent were correct but contained gaps in the information provided by ChatGPT.
The chatbot provided correct answers to queries that arose from genuine vaccine myths, and to those considered in clinical recommendation guidelines to be false or true contraindications.
However, the research team does highlight ChatGPT’s downsides in providing vaccine information.
“ChatGPT provides different answers if the question is repeated with a few seconds of delay,” Salas said.
“Another concern we have seen is that this AI tool, in its present version, could also be trained to provide answers not in line with scientific evidence,” the researcher added.