Study warns patients not to rely on AI chatbots for drug information

by IANS

New Delhi, Oct 11 (IANS) Artificial Intelligence (AI)-powered search engines and chatbots may not always provide accurate and safe information on drugs, and patients should not rely on them, warned a study on Friday.


Researchers from Belgium and Germany, who conducted the study, found that many of the answers were wrong or potentially harmful.


In the paper, published in the journal BMJ Quality & Safety, they said that the answers provided by the AI chatbot can be complex and difficult to understand, and may require degree-level education to read.

Search engines underwent a significant shift in 2023 with the introduction of AI-powered chatbots. The revamped versions offered enhanced search results, comprehensive answers, and a new type of interactive experience.


While the chatbots -- trained on extensive datasets from the entire internet -- can answer any healthcare-related query, they are also capable of generating disinformation and nonsensical or harmful content, said the team from the Friedrich-Alexander-Universität Erlangen-Nürnberg in Germany.

“In this cross-sectional study, we observed that search engines with an AI-powered chatbot produced overall complete and accurate answers to patient questions,” they wrote.


“However, chatbot answers were largely difficult to read and answers repeatedly lacked information or showed inaccuracies, possibly threatening patient and medication safety,” they added.

For the study, the researchers explored the readability, completeness, and accuracy of chatbot answers to queries on the 50 most frequently prescribed drugs in the US in 2020. They used Bing Copilot, a search engine with AI-powered chatbot features.

Just half of the 10 questions were answered with the highest completeness. Further, chatbot statements didn’t match the reference data in 26 per cent of answers and were fully inconsistent in over 3 per cent of cases.
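
Readability of the kind the authors describe is usually graded with a reading-ease formula such as Flesch Reading Ease, where scores below about 30 are conventionally read as requiring university-level education. As a rough, hypothetical illustration only (the study's exact tooling is not described in this report), the Python sketch below scores a sample drug-information sentence using the Flesch formula with a crude vowel-group syllable heuristic.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: runs of vowels, minus a trailing silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher means easier to read."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Hypothetical example of dense drug-information prose, not taken from the study.
answer = ("Concomitant use of metformin and iodinated contrast media "
          "may precipitate lactate accumulation and acute renal impairment.")
print(round(flesch_reading_ease(answer), 1))
```

Dense medical prose like the sample sentence scores near the bottom of the Flesch scale, which is consistent with the finding that the chatbot's answers may demand degree-level reading skills.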


About 42 per cent of these chatbot answers were considered to lead to moderate or mild harm, and 22 per cent to death or severe harm.


The team noted that a major drawback was the chatbot’s inability to understand the underlying intent of a patient question.

“Despite their potential, it is still crucial for patients to consult their healthcare professionals, as chatbots may not always generate error-free information,” the researchers said.
