Chatbots are increasingly being used as virtual healthcare advisors, but experts warn that the technology raises serious questions about the accuracy of its advice and about patients' trust in medical care.
The proliferation of chatbots such as ChatGPT Health reflects vast demand for health information, with over 230 million people globally consulting these platforms each week. However, clinicians have raised concerns about the reliability of AI-driven healthcare recommendations, warning that hallucinations and erroneous information can be widespread.
Saurabh Gombar, a clinical instructor at Stanford Health Care, notes that chatbots may provide sensible answers in everyday situations but lack the nuance to fully understand individual health needs. "It's like getting a recipe for spaghetti," he says. "If you ask for a 10-fold increase of an ingredient, it might be correct, but if you're asking about symptoms of heart attacks or other serious conditions, it may not provide accurate information."
Gombar also highlights the potential risks of relying on chatbots to diagnose and treat patients, particularly when they generate false or misleading information. "If a patient comes in convinced that they have a rare disorder based on a simple symptom after chatting with an AI, it can erode trust when a human doctor seeks to rule out more common explanations first," he warns.
Another pressing concern is the handling of sensitive health data by these companies. Experts argue that while encryption algorithms may be secure, the protection of data from unauthorized use and disclosure is still a significant issue. Alexander Tsiaras, founder and CEO of StoryMD, believes that even with robust security measures in place, users should be cautious about entrusting their personal data to AI giants.
Tsiaras points to the growing risk of exploitation by profit-driven companies seeking to monetize sensitive health information through personalized advertising. "Especially as OpenAI moves to explore advertising as a business model, it's crucial that the separation between this sort of health data and memories that ChatGPT captures from other conversations is airtight," he cautions.
The industry is also grappling with the risks surrounding non-protected health data that users voluntarily enter into these systems. Nasim Afsar, a physician and advisor to the White House and global health agencies, notes that profit-driven companies have eroded trust in the healthcare space. "Anyone else who builds a system has to go in the opposite direction of spending a lot of time proving that we're there for you and not about abusing what we can get from you," she emphasizes.
While some experts see chatbots as an early step toward more intelligent healthcare, others caution that the technology is far from mature. "A.I. can now explain data and prepare patients for visits, but transformation happens when intelligence drives prevention, coordinated action, and measurable health outcomes, not just better answers inside a broken system," Afsar notes.
Ultimately, deploying AI-driven chatbots in healthcare raises fundamental questions about accuracy, trust, and data privacy. As these technologies continue to evolve, policymakers, regulators, and industry leaders must prioritize transparency, accountability, and safety over profit-driven agendas.