Elderly Man's Health Crisis Sparks Debate: Can AI Be Trusted for Medical Advice?

A 60-year-old man in New York was hospitalized after following a ChatGPT-generated diet plan involving sodium bromide. This raises concerns about AI’s role in medical advice.

New York, USA — August 13, 2025:

The recent health crisis of a 60-year-old New York man has ignited a critical conversation about the reliability of AI-generated health advice. The man was hospitalized after following a diet plan recommended by ChatGPT that involved replacing table salt with sodium bromide, a compound used primarily in industrial settings, not in food. The incident highlights growing concerns over the use of artificial intelligence in medical and health-related decision-making.

The man, whose identity has not been revealed, asked ChatGPT for a diet plan that would eliminate salt, which he believed was harming his health. ChatGPT, without considering his medical history or providing a disclaimer, suggested sodium bromide as a substitute. Although sodium bromide resembles common table salt in appearance, it is chemically distinct and poses significant health risks, including neurological and skin problems, when consumed in large amounts.

The man, an amateur nutrition enthusiast, bought sodium bromide online and used it as a salt substitute for three months. During this period his health deteriorated, with symptoms including severe thirst, loss of coordination, and hallucinations. He was eventually hospitalized, where doctors noted that he had previously been in good health. He required fluids, electrolytes, and antipsychotic treatment before being transferred to a psychiatric ward because of behavioral issues. After three weeks he was discharged, but the damage had been done.

This case raises important questions about the role of AI in healthcare. While AI chatbots like ChatGPT can provide vast amounts of information in seconds, this incident underscores the dangers of relying on such tools for medical advice. ChatGPT's lack of context, neither asking the man for his medical history nor issuing appropriate warnings about sodium bromide, demonstrates a key flaw in AI's ability to offer safe, tailored health recommendations.

Though OpenAI, the company behind ChatGPT, has stated that its AI systems may not always produce accurate or safe information, many users remain unaware of these limitations. AI’s rapid expansion into various sectors, including healthcare, raises concerns about its role in public health decision-making. Can an AI be trusted with health advice, or should it simply remain a supplementary tool rather than a decision-maker?

In the context of this case, the man’s experience serves as a stark reminder that the use of AI tools for medical purposes must be handled with caution. Despite their sophistication, AI systems are not equipped to replace professional medical advice. The risks involved in disregarding expert guidance for AI-generated suggestions can lead to severe, sometimes irreversible, consequences.

While this incident reflects the potential dangers of AI in health advice, it also emphasizes the growing role of technology in medical practice. Experts agree that AI can complement traditional healthcare, especially in administrative tasks and data processing. However, using AI as the primary source for critical health decisions may lead to unforeseen complications, particularly if the technology cannot understand the full scope of an individual’s health profile.

As AI technology evolves, so too must the standards and regulations around its use, particularly in sensitive fields like healthcare. Professionals stress that AI should always be used as a supplementary tool rather than a replacement for human expertise, particularly in situations where human judgment, experience, and medical context are critical.

Ultimately, the challenge will be ensuring that users are fully informed about the limitations of AI and the potential risks involved in trusting it with their health.
