Growing Reliance on AI Tools for Medical Advice Sparks Concern Among Healthcare Professionals
In a worrying trend, patients are increasingly turning to artificial intelligence platforms like ChatGPT for medical guidance, often bypassing professional consultation. A Hyderabad-based government psychiatrist has raised serious concerns about the risks associated with AI-generated medical reports.
Patient Brings 10-Page AI Report for Diagnosis
Dr. Raghuveer Raju Boosa, Assistant Professor of Psychiatry at the Institute of Mental Health, revealed that a 30-year-old IT professional recently arrived for his first consultation carrying a detailed 10-page report generated using ChatGPT.
The patient had entered his symptoms into the AI tool, answered a series of automated follow-up questions, and attempted to self-diagnose based on the generated output.
AI May Misinterpret Symptoms, Warns Expert
Speaking to Deccan Chronicle, Dr. Boosa cautioned that AI tools can both overestimate and underestimate symptoms.
He explained that even a common symptom like a headache might lead the AI to suggest severe conditions such as a brain bleed, creating unnecessary panic.
“The tool over-validates and invalidates many symptoms,” he noted, emphasizing that such outputs can mislead patients without proper clinical context.
Actual Diagnosis: OCD with Health Anxiety
Upon professional evaluation, the patient was diagnosed with Obsessive Compulsive Disorder (OCD), specifically with health-related obsessions and illness anxiety features.
Doctors observed that the patient frequently worried about having serious illnesses, interpreting normal physical sensations—like headaches or body aches—as signs of severe medical conditions.
AI Suggested Multiple Unrelated Conditions
In contrast to the clinical diagnosis, the AI-generated report listed possibilities such as electrolyte imbalance, endocrine disorders, and nutritional deficiencies—none of which were central to the patient’s condition.
Following proper treatment, the patient is now reportedly showing improvement.
Privacy Concerns Over Sharing Medical Data
Doctors have also raised alarms about data privacy. According to psychiatrists, many patients are uploading prescriptions and personal medical details into AI platforms without understanding how that data may be used or stored.
“We often see patients dropping prescriptions into ChatGPT. This is sensitive personal information, and we don’t know how it is handled,” experts warned.
Disturbing Case Highlights Dangers of Online Medical Misinformation
In a separate incident, a 38-year-old woman reportedly died after two individuals posing as doctors attempted a surgical procedure while following YouTube tutorials.
The case has sparked outrage and further highlighted the dangers of relying on unverified online medical content.
Experts Urge Balance Between Technology and Medical Supervision
While AI continues to transform healthcare, experts stress that it should only be used as a supportive tool—not a replacement for qualified medical advice.
Healthcare professionals urge patients to consult licensed practitioners for diagnosis and treatment, rather than depending solely on AI-generated information.
