Medical Chatbot Hacked Into Giving Dangerous Health Advice

Category: Cyber Security | Published: 2026-03-11

Healthcare AI System Hacked Through Prompt Injection

Security researchers have successfully hacked a healthcare AI chatbot currently being trialled in a US medical programme, forcing it to produce dangerous and misleading health advice. The findings raise serious questions about AI security in safety-critical environments - and offer a stark warning for any UK business deploying AI tools.

How the Chatbot Was Hacked

The platform in question, Doctronic, is a US telehealth system built around an AI medical assistant designed to help patients understand symptoms, manage conditions and connect with licensed doctors. It operates as a first point of contact, gathering patient information and preparing clinical summaries for human clinicians.

AI security firm Mindgard examined the system and found it could be hacked using a technique known as prompt injection. Large language models rely on hidden internal instructions - known as system prompts - to govern their behaviour. Mindgard's researchers tricked the chatbot into revealing those instructions by manipulating the conversation context, effectively convincing the system its session had not yet started.