Category: Cyber Security | Published: 2026-03-11
Healthcare AI System Hacked Through Prompt Injection
Security researchers have successfully hacked a healthcare AI chatbot currently being trialled in a US medical programme, forcing it to produce dangerous and misleading health advice. The findings raise serious questions about AI security in safety-critical environments - and offer a stark warning for any UK business deploying AI tools.
How the Chatbot Was Hacked
The platform in question, Doctronic, is a US telehealth system built around an AI medical assistant designed to help patients understand symptoms, manage conditions and connect with licensed doctors. It operates as a first point of contact, gathering patient information and preparing clinical summaries for human clinicians.
AI security firm Mindgard examined the system and found it could be hacked using a technique known as prompt injection. Large language models rely on hidden internal instructions - known as system prompts - to govern their behaviour. Mindgard's researchers tricked the chatbot into revealing those instructions by manipulating the conversation context, effectively convincing the system its session had not yet started.
Once the internal rules were exposed, the researchers introduced fabricated regulatory bulletins and policy updates that the hacked system treated as legitimate. This allowed them to push the AI towards producing unsafe outputs, including altered medication guidelines and fabricated medical guidance.
What the Hacked System Was Made to Do
The results were alarming. Once hacked, the chatbot could be manipulated into spreading vaccine conspiracy theories, recommending methamphetamine as a treatment, generating altered clinical guidance, and even advising users how to manufacture illegal substances.
As Mindgard noted: _"System prompts are the keys to the kingdom when it comes to chatbots."_ When those keys are compromised, the entire system becomes vulnerable to exploitation.
Why This Hack Is More Serious Than a Typical AI Error
What makes this incident particularly concerning is that Doctronic sits inside an active healthcare workflow. The system generates structured medical summaries known as SOAP notes, which clinicians review as part of patient consultations. If a hacked session produces manipulated information, that data can flow directly into legitimate clinical documentation.
Mindgard warned that this could _"actively undermine the human professionals who might trust its authoritative-looking output."_ In busy healthcare environments, clinicians rely on these summaries to interpret cases quickly. A hacked AI producing convincing but false information creates a risk that goes far beyond a simple chatbot glitch.
The Wider Evidence on AI Hacking Risks in Healthcare
This is not an isolated concern. A major study led by the University of Oxford earlier this year examined how people interact with AI systems when seeking medical advice. Researchers found that users were no better at identifying appropriate courses of action when using AI chatbots compared with traditional online searches. In some cases, the mixture of correct and incorrect AI-generated advice left users more confused.
The study concluded that strong performance on medical knowledge benchmarks does not translate into safe real-world interactions with patients. Systems intended for healthcare must be evaluated under real conditions with actual users before widespread deployment.
Lessons for UK Businesses Using AI
Whilst this particular incident involved a US healthcare pilot, the underlying vulnerabilities apply to any organisation deploying AI-powered tools. Prompt injection attacks are not limited to medical chatbots - they can target customer service bots, internal knowledge assistants, legal AI tools and financial advisory systems.
For UK businesses, the key takeaways are clear:
- AI systems can be hacked - even those with safety guardrails in place. Treating AI tools as inherently secure is a dangerous assumption.
- Prompt security matters - system prompts must be hardened against extraction and manipulation. If an attacker can read your AI's internal instructions, they can subvert its behaviour.
- Human oversight is essential - AI should support decision-making, not replace it. Outputs must be reviewed by qualified professionals, especially in high-stakes environments.
- Regular security testing is critical - AI systems should be subjected to adversarial testing, just like any other part of your IT infrastructure.
The Doctronic case is a reminder that deploying AI without robust cyber security protections creates risks that can extend well beyond the digital world. As AI tools become more embedded in business operations, ensuring they cannot be hacked or manipulated must be a fundamental part of any organisation's security strategy.