What to look out for if you are a doctor using Generative AI in clinical practice
Generative AI is becoming part of daily life, and its use by doctors is growing. However, as with all new technologies, using generative AI comes with risks.
As discussed in a previous post, unlike more conventional medical devices, few large language models (LLMs) have been through formal regulatory processes, leaving them in a grey area when deployed in clinical practice. This blog explores how often doctors are turning to LLMs, the risks involved, and how healthcare professionals should approach this evolving technology.
How many doctors already use AI?
Doctors in the NHS are already using ChatGPT and similar tools in their day-to-day work. A survey published in BMJ Health & Care Informatics found that around 20% of UK GPs had used generative AI tools such as ChatGPT to draft notes and letters, or even to suggest potential diagnoses.
The GMC’s 2025 study showed a similar pattern, with 29% of doctors reporting AI use in the past year. Much of this use is self-initiated, with little formal guidance.
Practical examples include drafting clinic notes, preparing teaching materials and exploring differentials. It’s clear that a large proportion of doctors find real value in this technology: it can help them save time, summarise large volumes of information and, in some cases, even guide their clinical thinking.
Risks and Dangers
While generative AI offers efficiency gains, its risks in healthcare are substantial. Most importantly, LLMs can hallucinate, producing plausible but incorrect information; in clinical settings, this could translate into unsafe diagnostic or prescribing advice. In addition, entering patient data into non-compliant platforms risks breaching GDPR and NHS information governance standards.
What’s important for healthcare professionals to understand is that AI systems themselves cannot be held accountable. The Medical Defence Union (MDU) states that “an AI system is not a legal entity and cannot be sued or convicted, which means any claim, regulatory action or prosecution would either fall against the developer, the user or both”. In other words, healthcare professionals may find themselves liable if anything goes wrong while they are using unregulated products.
In fact, NHS England has already warned against the use of unregulated ambient voice technologies (AI transcription) after reports of widespread use, citing risks to data safety and quality.
Practical tips for the safe use of AI in clinical practice
Some NHS trusts have published guidance to help healthcare professionals understand the principles of safe AI use. For example, Yorkshire and the Humber ICB advises that:
No personal or business-sensitive data should be entered into generative AI tools.
These apps should only be used for non-clinical purposes unless formally approved.
Information Governance teams must be informed if AI is used routinely.
All outputs must be verified before use.
AI tools should be accessed only on corporate devices, and software installation requires IT approval.
The NHS Transformation Directorate adds that staff must comply with data protection law, verify accuracy, and raise concerns about unsafe AI use with their organisations (NHS Transformation Directorate, 2024).
Additionally, healthcare professionals should check for appropriate regulatory clearance before using AI tools that perform medical tasks, as such products must be regulated as medical devices.
National policy is lagging behind real-world use
Formal regulation is lagging behind practice. The GMC has not published specific regulations on AI use by doctors, but the MDU has inferred that Good Medical Practice (2024) standards still apply: “doctors must provide a good standard of care, protect patient data, and document decisions appropriately.” Doctors will need to maintain these professional standards whenever they consider using AI tools to improve efficiency.
The British Medical Association has also issued principles highlighting safety, accountability, and fairness. There is growing consensus that the NHS needs a national AI code of practice to give clinicians clarity and protect patients.
Key takeaways
AI tools are already part of NHS practice, often outside formal regulation. Around a quarter of doctors have experimented with generative AI, mainly for documentation and admin, though some use it for clinical reasoning. While the benefits include reduced administrative burden and faster outputs, the dangers (ranging from inaccuracy to breaches of confidentiality) are significant.
The bottom line: until stronger governance is in place, clinicians should use LLMs cautiously and confine them to low-risk, non-clinical tasks (e.g. admin) unless the tools have the appropriate evidence and regulatory clearances in place. Patient safety must come first. Policymakers, meanwhile, need to accelerate the development of clear national guidance, and regulators should offer more clarity on which AI functionalities qualify as medical devices.
Hardian Health is a clinical digital consultancy focused on leveraging technology into healthcare markets through clinical evidence, market strategy, scientific validation, regulation, health economics and intellectual property.