As artificial intelligence (AI) becomes increasingly embedded in routine healthcare - supporting tasks such as triage, documentation, interpretation of investigations, diagnosis and patient communication - it introduces new patient safety risks through incorrect outputs (“hallucinations”) that should be treated as safety errors rather than technical glitches. In our article in the Journal of Patient Safety, we argue that primary care must extend its established safety culture to AI by systematically detecting, classifying, reporting, and learning from AI-related errors using principles already applied to human error, such as audit, governance, and incident reporting. We highlight evidence that AI-generated clinical text can contain omissions, fabrications, or unsafe recommendations that may not be apparent to clinicians and patients, and that risk becoming “silent errors” in electronic health records. These errors can then contribute to cognitive offloading if clinicians over-rely on AI-generated content.