As artificial intelligence (AI) becomes increasingly embedded in routine healthcare - supporting tasks such as triage, documentation, interpretation of investigations, diagnosis and patient communication - it introduces new patient safety risks through incorrect outputs (“hallucinations”) that should be treated as safety errors rather than technical glitches. In our article in the Journal of Patient Safety, we argue that primary care must extend its established safety culture to AI by systematically detecting, classifying, reporting, and learning from AI-related errors using principles already applied to human error, such as audit, governance, and incident reporting.
We highlight evidence that AI-generated clinical text can contain omissions, fabrications, or unsafe recommendations that may not be apparent to clinicians or patients and that risk becoming “silent errors” in electronic health records. This risk is compounded by cognitive offloading, in which clinicians over-trust AI outputs and review them less critically. To mitigate these risks, we call for routine AI oversight in practice (including review, sampling, and escalation), explicit clinician accountability for AI-influenced outputs, patient engagement in spotting discrepancies, and closer collaboration with AI developers.
Ultimately, we argue that AI errors are inevitable and that embedding AI safety as a core, proactive design feature - rather than an afterthought - is essential to ensuring AI enhances rather than compromises patient safety in primary care.