Our article in the Journal of the Royal Society of Medicine argues that safe and effective AI in healthcare must incorporate mechanisms that emulate human judgement - down-weighting old, inaccurate or superseded information and prioritising what is recent, clinically relevant and reaffirmed - so that AI supports, rather than disrupts, high-quality patient care.
Clinicians constantly revise, reinterpret and filter past information so that only what is relevant, accurate and timely shapes present-day management decisions; medical records function as dynamic “working tools” rather than fixed archives. By contrast, many AI systems lack this capacity for selective forgetting and often treat all historical data as equally meaningful.
This can lead to outdated or low-confidence diagnoses being repeatedly resurfaced, persistent labels influencing clinical expectations, and irrelevant, long-resolved events cluttering summaries and decision-support outputs. Such indiscriminate recall not only risks misdirecting clinical care, but also adds to information overload, exacerbates cognitive burden and contributes to clinician burnout. Importantly, it can also undermine patient trust when obsolete or stigmatising terms continue to shape interactions with clinicians and the healthcare system.
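To make the idea concrete, the kind of "selective forgetting" described above could be sketched as a simple salience score over record entries. This is purely an illustrative sketch, not the article's method: the `RecordEntry` fields, the exponential recency decay, and the reaffirmation multiplier are all assumptions chosen to mirror the three criteria named in the text (recent, clinically relevant, reaffirmed), with superseded entries dropped entirely.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical data model -- none of these names come from the article.
@dataclass
class RecordEntry:
    text: str
    recorded: date
    reaffirmations: int = 0   # times a clinician has re-confirmed the finding
    superseded: bool = False  # a later result or diagnosis replaced this one

def salience(entry: RecordEntry, today: date, half_life_days: float = 365.0) -> float:
    """Score an entry so recent, reaffirmed items outrank old or superseded ones."""
    if entry.superseded:
        return 0.0  # selective forgetting: superseded information drops out entirely
    age_days = (today - entry.recorded).days
    recency = 0.5 ** (age_days / half_life_days)  # exponential decay with age
    return recency * (1 + entry.reaffirmations)   # reaffirmed findings decay more slowly

entries = [
    RecordEntry("query pulmonary embolism (excluded)", date(2015, 3, 1), superseded=True),
    RecordEntry("type 2 diabetes", date(2018, 6, 1), reaffirmations=4),
    RecordEntry("transient rash, resolved", date(2016, 1, 1)),
]
ranked = sorted(entries, key=lambda e: salience(e, date(2025, 1, 1)), reverse=True)
# The active, repeatedly reaffirmed diagnosis ranks first; the excluded
# (superseded) diagnosis scores zero and falls to the bottom.
```

A real system would of course need clinically validated notions of relevance and supersession rather than a single decay constant, but the sketch shows how historical data can be weighted instead of treated as equally meaningful.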