Our article in the Journal of the Royal Society of Medicine argues that safe and effective AI in healthcare must incorporate mechanisms that emulate human judgement, down-weighting old, inaccurate or superseded information and prioritising what is recent, clinically relevant and reaffirmed, so that AI supports, rather than disrupts, high-quality patient care. Clinicians constantly revise, reinterpret and filter past information so that only what is relevant, accurate and timely shapes present-day management decisions; medical records function as dynamic “working tools” rather than fixed archives. By contrast, many AI systems lack this capacity for selective forgetting and often treat all historical data as equally meaningful. As a result, outdated or low-confidence diagnoses can be repeatedly resurfaced, persistent labels can shape clinical expectations, and irrelevant, long-resolved events can clutter summaries and decision-support outputs. Such indiscriminate recall...
The UK government’s forthcoming review of mental health and neurodevelopmental diagnoses presents an opportunity to improve the healthcare and benefits system, provided its risks are averted. Rising rates of conditions such as ADHD, autism and anxiety disorders have raised questions about whether we are seeing a genuine increase in need, or greater awareness and possible over-diagnosis. A thoughtful, evidence-based review could bring much-needed clarity; mishandled, it could deepen inequalities and undermine support for those who need it most. Done well, the review could improve diagnostic quality and reduce the postcode lottery that too often defines access to assessment and treatment. Clearer clinical standards and properly funded services would allow professionals to make more accurate diagnoses, shorten long waiting lists and better match interventions to individuals’ needs. This is an outcome everyone should welcome. But the review must not become a vehicle for re...