Highlights
- AI-rewritten radiology reports were found to be almost twice as easy to understand as the original versions.
- Study reviewed 38 pieces of research covering more than 12,000 radiology reports simplified using AI.
- Around one per cent of AI-simplified reports contained errors, highlighting the need for clinical oversight.
Researchers found that patients understood X-ray, CT and MRI scan reports almost twice as easily when the reports were rewritten using advanced AI systems such as ChatGPT.
The reading level dropped from "university level" to one more aligned with the comprehension of an 11- to 13-year-old.
The study reviewed 38 pieces of research published between 2022 and 2025, covering more than 12,000 radiology reports simplified using AI.
These rewritten reports were assessed by patients, members of the public and clinicians to evaluate both patient understanding and clinical accuracy.
NHS could benefit
Lead author Dr Samer Alabed, a senior clinical research fellow at the University of Sheffield, said: "The fundamental issue with these reports is they're not written with patients in mind. They are often filled with technical jargon and abbreviations that can easily be misunderstood, leading to unnecessary anxiety, false reassurance and confusion."
He added that patients with lower health literacy or English as a second language were "particularly disadvantaged," with clinicians frequently spending valuable appointment time explaining terminology instead of focusing on care and treatment.
The findings suggest AI-assisted explanations could become standard companions to medical reports, improving transparency across healthcare systems including the NHS, where patient access to radiology reports has expanded rapidly through initiatives such as the NHS App.
Accuracy and oversight
While doctors reviewing AI-simplified reports found the vast majority accurate and complete, approximately one per cent contained errors, including incorrect diagnoses, highlighting the need for careful oversight.
Significantly, none of the 38 studies reviewed were conducted in UK or NHS settings, a gap the research team is now seeking to address.
Dr Alabed said the most important priority was "real-world testing in NHS clinical workflows to properly assess safety, efficiency, and patient outcomes."
He stressed the goal was not to replace clinicians but to "support clearer, kinder, and more equitable communication in healthcare" through human-oversight models in which clinicians review AI-generated explanations before they are shared with patients.