AI Can Simplify Reports for Patients
Large language models can produce radiology report versions that are significantly more understandable for patients than the original physician-written reports — but critical errors in the open-source models raise a safety concern. A study published in European Radiology tested three LLMs on simplifying 60 reports spanning X-ray, CT, MRI, and ultrasound.

The demand is real: patients increasingly want direct access to their imaging reports, but the technical language makes them nearly impossible for laypeople to understand. And asking radiologists to draft a second, plain-language version of every report is impractical given current workforce shortages.
What the Study Found
German researchers tested ChatGPT-4o alongside two open-source LLMs (Llama-3-70B and Mixtral-8x22B) deployed on-premises within their hospitals. The models were instructed to generate summaries at an eighth-grade reading level while preserving key clinical information.
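The article does not reproduce the researchers' exact prompt, but the instruction it describes — simplify to an eighth-grade reading level while preserving key clinical information — maps naturally onto a chat-style prompt. A hypothetical sketch (the wording and message structure here are assumptions, not the study's actual prompt):

```python
def build_simplification_messages(report_text: str) -> list[dict]:
    """Build a chat-style prompt asking an LLM to simplify a radiology report.

    This mirrors the kind of instruction the study describes; the exact
    wording used by the researchers is not given in the article.
    """
    system = (
        "You are a medical communication assistant. Rewrite the radiology "
        "report below at an eighth-grade reading level. Preserve all key "
        "clinical findings; do not add, omit, or reinterpret any diagnosis."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": report_text},
    ]

# Example: the resulting message list can be passed to any chat-completion API,
# whether a cloud model or an on-premises deployment.
messages = build_simplification_messages("No acute cardiopulmonary abnormality.")
```

Keeping the prompt construction separate from the model call is what lets the same instruction be tested against cloud and on-premises models, as the study did.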
Key findings included:
- Original reports scored just 17 on the Flesch Reading Ease scale, versus 44-46 for AI-generated versions (higher scores mean easier text)
- Understandability jumped from 1.5 to 4.1-4.4 on a five-point scale
- The two open-source LLMs showed critical error rates of 8.3% to 10%, while ChatGPT-4o had zero critical errors
- Reading time increased considerably: 15 seconds for originals versus 64-73 seconds for simplified versions
On-Premises versus Cloud: A Real Dilemma
The critical error issue with open-source models is particularly concerning. Many institutions prefer locally deployed LLMs for patient privacy, avoiding sending clinical data to external servers. However, the tested local models demonstrated error rates that could result in patient harm — a trade-off that requires careful evaluation.
For professionals working with DICOM and PACS integration in clinical practice, report simplification would sit as an additional processing layer within existing workflows. The key lies in ensuring adequate clinical oversight wherever AI-generated text reaches patients.
Looking Ahead
The study points to a future where tasks like report simplification could be delegated to generative AI algorithms — with the caveat that significant work remains to ensure patient safety and privacy before this becomes routine clinical practice.
Source: The Imaging Wire

