{"id":15519,"date":"2026-03-30T05:21:49","date_gmt":"2026-03-30T08:21:49","guid":{"rendered":"https:\/\/rtmedical.com.br\/tmp-en-1774858907084\/"},"modified":"2026-03-30T05:22:19","modified_gmt":"2026-03-30T08:22:19","slug":"deepfake-xrays-fool-radiologists-ai","status":"publish","type":"post","link":"https:\/\/rtmedical.com.br\/en\/deepfake-xrays-fool-radiologists-ai\/","title":{"rendered":"Deepfake X-Rays Fool Radiologists and AI, Study Finds"},"content":{"rendered":"<h2>Radiologists and AI Both Struggle to Spot Synthetic X-Rays<\/h2>\n<p>A peer-reviewed study published in <em>Radiology<\/em>, led by investigators at the Icahn School of Medicine at Mount Sinai in New York, reports that both radiologists and advanced artificial intelligence models struggle to reliably distinguish between authentic and AI-generated X-ray images. The findings raise serious concerns about clinical integrity and cybersecurity in diagnostic imaging environments.<\/p>\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" class=\"alignright lazyload\" data-src=\"https:\/\/rtmedical.com.br\/wp-content\/uploads\/2026\/03\/deepfake-raio-x-radiologia.jpg\" alt=\"Deepfake X-rays challenge radiologists and AI detection\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 700px; --smush-placeholder-aspect-ratio: 700\/414;\"><figcaption>Study published in Radiology evaluates detection of synthetic X-ray images<\/figcaption><\/figure>\n<p>The research evaluated 17 radiologists from 12 centers across six countries who reviewed 264 images, half of which were synthetic. 
The dataset included images generated by ChatGPT-based systems as well as RoentGen, a diffusion model developed by Stanford Medicine.<\/p>\n<h2>Only 41% of Radiologists Spotted Fakes Without Warning<\/h2>\n<p>When radiologists were <strong>not informed<\/strong> that synthetic images were included, only 41% flagged the synthetic images. After disclosure, their average accuracy rose to 75%, with individual performance ranging from 58% to 92%. Notably, experience level did not correlate with detection accuracy, although musculoskeletal subspecialists performed better than other groups.<\/p>\n<p>This result is concerning because it suggests that in routine clinical practice \u2014 where there is no expectation that images might be fabricated \u2014 detection rates would be extremely low. Most radiologists simply would not expect to encounter a synthetic image in their PACS.<\/p>\n<h2>AI Models Also Failed at Detection<\/h2>\n<p>The multimodal large language models evaluated \u2014 GPT-4o, GPT-5, Gemini 2.5 Pro, and Llama 4 Maverick \u2014 achieved detection rates between 57% and 85%, with variability comparable to human radiologists. Most alarmingly, <strong>even the model used to generate some of the images was unable to consistently identify them<\/strong>. This indicates that generation technology has already outpaced the detection capabilities of the generators themselves.<\/p>\n<p>This connects directly to earlier findings about <a href=\"https:\/\/rtmedical.com.br\/ia-detecta-laudos-radiologia-sinteticos\/\">AI detecting AI-generated radiology reports<\/a> \u2014 if text-based reports already present authenticity challenges, diagnostic images represent an even greater risk.<\/p>\n<h2>Clinical and Legal Risks<\/h2>\n<p>Lead author Dr. Mickael Tordjman, a postdoctoral fellow at Mount Sinai, warned of potential misuse. 
&#8220;This creates a high-stakes vulnerability for fraudulent litigation if, for example, a fabricated fracture could be indistinguishable from a real one,&#8221; he said. He also cautioned about cybersecurity risks if manipulated images were introduced into clinical systems.<\/p>\n<p>Risk scenarios include:<\/p>\n<ul>\n<li><strong>Insurance fraud:<\/strong> synthetic images of nonexistent injuries for fraudulent reimbursement<\/li>\n<li><strong>Fraudulent medical litigation:<\/strong> fabrication of radiological evidence of medical errors<\/li>\n<li><strong>Clinical sabotage:<\/strong> insertion of fake images into medical records to compromise diagnoses<\/li>\n<li><strong>Clinical trial manipulation:<\/strong> contamination of research datasets with synthetic data<\/li>\n<\/ul>\n<h2>Visual Patterns and Proposed Safeguards<\/h2>\n<p>The study identified recurring visual patterns in synthetic images: overly smooth bones, symmetrical lung fields, and unusually uniform vascular structures. While useful as clues, these artifacts are likely to diminish as generative models evolve.<\/p>\n<p>The authors recommend technical safeguards such as <strong>embedded watermarks<\/strong> and <strong>cryptographic signatures<\/strong> at the point of image capture \u2014 essentially ensuring that each image carries a proof of origin that cannot be forged. They also call for expanded training datasets and specialized detection tools as generative models advance toward more complex modalities like CT and MR.<\/p>\n<h2>Implications for PACS and Imaging Workflows<\/h2>\n<p>For imaging system administrators and medical informatics specialists, the study reinforces the importance of image authentication protocols integrated into PACS. Mechanisms such as DICOM Digital Signatures and blockchain-based image traceability, still sparsely adopted, gain relevance in light of the concrete threat of radiological deepfakes. 
The <a href=\"https:\/\/rtmedical.com.br\/melhores-recursos-radiologia-2026\/\">radiology resource landscape for 2026<\/a> should incorporate authenticity verification tools as an essential component.<\/p>\n<h2>Outlook: A Digital Arms Race<\/h2>\n<p>The study suggests we are at the beginning of an arms race between generation and detection of synthetic medical images. As diffusion models and multimodal LLMs become more sophisticated, the ability to create radiological images indistinguishable from real ones will only increase. The response will need to combine technical solutions (watermarks, cryptography), regulatory measures (mandatory authentication standards), and education (training radiologists to recognize synthetic artifacts).<\/p>\n<p><strong>Source:<\/strong> <a href=\"https:\/\/www.dotmed.com\/news\/story\/66184\" target=\"_blank\" rel=\"noopener\">DOTmed Healthcare Business News<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Radiology study shows radiologists and AI models struggle to distinguish real X-rays from AI-generated synthetic images.<\/p>\n","protected":false},"author":1,"featured_media":15515,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"om_disable_all_campaigns":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"ngg_post_thumbnail":0,"fifu_image_url":"","fifu_image_alt":"","footnotes":""},"categories":[102,100],"tags":[],"class_list":{"0":"post-15519","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-ai","8":"category-radiology"},"aioseo_notices":[],"rt_seo":{"title":"","description":"Radiology study reveals radiologists and AI struggle to distinguish real X-rays from deepfakes. 
See the risks and proposed safeguards.","canonical":"","og_image":"","robots":"index,follow","schema_type":"Article","include_in_llms":true,"llms_label":"Deepfake X-Rays Radiology Study","llms_summary":"Study in Radiology shows radiologists and AI models struggle to distinguish authentic X-rays from AI-generated synthetic images, raising clinical and cybersecurity concerns.","faq_items":[],"video":[],"gtin":"","mpn":"","brand":"","aggregate_rating":[]},"_links":{"self":[{"href":"https:\/\/rtmedical.com.br\/en\/wp-json\/wp\/v2\/posts\/15519\/"}],"collection":[{"href":"https:\/\/rtmedical.com.br\/en\/wp-json\/wp\/v2\/posts\/"}],"about":[{"href":"https:\/\/rtmedical.com.br\/en\/wp-json\/wp\/v2\/types\/post\/"}],"author":[{"embeddable":true,"href":"https:\/\/rtmedical.com.br\/en\/wp-json\/wp\/v2\/users\/1\/"}],"replies":[{"embeddable":true,"href":"https:\/\/rtmedical.com.br\/en\/wp-json\/wp\/v2\/comments\/?post=15519"}],"version-history":[{"count":1,"href":"https:\/\/rtmedical.com.br\/en\/wp-json\/wp\/v2\/posts\/15519\/revisions\/"}],"predecessor-version":[{"id":15521,"href":"https:\/\/rtmedical.com.br\/en\/wp-json\/wp\/v2\/posts\/15519\/revisions\/15521\/"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/rtmedical.com.br\/en\/wp-json\/wp\/v2\/media\/15515\/"}],"wp:attachment":[{"href":"https:\/\/rtmedical.com.br\/en\/wp-json\/wp\/v2\/media\/?parent=15519"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rtmedical.com.br\/en\/wp-json\/wp\/v2\/categories\/?post=15519"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rtmedical.com.br\/en\/wp-json\/wp\/v2\/tags\/?post=15519"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}