AI vs. Radiologists: Can AI Match Human Expertise in Veterinary Imaging?
Frontiers in Veterinary Science 2025
Yero S. Ndiaye, Peter Cramton, Chavdar Chernev, Axel Ockenfels, Tobias Schwarz
Background
Artificial intelligence (AI) is increasingly used in veterinary radiology to supplement human expertise and improve diagnostic efficiency. This study compared the performance of commercially available AI radiology software (SignalRAY®) with that of board-certified veterinary radiologists in interpreting canine and feline radiographs. The researchers sought to determine whether AI could match the diagnostic accuracy of human radiologists and to identify the cases in which AI performs best.
Methods
Fifty anonymized radiographic studies of dogs and cats were evaluated by 11 board-certified veterinary radiologists and processed with the AI software. The AI system classified findings as normal or abnormal, and its results were compared with the human radiologists' reports. Diagnostic performance metrics, including accuracy, sensitivity, and specificity, were calculated, and cases were categorized by level of ambiguity to assess AI performance across different diagnostic scenarios.
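
For readers unfamiliar with these metrics, they follow directly from a two-by-two confusion matrix of normal versus abnormal calls. The sketch below is illustrative only, using hypothetical counts; it is not the paper's code or data.

```python
def diagnostic_metrics(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Standard diagnostic performance metrics from a 2x2 confusion matrix.

    tp: abnormal studies correctly flagged abnormal (true positives)
    fn: abnormal studies missed (false negatives)
    tn: normal studies correctly called normal (true negatives)
    fp: normal studies wrongly flagged abnormal (false positives)
    """
    total = tp + fn + tn + fp
    return {
        # Proportion of all studies classified correctly.
        "accuracy": (tp + tn) / total,
        # Ability to detect abnormalities (where the study found AI weaker).
        "sensitivity": tp / (tp + fn),
        # Ability to confirm normal studies (where the study found AI stronger).
        "specificity": tn / (tn + fp),
    }

# Hypothetical counts for illustration only; not the study's data.
print(diagnostic_metrics(tp=20, fn=5, tn=22, fp=3))
```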
Results
AI performed comparably to the best radiologist in overall accuracy and outperformed the median radiologist. It showed higher specificity than the human radiologists, meaning it was better at confirming normal findings, but lower sensitivity, making it less effective at detecting abnormalities. In high-ambiguity cases, AI maintained high overall accuracy but struggled with abnormal findings; in low-ambiguity cases, it performed exceptionally well.
Limitations
The study included a relatively small number of radiographic cases and did not assess AI's ability to provide differential diagnoses. The consensus of human observers was used as the "ground truth," which may introduce bias, especially in cases with high inter-observer variability. Additionally, the study evaluated only a single AI system, limiting generalizability to other AI tools.
Conclusions
AI software can reliably assist in veterinary radiology and performs best in confirming normal findings and low-ambiguity cases. However, it is less effective at detecting abnormalities and cannot replace human expertise in complex cases. The findings suggest that AI is best used as a complementary tool rather than a replacement for veterinary radiologists.

Example of a case with high-ambiguity radiographic findings. In this abdominal radiographic study [(A) right lateral, (B) left lateral radiograph] of a 7-year-old Golden Retriever with stage 5 lymphoma presenting for vomiting, a wide range of radiographic findings was identified by human observers, including a perisplenic soft tissue opacity or mass (observers 1, 2, and 5, blue arrows), small intestinal granular material (observers 1, 3, and 5, arrowhead), and colonic gas distension (observers 1, 2, and 5, red arrow), whereas observer 4 and the AI observer did not report any of these findings.
Disclaimer: The summary in this email was generated by an AI large language model, so errors may occur. Reading the article is the best way to understand the scholarly work. The figure presented here remains the property of the publisher or author and is subject to the applicable copyright agreement; it is reproduced here as an educational work. If you have any questions or concerns about the work presented here, reply to this email.