- Veterinary View Box
How do AI recommendations affect radiologist interpretation?
Nature Scientific Communications - June 2023
Mohammad H. Rezazade Mehrizi, Ferdinand Mol, Marcel Peter, Erik Ranschaert, Daniel Pinto Dos Santos, Ramin Shahidi, Mansoor Fatehi & Thomas Dratsch
Background: The authors investigate how AI suggestions affect the diagnostic decisions of radiologists in mammography examinations. They examine the role of explainability inputs and attitudinal priming as potential moderators of this effect.
Methods: The authors conduct two quasi-experimental studies with 92 radiologists who are randomly assigned to different groups. In study 1, the groups differ in the type and amount of explainability inputs they receive along with AI suggestions. In study 2, the groups differ in the type of attitudinal priming they receive before the task. The authors measure the accuracy, error type, and decision process of the radiologists on 15 mammogram cases.
Results: The authors find that AI suggestions have a strong and consistent influence on radiologists’ decisions, regardless of the explainability inputs and attitudinal priming. Radiologists tend to follow both correct and incorrect AI suggestions, resulting in lower accuracy and more over-diagnosis than under-diagnosis errors. The authors identify distinct pathways that lead to correct, over-, and under-diagnosis decisions, depending on the type of AI suggestion, the consultation of explainability inputs, and the time spent on the task.
Limitations: The authors acknowledge several limitations of their study, such as the limited generalizability of the findings due to the sample size, the selection of cases, the setting of the experiment, and the design of the AI suggestions and explainability inputs. They also note that their study does not capture the long-term effects of AI suggestions on radiologists’ learning and trust.
Conclusions: The authors conclude that AI suggestions can have a significant impact on radiologists’ decisions, especially when they are incorrect. They suggest that explainability inputs and attitudinal priming are not sufficient to overcome this influence, and that more research is needed to understand how to design and deploy AI tools that can support radiologists in a more effective and ethical way.
Figure: Difference between radiologists’ correct diagnoses when receiving correct versus incorrect AI suggestions (breast-side level).
Some possible takeaways for veterinary radiologists from this paper are:
AI suggestions can strongly influence diagnostic decisions. The paper shows that radiologists tend to follow both correct and incorrect AI suggestions, regardless of the explainability inputs and attitudinal priming. This suggests that veterinary radiologists should be aware of the potential biases and errors that AI systems may introduce in their diagnoses, and not rely on them blindly.
Explainability inputs and attitudinal priming have limited effects on overcoming AI influence. The paper shows that providing different types of explainability inputs (such as heatmaps and case attributes) or priming different attitudes toward AI (positive, negative, or ambivalent) does not significantly change radiologists’ diagnostic performance. This suggests that veterinary radiologists should not expect such interventions alone to be sufficient for improving their analytical engagement and critical examination of AI suggestions.
Analytical pathways and decision types vary across cases and contexts. The paper shows that there are various pathways that lead radiologists to make correct, over-, or under-diagnosis decisions, depending on the type and size of AI suggestions, the consultation of AI suggestions and explainability inputs, and the time spent on the tasks. This suggests that veterinary radiologists should be flexible and adaptive in their interactions with AI systems, and consider the specific characteristics and challenges of each case and context.
Disclaimer: The summary in this email was generated by an AI large language model, so errors may occur. Reading the article itself is the best way to understand the scholarly work. The figure presented here remains the property of the publisher or author, is subject to the applicable copyright agreement, and is reproduced here as an educational work. If you have any questions or concerns about the work presented here, reply to this email.