Poster presented at the European Society of Radiology Congress 2021 | Poster number C-13640
C. Tang, J. C. Y. Seah, Q. Buchlak, C. Jones; Sydney/AU
To raise awareness of the importance of usable AI design, to provide examples of model interpretability methods, and to summarise clinician reactions to methods of communicating AI model interpretability in a radiological tool.
In the past decade, the number of AI-enabled tools, especially deep learning solutions, has exploded onto the radiological scene with the promise of revolutionising healthcare. However, these data-driven models are often treated as numerical exercises and black boxes, offering little insight into the reasons for their behaviour. Trust in novel technologies is often limited by a lack of understanding of the decision-making processes behind them.
Design Cycle

“It’s just aggravating to have to move and shuffle all these windows… shuffle between the list and your [Brand Name] dictation software… [or] Google Chrome or Internet Explorer, to search for something on there. Everything’s just opening on top of each other, which is aggravating.” – UX interview with Interventional Radiologist, USA.

The design of the entire user experience of our AI tool has involved radiologists and other clinicians at every step.
The inclusion of interpretability techniques was well received across multiple rounds of user interviews, reflecting a demand from the broader radiological community to demystify the black box of AI. Future AI work should involve radiologists at every step of the design process to address workflow and UI concerns, especially as regulatory authorities move towards guidelines aimed at ensuring a safer and more interpretable AI future.
All EPOS posters from ECR 2014 onwards are published under an open license. Submitters agreed that they grant the readers of EPOS a worldwide, royalty-free, non-exclusive license to share (copy, distribute and transmit), and to remix (adapt) their poster, on the condition that:
a. their work is attributed, but not in any way that suggests that the author(s) or the work are endorsed,
b. the use is non-commercial, and
c. the work is shared under the same conditions as outlined here.