AI Safety in Medical Imaging

AI for medical image analysis can improve patient outcomes by facilitating diagnosis and treatment, as well as catalysing medical research. This requires the safe and trusted integration of AI-based tools into research and clinical workflows. To this end, we develop AI-based tools that are reliable (avoid mistakes), transparent (explain their decisions to the user), and fair (benefit everyone).

Our research includes improving the generalisation of AI-based methods to heterogeneous clinical data (e.g. arising from varying demographics or imaging protocols), developing safeguards that mitigate AI mistakes (e.g. uncertainty estimation, outlier detection), developing explainable AI, and alleviating biases (e.g. via causal models and fair ML).
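
As a purely illustrative sketch of one such safeguard, uncertainty estimation, the example below uses Monte Carlo dropout in PyTorch to attach a per-image confidence measure to a classifier's predictions. The model, layer sizes, and entropy-based flagging are hypothetical placeholders for exposition, not the group's actual methods.

```python
# Illustrative sketch: Monte Carlo dropout as a simple per-image
# uncertainty estimate for a (toy) medical image classifier.
import torch
import torch.nn as nn


class SmallClassifier(nn.Module):
    """Toy CNN standing in for a medical image classifier (hypothetical)."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Dropout2d(p=0.2),        # kept active at test time for MC dropout
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))


@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Run several stochastic forward passes; return mean class probabilities
    and predictive entropy, which can flag low-confidence cases for review."""
    model.train()  # enables dropout; batch-norm layers (none here) would need care
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )                                   # shape: (n_samples, batch, classes)
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy


if __name__ == "__main__":
    model = SmallClassifier()
    images = torch.randn(4, 1, 64, 64)  # placeholder batch of grey-scale images
    mean_probs, entropy = mc_dropout_predict(model, images)
    print(mean_probs, entropy)          # high entropy => refer the case to a clinician
```

In practice, such an uncertainty score could gate deployment: predictions above an entropy threshold are deferred to a human expert rather than acted on automatically.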


| Konstantinos Kamnitsas
| Alison Noble
| Vicente Grau
| Abhirup Banerjee