2024-25 Project (Howe & Alonso & Benjamin)
Explainable AI for brain tumour diagnosis
Prof Franklyn Howe at SGUL
Prof Eduardo Alonso at City University
Dr Philip Benjamin at SGUL
This is an exciting opportunity to join a translational MRI research group at St George’s that is working with an Artificial Intelligence and Computer Science group at City University. The combined aim of this project is to develop biomarkers that aid the diagnosis and prognosis of brain tumours by applying Explainable AI to MRI data.
Glial tumours are a major challenge to diagnose and treat due to their genetic variability. These tumours have heterogeneous MRI characteristics with complex, infiltrative growth patterns, and AI methods are increasingly used to analyse imaging data to aid diagnosis and predict patient outcome. For an AI system to become clinically acceptable for providing diagnostic support it cannot be a “black box”: there needs to be an understanding of its internal processes to enable appropriate use and assessment of its limitations. It must be Explainable AI.
This project will assess and develop AI systems that delineate and classify glial brain tumours and predict patient survival. Reverse-engineering approaches will be used to investigate what regions and characteristics of the clinical MR images are being used to provide tumour detection and classification. An independent test-set of multiparametric MRI (including diffusion, perfusion and 1H spectroscopy) will be used to provide an interpretation of the physiological and biological properties of the tumour associated with the imaging features and regions that the AI system is focussing on.
You will join a multidisciplinary research team of computer scientists, MRI physicists and mathematicians who collaborate with neurosurgeons and neuroradiologists.
You will have a background in computer science, physics or mathematics, with strong programming skills (preferably in Python) and experience with deep learning models. You will have a desire to develop new technology into practical clinical tools that aid the neuroradiology and neuro-oncology teams managing the care and treatment of brain tumour patients.
Project Key Words
AI, MRI, Cancer, Neuroradiology
MRC LID Themes
- Global Health = Yes
- Health Data Science = Yes
- Infectious Disease = No
- Translational and Implementation Research = Yes
MRC Core Skills
- Quantitative skills = Yes
- Interdisciplinary skills = Yes
- Whole organism physiology = No
Skills we expect a student to develop/acquire whilst pursuing this project
Develop a strong theoretical understanding of, and practical skills in implementing, AI analysis methods, including pattern recognition and image processing methods and their application to neuroimaging data.
Develop a basic understanding of brain tumour biology, radiological and pathological diagnostic methods, treatments and patient outcomes.
Develop skills in statistical methods for evaluating novel MRI data classification algorithms, and learn how to develop image-processing pipelines for MRI data.
Learn how to interact effectively within a multi-disciplinary team of clinical, biomedical and computer science experts.
Understand and comply with the ethical and information governance regulations governing patient data.
Develop skills to present complex image analysis methodology effectively to general scientific and clinical audiences, to present results, and to prepare papers for expert peer review.
Which route/s is this project available for?
- 1+4 = No
- +4 = Yes
Is this project available for full-time study? Yes
Is this project available for part-time study? No
Particular prior educational requirements for a student undertaking this project
- SGUL’s standard institutional eligibility criteria for doctoral study.
- Minimum 2:1 honours BSc in a scientific discipline with strong computing as well as good physics and mathematics components: computer science, physics, mathematics, engineering, etc. Ideally with an MSc or other research/industry experience incorporating AI, medical imaging or machine learning.
Other useful information
- Potential CASE conversion? = No
PROJECT IN MORE DETAIL
Scientific description of this research project
Brain tumours cause significant mortality and morbidity. Radiological interpretation of routine MRI scans is mainly aimed at assessing tumour type and grade, characteristics that are important for patient management but limited in terms of predicting individual patient outcome. Brain tumour biopsies are now analysed for genetic subtypes (e.g. IDH1 for glial brain tumours), which improves estimates of overall patient survival, but biopsy is invasive and samples only a limited amount of tissue. AI methods are now being widely investigated for assessing tumour type and patient outcome.
For transfer into routine clinical practice and their acceptability by radiologists and patients, it is important that the AI/ML methods are interpretable (Eder et al 2022, Natekar et al 2020). With interpretability comes a better understanding of which MRI features the AI uses for diagnosis, which in turn may enable us to further develop MRI techniques that are optimised for AI and so more rapid and reliable than the current approach of using standard clinical multiparametric MRI (mpMRI). Additionally, we may gain insight into the physiological characteristics of brain tumours and how they relate to outcome (Mazurowski et al 2014).
Key aims are:
a) assess published AI methods that delineate and classify glial brain tumours and predict patient survival, and further develop these;
b) apply reverse-engineering approaches to investigate what regions and characteristics of the clinical MR images are being used to provide tumour detection and classification;
c) apply these results to an independent test-set of mpMRI (including diffusion, perfusion and 1H spectroscopy) to provide interpretation of the physiological and biological properties of the tumour associated with the imaging features and regions that the AI system is focussing on.
We will use methodology such as the Grad-CAM algorithm to produce heatmaps of the most relevant image regions being used (Esmaeili et al 2021), apply SHapley Additive exPlanations (SHAP) (Eder et al 2022), and derive visual interpretations of the filtering processes inherent to neural networks (Natekar et al 2020). Our aim will be to determine which textures and contrasts are being used to provide tumour segmentation, and how much each of the four imaging modalities in standard clinical mpMRI contributes to this ability for low- and high-grade gliomas. We will also assess which aspects of the tumour (texture, edge features, dimensions, shape, etc.) contribute to the determination of survival, outcome and genetics.
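To illustrate the core of the Grad-CAM approach mentioned above: given a convolutional layer's feature maps and the gradients of a class score with respect to those maps, the heatmap is the ReLU of a gradient-weighted sum of the maps. The sketch below is a minimal NumPy illustration with toy arrays standing in for a network's activations and gradients (the array shapes and values are illustrative assumptions, not project data):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Compute a Grad-CAM heatmap from a convolutional layer's
    feature maps (K, H, W) and the gradients of the class score
    with respect to those maps (same shape)."""
    # alpha_k: global-average-pool the gradients over each channel
    weights = gradients.mean(axis=(1, 2))                          # shape (K,)
    # gradient-weighted combination of feature maps, then ReLU
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # normalise to [0, 1] for display as a heatmap overlay
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# toy example: 4 feature maps of size 8x8 with random activations/gradients
rng = np.random.default_rng(0)
fmaps = rng.random((4, 8, 8))
grads = rng.standard_normal((4, 8, 8))
heatmap = grad_cam(fmaps, grads)
print(heatmap.shape)  # (8, 8)
```

In practice the feature maps and gradients would come from hooks on a trained segmentation or classification network, and the heatmap would be upsampled to the MRI slice resolution for overlay.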
We will use the publicly available BraTS brain tumour MRI data (https://www.med.upenn.edu/cbica/brats/) in relation to tumour segmentation (BraTS challenge data 2020, 2021, 2022), patient survival and progression (BraTS challenge data 2021) and genetic subtype (BraTS data 2021). In addition, we will apply the results to data from St George’s multimodal MRI brain tumour projects that include diffusion and perfusion MRI and MR spectroscopy: LGG, a longitudinal study with 54 datasets; TIMBO, covering multiple tumour types with 92 datasets; and DiPROG, an ongoing study of glial tumours currently with 42 datasets and a target of 170 by mid-2025.
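As a sketch of a typical preprocessing step for the BraTS-style multiparametric data described above: the four standard structural channels are stacked into one array and each modality is z-scored over its non-zero (brain) voxels before being fed to a network. The modality names follow common BraTS file conventions; the small volume shapes here are illustrative stand-ins for real NIfTI volumes (actual BraTS volumes are 240x240x155):

```python
import numpy as np

MODALITIES = ["t1", "t1ce", "t2", "flair"]  # the four standard BraTS channels

def stack_and_normalise(volumes):
    """Stack per-modality 3-D volumes into a (4, D, H, W) array,
    z-scoring each modality over its non-zero (brain) voxels --
    a common preprocessing step for BraTS-style data."""
    channels = []
    for name in MODALITIES:
        vol = volumes[name].astype(np.float32)
        brain = vol[vol > 0]                     # background voxels are zero
        if brain.size:                           # avoid dividing by zero on empty masks
            vol = (vol - brain.mean()) / (brain.std() + 1e-8)
        channels.append(vol)
    return np.stack(channels, axis=0)

# toy volumes standing in for loaded NIfTI data
rng = np.random.default_rng(1)
vols = {m: rng.random((16, 16, 8)) for m in MODALITIES}
x = stack_and_normalise(vols)
print(x.shape)  # (4, 16, 16, 8)
```

Per-modality normalisation matters here because the four mpMRI contrasts have unrelated intensity scales; without it, one modality can dominate the network's input statistics.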
RISKS and MITIGATIONS.
Explainable AI is still in the early stages of development, particularly for brain tumour studies. Full “explicability” may not be achieved, but novel data on neural network functionality will be revealed.
References
Eder et al 2022. Interpretable Machine Learning with Brain Image and Survival Data. BioMedInformatics 2022, 2, 492–510. https://doi.org/10.3390/biomedinformatics2030031
Esmaeili et al 2021. Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization. J. Pers. Med. 2021, 11, 1213. https://doi.org/10.3390/jpm11111213
Mazurowski et al 2014. Computer-extracted MR imaging features are associated with survival in glioblastoma patients. J Neurooncol 2014, 120, 483–488.
Natekar et al 2020. Demystifying Brain Tumor Segmentation Networks: Interpretability and Uncertainty Analysis. Front. Comput. Neurosci. 2020, 14:6. https://doi.org/10.3389/fncom.2020.00006
(Relevant preprints and/or open access articles)
Additional information from the supervisory team
- The supervisory team has provided a recording for prospective applicants who are interested in their project. This recording should be watched before any discussions begin with the supervisory team.
- https://doi.org/10.1016/j.nicl.2018.101648
- https://doi.org/10.1016/j.cmpb.2018.01.003