ENABLING TRACTABLE UNCERTAINTY QUANTIFICATION FOR HIGH-DIMENSIONAL PREDICTIVE AI SYSTEMS IN COMPUTATIONAL MEDICINE

Artificial intelligence (AI) systems are powerful tools in healthcare and medicine. However, it is crucial to understand how much one can trust AI analyses and predictions, especially when adopting them for decision-making where inappropriate choices may result in dire consequences. In this project, we begin by developing the computational and algorithmic foundations for performing uncertainty quantification (UQ) in machine learning (ML) models. We tackle this by creating new computational methods and leveraging high-performance computing to capture and construct uncertainty distributions for high-dimensional deep neural networks (of tens of millions of weight parameters). We focus on medical AI models used for detecting IDH (isocitrate dehydrogenase) gene mutation from MRI (magnetic resonance imaging) brain tumor images. The resulting product will be ML models that produce not a single output, but a spread of predictions that also reflects their predictive quality and uncertainty.
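A minimal sketch of what such a spread of predictions could look like in practice. The project text does not specify the UQ method, so this example simply simulates an ensemble of stochastic forward passes (e.g. from repeated sampling of network weights) and summarizes them; the function name, the number of samples, and the simulated probabilities are all illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_predict(n_members: int = 50) -> np.ndarray:
    """Stand-in for n stochastic forward passes of a DNN classifier.

    Each member returns a probability of IDH mutation for one MRI image.
    Simulated here with random draws; a real system would run the network
    n times with sampled weights, or query n separately trained models.
    """
    return np.clip(rng.normal(loc=0.80, scale=0.05, size=n_members), 0.0, 1.0)

probs = ensemble_predict()
mean_p = probs.mean()                        # point prediction
std_p = probs.std()                          # uncertainty around that estimate
lo, hi = np.percentile(probs, [2.5, 97.5])   # 95% predictive interval
print(f"p(IDH mutation) = {mean_p:.2f} +/- {std_p:.2f} "
      f"(95% interval [{lo:.2f}, {hi:.2f}])")
```

Rather than reporting only the point estimate, the model's output is the whole set of probabilities, so a clinician can see both the prediction and how much it varies.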

The DNN model takes in an MRI image and predicts the probability of IDH gene mutation. For example, while the predicted probability may be 80%, the uncertainty surrounding this estimate is typically never reported.

U-M Researchers

Arvind Rao

Xun Huan