Alfons Hoekstra, Associate Professor, University of Amsterdam
Symposium Description
An essential step in making computational approaches in biomedicine actionable is to ensure that the findings derived from them are indeed robust, and that all relevant elements of uncertainty are quantified and accounted for. This uncertainty can, for instance, originate from input data sources (e.g., noise or bias in experimental devices) and propagate as the workflow proceeds from one computational or data-analysis step to another, ultimately yielding confidence intervals on the final result.
In this symposium we are seeking contributions that study uncertainty quantification (UQ) in the context of biomedicine. This includes the study of either epistemic or aleatory uncertainty, intrusive and non-intrusive computational approaches to quantify uncertainty, specific applications of UQ in challenging biomedical settings, and tools and techniques that help incorporate UQ on an application-agnostic level.
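As a minimal, hedged illustration of the non-intrusive setting mentioned above, the sketch below propagates measurement noise through a hypothetical two-step workflow by Monte Carlo sampling and reports a confidence interval on the final result; the workflow functions, noise level and nominal value are invented for the example.

```python
# Minimal sketch of non-intrusive Monte Carlo uncertainty propagation.
# The two "workflow steps" below are hypothetical stand-ins for real
# computational/data-analysis stages; only the pattern is illustrative.
import numpy as np

rng = np.random.default_rng(0)

def step_one(measurement):
    # e.g. a preprocessing / modelling stage (invented for the example)
    return np.log1p(measurement) * 2.0

def step_two(intermediate):
    # e.g. a downstream analysis stage (invented for the example)
    return intermediate ** 1.5 + 0.1

# Aleatory uncertainty in the input data: measurement noise around a nominal value.
nominal = 3.0
samples = rng.normal(loc=nominal, scale=0.2, size=10_000)

# Non-intrusive: the workflow is run as a black box for each sample.
outputs = step_two(step_one(samples))

lo, hi = np.percentile(outputs, [2.5, 97.5])
print(f"final result: {outputs.mean():.3f}  (95% interval: {lo:.3f} to {hi:.3f})")
```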
Models of electrical activation in cardiac cells and tissue have become accepted as research tools that can be used alongside experiments to gain insights into physiological mechanisms. More recently, the prospect has emerged that these tools could be used to inform clinical decision making [1] and for in-silico drug safety assessment [2]. As a result, the behaviour of cardiac models under uncertainty in model parameters, initial conditions, and boundary conditions has become an area of active interest [3]. Full Abstract
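As a hedged sketch of the kind of parameter uncertainty referred to above (not the speaker's models or methods), the example below propagates uncertain parameters through the FitzHugh–Nagumo equations, a simple surrogate for a cardiac action-potential model, and summarises the resulting spread in a derived output; all parameter ranges are assumed.

```python
# Illustrative only: Monte Carlo propagation of parameter uncertainty through
# the FitzHugh–Nagumo equations, used here as a toy stand-in for a cardiac
# action-potential model. Parameter ranges are invented for the example.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)

def fhn(t, y, a, b, tau, I_ext):
    v, w = y
    dv = v - v**3 / 3.0 - w + I_ext
    dw = (v + a - b * w) / tau
    return [dv, dw]

def peak_voltage(a, b, tau, I_ext=0.5):
    # Solve the ODE system and return a simple derived quantity of interest.
    sol = solve_ivp(fhn, (0.0, 100.0), [0.0, 0.0], args=(a, b, tau, I_ext),
                    max_step=0.5)
    return sol.y[0].max()

# Uncertainty in three model parameters (uniform ranges, assumed).
n = 200
a_s = rng.uniform(0.6, 0.8, n)
b_s = rng.uniform(0.7, 0.9, n)
tau_s = rng.uniform(10.0, 14.0, n)

peaks = np.array([peak_voltage(a, b, tau) for a, b, tau in zip(a_s, b_s, tau_s)])
print(f"peak voltage: mean {peaks.mean():.3f}, 95% interval "
      f"[{np.percentile(peaks, 2.5):.3f}, {np.percentile(peaks, 97.5):.3f}]")
```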
11:20
Ritabrata Dutta
Pathological Test for Cardio/cerebrovascular diseases: Platelets dynamics and Approximate Bayesian computation
According to a 2015 World Health Organization (WHO) report, cardio/cerebrovascular diseases (CVD) have become one of the major health issues in our societies. However, recent studies show that the clinical tests used to detect CVD are ineffectual, as they do not consider the different stages of platelet activation or the molecular dynamics involved in platelet interactions. They are also incapable of accounting for inter-individual variability. Recently, Chopard et al. (2017) introduced a physical description of platelet deposition by integrating fundamental understanding of how platelets interact into a numerical model of platelet deposition, parameterized by 5 parameters (e.g., adhesion and aggregation rates). Our main claim is that these parameters, captured through the numerical model, are precisely the information needed for a pathological test identifying CVD, and that they are able to capture inter-individual variability. Following this claim, our contribution is twofold: we devised an inferential scheme for uncertainty quantification of these parameters using Approximate Bayesian Computation and High Performance Computing, and we tested the claim and the efficacy of our methodology through an experimental study. Full Abstract
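Purely as an illustration of the Approximate Bayesian Computation rejection-sampling pattern referred to in the abstract (not the authors' actual scheme, simulator or HPC implementation), the sketch below infers two parameters of a toy simulator; the prior ranges, summary statistics and tolerance are invented.

```python
# Hedged sketch of ABC rejection sampling. The simulator, prior ranges and
# tolerance are toy stand-ins, not the platelet-deposition model of
# Chopard et al. (2017).
import numpy as np

rng = np.random.default_rng(2)

def toy_simulator(params, n_obs=50):
    # Placeholder forward model parameterized by two rates (hypothetical).
    adhesion, aggregation = params
    return rng.gamma(shape=adhesion, scale=aggregation, size=n_obs)

def summary(data):
    # Summary statistics used to compare simulated and observed data.
    return np.array([data.mean(), data.std()])

# "Observed" data generated from known parameters, so the posterior can be sanity-checked.
true_params = (2.0, 1.5)
s_obs = summary(toy_simulator(true_params))

accepted = []
epsilon = 0.3                                   # acceptance tolerance (assumed)
for _ in range(20_000):
    theta = rng.uniform(low=[0.5, 0.5], high=[5.0, 5.0])   # draw from the prior
    s_sim = summary(toy_simulator(theta))
    if np.linalg.norm((s_sim - s_obs) / s_obs) < epsilon:  # relative summary distance
        accepted.append(theta)

posterior = np.array(accepted)
print(f"accepted {len(posterior)} / 20000 draws")
print("posterior means:", posterior.mean(axis=0))
```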
Cerebral vasospasm (CVS) is a life-threatening condition that occurs in a large proportion of those affected by subarachnoid haemorrhage and stroke [1]. CVS manifests itself as the progressive narrowing of intracranial arteries. It is usually diagnosed using Doppler ultrasound, which quantifies blood velocity changes in the affected vessels, but has low sensitivity when CVS affects the peripheral vasculature. The aim of this study was to identify alternative biomarkers that could be used to diagnose CVS. We used a verified and validated 1D modelling approach [2] to describe the properties of pulse waves that propagate through the cardiovascular system (Figure 1), which allowed the effects of different types of vasospasm on waveforms to be characterised at several locations within a simulated cerebral network. A sensitivity analysis, empowered by the use of a Gaussian process (GP) statistical emulator, was then used to identify waveform features that may have strong correlations with vasospasm. A GP emulator can treat inputs and outputs explicitly as uncertain quantities, and so, by determining the proportion of output variance that could be accounted for by each uncertain input, we were able to calculate variance-based sensitivity indices for each input and output of the model. This was useful for identifying those waveform features that are sensitive to vasospasm (changes in vessel radii) but less sensitive to physiological variations in the other model parameters. Using this approach, we showed that the minimum rate of velocity change can be much more effective than blood velocity for stratifying typical manifestations of vasospasm and its progression [3]. In the wider context, the present study describes the use of sensitivity indices, combined with modelling, as a way to identify effective biomarkers, which is a novel approach that has the potential to result in clinically useful tools.
The same approach has been further developed and applied to the simulation of endovascular removal of blood clots (thrombectomy), as a potential clinical tool for investigating typical scenarios in the treatment of ischaemic stroke. Full Abstract
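As a hedged sketch of the general workflow described in this abstract, one can train a Gaussian-process emulator on a modest set of simulator runs and then estimate first-order variance-based (Sobol') sensitivity indices by Monte Carlo on the cheap emulator; the three-input test function below stands in for the 1D pulse-wave model, and the input names are invented.

```python
# Hedged sketch: a Gaussian-process emulator used for variance-based
# sensitivity analysis (first-order Sobol' indices). The 3-input test
# function stands in for the 1D pulse-wave model; only the workflow
# (train emulator -> Saltelli-style Monte Carlo on the emulator) is the point.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(3)

def expensive_model(x):
    # Stand-in "simulator"; columns named (radius, compliance, resistance) for illustration.
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

# 1) Train the emulator on a modest design of simulator runs.
X_train = rng.uniform(0.0, 1.0, size=(80, 3))
y_train = expensive_model(X_train)
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.3] * 3),
                              normalize_y=True).fit(X_train, y_train)

# 2) Saltelli-style estimate of first-order Sobol' indices, using the cheap emulator.
N = 4096
A = rng.uniform(0.0, 1.0, size=(N, 3))
B = rng.uniform(0.0, 1.0, size=(N, 3))
fA, fB = gp.predict(A), gp.predict(B)
var_total = np.var(np.concatenate([fA, fB]))

for i, name in enumerate(["radius", "compliance", "resistance"]):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                           # A with column i taken from B
    fABi = gp.predict(ABi)
    S_i = np.mean(fB * (fABi - fA)) / var_total   # first-order index estimator
    print(f"S_{name}: {S_i:.2f}")
```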
11:50
Peter Challenor
(Invited Speaker)
Safety, Reproducibility, Performance: Accelerating cancer drug discovery with ML and HPC technologies
Numerical models have reached the stage where our simulations are believed to be fairly accurate representations of the real world, and recently the term ‘digital twin’ has been coined to describe such simulators. However, it should be remembered that all simulations are models of the real world, not the real world itself. The underlying equations of our simulators are the result of good scientific understanding, which may itself be partial. In addition, we solve numerical approximations to these equations, not the equations in the continuum, and parameterise many processes because of discretisation or lack of knowledge. The difference between the simulator and reality is often known as the model discrepancy. There are also usually unknown parameters (or other inputs) in the simulators, which we need to estimate either from external (expert) knowledge or by fitting the simulators to data using some statistical methodology. We will refer to this problem as calibration (or inverse modelling). Thus our simulator output is always uncertain, in a number of distinct ways, and any form of calibration needs to estimate not only the values of the simulator inputs but also the associated uncertainty.
Although the quality and quantity of measurements continue to improve, data are also always uncertain. So the calibration problem involves estimating parameters in uncertain models with uncertain data. The simple way of solving such a problem is maximum likelihood (or least squares) or Bayesian calibration. Unfortunately, such methods are flawed, as they do not take the discrepancy into account. The nearest point to the data on the model manifold is found, even though this may be a long way from the true solution. Even worse, the uncertainty on the estimator shrinks as the amount of data increases, going to zero as the number of data points goes to infinity, giving a completely false impression of the true accuracy.
It is possible to create a better methodology that includes the model discrepancy; see, for example, Kennedy and O’Hagan (2001), who model the real world as the sum of the simulator and the discrepancy, both of which are modelled as Gaussian processes: one representing the simulator, and one representing the discrepancy. The Kennedy and O’Hagan formulation has proved very popular, but suffers from a huge drawback: the two Gaussian processes are not separately identifiable. This isn’t a problem for prediction, where we are only interested in the sum of the two processes, but if we want to gain understanding about the simulator and the discrepancy we need to be able to distinguish them. A number of solutions have been proposed, including using strong prior information and restricting the form that the discrepancy can take.
We suggest a different approach, known as history matching. In history matching, rather than trying to find a point estimate for the simulator inputs (or, equivalently, their joint posterior distribution), we find those sets of inputs that give simulator outputs so far from the data that we can rule them out as implausible. Once we have ruled out all the implausible input values, what is left must include the ‘best’ value, if such a value exists. As we will see, it is possible to rule out all possible input values, in which case it is not possible to make the simulator and the data agree. Full Abstract
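As a hedged sketch of one wave of the history-matching idea described above: for each candidate input, an implausibility measure compares the emulator prediction with the observation, pooling emulator variance, model-discrepancy variance and observation-error variance, and inputs whose implausibility exceeds a cut-off (3 is a common choice) are ruled out. The toy simulator, observation and variance budgets below are invented.

```python
# Hedged sketch of one wave of history matching. The "simulator", observation
# and variance budgets are invented; only the implausibility-based rule-out
# logic follows the approach described in the abstract.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def simulator(x):
    # Toy one-input simulator standing in for an expensive model.
    return np.sin(4.0 * x) + x

# Observation of the real system, with assumed observation-error and
# model-discrepancy variances.
z = 1.2
var_obs = 0.05 ** 2
var_discrepancy = 0.1 ** 2

# Emulate the simulator from a handful of runs.
X_train = np.linspace(0.0, 2.0, 12).reshape(-1, 1)
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(0.3),
                              normalize_y=True).fit(X_train, simulator(X_train[:, 0]))

# Implausibility over a dense set of candidate inputs.
X_cand = np.linspace(0.0, 2.0, 2001).reshape(-1, 1)
mean, std = gp.predict(X_cand, return_std=True)
implausibility = np.abs(z - mean) / np.sqrt(std ** 2 + var_discrepancy + var_obs)

not_ruled_out = X_cand[implausibility < 3.0, 0]     # three-sigma style cut-off
if not_ruled_out.size:
    print(f"non-implausible inputs lie within [{not_ruled_out.min():.3f}, "
          f"{not_ruled_out.max():.3f}]")
else:
    print("all candidate inputs ruled out: simulator and data cannot be reconciled")
```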