Wednesday 25th September
KELVIN LECTURE THEATRE: Organ Modelling and Simulation
Time | Speaker | Title |
11:00 | Alejandro Liberos (Invited Speaker) | Use of 3D atrial models to improve signal processing in cardiac electrophysiology Abstract Preview Atrial fibrillation (AF) is the most common cardiac arrhythmia, affecting more than 6 million Europeans. In recent years, new mapping technologies have been developed, such as intracardiac mapping with basket catheters, body surface potential mapping with surface electrodes, and estimated epicardial mapping obtained by solving the inverse problem of electrocardiology. Together with these mapping technologies, signal processing techniques have been developed to identify the mechanism that maintains AF in each individual patient. However, due to the complexity of AF, it is difficult to validate such techniques with real patient data. Full Abstract |
11:20 | Robin Richardson | An automated pipeline for real time visualisation of blood flow during treatment of intracranial aneurysms Abstract Preview Imaging and computing technologies have advanced considerably in recent years, leading to their increasing use in medical applications. Modern imaging methods allow clinicians to view the geometry of a patient’s vasculature, down to the level of individual vessels, allowing vascular malformations such as intracranial aneurysms to be located and examined. On a related front, the increasing availability and computational power of high performance computing (HPC) infrastructure now allows detailed haemodynamic simulations to be executed [1]. Indeed, advanced software suites have already been developed for haemodynamic simulation from medical imaging, such as CRIMSON [2], including coupling to heart models [3]. The treatment of intracranial aneurysms is often performed on very short timescales. Our aim in this work is to use data that is already available and routinely collected during interventions to treat patient aneurysms – such as rotational angiogram (RA) data – and combine it with high performance haemodynamics simulation codes to provide clinicians with real time visualisation of the predicted blood flow and associated wall shear stresses in the patient before and after the introduction of a flow diverting stent. We present here the fully integrated, automated pipeline we have developed to segment imaging data, localise aneurysms, simulate blood flow and provide real time visualisation to clinicians, together with some details of its current performance. Full Abstract |
11:35 | Tamas Jozsa | A cerebral circulation model for in silico clinical trials of ischaemic stroke Abstract Preview The INSIST consortium (www.insist-h2020.eu) set out to accelerate the advancement of stroke treatments by introducing in silico clinical trials which mitigate the need for resource-intensive experiments. The present work aims to contribute to INSIST by developing a cerebral circulation model that captures blood flow in the entire human brain. Progress in this field is complicated by the multi-scale nature of the flow, which stretches from small vessels with characteristic diameters of approximately 5 microns (capillaries) to large arteries with diameters of approximately 5 millimetres (e.g. the internal carotid artery). Whereas it has become common practice to account for large arteries using one-dimensional network models, well-established methods are not available for a full description of the microcirculation. Therefore, this study focuses on the development of a cerebral microcirculation model and on bridging the gap between large arteries and the microcirculation. Full Abstract |
11:50 | Britt Van Rooij | Platelet adhesion and aggregation: Cell-resolved simulations and in vitro experiments Abstract Preview High shear thrombosis happens on a thrombogenic surface (e.g. collagen) in a high shear environment and in the presence of platelets and von Willebrand factor (vWF). In the literature there is an ongoing discussion about whether this requires an area with high shear [1] or an area with a high shear gradient [2]. However, the in vitro experiments used in these studies were performed in different flow chambers, in which the specific flow fields are not the same. Currently, it is not known which specific flow characteristics cause high shear thrombosis. It is known that unfolding of von Willebrand factor is regulated by elongational flow. However, this does not explain the formation of platelet aggregates at the apex (high shear) or just behind the apex (high shear rate gradient), where vWF would contract instead of uncoil in the flow direction. Note that the flow behaviour at the cellular level is largely unknown and could provide additional insight [3]. Full Abstract |
12:05 | Remy Petkantchin | A Three-dimensional Mesoscopic Model of Thrombolysis Abstract Preview A clot, as observed in a stroke event, is made of entangled fibrin strands forming a solid structure that partially or completely blocks the blood flow in a brain artery. In addition to the fibrin network, clots contain other procoagulant factors such as platelets and von Willebrand Factor (vWF), as well as red blood cells (RBCs). Thrombosis is the result of the polymerization of fibrinogen, transported by blood, into fibrin strands under the action of thrombin molecules. Thrombin is typically produced in case of a malfunction of the body, such as injured endothelial cells, or vessel walls exposed to low shear rate or hypoxia. In normal physiological conditions, the anti-thrombin that is naturally present in the blood can neutralize thrombin and prevent clot formation. Full Abstract |
12:20 | End of Session | |
12:30 | LUNCH |
TURING LECTURE THEATRE: Machine Learning, Big Data & AI
Time | Speaker | Title |
11:00 | James Cole (Invited Speaker) | Machine learning models of brain ageing in health and disease Abstract Preview The brain changes as we age, and these changes are associated with cognitive decline and an increased risk of dementia (Deary et al., 2009). Neuroimaging can measure these age-related changes, and considerable variability in brain ageing patterns is evident (Raz and Rodrigue, 2006). Equally, rates of age-associated decline differ considerably between people. This suggests that the measurement of individual differences in age-related changes to brain structure and function may help establish whether someone’s brain is ageing more or less healthily, with concomitant implications for future health outcomes. To this end, research into biomarkers of the brain ageing process is underway (Cole and Franke, 2017), principally using neuroimaging and in particular magnetic resonance imaging (MRI). Full Abstract |
11:20 | Clint Davis-Taylor | Automated Parameter Tuning for the Living Heart Human Model using Machine Learning and Multiscale Simulations Abstract Preview The Living Heart Human Model is a finite element model with realistic three-dimensional geometries of the four heart chambers; the overall heart response is driven by sequentially coupled electrical conduction and structural contraction analyses, with blood flow modelled as a closed-loop lumped parameter model [1]. It provides a virtual environment to help test medical devices or surgical treatments before they are used in humans. It is critical to tune the model to a patient or disease state; however, this is extremely difficult with the traditional vary-one-parameter-at-a-time approach, as there are a large number of parameters with complex interactions between them, and each analysis requires a large amount of CPU time. Another popular type of model in cardiovascular research is the Lumped Parameter Network (LPN) model, which can approximate the pressure-volume relationship and fluid flow properties. This type of model can be solved in real time, but it requires prior knowledge of the cardiac driving functions, i.e., the time-varying pressure and volume relationship of the active chambers [2]. Full Abstract |
11:35 | David Wright | Combining molecular simulation and machine learning to INSPIRE improved cancer therapy Abstract Preview Cancer is the second leading cause of death in the United States (accounting for nearly 25% of all deaths). Targeted kinase inhibitors play an increasingly prominent role in the treatment of cancer and account for a significant fraction of the $37 billion U.S. market for oncology drugs in the last decade. Unfortunately, the development of resistance limits the amount of time patients derive benefits from their treatment. The INSPIRE project is laying the foundations for the use of molecular simulation and machine learning (ML) to guide precision cancer therapy, in which therapy is tailored to provide maximum benefit to individual patients based on genetic information about their particular cancer. It is vital that such an approach is based on predictive methods as the vast majority of clinically observed mutations are rare, rendering catalog-building alone insufficient. Full Abstract |
11:50 | Amanda Minnich | Safety, Reproducibility, Performance: Accelerating cancer drug discovery with ML and HPC technologies Abstract Preview The drug discovery process is costly, slow, and failure-prone. It takes an average of 5.5 years to get to the clinical testing stage, and in this time millions of molecules are tested, thousands are made, and most fail. The ATOM Consortium is working to transform the drug discovery process by utilizing machine learning to pretest many molecules in silico for both safety and efficacy, reducing the costly iterative experimental cycles that are traditionally needed. The consortium comprises LLNL, GlaxoSmithKline, NCI’s Frederick National Laboratory for Cancer Research, and UCSF. Through ATOM’s unique combination of partners, machine learning experts are able to use LLNL’s supercomputers to develop models based on proprietary and public pharma data for over 2 million compounds. The goal of the consortium is to create a new paradigm of drug discovery that would drastically reduce the time from identified drug target to clinical candidate, and we intend to use oncology as the first exemplar of the platform. Full Abstract |
12:05 | Fangfang Xia | Deep Medical Image Analysis with Representation Learning and Neuromorphic Computing Abstract Preview Deep learning is increasingly used in medical imaging, improving many steps of the processing chain, from acquisition to segmentation and anomaly detection to outcome prediction. Yet significant challenges remain: (1) image-based diagnosis depends on the spatial relationships between local patterns, something convolution and pooling often do not capture adequately; (2) data augmentation, the de facto method for learning 3D pose invariance, requires exponentially many points to achieve robust improvement; (3) labeled medical images are much less abundant than unlabeled ones, especially for heterogeneous pathological cases; and (4) scanning technologies such as magnetic resonance imaging (MRI) can be slow and costly, generally without online learning abilities to focus on regions of clinical interest. To address these challenges, novel algorithmic and hardware approaches are needed for deep learning to reach its full potential in medical imaging. Full Abstract |
12:20 | Rick Stevens | Deep Learning in Cancer Drug Response Prediction Abstract Preview Artificial intelligence, and machine learning (ML) specifically, is having an increasingly significant impact on our lives. Since the early wins in computer vision from deep learning (DL) in the 2010s, deep neural networks have increasingly been applied to hard problems that have defied previous modeling efforts. This is particularly true in chemistry and drug development, where there are dozens of efforts to replace the traditional drug development computational pipelines with machine learning based alternatives. In cancer drug development and predictive oncology there are several cases where DL is beginning to show significant successes. In our work we are applying deep learning to the problem of predicting tumor drug response for both single drugs and drug combinations. We have developed drug response models for cell lines, patient-derived xenograft (PDX) models and organoids that are used in preclinical drug development. Due to the limited scale of available PDX data, we have focused on transfer learning approaches to generalize response prediction across biological model types. We incorporate uncertainty quantification into our models to enable us to determine the confidence interval of predictions. Our current approaches leverage work on attention, weight sharing between closely related runs for accelerated training, and active learning for prioritization of experiments. Our goal is a broad set of models that can be used to screen drugs during early stage drug development as well as to predict tumor response for pre-clinical study design. Results to date include response classifications that achieve >92% balanced classification accuracy on a pan-cancer collection of tumor models and a broad collection of drugs. Our work is part of a joint program of investment from the NCI and DOE and is supported in part by the US Exascale Computing Project via the CANDLE project. Full Abstract |
12:35 | LUNCH |
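The ">92% balanced classification accuracy" quoted in Rick Stevens' abstract above refers to the mean of per-class recalls, a metric suited to the imbalanced response classes typical of tumour-model data. A minimal sketch of how that metric is computed (illustrative only, not the CANDLE code; the labels below are made up):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls: robust to class imbalance."""
    recalls = []
    for c in np.unique(y_true):
        mask = (y_true == c)
        recalls.append(np.mean(y_pred[mask] == c))  # recall for class c
    return np.mean(recalls)

# Toy imbalanced two-class problem (response / no response)
y_true = np.array([0] * 90 + [1] * 10)
y_pred = np.array([0] * 85 + [1] * 5 + [1] * 7 + [0] * 3)
print(balanced_accuracy(y_true, y_pred))  # averages recall over both classes
```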
WATSON WATT ROOM: Uncertainty Quantification
Time | Speaker | Title |
11:00 | Richard Clayton (Invited Speaker) | Sensitivity and uncertainty analysis of cardiac cell models with Gaussian process emulators Abstract Preview Models of electrical activation in cardiac cells and tissue have become accepted as research tools that can be used alongside experiments to gain insights into physiological mechanisms. More recently, there is the prospect that these tools could be used to inform clinical decision making [1] and for in-silico drug safety assessment [2]. As a result, the behaviour of cardiac models under uncertainty in model parameters, initial conditions, and boundary conditions has become an area of active interest [3]. Full Abstract |
11:20 | Ritabrata Dutta | Pathological Test for Cardio/cerebrovascular Diseases: Platelet Dynamics and Approximate Bayesian Computation Abstract Preview According to a 2015 World Health Organization (WHO) report, cardio/cerebrovascular diseases (CVD) have become one of the major health issues in our societies. However, recent studies show that the clinical tests used to detect CVD are ineffectual, as they do not consider the different stages of platelet activation or the molecular dynamics involved in platelet interactions. Furthermore, they are also incapable of considering inter-individual variability. Recently, Chopard et al. (2017) introduced a physical description of platelet deposition, integrating a fundamental understanding of how platelets interact into a numerical model of platelet deposition parameterised by five parameters (e.g. adhesion and aggregation rates). Our main claim is that these parameters, captured through the numerical model, are precisely the information needed for a pathological test identifying CVD, and that they are also able to capture inter-individual variability. Following this claim, our contribution is two-fold: we devise an inferential scheme for uncertainty quantification of these parameters using approximate Bayesian computation and high performance computing, and we test the claim and the efficacy of our methodology through an experimental study. Full Abstract |
11:35 | Alberto Marzo | Use of a Gaussian process emulator and 1D circulation model to characterize cardiovascular pathologies and guide clinical treatment Abstract Preview Cerebral vasospasm (CVS) is a life-threatening condition that occurs in a large proportion of those affected by subarachnoid haemorrhage and stroke [1]. CVS manifests itself as the progressive narrowing of intracranial arteries. It is usually diagnosed using Doppler ultrasound, which quantifies blood velocity changes in the affected vessels, but has low sensitivity when CVS affects the peripheral vasculature. The aim of this study was to identify alternative biomarkers that could be used to diagnose CVS. We used a verified and validated 1D modelling approach [2] to describe the properties of pulse waves that propagate through the cardiovascular system (Figure 1), which allowed the effects of different types of vasospasm on waveforms to be characterised at several locations within a simulated cerebral network. A sensitivity analysis empowered by the use of a Gaussian process (GP) statistical emulator was then used to identify waveform features that may have strong correlations with vasospasm. A GP emulator can treat inputs and outputs explicitly as uncertain quantities, and so by determining the proportion of output variance that could be accounted for by each uncertain input we were able to calculate variance-based sensitivity indices for each input and output of the model. This was useful to identify those waveform features that are sensitive to vasospasm (changes in vessel radii) but less sensitive to physiological variations in the other model parameters. Using this approach, we showed that the minimum rate of velocity change can be much more effective than blood velocity for stratifying typical manifestations of vasospasm and its progression [3]. In the wider context, the present study describes the use of sensitivity indices, combined with modelling, as a way to identify effective biomarkers, which is a novel approach that has the potential to result in clinically useful tools. The same approach has been further developed and applied to the simulation of endovascular removal of blood clots (thrombectomy) as a potential clinical tool to investigate typical clinical scenarios for treatment of ischaemic stroke. Full Abstract |
11:50 | Peter Challenor (Invited Speaker) | Uncertainty quantification and the calibration of numerical models Abstract Preview Numerical models have reached the stage where our simulations are believed to be fairly accurate representations of the real world, and recently the term ‘digital twin’ has been coined to describe such simulators. However, it should be remembered that all simulations are models of the real world, not the real world itself. The underlying equations of our simulators are the result of good scientific understanding, which may itself be partial. In addition, we solve numerical approximations to these equations, not the equations in the continuum, and parameterise many processes because of discretisation or lack of knowledge. The difference between the simulator and reality is often known as the model discrepancy. In addition there are usually unknown parameters (or other inputs) in the simulators which we need to estimate either from external (expert) knowledge or by fitting the simulators to data using some statistical methodology. We will refer to this problem as calibration (or inverse modelling). Thus our simulator output is always uncertain, in a number of distinct ways, and any form of calibration needs to estimate not only the values of the simulator inputs but also the associated uncertainty. Although the quality and quantity of measurement continue to improve, data are also always uncertain. So the calibration problem involves estimating parameters in uncertain models with uncertain data. The simple way of solving such a problem is maximum likelihood (or least squares) or Bayesian calibration. Unfortunately such methods are flawed, as they do not take the discrepancy into account. The nearest point to the data on the model manifold is found, even though this may be a long way from the true solution. Even worse, the uncertainty on the estimator reduces as the amount of data increases, going to zero as the number of data points goes to infinity, giving a completely false impression of the true accuracy. It is possible to create a better methodology that includes the model discrepancy, for example that of Kennedy and O’Hagan (2001), who model the real world as the sum of the simulator and the discrepancy, each of which is represented by a Gaussian process. The Kennedy and O’Hagan formulation has proved very popular, but suffers from a huge drawback – the two Gaussian processes are not separately identifiable. This is not a problem for prediction, where we are only interested in the sum of the two processes, but if we want to gain understanding about the simulator and the discrepancy we need to be able to distinguish them. A number of solutions have been proposed, including using strong prior information and restricting the form that the discrepancy can take. We suggest a different approach, known as history matching. In history matching, rather than trying to find a point estimate for the simulator inputs (or equivalently their joint posterior distribution), we find those sets of inputs that give simulator outputs so far from the data that we can rule them out as implausible. Once we have ruled out all the implausible input values, what is left must include the ‘best’ value, if such a value exists. As we will see, it is possible to rule out all possible input values, in which case it is not possible to make the simulator and the data agree. Full Abstract |
12:10 | End of Session | |
12:30 | LUNCH |
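Peter Challenor's abstract above describes history matching, where candidate inputs are ruled out when an implausibility measure, combining emulator, discrepancy and observation uncertainty, is too large. A minimal sketch of that measure in its commonly used form (an assumption here, not necessarily the speaker's exact formulation; the "emulator" below is a stand-in function rather than a fitted Gaussian process):

```python
import numpy as np

def implausibility(z, z_var, em_mean, em_var, disc_var):
    """|observation - emulator mean| scaled by all sources of variance."""
    return np.abs(z - em_mean) / np.sqrt(em_var + disc_var + z_var)

# Toy emulator of a simulator output over a 1D input space (stand-in for a GP)
x = np.linspace(0.0, 1.0, 201)
em_mean = 3.0 * x**2               # emulator posterior mean
em_var = 0.05 * np.ones_like(x)    # emulator posterior variance

z, z_var = 1.2, 0.02               # observed data and its variance
disc_var = 0.03                    # assumed model-discrepancy variance

I = implausibility(z, z_var, em_mean, em_var, disc_var)
not_ruled_out = x[I < 3.0]         # conventional 3-sigma implausibility cut-off
print(not_ruled_out.min(), not_ruled_out.max())
```

Inputs outside the printed range would be discarded, and the emulator refitted over the remaining space in subsequent waves.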
KELVIN LECTURE THEATRE: Organ Modelling and Simulation
Time | Speaker | Title |
13:30 | Mirko Bonfanti (Invited Speaker) | Multi-scale, patient-specific modelling approaches to predict neointimal hyperplasia growth in femoro-popliteal bypass grafts Abstract Preview Neointimal hyperplasia (NIH) is a major obstacle to the long-term patency of peripheral vascular grafts. The disease has a complex aetiology which is influenced, among other phenomena, by mechanical forces such as shear stresses acting on the arterial wall. The aim of this work is to use a multi-scale modelling approach to assess the impact of haemodynamic factors in NIH growth. We hypothesized that both low and oscillatory shear should be considered simultaneously when assessing the proclivity of a certain region in bypass grafts to develop NIH, and we simulated NIH progression using a multi-scale computational framework that we previously developed, comparing our results to a patient-specific clinical dataset (obtained with the patients’ informed consent for research and publication). Full Abstract |
13:50 | Gaia Franzetti | In vivo, in silico, in vitro patient-specific analysis of the haemodynamics of a Type-B Aortic Dissection Abstract Preview Aortic dissection (AD) is a serious condition that occurs when a tear in the aortic wall allows blood to flow within the layers of the vessel. The optimal treatment of ‘uncomplicated’ acute/subacute Type-B aortic dissections (uABADs) continues to be debated. uABADs are commonly managed medically, but up to 50% of cases will develop complications requiring invasive intervention [1]. AD is a highly patient-specific pathology in which morphological features have a strong impact on the haemodynamics. However, there is still a limited understanding of the fluid mechanics phenomena influencing AD clinical outcomes. Flow patterns, pressures, velocities and shear stresses are at once difficult to measure and extremely important features of this pathology. Personalised computational fluid dynamics (CFD) models are being investigated as a tool to improve clinical outcome [2]. However, such models need to be rigorously validated in order to be translated to the clinic, and such validation procedures are currently lacking for AD. This scarcity of data may be supplemented using in silico and in vitro models, in which these parameters can be studied and compared for validation purposes. In this work, a unique in vitro and in silico framework to perform personalised analyses of Type-B AD, informed by in vivo data, is presented. Experimental flow rate and pressure waveforms, as well as detailed haemodynamics acquired via Particle Image Velocimetry (PIV), are compared at different locations against CFD results. Full Abstract |
14:05 | Bettine van Willigen | AngioSupport: an interactive tool to support coronary intervention Abstract Preview Every year about 735,000 Americans suffer from Coronary Artery Disease (CAD), one of the leading causes of death in the United States; diagnosis and treatment should therefore be convenient and accurate, with costs as low as possible. Until now, medium-to-high-risk stable patients have been assessed using invasive coronary angiography (ICA); in other words, ICA has been the ‘gold standard’ for determining the appropriate treatment (pharmaceutical treatment, percutaneous coronary intervention (PCI), or coronary artery bypass graft (CABG)) for CAD, by revealing the location and anatomy of the stenosis. This diagnostic method is based on the research of Gould et al., which demonstrates the relationship between the stenosis (lumen diameter) and ischemia (determined from myocardial blood flow) during the hyperemic state (Gould et al., 1974). Despite relying on the clinician’s subjective visual interpretation, the percentage stenosis defined by ICA is a reasonable indication for revascularization in single-vessel stenosis. However, for diffuse coronary disease or multiple stenoses (Tonino et al., 2009), ICA is unreliable for diagnosis, because the hemodynamics cannot be predicted from the anatomy of the stenosis alone. This may result in unnecessary revascularization of patients. Full Abstract |
14:20 | Jon McCullough | Developments for the Efficient Self-coupling of HemeLB Abstract Preview The aim of this paper is to document recent methodological advancements that enable extreme scale simulation by HemeLB [1], our present lattice-Boltzmann (LB) based blood-flow solver. From pre-processing to simulation and finally post-processing, we demonstrate the entire work flow on SuperMUC-NG, a state-of-the-art high performance computing platform. Pre-processing involves the voxelisation of patient-specific geometries to form a large lattice consisting of up to tens of billions of sites. Our ultimate goal is to enable the simulation of virtual humans, or digital replicas, with HemeLB simulating the full arterial and venous trees and exchanging information with simulation tools responsible for capturing the behaviour of other organ systems. To create accurate digital patients, we rely on some of the recent advancements discussed here. Full Abstract |
14:35 | Cyril Karamaoun | Interplay between thermal transfers and degradation of the bronchial epithelium during exercise Abstract Preview Physiologists interested in the body’s behaviour during exercise now recognize that the respiratory tract is not a limiting factor of endurance performance in healthy athletes. However, after several years of intense training, a majority of such athletes develop various exercise-induced pathologies. The repeated loss of integrity of the bronchial epithelium, consequent to sustained high levels of exercise ventilation, has recently been implicated [1]. One physiological biomarker of this loss of epithelium integrity is the concentration of club cell protein 16 kDa (CC16) in urine or blood. The increase in this biomarker after exercise has been observed to depend on the intensity of exercise ventilation, which leads to airway dehydration [1]. Interestingly, experimental [2] and modelling [3] work has shown that the bronchial epithelium and its mucus layer are a site of non-negligible evaporation during respiration. This evaporation (or condensation, especially during expiration [3]) is driven by heat transfer between the air and the mucus layer, due to temperature and humidity gradients at the air-tissue interface. Full Abstract |
14:50 | Giulia Luraghi | Simulation of the thrombectomy procedure in a realistic intracranial artery Abstract Preview An ischemic stroke is caused by a blood clot (thrombus) in an intracranial artery that prevents blood from supplying the downstream tissues. The thrombus may originate from the heart, from atherosclerotic plaques, or from vessel wall dissections. It is a mass of platelets, fibrin, and other blood components, activated by a hemostasis mechanism, and its composition can vary. Red thrombi, which are red blood cell (RBC) dominant, are usually generated where the blood flow is slow and the fibrin network entraps the RBCs, while white thrombi, which are fibrin dominant, are generated under high shear flow and inflammatory conditions. The clot composition strongly affects its mechanical properties [1]. The main diagnostic techniques used during stroke investigation are computed tomography (CT) and magnetic resonance imaging (MRI). In any case, the location of the intracranial occlusion must be detected quickly and accurately to ensure speedy treatment. Treatment of acute ischemic stroke is aimed at restoring blood flow in the affected cerebral arteries as fast as possible after onset. Time is crucial in stroke: around 2 million neurons are lost every minute without reperfusion. Full Abstract |
15:05 | End of Session | |
15:30 | REFRESHMENTS AND POSTER SESSION |
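HemeLB, the blood-flow solver in Jon McCullough's talk above, is built on the lattice-Boltzmann method. The sketch below shows the core collide-and-stream update of a single-relaxation-time (BGK) D2Q9 lattice on a periodic domain; it illustrates the general method only, under simplifying assumptions, and is not HemeLB's D3Q19 implementation or its sparse, vessel-fitted data structures.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8  # relaxation time (sets the fluid viscosity)

def equilibrium(rho, u):
    """Standard second-order equilibrium distribution."""
    cu = np.einsum('qd,xyd->xyq', c, u)
    usq = np.sum(u**2, axis=-1)[..., None]
    return w * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

nx, ny = 64, 32
rho = np.ones((nx, ny))
u = np.zeros((nx, ny, 2))
u[:, :, 0] = 0.05 * np.sin(2*np.pi*np.arange(ny)/ny)  # small shear-wave perturbation
f = equilibrium(rho, u)

for step in range(100):
    rho = f.sum(axis=-1)                                  # macroscopic density
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]   # macroscopic velocity
    f += (equilibrium(rho, u) - f) / tau                  # BGK collision
    for q in range(9):                                    # streaming (periodic domain)
        f[:, :, q] = np.roll(f[:, :, q], shift=c[q], axis=(0, 1))

print(u[:, :, 0].max())  # the shear wave decays viscously over time
```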
TURING LECTURE THEATRE: Quantum AI to the Virtual Human
Time | Speaker | Title |
13:30 | Peter Love | Introduction |
13:45 | Prineha Narang (Invited Speaker) | Excited-State Dynamics: Linking Classical and Quantum Approaches Abstract Preview Utilizing quantum computers for scientific discovery presents many challenges driven by the currently still-experimental nature of quantum hardware and the absence of the essential software needed to “program” this hardware in the near term. Software for quantum computing is in its infancy, and therefore the development of executable code for quantum hardware using current strategies is arduous. In this context, in the first part of the talk, I will discuss our recent demonstration of a new allocation algorithm that combines the simulated annealing method with local search of the solution space using Dijkstra’s algorithm. Our algorithm takes into account the weighted connectivity constraints of both the quantum hardware and the quantum program being compiled. Using this novel approach, we are able to optimally reduce the error rates of quantum programs on various quantum devices. In the second part of my talk, I will present a strategy to compute excited states and reaction dynamics on NISQ devices. Finally, I will discuss a pathway to computing “complex” molecules, both energies and dynamics, leveraging a combination of quantum chemistry and quantum computer science approaches. Full Abstract |
14:15 | Vivien Kendon (Invited Speaker) | Quantum computing using continuous-time evolution Abstract Preview In the quest for more computing power, the dominant digital silicon architectures are reaching the limit of physically possible processor speeds. The heat conduction of silicon limits how fast waste heat can be extracted, in turn limiting the processor speeds. Moreover, energy consumption by computers is now a significant fraction of humanity’s energy use, and current silicon devices are orders of magnitude away from optimal in this respect. We cannot afford to apply more and more standard computers to solve the biggest problems; we need more energy-efficient computational materials, and more efficient ways to compute beyond digital. Full Abstract |
14:45 | Anita Ramanan and Frances Tibble (Invited Speakers) | Quantum Inspired Optimisation: Transforming Healthcare Imaging using Quantum-accelerated Algorithms Abstract Preview Join this session to learn more about Microsoft’s investment in quantum computing and see how the Microsoft Quantum team are leveraging quantum-inspired optimisation techniques today to solve some of industry’s most complex problems. The session will focus on the groundbreaking collaboration between Microsoft Quantum and Case Western Reserve University to enhance MRI technology through pulse sequence optimisation, reducing scan time and improving results. Full Abstract |
15:05 | Crispin Keable | Atos Quantum Learning Machine: Heading towards a quantum-accelerated life science Abstract Preview The first quantum revolution, led in the early twentieth century by young Europeans of the likes of Einstein, Heisenberg and Planck, gave birth over the years to major inventions including the transistor, the laser, MRI and GPS. Today, taking advantage of its expertise in supercomputers and cyber security, Atos is fully committed to the second quantum revolution that will disrupt all our clients’ business activities in the coming decades, from medicine to agriculture to finance. However, the computer research community has come to realize that no general-purpose quantum computer (GPQC) will be available on the market for 10 to 15 years. In the meantime, a great deal of research and engineering is needed, in terms of both the hardware and the software environment. Full Abstract |
15:20 | Peter Coveney | Quantum AI to the Virtual Human: Where’s the Virtual Human? |
15:35 | REFRESHMENTS AND POSTER SESSION |
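Both Prineha Narang's qubit-allocation work and the quantum-inspired optimisation session above rest on annealing-style heuristics for combinatorial problems. Below is a minimal simulated-annealing sketch on a toy QUBO cost; the cost matrix, cooling schedule and parameters are invented for illustration and this is not the speakers' actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy QUBO: minimise x^T Q x over binary vectors x
n = 12
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2  # symmetrise

def cost(x):
    return x @ Q @ x

x = rng.integers(0, 2, size=n)
best_x, best_c = x.copy(), cost(x)
T = 2.0
for step in range(5000):
    i = rng.integers(n)
    x_new = x.copy()
    x_new[i] ^= 1                      # propose a single bit flip
    dc = cost(x_new) - cost(x)
    if dc < 0 or rng.random() < np.exp(-dc / T):
        x = x_new                      # accept downhill, or uphill with Boltzmann probability
        if cost(x) < best_c:
            best_x, best_c = x.copy(), cost(x)
    T *= 0.999                         # geometric cooling schedule

print(best_c, best_x)
```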
WATSON WATT ROOM: Genomics
Time | Speaker | Title |
13:30 | Maria Secrier (Invited Speaker) | Reconstructing mutational histories of oesophageal cancer Abstract Preview Mutational processes contributing to the development of cancer emerge from various risk factors of the disease and impose specific imprints of somatic alterations in the genomes of cancer patients. These mutational footprints, called “signatures”, can be read from the tumour sequencing data and reveal the main sources of DNA damage driving neoplastic progression. In this sense, they can be considered a form of evidence for historical mutational events that have acted during tumour evolution. I will discuss some of the insights we have obtained into the development and progression of oesophageal adenocarcinoma, an aggressive disease with limited treatment options, by tracking mutational signatures in human cancer tissues as well as 3D cell models of this malignancy. Using this strategy applied to whole-genome sequencing data from 129 cases, we have previously uncovered three subtypes of oesophageal cancer with distinct aetiologies related to DNA damage repair deficiencies, ageing and oxidative stress, and with different therapeutic options. Further, we have shown that organoids grown in vitro from patients’ tumours effectively recapitulate the genomic and transcriptomic profiles of the tumours of origin, and thus constitute a suitable model for this cancer type. By tracking the evolution of mutational processes during organoid culture growth we were also able to demonstrate a dynamic clonal architecture that mimics well the extensive intratumour heterogeneity observed in this cancer. Tracing mutational signature trajectories from early to later stages of cancer development in both primary tumours and organoid systems unveils a refined picture of evolution in this cancer, with frequent bottlenecks (~60% of cases) where mutational pressures shift. Finally, we suggest that the observed genomic signatures and their specific temporal dynamics could be further exploited for patient stratification in the clinic. Full Abstract |
13:50 | Igor Ruiz de Los Mozos (Invited Speaker) | CDK11 binds chromatin and mRNAs of replication-dependent histones, regulating their expression Abstract Preview Expression of canonical, replication-dependent histones (RDH) is highly regulated during the cell cycle, echoing their main role in cell division and epigenetic inheritance. RDH genes produce the only non-polyadenylated transcripts and, for their correct expression, recruit a battery of alternative 3’ end processing factors. Exploiting metaplots, positional heat maps and computational methods, we decipher CDK11 binding along RDH mRNA and DNA, identifying it as a key player in the molecular regulation of RDH biogenesis. Full Abstract |
14:10 | Nik Maniatis (Invited Speaker) | The power of high-resolution population-specific genetic maps to dissect the genetic architecture of complex diseases: Type 2 Diabetes as an example Abstract Preview Metric genetic maps in Linkage Disequilibrium Units (LDU) are analogous to linkage maps in cM but at a much higher marker resolution. LDU blocks represent areas of conserved LD and low haplotype diversity, while steps (increasing LDU distances) define LD breakdown, primarily caused by recombination, since crossover profiles agree precisely with the corresponding LDU steps. However, LDU maps capture not only recombination events but also the detailed linkage disequilibrium information of the population in question. We recently constructed LDU genetic maps for Europeans and African-Americans and applied these to large T2D case-control samples in order to estimate accurate locations for putative functional variants in both populations. Replicated T2D locations were tested for evidence of being regulatory locations using adipose expression. We identified 111 novel loci associated with T2D susceptibility, 93 of which are cosmopolitan (co-localised on the genetic maps of both populations) and 18 of which are European-specific. We also found that many previously known T2D signals are also risk loci in African-Americans, and we obtained more refined causal locations for these signals than the published lead SNPs. Using the same LDU methods, we also showed that the majority of these T2D locations are also regulatory locations (eQTLs) conferring the risk of T2D via the regulation of expression levels for a very large number (266) of cis-regulated genes. We identified a highly significant overlap between T2D and regulatory locations with chromatin marks for different tissues/cells. Sequencing a sample of our locations provided candidate functional variants that precisely co-locate with pancreatic islet enhancers, leading to our conclusions that population-specific genetic maps can: (i) provide commensurability when making comparisons between different populations and SNP-arrays; (ii) provide precise location estimates on the genetic map for potential functional variants, since these estimates are more efficient than those from physical maps; and (iii) effectively integrate disease-associated loci in different populations with gene expression and cell-specific regulatory annotation, by providing precision in co-localisation. Full Abstract |
14:30 | Toby Andrew (Invited Speaker) | Genetic fine-mapping and targeted sequencing to investigate allelic heterogeneity and molecular function at genomic disease susceptibility loci for Type 2 Diabetes Abstract Preview Empirical genomic studies and long-established genetic theory show that complex traits – including many common diseases – are likely to be polygenic, with numerous non-coding variants conferring risk of disease via the regulation of gene expression [1] and post-translational modification [2]. Using high-resolution genetic maps [3], we have identified 173 precise Type 2 Diabetes (T2D) disease susceptibility location estimates [4] and, using gene expression quantitative trait loci (eQTL) analyses for subcutaneous adipose tissue, have shown strong evidence that approximately two thirds of these closely collocate (±50 kb) with eQTL location estimates that regulate the expression of neighbouring cis-genes (within ±1.5 Mb of the disease locus; see Figure 1) [4]. Our follow-up analyses show that ~80 of the 111 T2D disease loci are also eQTLs that regulate the expression of nuclear-encoded mitochondrial cis-genes, with the eQTLs showing a high degree of co-location with in silico functional annotation. In this talk I will discuss our current understanding of the genetic and allelic architecture of T2D and illustrate this with results from genomic analyses and follow-up fine-mapping studies conducted by our research groups. In particular, we are investigating two interesting novel loci for evidence of complex association with T2D and mitochondrial function. The first locus, a 79 kb stretch in intron 3 of FGF14, was observed to harbour eQTL for genes including PCCA, whose encoded carboxylase catalyses a terminal step in both branched-chain amino acid (BCAA) catabolism and odd-chain fatty acid oxidation, two pathways relevant to T2D aetiology. The second is a predicted eQTL for the fatty acid dehydrogenase ACAD11. Full Abstract |
14:50 | Hannah Maude | Pathway analysis reveals genetic regulation of mitochondrial function and branched-chain amino acid catabolism in Type 2 Diabetes Abstract Preview In recent years, the number of genetic loci found to be associated with T2D has increased substantially, mostly through large-scale genome-wide association studies (GWAS). Recent work has, however, highlighted an underappreciated contribution of rare variants and variants in areas of low linkage disequilibrium (LD) to complex disease heritability [1,2], both of which are difficult to map using single-SNP tests of association. High-resolution genetic maps offer increased power to detect associations in areas of low LD and were recently used to map and replicate 111 novel loci associated with T2D [3]. Co-location of eQTL (genetic ‘expression quantitative trait loci’ which associate with gene expression levels) with disease loci (genetic loci that associate with risk of T2D), based on population-specific LD, was used to identify genes regulated by disease-associated variants (cis-genes). A co-localization approach overcomes difficulties in replicating lead SNPs between studies, making it an effective tool to identify likely cis-genes and the corresponding biological pathways implicated in heritable risk of disease. In this work, a total of 255 nominally significant disease loci were co-located with adipose eQTL, and cis-genes were studied at the individual and pathway level. Specifically, we aimed to address the hypothesis that changes in mitochondrial function are a heritable, causal risk factor for T2D, by searching for cis-genes involved in mitochondrial function. Full Abstract |
15:00 | Karoline Kuchenbaecker | Trans-ethnic colocalization: A novel approach to assess the transferability of trait loci across populations Abstract Preview Most previous genome-wide association studies for complex traits were based on samples with European ancestry. Consequently, it is important to determine the transferability of findings to other ancestry groups. Here we ask the fundamental question of whether causal variants for lipids are shared between populations. Differences in linkage disequilibrium structure, allele frequencies and sample size make it difficult to assess replication for individual loci. Therefore, we propose a new strategy to assess evidence for shared causal variants between two populations: trans-ethnic colocalization (TEColoc). We re-purposed a method originally developed for colocalization of GWAS and eQTL results: Joint Likelihood Mapping (JLIM). In order to assess its performance for GWAS results from samples with different ancestry, we carried out a simulation study. UK Biobank (UKB) was used as a European ancestry reference Full Abstract |
15:10 | Julia Ramírez | The Genetic Architecture of T-wave Morphology Restitution Abstract Preview Cardiovascular (CV) mortality is the main cause of death in the general population [1]. The analysis of the electrocardiogram (ECG) has potential for non-invasive diagnosis and prediction of CV risk. ECG markers are heritable, and statistical genetic methods are available to estimate the cumulative contribution of genetic factors to CV events via genetic risk scores (GRSs) [2]. The T-wave morphology restitution (TMR) [3] is an ECG marker that quantifies the rate of variation of the T-wave morphology with heart rate and has been shown to be a strong predictor of sudden cardiac death in chronic heart failure patients [3]. We hypothesize that the interaction between repolarization dynamics and CV risk has a genetic component and that TMR can be used to capture it. The objective was to identify single-nucleotide variants (SNVs) significantly associated with TMR using genome-wide association studies (GWASs) and to develop GRSs to evaluate their association with CV risk. Full Abstract |
15:20 | Stefan van Duijvenboden | Genetic architecture of QT dynamics and resting QT in the general population Abstract Preview The resting QT interval, an electrocardiographic measure of myocardial repolarisation, is a heritable risk factor for cardiovascular (CV) events, and genetic studies have provided new insights into the underlying biology [1]. Patient studies have reported that QT adaptation to heart rate (QT dynamics) improves cardiac risk prediction [2], but its prognostic value in the general population remains to be investigated. Furthermore, it is well recognised that the QT interval is a heritable trait, and characterisation of genetic variation has provided new insights and suggested candidate genes that could predispose to CV risk. However, the common variants reported thus far leave an important part of its heritability unexplained. In addition, the genetic architecture underlying QT dynamics has not been explored and might further inform about new biological mechanisms that specifically target rate adaptation of the QT interval. The objectives of this work were: (1) to evaluate the CV prognostic value of QT dynamics in the general population; (2) to discover genetic variants associated with QT dynamics; and (3) to further investigate the genetic basis and biology of the resting QT interval. Full Abstract |
15:30 | REFRESHMENTS AND POSTER SESSION |
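Several of the genomics talks above (e.g. Julia Ramírez's) rely on genetic risk scores, which in the standard additive form are simply a weighted sum of risk-allele counts. A minimal sketch of that calculation (illustrative only; the genotypes and effect sizes below are synthetic, not drawn from the studies above):

```python
import numpy as np

rng = np.random.default_rng(1)

n_individuals, n_snps = 5, 8
# Genotypes coded as risk-allele counts (0, 1 or 2 copies per SNP)
genotypes = rng.integers(0, 3, size=(n_individuals, n_snps))
# Per-SNP effect sizes, e.g. log odds ratios estimated by a GWAS
effect_sizes = rng.normal(0.0, 0.1, size=n_snps)

# Additive genetic risk score: weighted sum of risk-allele counts
grs = genotypes @ effect_sizes
print(grs)  # one score per individual; higher implies greater estimated genetic risk
```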