Fine Mapping Functional Noncoding Genetic Elements Via Machine Learning

Description
All biological processes, such as cell growth, cell differentiation, development, and aging, require a series of steps that are characterized by gene regulation. Studies have shown that gene regulation is key to various traits and diseases. Many factors affect gene regulation, including genetic signals, epigenetic tracks, and genetic variants. Deciphering and cataloging these functional genetic elements in the non-coding regions of the genome is one of the biggest challenges in precision medicine and genetic research. This thesis presents two approaches to identifying these elements: TreeMap and DeepCORE. The first approach identifies putative causal genetic variants in cis-eQTL, accounting for multisite effects and genetic linkage at a locus. TreeMap performs an organized search for individual and multiple causal variants using a tree-guided nested machine learning method. DeepCORE, on the other hand, explores novel deep learning techniques that model the relationship between genetic, epigenetic, and transcriptional patterns across tissues and cell lines, and identifies cooperative regulatory elements that affect gene regulation. Together, these two methods are intended to provide the link between genotype and phenotype and a necessary step toward explaining complex diseases and missing heritability.
Date Created
2020

Operational Safety Verification of AI-Enabled Cyber-Physical Systems

Description
One of the main challenges in testing artificial intelligence (AI) enabled cyber-physical systems (CPS), such as autonomous driving systems and internet-of-things (IoT) medical devices, is the presence of machine learning components, for which formal properties are difficult to establish. In addition, interactions among operational components, the inclusion of humans in the loop, and environmental changes give rise to a myriad of safety concerns, not all of which can be comprehensively tested before deployment, or even detected during the design and testing phases. This dissertation identifies major challenges in the safety verification of AI-enabled safety-critical systems and addresses the safety problem by proposing an operational safety verification technique that relies on solving the following subproblems:
1. Given input/output operational traces collected from sensors and actuators, automatically learn a hybrid automaton (HA) representation of the AI-enabled CPS.
2. Given the learned HA, evaluate the operational safety of the AI-enabled CPS in the field.
This dissertation presents novel approaches for learning hybrid automata models from time-series traces collected from the operation of the AI-enabled CPS in the real world, for both linear and nonlinear CPS. The learned model allows operational safety to be stringently evaluated by comparing the learned HA model against a reference specification model of the system. The proposed techniques are evaluated on the artificial pancreas control system.
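
The abstract does not detail how the hybrid automaton is learned from traces. As a rough, hedged illustration of the first subproblem only (the function name, the mode-detection heuristic, and the synthetic trace are all hypothetical, not the dissertation's algorithm), the sketch below bins a one-dimensional trace by its derivative to assign modes and fits linear dynamics within each mode.

    import numpy as np

    def learn_linear_modes(t, x, n_modes=2):
        """Toy sketch: assign samples to modes by binning the local slope,
        then fit per-mode linear dynamics dx/dt ~ a*x + b."""
        dxdt = np.gradient(x, t)
        # crude mode assignment: split slopes into n_modes quantile bins
        edges = np.quantile(dxdt, np.linspace(0, 1, n_modes + 1))
        modes = np.clip(np.digitize(dxdt, edges[1:-1]), 0, n_modes - 1)
        dynamics = {}
        for m in range(n_modes):
            idx = modes == m
            if idx.sum() < 2:
                continue
            A = np.vstack([x[idx], np.ones(idx.sum())]).T
            a, b = np.linalg.lstsq(A, dxdt[idx], rcond=None)[0]
            dynamics[m] = (a, b)
        return modes, dynamics

    # usage on a synthetic two-mode trace (linear rise, then exponential decay)
    t = np.linspace(0, 10, 500)
    x = np.where(t < 5, t, 5 * np.exp(-(t - 5)))
    print(learn_linear_modes(t, x)[1])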
Date Created
2020

Cognitive Computing for Decision Support

Description
The Cognitive Decision Support (CDS) model is proposed. The model is widely applicable and scales to realistic, complex decision problems based on adaptive learning. The utility of a decision is discussed, and four types of decisions associated with the CDS model are identified. The CDS model is designed to learn decision utilities. Data enrichment is introduced to promote the effectiveness of learning. Grouping is introduced for large-scale decision learning. Introspection and adjustment are presented for adaptive learning. Triage recommendation is incorporated to indicate the trustworthiness of suggested decisions.

The CDS model and methodologies are integrated into an architecture using concepts from cognitive computing. The proposed architecture is implemented with an example use case in inventory management.

Reinforcement learning (RL) is discussed as an alternative, generalized adaptive learning engine for the CDS system to handle the complexity of many problems with unknown environments. An adaptive state dimension with context, which can grow as new information becomes available, is discussed. Several enhanced RL components that are critical for complex use cases are integrated. Deep Q networks are embedded with the adaptive learning methodologies and applied to an example supply chain management problem on capacity planning.

A new approach using Ito stochastic processes is proposed as a more generalized method to generate non-stationary demands in various patterns that can be used in decision problems. The proposed method generates demands with varying non-stationary patterns, including trend, cyclical, seasonal, and irregular patterns. Conventional approaches are identified as special cases of the proposed method. Demands are illustrated in realistic settings for various decision models. Various statistical criteria are applied to filter the generated demands. The method is applied to a real-world example.
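
The abstract does not give the exact stochastic differential equation. As a hedged illustration of the idea (the drift form, function name, and all parameter values are hypothetical), the sketch below generates one non-stationary demand path from an Ito process with a trend drift, a seasonal drift term, and Brownian noise, discretized with the Euler-Maruyama scheme.

    import numpy as np

    def simulate_demand(T=365, dt=1.0, d0=100.0, trend=0.05,
                        season_amp=10.0, period=30.0, sigma=2.0, seed=0):
        """Euler-Maruyama discretization of a toy Ito process:
        dD_t = (trend + seasonal'(t)) dt + sigma dW_t,
        producing demand with trend, cyclical, and irregular components."""
        rng = np.random.default_rng(seed)
        n = int(T / dt)
        demand = np.empty(n)
        demand[0] = d0
        for k in range(1, n):
            t = k * dt
            seasonal_slope = season_amp * (2 * np.pi / period) * np.cos(2 * np.pi * t / period)
            drift = trend + seasonal_slope
            demand[k] = demand[k - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        return np.clip(demand, 0.0, None)   # demand cannot go negative

    print(simulate_demand()[:5])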
Date Created
2020

Active Learning with Explore and Exploit Equilibriums

Description
In conventional supervised learning tasks, information retrieval from extensive collections of data happens automatically at low cost, whereas in many real-world problems obtaining labeled data can be hard, time-consuming, and expensive. Consider healthcare systems, for example, where unlabeled medical images are abundant while labeling requires a considerable amount of knowledge from experienced physicians. Active learning addresses this challenge with an iterative process to select instances from the unlabeled data to annotate and improve the supervised learner. At each step, the query of examples to be labeled can be considered as a dilemma between exploitation of the supervised learner's current knowledge and exploration of the unlabeled input features.

Motivated by the need for efficient active learning strategies, this dissertation proposes new algorithms for batch-mode, pool-based active learning. The research considers the following questions: how can unsupervised knowledge of the input features (exploration) improve learning when incorporated with supervised learning (exploitation)? How can exploration in active learning be characterized when data are high-dimensional? Finally, how can the balance between exploration and exploitation be made adaptively?

The first contribution proposes a new active learning algorithm, Cluster-based Stochastic Query-by-Forest (CSQBF), which provides a batch-mode strategy that accelerates learning with added value from exploration and improved exploitation scores. CSQBF balances exploration and exploitation using a probabilistic scoring criterion based on classification probabilities from a tree-based ensemble model within each data cluster.
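
CSQBF's scoring criterion is described here only at a high level. As a hedged sketch of the general cluster-plus-tree-ensemble idea (not the CSQBF algorithm itself; the function name and parameters are hypothetical), the code below clusters the unlabeled pool, scores each point by the margin between the top two random forest class probabilities, and queries the most uncertain point in each cluster.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier

    def cluster_uncertainty_batch(X_labeled, y_labeled, X_pool, n_clusters=10, seed=0):
        """Fit a tree ensemble on the labeled data (exploitation), then pick the
        most uncertain pool point within each cluster so the batch also covers
        the input space (exploration). Assumes at least two labeled classes."""
        clf = RandomForestClassifier(n_estimators=100, random_state=seed)
        clf.fit(X_labeled, y_labeled)
        proba = np.sort(clf.predict_proba(X_pool), axis=1)
        uncertainty = 1.0 - (proba[:, -1] - proba[:, -2])   # small top-2 margin = high uncertainty
        clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X_pool)
        batch = []
        for c in range(n_clusters):
            members = np.where(clusters == c)[0]
            if len(members):
                batch.append(int(members[np.argmax(uncertainty[members])]))
        return batch   # indices into X_pool to send for labeling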

The second contribution introduces two more query strategies, Double Margin Active Learning (DMAL) and Cluster Agnostic Active Learning (CAAL), that combine consistent exploration and exploitation modules into a coherent and unified measure for label query. Instead of assuming a fixed clustering structure, CAAL and DMAL adopt a soft-clustering strategy which provides a new approach to formalize exploration in active learning.

The third contribution addresses the challenge of dynamically making a balance between exploration and exploitation criteria throughout the active learning process. Two adaptive algorithms are proposed based on feedback-driven bandit optimization frameworks that elegantly handle this issue by learning the relationship between exploration-exploitation trade-off and an active learner's performance.
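
The specific bandit formulation is not described in the abstract. As a rough, hedged illustration of feedback-driven trade-off tuning (the class name, candidate weights, and reward definition are all hypothetical), the sketch below runs UCB1 over a few candidate exploration weights and rewards each arm with the observed gain in learner performance after querying with that weight.

    import numpy as np

    class TradeoffBandit:
        """UCB1 over candidate exploration weights; the reward is the observed
        gain in validation accuracy after labeling a batch chosen with that weight."""
        def __init__(self, weights=(0.0, 0.25, 0.5, 0.75, 1.0)):
            self.weights = list(weights)
            self.counts = np.zeros(len(weights))
            self.values = np.zeros(len(weights))

        def select(self):
            untried = np.where(self.counts == 0)[0]
            if len(untried):
                return int(untried[0])
            total = self.counts.sum()
            ucb = self.values + np.sqrt(2 * np.log(total) / self.counts)
            return int(np.argmax(ucb))

        def update(self, arm, reward):
            self.counts[arm] += 1
            self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

    # usage: arm = bandit.select(); query with weight bandit.weights[arm];
    # then bandit.update(arm, new_accuracy - old_accuracy)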
Date Created
2020

Knowledge-driven methods for geographic information extraction in the biomedical domain

Description
Accounting for over a third of all emerging and re-emerging infections, viruses represent a major public health threat, which researchers and epidemiologists across the world have been attempting to contain for decades. Recently, genomics-based surveillance of viruses through methods such as virus phylogeography has grown into a popular tool for infectious disease monitoring. When conducting such surveillance studies, researchers need to manually retrieve geographic metadata denoting the location of infected host (LOIH) of viruses from public sequence databases such as GenBank and any publication related to their study. The large volume of semi-structured and unstructured information that must be reviewed for this task, along with the ambiguity of geographic locations, make it especially challenging. Prior work has demonstrated that the majority of GenBank records lack sufficient geographic granularity concerning the LOIH of viruses. As a result, reviewing full-text publications is often necessary for conducting in-depth analysis of virus migration, which can be a very time-consuming process. Moreover, integrating geographic metadata pertaining to the LOIH of viruses from different sources, including different fields in GenBank records as well as full-text publications, and normalizing the integrated metadata to unique identifiers for subsequent analysis, are also challenging tasks, often requiring expert domain knowledge. Therefore, automated information extraction (IE) methods could help significantly accelerate this process, positively impacting public health research. However, very few research studies have attempted the use of IE methods in this domain.

This work explores the use of novel knowledge-driven geographic IE heuristics for extracting, integrating, and normalizing the LOIH of viruses based on information available in GenBank and related publications; when evaluated on manually annotated test sets, the methods were found to have a high accuracy and shown to be adequate for addressing this challenging problem. It also presents GeoBoost, a pioneering software system for georeferencing GenBank records, as well as a large-scale database containing over two million virus GenBank records georeferenced using the algorithms introduced here. The methods, database and software developed here could help support diverse public health domains focusing on sequence-informed virus surveillance, thereby enhancing existing platforms for controlling and containing disease outbreaks.
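
The heuristics themselves are not spelled out in the abstract. As a minimal hedged sketch of the kind of metadata involved (the toy gazetteer and normalization are purely illustrative, not the GeoBoost algorithm), the code below pulls location-bearing qualifiers from the source feature of GenBank records with Biopython and maps them to canonical identifiers.

    from Bio import SeqIO

    # toy gazetteer mapping raw location strings to canonical identifiers (illustrative only)
    GAZETTEER = {"usa: arizona": "USA/Arizona", "viet nam": "Vietnam"}

    def extract_loih(genbank_path):
        """Collect candidate location-of-infected-host strings from each record."""
        locations = {}
        for record in SeqIO.parse(genbank_path, "genbank"):
            for feature in record.features:
                if feature.type != "source":
                    continue
                for key in ("country", "geo_loc_name", "isolation_source"):
                    for value in feature.qualifiers.get(key, []):
                        raw = value.strip().lower()
                        locations[record.id] = GAZETTEER.get(raw, value)
        return locations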
Date Created
2019

Machine Learning Models for High-dimensional Biomedical Data

Description
Recent technological advances enable the collection of various complex, heterogeneous, and high-dimensional data in biomedical domains. The increasing availability of high-dimensional biomedical data creates the need for new machine learning models for effective data analysis and knowledge discovery. This dissertation introduces several unsupervised and supervised methods to help understand the data, discover patterns, and improve decision making. All the proposed methods can generalize to other industrial fields.

The first topic of this dissertation focuses on data clustering. Data clustering is often the first step in analyzing a dataset without label information. Clustering high-dimensional data with mixed categorical and numeric attributes remains a challenging, yet important, task. A clustering algorithm based on tree ensembles, CRAFTER, is proposed to tackle this task in a scalable manner.
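
CRAFTER itself is not specified in the abstract. As a hedged stand-in for the general idea of tree-ensemble clustering of mixed data (not the CRAFTER algorithm; the function name and parameters are hypothetical), the sketch below one-hot encodes categorical columns, embeds each row by the leaves it reaches in a forest of random trees, and clusters in that leaf space.

    import pandas as pd
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomTreesEmbedding

    def tree_embed_cluster(df, categorical_cols, n_clusters=3, seed=0):
        """Cluster mixed categorical/numeric rows via a random-trees leaf embedding."""
        X = pd.get_dummies(df, columns=categorical_cols)   # handle categorical attributes
        leaves = RandomTreesEmbedding(n_estimators=100, random_state=seed).fit_transform(X)
        return KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(leaves.toarray())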

The second part of this dissertation aims to develop data representation methods for genome sequencing data, a special type of high-dimensional data in the biomedical domain. The proposed data representation method, Bag-of-Segments, can summarize the key characteristics of the genome sequence into a small number of features with good interpretability.

The third part of this dissertation introduces an end-to-end deep neural network model, GCRNN, for time series classification with emphasis on both the accuracy and the interpretation. GCRNN contains a convolutional network component to extract high-level features, and a recurrent network component to enhance the modeling of the temporal characteristics. A feed-forward fully connected network with the sparse group lasso regularization is used to generate the final classification and provide good interpretability.
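
The abstract gives the GCRNN architecture only at a block level. The sketch below is a hedged PyTorch rendering of that description (a convolutional feature extractor, a recurrent layer for temporal structure, and a fully connected classifier); the class name, layer sizes, and the simplified sparse-group penalty are assumptions, not the published model.

    import torch
    import torch.nn as nn

    class ConvRecurrentClassifier(nn.Module):
        """Hedged block-level rendering of a CNN + RNN + FC time-series classifier."""
        def __init__(self, n_channels, n_classes, hidden=64):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            )
            self.gru = nn.GRU(64, hidden, batch_first=True)
            self.fc = nn.Linear(hidden, n_classes)

        def forward(self, x):                      # x: (batch, channels, time)
            h = self.conv(x).transpose(1, 2)       # -> (batch, time, features)
            _, last = self.gru(h)
            return self.fc(last.squeeze(0))

    def sparse_group_penalty(fc, lam=1e-3, alpha=0.5):
        """Rough sparse-group-lasso-style term on the final layer: L1 on individual
        weights plus L2 on per-input-feature groups, added to the training loss."""
        w = fc.weight
        return lam * (alpha * w.abs().sum() + (1 - alpha) * w.norm(dim=0).sum())

    # usage: loss = nn.functional.cross_entropy(model(x), y) + sparse_group_penalty(model.fc)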

The last topic centers around the dimensionality reduction methods for time series data. A good dimensionality reduction method is important for the storage, decision making and pattern visualization for time series data. The CRNN autoencoder is proposed to not only achieve low reconstruction error, but also generate discriminative features. A variational version of this autoencoder has great potential for applications such as anomaly detection and process control.
Date Created
2018

A Novel Approach to the Comparative Genomic Analysis of Canine and Human Cancers

Description
Study of canine cancer’s molecular underpinnings holds great potential for informing veterinary and human oncology. Sporadic canine cancers are highly abundant (~4 million diagnoses/year in the United States), and the dog’s unique genomic architecture due to selective inbreeding, alongside the high similarity between the dog and human genomes, confers power for improving understanding of cancer genes. However, characterization of canine cancer genome landscapes has been limited, hindered by the lack of canine-specific tools and resources. To enable robust and reproducible comparative genomic analysis of canine cancers, I have developed a workflow for somatic and germline variant calling in canine cancer genomic data. I first adapted a human cancer genomics pipeline to create a semi-automated canine pipeline used to map genomic landscapes of canine melanoma, lung adenocarcinoma, osteosarcoma, and lymphoma. This pipeline also forms the backbone of my novel comparative genomics workflow.

Practical impediments to comparative genomic analysis of dog and human include challenges identifying similarities in mutation type and function across species. For example, canine genes could have evolved different functions, and their human orthologs may perform different functions. Hence, I undertook a systematic statistical evaluation of dog and human cancer genes and assessed functional similarities and differences between orthologs to improve understanding of the roles of these genes in cancer across species. I tested this pipeline on canine and human Diffuse Large B-Cell Lymphoma (DLBCL), given that canine DLBCL is the most comprehensively genomically characterized canine cancer. Logistic regression with genes bearing somatic coding mutations in each cancer was used to determine if conservation metrics (sequence identity, network placement, etc.) could explain co-mutation of genes in both species. Using this model, I identified 25 co-mutated and evolutionarily similar genes that may be compelling cross-species cancer genes. For example, PCLO was identified as a co-mutated conserved gene; PCLO has previously been identified as recurrently mutated in human DLBCL, but its role in oncogenesis remains unclear. Further investigation of these genes might shed new light on the biology of lymphoma in dogs and humans, and this approach may more broadly serve to prioritize new genes for comparative cancer biology studies.
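
The exact feature set is not enumerated beyond the examples given. As a hedged sketch of the modeling step (the table, column names, and values are hypothetical), the code below fits a logistic regression predicting whether an ortholog pair is co-mutated in both species from conservation-style covariates such as sequence identity.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # hypothetical per-gene table: one row per dog/human ortholog pair
    genes = pd.DataFrame({
        "seq_identity":   [0.95, 0.80, 0.99, 0.70, 0.90],
        "network_degree": [12, 3, 25, 1, 8],
        "co_mutated":     [1, 0, 1, 0, 1],   # somatically mutated in both canine and human DLBCL
    })

    model = LogisticRegression().fit(genes[["seq_identity", "network_degree"]], genes["co_mutated"])
    print(dict(zip(["seq_identity", "network_degree"], model.coef_[0])))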
Date Created
2018

The Optimal Control of Child Delivery for Women with Hypertensive Disorders of Pregnancy

Description
Hypertensive disorders of pregnancy (HDP) affect 5%-15% of pregnancies around the globe and are a leading cause of maternal and neonatal morbidity and mortality. HDP are progressive disorders for which the only cure is to deliver the baby. An increasing trend in the prevalence of HDP has been observed in recent years. This trend is anticipated to continue due to the rise in the prevalence of conditions that strongly influence hypertension, such as obesity and metabolic syndrome. In order to lessen the adverse outcomes due to HDP, we need to study (1) the natural progression of HDP, (2) the risks of adverse outcomes associated with these disorders, and (3) the optimal timing of delivery for women with HDP.

In the first study, the natural progression of HDP in the third trimester of pregnancy is modeled with a discrete-time Markov chain (DTMC). The transition probabilities of the DTMC are estimated from clinical data with an order-restricted inference model that maximizes the likelihood function subject to a set of order restrictions on the transition probabilities. The results provide useful insights into the progression of HDP, and the estimated transition probabilities are used to parametrize the decision models in the third study.
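
The specific order restrictions are not stated in the abstract. As a hedged, minimal illustration of order-restricted maximum likelihood (the counts, states, and the single restriction are hypothetical), the sketch below estimates one row of a transition matrix from multinomial counts subject to an ordering between two of its probabilities.

    import numpy as np
    from scipy.optimize import minimize

    # hypothetical weekly transition counts from a "mild HDP" state to (mild, severe, delivered)
    counts = np.array([40, 15, 5])

    def neg_log_lik(p):
        p = np.clip(p, 1e-9, 1.0)
        return -np.sum(counts * np.log(p))

    constraints = [
        {"type": "eq",   "fun": lambda p: p.sum() - 1.0},   # row of a stochastic matrix
        {"type": "ineq", "fun": lambda p: p[0] - p[1]},     # order restriction: staying mild at least as likely as worsening
    ]
    res = minimize(neg_log_lik, x0=np.full(3, 1 / 3), bounds=[(0, 1)] * 3, constraints=constraints)
    print(res.x)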

In the second study, the risks of maternal and neonatal adverse outcomes for women with HDP are quantified with a composite measure of childbirth morbidity, and the estimated risks are compared with respect to the type of HDP at delivery, gestational age at delivery, and type of delivery in a retrospective cohort study. Furthermore, the safety of child delivery with respect to the same variables is assessed with a provider survey and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The methods and results of this study are used to parametrize the decision models in the third study.
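
TOPSIS itself is a standard multi-criteria ranking procedure; the sketch below shows the generic computation (the decision matrix, weights, and criteria directions are hypothetical, not the study's survey data).

    import numpy as np

    def topsis(matrix, weights, benefit):
        """Rank alternatives by closeness to the ideal solution.
        matrix: alternatives x criteria; benefit[j] is True if larger is better."""
        norm = matrix / np.linalg.norm(matrix, axis=0)
        v = norm * weights
        ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
        anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
        d_pos = np.linalg.norm(v - ideal, axis=1)
        d_neg = np.linalg.norm(v - anti, axis=1)
        return d_neg / (d_pos + d_neg)            # higher closeness = closer to ideal

    # hypothetical scores for three delivery options on two criteria (benefit, cost)
    scores = topsis(np.array([[0.8, 0.3], [0.6, 0.5], [0.4, 0.9]]),
                    weights=np.array([0.6, 0.4]), benefit=np.array([True, False]))
    print(scores)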

In the third study, the decision problem of timing of delivery for women with HDP is formulated as a discrete-time Markov decision process (MDP) model that minimizes the risks of maternal and neonatal adverse outcomes. We additionally formulate a robust MDP model that gives the worst-case optimal policy when transition probabilities are allowed to vary within their confidence intervals. The results of the decision models are assessed within a probabilistic sensitivity analysis (PSA) that considers the uncertainty in the estimated risk values. In our PSA, the performance of candidate delivery policies is evaluated using a large number of problem instances that are constructed according to the orders between model parameters to incorporate physicians' intuition.
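
The decision model's states, rewards, and horizon are not given in the abstract. As a hedged toy illustration of the timing-of-delivery MDP idea (every number, state, and cost below is made up), the sketch solves a small deliver-versus-wait problem by backward induction.

    import numpy as np

    # hypothetical setup: 2 health states (mild, severe), weeks 34..40,
    # action "deliver" incurs a gestational-age-dependent neonatal risk,
    # action "wait" incurs a weekly maternal risk and a state transition
    weeks = list(range(34, 41))
    P_wait = np.array([[0.85, 0.15],            # mild -> (mild, severe) over one week
                       [0.00, 1.00]])           # severe stays severe
    deliver_cost = np.array([[6, 5, 4, 3, 2, 1, 1],    # neonatal risk falls with gestational age (mild)
                             [8, 7, 6, 5, 4, 3, 3]])   # and is higher when severe
    wait_cost = np.array([0.5, 2.0])                   # weekly maternal risk of expectant management

    V = deliver_cost[:, -1].astype(float)              # at week 40 the baby must be delivered
    policy = {}
    for w in reversed(range(len(weeks) - 1)):          # backward induction over weeks
        q_deliver = deliver_cost[:, w]
        q_wait = wait_cost + P_wait @ V
        policy[weeks[w]] = np.where(q_deliver <= q_wait, "deliver", "wait")
        V = np.minimum(q_deliver, q_wait)
    print(policy)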
Date Created
2018

Texture analysis platform for imaging biomarker research

Description
The rate of progress in improving survival of patients with solid tumors is slow due to late-stage diagnosis and poor tumor characterization processes that fail to effectively reflect the nature of the tumor before treatment or the subsequent change in its dynamics in response to treatment. Further advancement of targeted therapies relies on advancements in biomarker research. In the context of solid tumors, bio-specimen samples such as biopsies serve as the main source of biomarkers used in the treatment and monitoring of cancer, even though biopsy samples are susceptible to sampling error and, more importantly, are local and offer a narrow temporal scope.

Because of its established role in cancer care and its non-invasive nature, imaging offers the potential to complement the findings of cancer biology. Over the past decade, a compelling body of literature has emerged suggesting a more pivotal role for imaging in the diagnosis, prognosis, and monitoring of diseases. These advances have facilitated the rise of an emerging practice known as radiomics: the extraction and analysis of large numbers of quantitative features from medical images to improve disease characterization and prediction of outcome. It has been suggested that radiomics can contribute to biomarker discovery by detecting imaging traits that are complementary to or interchangeable with other markers.

This thesis seeks further advancement of imaging biomarker discovery. This research unfolds over two aims: I) developing a comprehensive methodological pipeline for converting diagnostic imaging data into mineable sources of information, and II) investigating the utility of imaging data in clinical diagnostic applications. Four validation studies were conducted using the radiomics pipeline developed in aim I. These studies had the following goals: (1) distinguishing between benign and malignant head and neck lesions, (2) differentiating benign and malignant breast cancers, (3) predicting the status of Human Papillomavirus in head and neck cancers, and (4) predicting neuropsychological performance as it relates to Alzheimer’s disease progression. The long-term objective of this thesis is to improve patient outcome and survival by facilitating the incorporation of routine care imaging data into decision-making processes.
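
The full radiomics pipeline is not described in the abstract. As a small hedged illustration of the kind of texture features such pipelines typically extract (the function name, quantization, and feature choice are arbitrary, not the thesis pipeline), the sketch below computes a few gray-level co-occurrence matrix (GLCM) statistics from a 2-D region of interest with scikit-image.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(roi, levels=32):
        """Compute a handful of GLCM texture statistics for one region of interest."""
        # quantize intensities so the co-occurrence matrix stays small
        img = np.digitize(roi, np.linspace(roi.min(), roi.max(), levels)) - 1
        glcm = graycomatrix(img.astype(np.uint8), distances=[1],
                            angles=[0, np.pi / 2], levels=levels,
                            symmetric=True, normed=True)
        return {name: float(graycoprops(glcm, name).mean())
                for name in ("contrast", "homogeneity", "energy", "correlation")}

    # usage on a synthetic region of interest
    print(glcm_features(np.random.default_rng(0).integers(0, 255, (64, 64))))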
Date Created
2017

Novel methods of biomarker discovery and predictive modeling using Random Forest

Description
Random forest (RF) is a popular and powerful machine learning technique. It can be used for classification, regression, and unsupervised clustering. In its original form, introduced by Leo Breiman, RF is used as a predictive model to generate predictions for new observations. Recent research has proposed several RF-based methods for feature selection and for generating prediction intervals; however, these methods are limited in their applicability and accuracy. In this dissertation, RF is applied to build a predictive model for a complex dataset and is used as the basis for two novel methods: one for biomarker discovery and one for generating prediction intervals.

Firstly, a biodosimetry model is developed using RF to determine the absorbed radiation dose from gene expression measured in blood samples of potentially exposed individuals. To improve the prediction accuracy of the biodosimetry, day-specific models were built to deal with the day-interaction effect, and a technique of nested modeling was proposed. The nested models can fit these complex data, which exhibit large variability and non-linear relationships.

Secondly, a panel of biomarkers was selected using a data-driven feature selection method as well as hand-picking, considering prior knowledge and other constraints. To incorporate domain knowledge, a method called Know-GRRF was developed based on guided regularized RF. This method incorporates domain knowledge as a penalty term to regulate the selection of candidate features in RF. It adds more flexibility to data-driven feature selection and can improve the interpretability of models. Know-GRRF showed significant improvement in cross-species prediction when cross-species correlation was used to guide the selection of biomarkers. The method can also compete with existing methods when intrinsic data characteristics are used as an alternative to domain knowledge in simulated datasets.

Lastly, a novel non-parametric method, RFerr, was developed to generate prediction intervals using RF regression. This method is widely applicable to any predictive model and was shown to have better coverage and precision than existing methods on the real-world radiation dataset, as well as on benchmark and simulated datasets.
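
The RFerr construction is only named here. As a hedged stand-in illustrating the general idea of RF prediction intervals (one standard construction, not necessarily the author's exact method; names and data are made up), the sketch below forms intervals from empirical quantiles of out-of-bag residuals.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def rf_prediction_interval(X_train, y_train, X_new, alpha=0.1, seed=0):
        """Point prediction plus/minus empirical quantiles of out-of-bag residuals."""
        rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=seed)
        rf.fit(X_train, y_train)
        residuals = y_train - rf.oob_prediction_
        lo, hi = np.quantile(residuals, [alpha / 2, 1 - alpha / 2])
        pred = rf.predict(X_new)
        return pred + lo, pred, pred + hi

    # usage on synthetic data
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = 2 * X[:, 0] + rng.normal(size=200)
    lower, point, upper = rf_prediction_interval(X[:150], y[:150], X[150:])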
Date Created
2017