The Optimal Control of Child Delivery for Women with Hypertensive Disorders of Pregnancy

Description

Hypertensive disorders of pregnancy (HDP) affect 5%-15% of pregnancies worldwide and are a leading cause of maternal and neonatal morbidity and mortality. HDP are progressive disorders for which the only cure is to deliver the baby. An increasing trend in the prevalence of HDP has been observed in recent years, and this trend is anticipated to continue due to the rising prevalence of conditions that strongly influence hypertension, such as obesity and metabolic syndrome. To lessen the adverse outcomes due to HDP, we need to study (1) the natural progression of HDP, (2) the risks of adverse outcomes associated with these disorders, and (3) the optimal timing of delivery for women with HDP.

In the first study, the natural progression of HDP in the third trimester of pregnancy is modeled with a discrete-time Markov chain (DTMC). The transition probabilities of the DTMC are estimated from clinical data with an order-restricted inference model that maximizes the likelihood function subject to a set of order restrictions among the transition probabilities. The results provide useful insights into the progression of HDP, and the estimated transition probabilities are used to parametrize the decision models in the third study.
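
As a rough illustration of this kind of order-restricted estimation, the sketch below maximizes a multinomial transition likelihood subject to a single ordering constraint. The states, counts, and the particular restriction are hypothetical placeholders, not the dissertation's actual model or data.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical weekly transition counts for three illustrative HDP severity
# states (rows = current state, columns = next state); not actual study data.
counts = np.array([
    [120, 30, 5],    # mild -> mild / severe / delivered
    [10, 60, 25],    # severe -> mild / severe / delivered
    [0, 0, 50],      # delivered is absorbing
])

def neg_log_likelihood(x):
    # x packs the first two rows of the transition matrix (row-stochastic).
    p = x.reshape(2, 3)
    return -np.sum(counts[:2] * np.log(p + 1e-12))

# Each estimated row must sum to one.
constraints = [
    {"type": "eq", "fun": lambda x, i=i: x[3 * i:3 * i + 3].sum() - 1.0}
    for i in range(2)
]
# Illustrative order restriction: the probability of progressing to delivery
# is at least as large from the severe state as from the mild state.
constraints.append({"type": "ineq", "fun": lambda x: x[5] - x[2]})

x0 = np.full(6, 1.0 / 3.0)
res = minimize(neg_log_likelihood, x0, bounds=[(1e-9, 1.0)] * 6,
               constraints=constraints, method="SLSQP")
p_hat = res.x.reshape(2, 3)
print(np.round(p_hat, 3))
```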

In the second study, the risks of maternal and neonatal adverse outcomes for women with HDP are quantified with a composite measure of childbirth morbidity, and the estimated risks are compared with respect to type of HDP at delivery, gestational age at delivery, and type of delivery in a retrospective cohort study. Furthermore, the safety of child delivery with respect to the same variables is assessed with a provider survey and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The methods and results of this study are used to parametrize the decision models in the third study.
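
TOPSIS itself follows a standard recipe: normalize the decision matrix, weight it, and rank alternatives by their closeness to an ideal solution. The sketch below shows that generic computation; the criteria, weights, and scores are invented for illustration and are not the survey results.

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Rank alternatives with TOPSIS.

    scores: (alternatives x criteria) matrix, e.g. mean provider safety ratings.
    weights: criterion weights summing to one.
    benefit: True where larger values are better, False where smaller are better.
    """
    norm = scores / np.linalg.norm(scores, axis=0)      # vector normalization
    v = norm * weights                                   # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                       # closeness to the ideal

# Hypothetical example: three delivery options scored on two criteria.
scores = np.array([[7.0, 2.0], [5.0, 1.0], [8.0, 4.0]])
closeness = topsis(scores, weights=np.array([0.6, 0.4]),
                   benefit=np.array([True, False]))
print(closeness.argsort()[::-1])  # options ranked from best to worst
```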

In the third study, the decision problem of timing of delivery for women with HDP is formulated as a discrete-time Markov decision process (MDP) model that minimizes the risks of maternal and neonatal adverse outcomes. We additionally formulate a robust MDP model that gives the worst-case optimal policy when transition probabilities are allowed to vary within their confidence intervals. The results of the decision models are assessed within a probabilistic sensitivity analysis (PSA) that considers the uncertainty in the estimated risk values. In our PSA, the performance of candidate delivery policies is evaluated using a large number of problem instances that are constructed according to the orders between model parameters to incorporate physicians' intuition.
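
A minimal sketch of how such a finite-horizon delivery-timing MDP can be solved by backward induction is shown below. All states, risks, and transition probabilities are illustrative placeholders, not the estimates or the exact formulation used in the dissertation.

```python
import numpy as np

# Illustrative finite-horizon MDP for timing of delivery; every number below
# is a placeholder, not an estimate from the dissertation.
weeks = list(range(34, 41))           # decision epochs (gestational weeks)
states = ["mild", "severe"]
# Weekly transition probabilities between states if the pregnancy continues.
P = np.array([[0.85, 0.15],
              [0.05, 0.95]])
# Immediate "deliver now" risk by week and state, and per-week "continue" risk.
deliver_risk = {34: [0.30, 0.45], 35: [0.25, 0.40], 36: [0.20, 0.35],
                37: [0.15, 0.30], 38: [0.12, 0.28], 39: [0.10, 0.27],
                40: [0.10, 0.27]}
continue_risk = np.array([0.02, 0.08])

V = np.array(deliver_risk[40])        # delivery is forced at the horizon
policy = {}
for t in reversed(weeks[:-1]):        # backward induction over weeks
    deliver = np.array(deliver_risk[t])
    wait = continue_risk + P @ V      # expected risk-to-go if we wait one week
    policy[t] = np.where(deliver <= wait, "deliver", "wait")
    V = np.minimum(deliver, wait)
print(policy)
```
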
Date Created
2018

Design, analytics and quality assurance for emerging personalized clinical diagnostics based on next-gen sequencing

Description

Major advancements in biology and medicine have been realized during recent decades, including massively parallel sequencing, which allows researchers to collect millions or billions of short reads from a DNA or RNA sample. This capability opens the door to a renaissance in personalized medicine if effectively deployed. Three projects that address major and necessary advancements in massively parallel sequencing are included in this dissertation. The first study involves a pair of algorithms to verify patient identity based on single nucleotide polymorphisms (SNPs). In brief, we developed a method that allows de novo construction of sample relationships, e.g., which samples come from the same individual and which come from different individuals. We also developed a method to confirm the hypothesis that a tumor came from a known individual. The second study derives an algorithm to multiplex multiple Polymerase Chain Reaction (PCR) reactions while minimizing the interference between reactions that compromises results. PCR is a powerful technique that amplifies pre-determined regions of DNA and is often used to selectively amplify DNA and RNA targets destined for sequencing. It is highly desirable to multiplex reactions to save on reagent and assay setup costs as well as to equalize the effect of minor handling issues across gene targets. Our solution involves a binary integer program that minimizes events likely to cause interference between PCR reactions. The third study involves the design and analysis methods required to analyze gene expression and copy number results against a reference range in a clinical setting for guiding patient treatments. Our goal is to determine which events are present in a given tumor specimen; these events may be mutations, DNA copy number changes, or RNA expression changes. At the time of writing, all three techniques are being used for their intended purposes in major research and diagnostic projects. The SNP matching solution has been selected by The Cancer Genome Atlas to determine sample identity. Paradigm Diagnostics, Viomics, and the International Genomics Consortium utilize the PCR multiplexing technique for various types of PCR reactions on multi-million dollar projects. The reference-range-based normalization method is used by Paradigm Diagnostics to analyze results from every patient.
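
As a simple illustration of SNP-based identity checking, the sketch below computes genotype concordance between two samples over a shared SNP panel; a high concordance supports the hypothesis that the samples come from the same individual. The coding scheme and panel are hypothetical and do not reproduce the dissertation's actual algorithms.

```python
import numpy as np

def snp_concordance(geno_a, geno_b):
    """Fraction of informative SNP sites with matching genotype calls.

    geno_a, geno_b: arrays of genotype calls coded 0/1/2 (alt-allele counts),
    with -1 marking no-calls; both correspond to the same SNP panel.
    """
    a, b = np.asarray(geno_a), np.asarray(geno_b)
    informative = (a >= 0) & (b >= 0)   # ignore sites missing in either sample
    return np.mean(a[informative] == b[informative])

# Hypothetical panel of 8 SNPs from two samples; high concordance suggests the
# samples come from the same individual, low concordance that they do not.
tumor  = [0, 1, 2, 1, -1, 0, 2, 1]
normal = [0, 1, 2, 1,  0, 0, 2, 0]
print(f"concordance = {snp_concordance(tumor, normal):.2f}")
```
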
Date Created
2014

Minimizing total weighted tardiness in a two staged flexible flow-shop with batch processing, incompatible job families and unequal ready times using time window decomposition

Description

This research is motivated by a deterministic scheduling problem that is fairly common in manufacturing environments, where certain processes call for a machine working on multiple jobs at the same time. An example of such an environment is wafer fabrication in the semiconductor industry, where some stages can be modeled as batch processes. Significant work has been done in the past on a single stage of parallel machines that process jobs in batches. The primary motivation behind this research is to extend that work to a two-stage flow-shop where jobs arrive with unequal ready times and belong to incompatible job families, with the goal of minimizing total weighted tardiness. As a first step toward proposing solutions, a mixed-integer mathematical model of the problem is developed. The problem is NP-hard, and thus the mathematical program can solve only small problem instances in a reasonable amount of time. The next step is to build heuristics that can provide feasible solutions in polynomial time for larger problem instances. The proposed heuristics are based on time-window decomposition, where jobs within a moving time frame are considered for batching each time a machine becomes available on either stage. The Apparent Tardiness Cost (ATC) rule is used to build batches and is modified to calculate ATC indices at both the batch and the job level. An improvement to this heuristic is proposed in which the heuristic is run iteratively, each time assigning the start times of jobs on the second stage as due dates for the jobs on the first stage. The underlying logic behind the iterative approach is to improve the way due dates are estimated for the first stage based on the assigned due dates for jobs in the second stage. An important study carried out as part of this research analyzes the bottleneck stage in terms of its location and how it affects the performance measure. Extensive experimentation is carried out to test how the quality of the solution varies when input parameters are varied between high and low values.
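
For reference, the standard ATC priority index weighs a job's weight-to-processing-time ratio against its slack, as in the sketch below; the job data, look-ahead parameter, and current time are made-up values, and the dissertation's batch-level modification is not reproduced here.

```python
import math

def atc_index(weight, proc_time, due_date, t, k, p_bar):
    """Apparent Tardiness Cost priority index for a job at time t.

    Larger index = higher priority. k is a look-ahead scaling parameter and
    p_bar the average processing time of the candidate jobs.
    """
    slack = max(due_date - proc_time - t, 0.0)
    return (weight / proc_time) * math.exp(-slack / (k * p_bar))

# Hypothetical jobs: (weight, processing time, due date); the highest-index
# jobs of a family would be batched each time a batch machine becomes free.
jobs = [(2.0, 4.0, 20.0), (1.0, 3.0, 12.0), (3.0, 5.0, 30.0)]
t, k = 5.0, 2.0
p_bar = sum(p for _, p, _ in jobs) / len(jobs)
ranked = sorted(jobs, key=lambda j: atc_index(j[0], j[1], j[2], t, k, p_bar),
                reverse=True)
print(ranked)
```
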
Date Created
2012

Matching supply and demand using dynamic quotation strategies

Description

Today's competitive markets force companies to constantly engage in the complex task of managing their demand. In make-to-order manufacturing or service systems, the demand for a product is shaped by price and lead times: high price and lead time quotes ensure profitability for the supplier but discourage customers from placing orders, while low prices and lead times generally result in high demand but do not necessarily ensure profitability. The price and lead time quotation problem considers the trade-off between offering high and low prices and lead times. Recent practices in make-to-order manufacturing companies reveal the importance of dynamic quotation strategies, under which the price and lead time quotes flexibly change depending on the status of the system. In this dissertation, the objective is to model a make-to-order manufacturing system and explore various aspects of dynamic quotation strategies, such as the behavior of optimal price and lead time decisions, the impact of customer preferences on optimal decisions, the benefits of employing dynamic quotation in comparison to simpler quotation strategies, and the benefits of coordinating price and lead time decisions. I first consider a manufacturer that receives demand from spot purchasers (who are quoted dynamic prices and lead times) as well as from contract customers who have agreements with the manufacturer with fixed price and lead time terms. I analyze how customer preferences affect the optimal price and lead time decisions, the benefits of dynamic quotation, and the optimal mix of spot purchasers and contract customers. These analyses necessitate the computation of the expected tardiness of customer orders at the moment the customer enters the system. Hence, in the second part of the dissertation, I develop methodologies to compute the expected tardiness in multi-class priority queues. For the single-class case, a closed-form expression is obtained. For the more complex multi-class case, numerical inverse Laplace transformation algorithms are developed. In the last part of the dissertation, I model a decentralized system with two components: a marketing department that determines the price quotes with the objective of maximizing revenues, and a manufacturing department that determines the lead time quotes to minimize lateness costs. I discuss the benefits of coordinating price and lead time decisions, and develop an incentivization scheme to reduce the negative impacts of the lack of coordination.
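
As an illustration of the kind of expected-tardiness quantity involved, the sketch below evaluates a closed-form expression for a single-class FCFS M/M/1 queue, where the order's sojourn time is exponential; this is a simplifying assumption chosen for illustration and not necessarily the queueing model analyzed in the dissertation.

```python
import math

def expected_tardiness_mm1(lam, mu, lead_time):
    """Expected tardiness E[(W - L)+] of a new order in an FCFS M/M/1 queue.

    The sojourn time W is exponential with rate (mu - lam), so the expected
    amount by which it exceeds a quoted lead time L has a closed form.
    Requires lam < mu for stability.
    """
    rate = mu - lam
    return math.exp(-rate * lead_time) / rate

# Hypothetical arrival/service rates and candidate lead-time quotes.
for L in (0.5, 1.0, 2.0):
    print(L, round(expected_tardiness_mm1(lam=4.0, mu=5.0, lead_time=L), 4))
```
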
Date Created
2012

Competitive positioning of ports based on total landed costs of supply chains

Description

Ports play a critical role in the supply chains of contemporary companies and in global commerce. Since ports' operational effectiveness is critical to the development of competitive supply chains, their contribution to regional economies is essential. With the globalization of markets, the traffic of containers flowing through different ports has increased significantly in recent decades. In order to attract additional container traffic and improve their comparative advantages over the competition, ports serving the same hinterlands explore ways to improve their operations and become more attractive to shippers. This research explores the hypothesis that, by lowering the variability of the service time observed in the handling of containers, a port reduces its customers' total logistics costs and increases both its own competitiveness and that of its customers. This thesis proposes a methodology that allows the quantification of the variability present in a port's services, which arises from factors such as inefficient internal operations, vessel congestion, or external disruption scenarios. It focuses on assessing the impact of this variability on the users' logistics costs. The methodology also allows a port to define competitive strategies that take into account its own variability and that of competing ports. These competitive strategies are also translated into specific parameters that can be used to design and adjust internal operations. The methodology includes (1) the definition of a suitable economic model to measure the logistic impact of a port's variability, (2) a network analysis approach to the defined problem, and (3) a systematic procedure to determine competitive service time parameters for a port. After the methodology is developed, a case study is presented in which it is applied to the Port of Guaymas, finding service time parameters for this port that yield lower logistics costs than those observed at competing ports.
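
One common way variability in port service or transit time enters a shipper's total cost is through safety stock. The sketch below uses a textbook inventory model with normally distributed demand and lead time to show that effect; it is an illustrative stand-in, not the economic model developed in this thesis, and all parameter values are hypothetical.

```python
import math
from statistics import NormalDist

def annual_logistics_cost(demand_rate, unit_cost, holding_rate, order_cost,
                          order_qty, lead_time_mean, lead_time_sd,
                          demand_sd, service_level):
    """Illustrative annual cost of cycle stock, safety stock, and ordering.

    Safety stock grows with the standard deviation of the port/transport lead
    time, which is how service-time variability feeds into the shipper's cost.
    Demand figures are per day; lead times are in days; costs are annualized.
    """
    z = NormalDist().inv_cdf(service_level)
    sigma_dlt = math.sqrt(lead_time_mean * demand_sd ** 2 +
                          (demand_rate ** 2) * lead_time_sd ** 2)
    holding = unit_cost * holding_rate
    cycle_stock = holding * order_qty / 2
    safety_stock = holding * z * sigma_dlt
    ordering = order_cost * demand_rate * 365 / order_qty
    return cycle_stock + safety_stock + ordering

# Same shipper, two hypothetical ports differing only in lead-time variability.
for name, sd in [("low-variability port", 0.5), ("high-variability port", 2.0)]:
    cost = annual_logistics_cost(demand_rate=100, unit_cost=50, holding_rate=0.25,
                                 order_cost=400, order_qty=2000,
                                 lead_time_mean=10, lead_time_sd=sd,
                                 demand_sd=20, service_level=0.95)
    print(name, round(cost))
```
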
Date Created
2011

Mixture-process variable design experiments with control and noise variables within a split-plot structure

Description

In mixture-process variable experiments, the number of runs is commonly greater than in mixture-only or process-variable experiments, since these experiments must estimate the parameters for the mixture components, the process variables, and the interactions between the two. In some of these experiments there are variables that are hard to change or cannot be controlled under normal operating conditions. These situations often prohibit complete randomization of the experimental runs due to practical and economic considerations. Furthermore, the process variables can be categorized into two types: variables that are controllable and directly affect the response, and variables that are uncontrollable and primarily affect the variability of the response. The uncontrollable variables are called noise factors and are assumed controllable in a laboratory environment for the purpose of conducting experiments. A model containing both noise variables and control factors can be used to determine settings of the control factors that make the response "robust" to the variability transmitted from the noise factors. These experiments can be analyzed with a model for the mean response and a model for the slope of the response within a split-plot structure. When considering the experimental designs, low prediction variances for the mean and slope models are desirable. Methods for mixture-process variable designs with noise variables under restricted randomization are demonstrated, and mixture-process variable designs that are robust to the coefficients of interaction with noise variables are evaluated using fraction-of-design-space plots with respect to their prediction variance properties. Finally, a G-optimal design that minimizes the maximum prediction variance over the entire design region is created using a genetic algorithm.
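
For context, the prediction-variance quantity behind these comparisons is x'(X'X)^{-1}x, often scaled by the number of runs; G-optimality minimizes its maximum over the design region. The sketch below computes this scaled prediction variance for a small hypothetical factorial design, not for the mixture-process designs studied here.

```python
import numpy as np

def scaled_prediction_variance(X_design, x_points):
    """Scaled prediction variance N * x'(X'X)^{-1} x at candidate points.

    X_design: model matrix of the design (one row per run, expanded to the
    model terms). x_points: model-expanded points where prediction is needed.
    """
    n_runs = X_design.shape[0]
    xtx_inv = np.linalg.inv(X_design.T @ X_design)
    return n_runs * np.einsum("ij,jk,ik->i", x_points, xtx_inv, x_points)

# Hypothetical 2-factor design with a first-order-plus-interaction model.
runs = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0]], dtype=float)
expand = lambda f: np.column_stack([np.ones(len(f)), f[:, 0], f[:, 1],
                                    f[:, 0] * f[:, 1]])
grid = np.array([[a, b] for a in np.linspace(-1, 1, 11)
                 for b in np.linspace(-1, 1, 11)])
spv = scaled_prediction_variance(expand(runs), expand(grid))
print("max SPV over the region:", round(spv.max(), 3))  # what G-optimality minimizes
```
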
Date Created
2010