Factory production is stochastic in nature, with time-varying input and output processes that are non-stationary. Hence, the principal quantities of interest are random variables. Typical modeling of such behavior involves numerical simulation and statistical analysis. A deterministic closure model leading to a second-order model for the product density and product speed has previously been proposed. The resulting partial differential equations (PDE) are compared to discrete event simulations (DES) that simulate factory production as a time-dependent M/M/1 queuing system. Three fundamental scenarios for the time-dependent influx are studied: an instant step up/down of the mean arrival rate; an exponential step up/down of the mean arrival rate; and periodic variation of the mean arrival rate. It is shown that the second-order model, in general, yields significant improvement over current first-order models. Specifically, the agreement between the DES and the PDE for the step up and for periodic forcing that is not too rapid is very good. Adding diffusion to the PDE further improves the agreement. The analysis also points to fundamental open issues regarding the deterministic modeling of low signal-to-noise ratio for some stochastic processes and the possibility of resonance in deterministic models that is not present in the original stochastic process.
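For concreteness, the following is a minimal Python sketch of the kind of DES used as the stochastic benchmark: a single FIFO server with exponential service fed by a nonhomogeneous Poisson arrival stream generated by thinning. The rate function, parameter values, and thinning construction are illustrative assumptions, not the simulations used in the dissertation.

```python
import random

def simulate_mm1(lambda_t, lam_max, mu, t_end, seed=0):
    """Single-run DES of a time-dependent M/M/1 queue.

    lambda_t : callable giving the mean arrival rate at time t
    lam_max  : upper bound on lambda_t over [0, t_end] (needed for thinning)
    mu       : mean service rate of the single server
    Returns a list of (departure_time, time_in_system) samples.
    """
    rng = random.Random(seed)

    # Arrival times of a nonhomogeneous Poisson process via thinning.
    arrivals, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)
        if t > t_end:
            break
        if rng.random() <= lambda_t(t) / lam_max:
            arrivals.append(t)

    # Push arrivals through a single FIFO exponential server.
    samples, server_free = [], 0.0
    for a in arrivals:
        start = max(a, server_free)          # wait if the server is busy
        depart = start + rng.expovariate(mu)
        server_free = depart
        samples.append((depart, depart - a))
    return samples

# Example: instant step up of the mean arrival rate at t = 50 (toy values).
step = lambda t: 0.3 if t < 50 else 0.8
cycle_times = simulate_mm1(step, lam_max=0.8, mu=1.0, t_end=200.0)
```

In practice many independent runs would be averaged before comparing the mean cycle time and throughput against the PDE solution.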
We study the interplay between correlations, dynamics, and networks for repeated attacks on a socio-economic network. As a model system we consider an insurance scheme against disasters that randomly hit nodes, where a node in need receives support from its network neighbors. The model is motivated by the Maasai gift-giving practice called Osotua. Survival of nodes under different disaster scenarios (uncorrelated, spatially, temporally, and spatio-temporally correlated) and for different network architectures is studied with agent-based numerical simulations. We find that the survival rate of a node depends dramatically on the type of correlation of the disasters: spatially and spatio-temporally correlated disasters increase the survival rate; purely temporally correlated disasters decrease it. The type of correlation also leads to strong inequality among the surviving nodes. We introduce the concept of disaster masking to explain some of the results of our simulations. We also analyze the subsets of the networks that were activated to provide support after fifty years of random disasters. These subnetworks show qualitative differences between the disaster scenarios, as measured by path length, degree, clustering coefficient, and number of cycles.
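As a rough illustration of the agent-based setup, here is a minimal Python sketch of the uncorrelated-disaster scenario on a small-world network. The herd sizes, disaster probability, need threshold, and transfer rule are hypothetical placeholders and do not reproduce the paper's exact Osotua rules or its correlated-disaster scenarios.

```python
import random
import networkx as nx

def run_gift_network(G, years=50, start=10.0, need=6.0,
                     p_disaster=0.1, loss=0.5, seed=0):
    """Toy uncorrelated-disaster run of a gift-giving insurance scheme.

    Each year every node may be hit by a disaster that removes a fraction
    `loss` of its holdings; a node below `need` asks neighbors, who give
    only what they hold above `need`. Nodes still below `need` are removed.
    """
    rng = random.Random(seed)
    herd = {v: start for v in G}
    for _ in range(years):
        for v in list(herd):                        # uncorrelated shocks
            if rng.random() < p_disaster:
                herd[v] *= (1.0 - loss)
        for v in list(herd):                        # ask neighbors for help
            for u in G.neighbors(v):
                if herd[v] >= need:
                    break
                if u in herd and herd[u] > need:
                    gift = min(need - herd[v], herd[u] - need)
                    herd[u] -= gift
                    herd[v] += gift
        herd = {v: h for v, h in herd.items() if h >= need}
    return herd                                     # surviving nodes

survivors = run_gift_network(nx.watts_strogatz_graph(100, 4, 0.1))
print(len(survivors), "nodes survive after 50 years")
```

The edges actually used for transfers in such a run form the activated subnetwork whose path length, degree, clustering, and cycle structure the paper analyzes.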
Modern measurement schemes for linear dynamical systems are typically designed so that different sensors can be scheduled to be used at each time step. To determine which sensors to use, various metrics have been suggested. One possible such metric is the observability of the system. Observability is a binary condition determining whether a finite number of measurements suffice to recover the initial state. However, to employ observability for sensor scheduling, the binary definition needs to be expanded so that one can measure how observable a system is with a particular measurement scheme, i.e., one needs a metric of observability. Most methods that utilize an observability metric address sensor selection rather than sensor scheduling. In this dissertation we present a new approach that utilizes observability for sensor scheduling by employing the condition number of the observability matrix as the metric and using column subset selection to create an algorithm that chooses which sensors to use at each time step. To this end, we use a rank-revealing QR factorization algorithm to select sensors. Several numerical experiments are used to demonstrate the performance of the proposed scheme.
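To make the ingredients concrete, the sketch below builds an observability matrix for a small LTI pair (A, C) and uses SciPy's QR with column pivoting (a rank-revealing QR) to rank candidate sensors. This is an illustrative column-subset-selection baseline, not the dissertation's scheduling algorithm; the mapping from pivoted rows back to sensors and the greedy deduplication are assumptions made here for brevity.

```python
import numpy as np
from scipy.linalg import qr

def select_sensors(A, C, k, horizon=None):
    """Rank candidate sensors (rows of C) via pivoted QR on the
    observability matrix O = [C; CA; ...; CA^{h-1}] and keep the top k."""
    n = A.shape[0]
    horizon = n if horizon is None else horizon
    blocks, M = [], C.copy()
    for _ in range(horizon):
        blocks.append(M)
        M = M @ A
    O = np.vstack(blocks)                  # (horizon*p) x n

    # Column pivoting on O^T orders the rows of O from most to least
    # linearly independent; each row of O corresponds to one sensor.
    _, _, piv = qr(O.T, pivoting=True, mode='economic')
    p = C.shape[0]
    chosen, seen = [], set()
    for idx in piv:
        s = idx % p                        # map O-row back to its sensor
        if s not in seen:
            seen.add(s)
            chosen.append(s)
        if len(chosen) == k:
            break
    return chosen

A = np.array([[0.9, 0.2], [0.0, 0.8]])
C = np.eye(2)                              # two candidate sensors
print(select_sensors(A, C, k=1))
```

In a scheduling setting this ranking would be recomputed at each time step as the candidate sensor set and system state evolve, with the condition number of the resulting observability matrix tracking how well-posed the state reconstruction remains.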
Signaling cascades transduce signals received on the cell membrane to the nucleus. While noise filtering, ultra-sensitive switches, and signal amplification have all been shown to be features of such signaling cascades, it is not understood why cascades typically show three or four layers. Using singular perturbation theory, Michaelis-Menten type equations are derived for open enzymatic systems. Cascading these equations, we demonstrate that the output signal as a function of time becomes sigmoidal with the addition of more layers. Furthermore, it is shown that the activation time will speed up to a point, after which more layers become superfluous. It is shown that three layers create a reliable sigmoidal response progress curve from a wide variety of time-dependent signaling inputs arriving at the cell membrane, suggesting the evolutionary benefit of the observed cascades.
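For orientation, the classical closed-system quasi-steady-state reduction that Michaelis-Menten type equations generalize is sketched below; the open-system derivation referred to in the abstract differs in detail, so this is only the textbook limiting case.

```latex
% Quasi-steady-state (singular perturbation) limit for a single enzymatic
% step E + S <-> ES -> E + P; the open-system cascade equations above
% generalize this closed-system form.
\[
  E + S \underset{k_{-1}}{\overset{k_{1}}{\rightleftharpoons}} ES
        \overset{k_{2}}{\longrightarrow} E + P,
  \qquad
  \frac{dP}{dt} \approx \frac{V_{\max}\, S}{K_M + S},
  \quad
  V_{\max} = k_{2} E_{\mathrm{tot}},
  \quad
  K_M = \frac{k_{-1} + k_{2}}{k_{1}} .
\]
```

The reduction is valid when total enzyme is scarce, E_tot << S(0) + K_M, which provides the small parameter exploited by singular perturbation theory.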
Pre-Exposure Prophylaxis (PrEP) is any medical or public health procedure used before exposure to the disease-causing agent; its purpose is to prevent, rather than treat or cure, a disease. Most commonly, PrEP refers to an experimental HIV-prevention strategy that would use antiretrovirals to protect HIV-negative people from HIV infection. A deterministic mathematical model of HIV transmission is developed to evaluate the public-health impact of oral PrEP interventions and to compare PrEP effectiveness with respect to different evaluation methods. The effects of demographic, behavioral, and epidemic parameters on the PrEP impact are studied in a multivariate sensitivity analysis. Most published models of HIV intervention impact assume that the number of individuals joining the sexually active population per year is constant or proportional to the total population. In the second part of this study, three models with constant, linear, and logistic recruitment rates are presented and analyzed to study the PrEP intervention, and how different demographic assumptions affect the evaluation of PrEP is examined. When data are available, least-squares fitting or similar approaches can often be used to determine a single set of approximate parameter values that make the model fit the data best. However, least-squares fitting only provides point estimates and does not indicate how strongly the data support these particular estimates. Therefore, in the third part of this study, Bayesian parameter estimation is applied to fit the ODE model to the related HIV data. Starting with a set of prior distributions for the parameters as an initial guess, Bayes' formula is applied to obtain posterior distributions for the parameters under which the model best fits the observed data. Evaluating the posterior distribution often requires the integration of high-dimensional functions, which is usually difficult to compute numerically. Therefore, the Markov chain Monte Carlo (MCMC) method is used to approximate the posterior distribution.
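The sketch below illustrates the Bayesian fitting step with a random-walk Metropolis sampler applied to a toy logistic-growth ODE; the model, priors, noise model, and synthetic data are placeholders, not the dissertation's HIV transmission model or its data.

```python
import numpy as np
from scipy.integrate import solve_ivp

def metropolis_ode_fit(t_obs, y_obs, log_prior, n_steps=5000,
                       theta0=(0.5, 50.0), step=(0.05, 5.0), sigma=1.0, seed=0):
    """Random-walk Metropolis sketch for fitting dI/dt = beta*I*(1 - I/K)
    to noisy observations, with theta = (beta, K) and Gaussian noise sigma."""
    rng = np.random.default_rng(seed)

    def solve_model(theta):
        beta, K = theta
        y0 = [max(float(y_obs[0]), 1e-3)]
        sol = solve_ivp(lambda t, I: beta * I * (1 - I / K),
                        (t_obs[0], t_obs[-1]), y0, t_eval=t_obs)
        return sol.y[0]

    def log_post(theta):
        if np.any(np.asarray(theta) <= 0):
            return -np.inf                       # reject non-positive rates
        resid = y_obs - solve_model(theta)
        return log_prior(theta) - 0.5 * np.sum(resid**2) / sigma**2

    theta, lp = np.asarray(theta0, float), None
    lp = log_post(theta)
    chain = []
    for _ in range(n_steps):
        prop = theta + np.asarray(step) * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)                       # approximate posterior draws

# Synthetic data from a logistic curve plus noise, with flat (improper) priors.
t = np.linspace(0, 10, 20)
y = 100 / (1 + 99 * np.exp(-0.8 * t)) + np.random.default_rng(1).normal(0, 2, t.size)
samples = metropolis_ode_fit(t, y, log_prior=lambda th: 0.0, sigma=2.0)
```

The chain of samples, after discarding a burn-in portion, approximates the joint posterior of the parameters rather than a single best-fit point.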
This thesis presents a model for the buying behavior of consumers in a technology market. In this model, a potential consumer is not perfectly rational, but exhibits bounded rationality following the axioms of prospect theory: reference dependence, diminishing returns, and loss sensitivity. To evaluate the products on different criteria, the analytic hierarchy process is used, which allows for relative comparisons. The analytic hierarchy process proposes that when choosing among several alternatives, one should measure the products by comparing them relative to each other, which allows the user to put numbers to subjective criteria. Additionally, evidence suggests that a consumer will often consider not only their own evaluation of a product, but also the choices of other consumers. Thus, the model in this paper applies prospect theory to products with multiple attributes, using word of mouth as a criterion in the evaluation.
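For reference, the canonical Kahneman-Tversky value function captures the three axioms listed above; the thesis's exact parameterization may differ, so the form below is only the standard textbook version.

```latex
% Prospect-theory value function relative to a reference point r:
% exponents alpha, beta < 1 give diminishing returns in gains and losses,
% and lambda > 1 makes losses weigh more heavily than equal-sized gains.
\[
  v(x) =
  \begin{cases}
    (x - r)^{\alpha},          & x \ge r,\\[2pt]
    -\lambda\,(r - x)^{\beta}, & x < r,
  \end{cases}
  \qquad 0 < \alpha, \beta \le 1,\quad \lambda > 1 .
\]
```

In the buying model, the attribute scores entering such a value function come from the pairwise comparisons of the analytic hierarchy process, with word of mouth treated as one additional criterion.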
Signaling cascades transduce signals received on the cell membrane to the nucleus. While noise filtering, ultra-sensitive switches, and signal amplification have all been shown to be features of such signaling cascades, it is not understood why cascades typically show three or four layers. Using singular perturbation theory, Michaelis-Menten type equations are derived for open enzymatic systems. When these equations are organized into a cascade, it is demonstrated that the output signal as a function of time becomes sigmoidal with the addition of more layers. Furthermore, it is shown that the activation time will speed up to a point, after which more layers become superfluous. It is shown that three layers create a reliable sigmoidal response progress curve from a wide variety of time-dependent signaling inputs arriving at the cell membrane, suggesting that natural selection may have favored signaling cascades as a parsimonious solution to the problem of generating switch-like behavior in a noisy environment.
The main objective of this research is to develop an integrated method to study emergent behavior and the consequences of evolution and adaptation in engineered complex adaptive systems (ECASs). A multi-layer conceptual framework and modeling approach, including behavioral and structural aspects, is provided to describe the structure of a class of engineered complex systems and predict their future adaptive patterns. The approach allows the examination of complexity in the structure and the behavior of components as a result of their connections and in relation to their environment. This research describes and uses the major differences between natural complex adaptive systems (CASs) and artificial/engineered CASs to build a framework and platform for ECASs. While this framework focuses on the critical factors of an engineered system, it also enables one to synthetically employ engineering and mathematical models to analyze and measure complexity in such systems. In this way, concepts from complex systems science are adapted to management science and system-of-systems engineering. In particular, an integrated consumer-based optimization and agent-based modeling (ABM) platform is presented that enables managers to predict and partially control patterns of behavior in ECASs. Demonstrated on the U.S. electricity markets, the ABM is integrated with normative and subjective decision behavior recommended by the U.S. Department of Energy (DOE) and the Federal Energy Regulatory Commission (FERC). The approach integrates social networks, social science, complexity theory, and diffusion theory. Furthermore, it makes a unique and significant contribution by exploring and representing concrete managerial insights for ECASs and by offering new optimized actions and modeling paradigms in agent-based simulation.
Bacteriophage (phage) are viruses that infect bacteria. Typical laboratory experiments show that in a chemostat containing phage and a susceptible bacterial species, a mutant bacterial species will evolve. This mutant species is usually resistant to the phage infection and less competitive than the susceptible species. In some experiments, both the susceptible and resistant bacterial species, as well as the phage, can coexist at an equilibrium for hundreds of hours. The current research is inspired by these observations, and the goal is to establish a mathematical model and explore sufficient and necessary conditions for the coexistence. In this dissertation, a model with infinite distributed delay terms, based on existing work, is established. A rigorous analysis of the well-posedness of this model is provided, and it is proved that the susceptible bacteria persist. To study the persistence of the phage species, a "Phage Reproduction Number" (PRN) is defined. The mathematical analysis shows that the phage persist if PRN > 1 and vanish if PRN < 1. A sufficient condition and a necessary condition for the persistence of the resistant bacteria are given. The persistence of the phage is essential for the persistence of the resistant bacteria. Also, the resistant bacteria persist if their fitness is the same as that of the susceptible bacteria and if PRN > 1. A special case of the general model leads to a system of ordinary differential equations, for which numerical simulation results are presented.
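A minimal sketch of that ODE special case (no latent-period delay) might look like the following; the Monod uptake functions and all parameter values are illustrative assumptions, not the dissertation's calibrated model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def chemostat_rhs(t, y, D=0.2, R0=1.0, k=0.05, burst=50.0,
                  vS=1.4, vM=1.0, K=0.5):
    """Chemostat with nutrient R, susceptible bacteria S, phage-resistant
    mutants M, and phage P; all parameter values are illustrative."""
    R, S, M, P = y
    fS = vS * R / (K + R)                  # Monod uptake, susceptible strain
    fM = vM * R / (K + R)                  # resistant strain, less competitive (vM < vS)
    dR = D * (R0 - R) - fS * S - fM * M    # inflow/washout and consumption
    dS = (fS - D) * S - k * S * P          # growth, washout, phage infection
    dM = (fM - D) * M                      # resistant strain is not infected
    dP = (burst - 1.0) * k * S * P - D * P # burst minus adsorbed phage, washout
    return [dR, dS, dM, dP]

sol = solve_ivp(chemostat_rhs, (0.0, 500.0), [1.0, 0.1, 0.01, 0.1])
R, S, M, P = sol.y[:, -1]
print(f"long-run state: R={R:.3f}, S={S:.3f}, M={M:.3f}, P={P:.3f}")
```

Varying the uptake rates and burst size in such a sketch is one way to probe numerically when all three populations coexist, mirroring the PRN-based conditions derived analytically.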