Nations censor specific information in accordance with their political, legal, and cultural standards. Each country adopts unique approaches and regulations for censorship, whether it involves moderating online content or prohibiting protests. This paper studies the underlying motivations for the disparate behaviors exhibited by authorities and individuals. To achieve this, we develop a mathematical model of the dynamics between an authority and individuals, analyzing their behaviors under various conditions. We argue that individuals act in essentially three modes, compliance, self-censorship, and defiance, depending on their own desires and the authority's parameters. We substantiate our findings by running simulations on the model and visualizing the outcomes. These simulations show why individuals fall into one of the three categories, influenced by factors such as the level of surveillance imposed by the authority, the severity of punishments, the tolerance for dissent, and the individuals' boldness. They also explain why certain populations in a country exhibit defiance, self-censorship, or compliance as individuals interact with one another under specific conditions in a small-world network.
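To make the decision structure concrete, the following is a minimal sketch of the kind of rule such a model can encode. The parameter names mirror the factors listed above (surveillance, punishment severity, tolerance, boldness), but the specific decision rule and thresholds are illustrative assumptions, not the paper's actual equations.

    # Toy decision rule, not the paper's model: one individual weighs the
    # expected cost of punishment against their willingness to dissent.
    def act(desire, boldness, surveillance, punishment, tolerance):
        """Return 'compliance', 'self-censorship', or 'defiance'."""
        if desire <= tolerance:
            return "compliance"        # the desired expression is permitted anyway
        expected_cost = surveillance * punishment   # detection chance times severity
        if boldness * desire > expected_cost:
            return "defiance"          # willingness to dissent outweighs the risk
        return "self-censorship"       # suppress expression down to the tolerance

    print(act(desire=0.9, boldness=0.8, surveillance=0.3, punishment=0.5, tolerance=0.2))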
Graphics Processing Units (GPUs) have become a key enabler of the big-data revolution, functioning as de facto co-processors to accelerate large-scale computation. As the GPU programming stack and tool support have matured, the technology has also become accessible to programmers. However, optimizing programs to run efficiently on GPUs requires developers to have both a detailed understanding of the application logic and significant knowledge of parallel programming and GPU architectures.
This dissertation proposes GEVO, a tool for automatically tuning the performance of GPU kernels in the LLVM representation to meet desired criteria. GEVO uses population-based search to find edits to programs compiled to LLVM-IR that improve performance on the desired criteria while retaining required functionality. Evaluating GEVO on the Rodinia benchmark suite reveals many runtime optimization techniques; a key insight is that semantic relaxation enables GEVO to discover optimizations that are usually prohibited by the compiler. GEVO also finds many other optimizations, including architecture- and application-specific ones. A follow-up evaluation of three bioinformatics applications at different stages of development suggests that GEVO can optimize programs as well as human experts can, sometimes producing code that is beyond a programmer's reach. Furthermore, to free GEVO from its constraints when optimizing neural network (NN) models, GEVO-ML is proposed, extending GEVO's representation so that NN models and the training/prediction process are uniformly expressed in a single intermediate language. An evaluation of GEVO-ML shows that it can optimize network models in ways similar to how human developers improve model designs, for example by changing learning rates or pruning non-essential parameters. These results showcase the potential of automated program optimization tools both to reduce the optimization burden for researchers and to provide new insights for GPU experts.
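As a rough illustration of the population-based search described above, the sketch below evolves program variants under a test-passing constraint. The mutate, run_tests, and measure_runtime callbacks are hypothetical stand-ins for GEVO's LLVM-IR edit operators and fitness evaluation, not its actual interfaces.

    import random

    def evolve(seed_program, mutate, run_tests, measure_runtime,
               pop_size=64, generations=100):
        population = [seed_program]
        for _ in range(generations):
            # Apply random edits (instruction copy/delete/swap on LLVM-IR in GEVO's case).
            children = [mutate(random.choice(population)) for _ in range(pop_size)]
            # Discard variants that break required functionality, then keep the fastest.
            survivors = [c for c in children if run_tests(c)] + population
            survivors.sort(key=measure_runtime)
            population = survivors[:pop_size]
        return population[0]

Semantic relaxation enters through run_tests: a variant need only preserve the behavior the tests require, so edits a conservative compiler must reject can survive selection.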
In contrast to traditional chemotherapy for cancer, which fails to address tumor heterogeneity, raises patients' levels of toxicity, and selects for drug-resistant cells, adaptive therapy applies ideas from cancer ecology, employing low-dose drugs to encourage competition between cancerous cells, reducing toxicity and potentially delaying disease progression. Despite promising results in some clinical trials, optimizing adaptive therapy routines involves navigating a vast space of combinatorial possibilities, including the number of drugs, drug holiday duration, and drug dosages. Computational models can serve as precursors to efficiently explore this space, narrowing the scope of possibilities for in-vivo and in-vitro experiments, which are time-consuming, expensive, and specific to tumor types. Among existing modeling techniques, agent-based models are particularly suited for studying the spatial interactions critical to successful adaptive therapy. In this thesis, I introduce CancerSim, a three-dimensional agent-based model fully implemented in C++ that is designed to simulate tumorigenesis, angiogenesis, drug resistance, and resource competition within a tissue. Additionally, the model is equipped to assess the effectiveness of various adaptive therapy regimens. The thesis provides detailed insights into the biological motivation and calibration of the different model parameters. Lastly, I propose a series of research questions and experiments for adaptive therapy that CancerSim can address in the pursuit of advancing cancer treatment strategies.
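As one example of a regimen such a model can evaluate, the sketch below implements a threshold-based treatment-vacation rule: treat until the tumor falls to half its initial burden, then pause until it regrows to baseline. The thresholds are illustrative assumptions, not CancerSim's defaults.

    def adaptive_dose(burden, baseline, currently_dosing, lower=0.5, upper=1.0):
        """One step of a treat/vacation rule; returns whether to dose this step."""
        if currently_dosing and burden <= lower * baseline:
            return False   # drug holiday: sensitive cells regrow, suppressing resistant ones
        if not currently_dosing and burden >= upper * baseline:
            return True    # tumor back at baseline: resume treatment
        return currently_dosing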
Coffee leaf rust (CLR) is an aggressive disease that has caused devastating production losses around the world. While coffee productivity under shade conditions has been simulated in sufficient detail, explicit modeling of the interactions between shading levels, microclimate, coffee plant physiology, and CLR progression remains largely unexplored, despite the recognized influence of shade on CLR epidemics. This dissertation introduces a new model, SpatialRust, as an initial approximation to an integrative simulation framework where farm design and management strategies can be linked to coffee production and CLR epidemic outcomes. SpatialRust considers stylized processes describing the dynamics of shade trees, coffee plants, and CLR and their interactions within a spatially explicit context. The dissertation then presents three experiments conducted using SpatialRust simulations. The first experiment investigates the role of shading as a mitigation tool for CLR outbreaks. It demonstrates that shade can effectively mitigate CLR epidemics when the conditions are otherwise conducive to a major CLR epidemic. Additionally, the experiment reveals complex effects of different shade management approaches, underscoring the potential value of future empirical studies that consider temporal and spatial shade dynamics in relation to CLR outcomes. The second experiment focuses on the financial balance of farms and examines how farmer preferences and needs influence farm management strategies. The findings indicate that incorporating CLR mitigation as part of the strategy's goals leads to positive long-term farm performance, even when planning for the short term. In the third experiment, the scope of the simulations is expanded to include neighboring coffee farms. The results demonstrate that the strategies adopted by immediate neighbors can affect the performance of individual farms, emphasizing the importance of considering the broader coffee-growing landscape context. This work shows that the integration of farm management practices and the resulting shading effects into a spatially explicit framework can provide valuable insights into the dynamics of CLR epidemics and how they respond to farmers' decisions.
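The shade-CLR coupling can be caricatured as a disease growth rate modulated by shading, as in the toy step below. The functional form and coefficients are assumptions for illustration only, not SpatialRust's stylized processes, which also account for microclimate and spatial spread.

    def rust_step(severity, shade, r_max=0.2, dt=1.0):
        """Advance rust severity (0..1) one time step under a shade level (0..1)."""
        # In this toy form, shading lowers the effective growth rate, standing in
        # for its combined microclimate and spore-dispersal effects.
        r = r_max * (1.0 - 0.7 * shade)
        return severity + dt * r * severity * (1.0 - severity)

    s = 0.01
    for day in range(120):
        s = rust_step(s, shade=0.5)
    print(f"severity after 120 days: {s:.3f}")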
Resistance to existing anti-cancer drugs poses a key challenge in medical oncology: the tumor stops responding to treatment with the same medications to which it responded previously, leading to treatment failure. Adaptive therapy utilizes the evolutionary principle of competitive suppression, leveraging competition between drug-resistant and drug-sensitive cells to keep the population of drug-resistant cells under control, thereby extending time to progression (TTP) relative to standard treatment at the maximum tolerated dose (MTD). Developing adaptive therapy protocols is challenging, as many parameters are involved and their number grows exponentially with each additional drug. Furthermore, the drugs can have a cytotoxic (directly killing cells) or a cytostatic (inhibiting cell division) mechanism of action, which can affect treatment outcomes in important ways. I have implemented hybrid agent-based computational models to investigate adaptive therapy using either a single drug (cytotoxic or cytostatic) or two drugs (cytotoxic or cytostatic), simulating three adaptive therapy protocols for treatment with a single drug (dose modulation, intermittent, dose-skipping) and seven protocols for treatment with two drugs: three dose modulation (DM) protocols (DM Cocktail Tandem, DM Ping-Pong Alternate Every Cycle, DM Ping-Pong on Progression) and four fixed-dose (FD) protocols (FD Cocktail Intermittent, FD Ping-Pong Intermittent, FD Cocktail Dose-Skipping, FD Ping-Pong Dose-Skipping). The results indicate that a Goldilocks level of drug exposure is optimal, with both too little and too much drug having adverse effects. Adaptive therapy works best under conditions of strong cellular competition, such as high fitness costs, high replacement rates, or high turnover. Clonal competition is an important determinant of treatment outcome, and as such, treatment with two drugs leads to more favorable outcomes than treatment with a single drug. Protocols that switch drugs every treatment cycle (ping-pong) work particularly well, as does cocktail dose modulation, particularly when a highly sensitive measurement of tumor burden is feasible. In general, overtreatment has adverse survival outcomes, and triggering a treatment vacation or stopping treatment sooner while the tumor is shrinking works well.
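The dose-modulation protocols named above share a simple feedback core: compare the tumor burden with the previous measurement and step the dose accordingly. The sketch below shows that core for a single drug; the step size and thresholds are illustrative assumptions, not the values used in these models.

    def modulate_dose(dose, burden, prev_burden, delta=0.25,
                      shrink=0.9, grow=1.1, d_min=0.0, d_max=1.0):
        """One cycle of a single-drug dose-modulation rule."""
        if burden < shrink * prev_burden:
            dose *= 1.0 - delta    # tumor shrinking: back off, sparing sensitive cells
        elif burden > grow * prev_burden:
            dose *= 1.0 + delta    # tumor growing: escalate
        return min(max(dose, d_min), d_max)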
In the age of growing technology, Computer Science (CS) professionals have come into high demand. However, despite this demand, there are not enough computer scientists to fill these roles. The current demographic of computer scientists consists mainly of white men. This gender gap must be addressed to promote diversity and inclusivity in a career that requires high creativity and innovation. To understand what reinforces gender stereotypes and the gender gap within CS, survey and interview data were collected from male and female senior students studying CS and from students who had left the CS program at Arizona State University. Students were asked what experiences diminished or reinforced their sense of belonging in the field, along with other questions about their involvement in CS. The interview and survey data reveal that a lack of representation within courses and a lack of peer support are key factors influencing the involvement and retention of students in CS, especially women. These data were used to identify key factors that influence retention and what can be done to remedy the growing deficit of professionals in this field.
Nucleic acid nanotechnology is a field of nanoscale engineering where the sequences of deoxyribonucleic acid (DNA) and ribonucleic acid (RNA) molecules are carefully designed to create self-assembled nanostructures with higher spatial resolution than is available to top-down fabrication methods. In the 40-year history of the field, the structures created have scaled from small tile-like structures constructed from a few hundred individual nucleotides to micron-scale structures assembled from millions of nucleotides using the technique of "DNA origami". One of the key drivers of advancement in any modern engineering field is the parallel development of software that facilitates the design of components and performs in silico simulation of the target structure to determine its structural properties and dynamic behavior and to identify defects. For nucleic acid nanotechnology, the design software CaDNAno and the simulation software oxDNA are the most popular choices for design and simulation, respectively. In this dissertation I will present my work on the oxDNA software ecosystem, including an analysis toolkit, a web-based graphical interface, and a new molecular visualization tool that doubles as a free-form design editor covering some of the weaknesses of CaDNAno's lattice-based design paradigm. Finally, as a demonstration of the utility of these new tools, I show oxDNA simulation and subsequent analysis of a nanoscale leaf-spring engine capable of converting chemical energy into dynamic motion. OxDNA simulations were used to investigate the effects of design choices on the behavior of the system and to rationalize experimental results.
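As a small example of the kind of post-processing an analysis toolkit automates, the sketch below reads nucleotide centre-of-mass positions from an oxDNA configuration file and computes a radius of gyration. The parsing assumes the plain-text layout as I understand it (three header lines, then one whitespace-separated line per nucleotide whose first three fields are the position); treat it as a sketch to be checked against the oxDNA documentation, not a replacement for the toolkit's readers.

    import numpy as np

    def radius_of_gyration(conf_path):
        positions = []
        with open(conf_path) as f:
            for line in f:
                fields = line.split()
                if line.startswith(("t =", "b =", "E =")) or len(fields) < 3:
                    continue                      # skip time/box/energy headers
                positions.append([float(x) for x in fields[:3]])
        pos = np.array(positions)
        com = pos.mean(axis=0)                    # centre of mass, equal masses assumed
        return float(np.sqrt(((pos - com) ** 2).sum(axis=1).mean()))

    # print(radius_of_gyration("last_conf.dat"))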
Arc Routing Problems (ARPs) are routing problems that seek routes of minimum total cost covering the edges or arcs of a graph representing a street or road network. They find application in many essential services, such as residential waste collection and winter gritting. Being NP-hard, they are usually solved using heuristic methods. This dissertation contributes heuristics for ARPs, with a focus on the Capacitated Arc Routing Problem (CARP) with additional constraints. In operations such as residential waste collection, vehicle-breakdown disruptions occur frequently. A new variant, the Capacitated Arc Re-routing Problem for Vehicle Breakdown (CARP-VB), is introduced to address the need to re-route using only the remaining vehicles so as to avoid missing services. A new heuristic, Probe, is developed to solve CARP-VB. Experiments on benchmark instances show that Probe is better at reducing the makespan, and hence effective in reducing delays and avoiding missed services.
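To make the re-routing setting concrete, the sketch below greedily hands each task left unserved by a broken-down vehicle to whichever remaining vehicle currently finishes earliest and still has capacity. It is a baseline illustration with deadheading costs ignored, not the Probe heuristic itself.

    def reassign(unserved, vehicles):
        """unserved: list of (task, demand, service_time);
        vehicles: dict name -> {'load', 'cap', 'time', 'tasks'}."""
        for task, demand, service_time in sorted(unserved, key=lambda u: -u[1]):
            feasible = [v for v in vehicles.values() if v['load'] + demand <= v['cap']]
            if not feasible:
                raise ValueError(f"no remaining vehicle can serve {task}")
            v = min(feasible, key=lambda v: v['time'])   # keeps the makespan low
            v['load'] += demand
            v['time'] += service_time                    # deadheading ignored in this sketch
            v['tasks'].append(task)
        return vehicles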
In addition to total cost, operators are interested in solutions that are attractive, that is, routes that are contiguous, compact, and non-overlapping, to ease managing the work. Operators may not adopt a solution that is unattractive even if it is optimal. They are also interested in solutions with balanced workloads to meet equity requirements. A new multi-objective memetic algorithm, MA-ABC, is developed that optimizes three objectives: attractiveness, makespan, and total cost. In tests on benchmark instances, MA-ABC was found to be effective in providing attractive and balanced route solutions without sacrificing total cost.
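A multi-objective search of this kind rests on a Pareto-dominance test over the three objectives. The sketch below negates attractiveness so that all three objectives are minimized; the tuple layout is an assumption for illustration, not MA-ABC's internal representation.

    def dominates(a, b):
        """a, b: (neg_attractiveness, makespan, total_cost); True if a dominates b."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(solutions):
        """Keep only the solutions no other solution Pareto-dominates."""
        return [s for s in solutions
                if not any(dominates(t, s) for t in solutions if t is not s)]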
Changes in the problem specification, such as demand and topology, occur frequently in business operations. Machine learning can be applied to learn the distribution behind these changes and to generate solutions quickly at inference time. Splice is a machine learning framework for CARP that quickly generates near-optimal solutions using a graph neural network and deep Q-learning. Splice can solve several variants of node and arc routing problems with the same architecture, without any modification. Splice was trained and tested on randomly generated instances, where it generated solutions faster than popular metaheuristics, and the solutions were also of higher quality.
The construction of many families of combinatorial objects remains a challenging problem. A t-restriction is an array in which a predicate is satisfied by every set of t columns; an example is a perfect hash family (PHF). The composition of a PHF and any t-restriction satisfying a predicate P yields another t-restriction satisfying P with more columns than the original t-restriction had. This thesis concerns three approaches to determining the smallest sizes of PHFs.
Firstly, hash families in which the associated property is satisfied at least some number lambda of times, called higher-index hash families, are considered; the higher index guarantees redundancy when constructing t-restrictions. Some direct and optimal constructions of hash families of higher index are given. A new recursive construction is established that generalizes previous results and generates higher-index PHFs with more columns. Probabilistic methods are employed to obtain an upper bound on the optimal size of higher-index PHFs when the number of columns is large. A new deterministic algorithm is developed that generates such PHFs meeting this bound, and computational results are reported.
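For reference, the defining property is easy to check directly: a perfect hash family of index lambda requires every set of t columns to be separated (all t symbols pairwise distinct) by at least lambda rows. The brute-force sketch below verifies this for small arrays; it illustrates the definition rather than any of the constructions above.

    from itertools import combinations

    def is_phf(array, t, lam=1):
        """array: list of rows over the same columns; True if the index is >= lam."""
        n_cols = len(array[0])
        for cols in combinations(range(n_cols), t):
            separating = sum(1 for row in array
                             if len({row[c] for c in cols}) == t)  # t distinct symbols
            if separating < lam:
                return False
        return True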
Secondly, a structural restriction on PHFs, called fractal, is introduced, following a method of Blackburn. His method is extended in several ways: from homogeneous hash families (every row has the same number of symbols) to heterogeneous ones, and to distributing hash families, a relaxation of the predicate for PHFs. Recursive constructions with fractal hash families as ingredients are given, improving upon the best-known sizes of many PHFs.
Thirdly, a method of Colbourn and Lanus, in which a given hash family is copied horizontally and transformations are greedily applied to each copy, is extended. Transformations of existential t-restrictions are introduced, which make the method applicable to any t-restriction with structure like that of hash families. A genetic algorithm (GA) is employed to find the "best" such transformations. Computational results of the GA on PHFs are reported, as the number of permitted transformations is large compared to the number of symbols. Finally, an analysis is given of the trade-offs between computation time and the number of t-sets left not satisfying the predicate.