This work addresses the design optimization of bio-inspired locomotive devices in collective swimming by developing a computational methodology that combines surrogate-based optimization with high-fidelity fluid-structure interaction (FSI) simulations of thunniform swimmers. Three main phases highlight the contribution and novelty of the current work. The first phase includes the development and benchmarking of a constrained surrogate-based optimization algorithm appropriate to the current design problem. Additionally, new FSI techniques, such as a volume-conservation scheme, have been developed to enhance the accuracy and speed of the simulations. The second phase involves an investigation of the optimized hydrodynamics of a solitary accelerating self-propelled thunniform swimmer during start-up. The third phase extends the analysis to the optimized hydrodynamics of accelerating swimmers in phalanx schools. Future work includes extending the analysis to the optimized hydrodynamics of steady-state and accelerating swimmers in a diamond-shaped school. The results of the first phase indicate that the proposed optimization algorithm maintains competitive performance, when compared to other gradient-based and gradient-free methods, in dealing with expensive simulation-based black-box optimization problems with constraints. In addition, the proposed optimization algorithm is capable of ensuring strictly feasible candidates throughout the optimization procedure, a desirable property in applied engineering problems where design variables must remain feasible for simulations or experiments not to fail. The results of the second phase indicate that the optimized kinematic gait of a solitary accelerating swimmer generates the reverse Kármán vortex street associated with high propulsive efficiency. Moreover, the efficiency of sub-optimal modes in solitary swimming is found to increase with both the tail amplitude and the effective flapping length of the swimmer, and a new scaling law is proposed to capture these trends. Results of the third phase indicate that the optimal midline kinematics in accelerating phalanx schools resemble those of accelerating solitary swimmers. The optimal separation distance in a phalanx school is shown to be around 2L (where L is the swimmer's total length). Furthermore, separation distance is shown to have a stronger effect, ceteris paribus, on the propulsive efficiency of a school than phase synchronization.
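As a highly simplified picture of the kind of constrained surrogate-based search described above (and not the algorithm developed in this work), the sketch below pairs a generic radial-basis-function surrogate with a strict feasibility filter; the objective, the constraint, and the bounds are cheap placeholders standing in for the expensive FSI simulations.

    # Minimal sketch of constrained surrogate-based optimization with an RBF
    # surrogate and a strict feasibility filter. The placeholder objective and
    # constraint stand in for expensive simulations; this is not the algorithm
    # developed in the dissertation.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    lb, ub = np.array([0.0, 0.0]), np.array([1.0, 1.0])    # design-variable bounds

    def expensive_objective(x):                            # placeholder "simulation"
        return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2

    def expensive_constraint(x):                           # feasible when <= 0
        return x[0] + x[1] - 1.2

    # Initial feasible designs (in practice, from a space-filling design of experiments)
    X = rng.uniform(lb, ub, size=(16, 2))
    X = X[np.array([expensive_constraint(x) <= 0 for x in X])]
    F = np.array([expensive_objective(x) for x in X])
    G = np.array([expensive_constraint(x) for x in X])

    for it in range(20):
        f_hat = RBFInterpolator(X, F)                      # surrogate of the objective
        g_hat = RBFInterpolator(X, G)                      # surrogate of the constraint
        cand = rng.uniform(lb, ub, size=(2000, 2))         # cheap candidate pool
        cand = cand[g_hat(cand) <= 0]                      # keep surrogate-feasible points
        if cand.size == 0:
            continue
        x_new = cand[np.argmin(f_hat(cand))]               # most promising candidate
        if expensive_constraint(x_new) > 0:                # enforce strict feasibility
            continue                                       # before the costly evaluation
        X = np.vstack([X, x_new])
        F = np.append(F, expensive_objective(x_new))
        G = np.append(G, expensive_constraint(x_new))

    print("best feasible design:", X[np.argmin(F)], "objective:", F.min())

Only strictly feasible points are ever added to the data set, which mirrors the property emphasized above: no candidate is passed to the expensive evaluation unless its constraint value has been confirmed.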
The emerging multimodal mobility as a service (MaaS) and connected and automated mobility (CAM) are expected to improve the individual travel experience and overall transportation system performance in various aspects, such as convenience, safety, and reliability. Extensive efforts in the literature have been devoted to enhancing existing methodologies and tools, and to developing new ones, for investigating the impacts and potential of CAM systems. Due to the hierarchical nature of CAM systems and the associated human factors and physical infrastructures, which are intrinsically correlated across resolutions, simply folding components from different levels into a single model may be practically infeasible and computationally prohibitive at the operation and decision stages. One of the greatest challenges in existing studies is to construct a theoretically sound and computationally efficient architecture such that CAM system modeling can be performed in an inherently consistent, cross-resolution manner. This research aims to contribute to the modeling of CAM systems on layered transportation networks, with a special focus on the following three aspects: (1) a layered CAM system architecture with tight network and modeling consistency, in which different levels of tasks can be efficiently performed at dedicated layers; (2) cross-resolution traffic state estimation in CAM systems using heterogeneous observations; and (3) integrated city logistics operation optimization in CAM for improving system performance.
Global optimization (programming) has been attracting the attention of researchers for almost a century. Because linear programming (LP) and mixed integer linear programming (MILP) were well studied early on, MILP methods and software tools have improved greatly in efficiency over the past few years. They are now fast and robust even for problems with millions of variables. Therefore, it is desirable to use MILP software to solve mixed integer nonlinear programming (MINLP) problems. For an MINLP problem to be solved by an MILP solver, its nonlinear functions must be transformed into linear ones. The most common method for this transformation is piecewise linear approximation (PLA). This dissertation will summarize the types of optimization and the most important tools and methods, and will discuss the PLA tool in depth. PLA will be performed using nonuniform partitioning of the domain of the variables involved in the function to be approximated. Partial PLA models that approximate only parts of a complicated optimization problem will also be introduced. Computational experiments will be carried out, and the results will show that nonuniform partitioning and partial PLA can be beneficial.
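As a rough, self-contained illustration of why nonuniform partitioning can help (not the dissertation's MILP formulation), the sketch below builds a piecewise linear approximation of a one-dimensional nonlinear function with breakpoints concentrated where the curvature is large, and compares its worst-case error against a uniform partition with the same number of breakpoints; the test function and the breakpoint-placement rule are illustrative assumptions.

    # Piecewise linear approximation (PLA) of a nonlinear function, comparing a
    # uniform partition with a nonuniform one whose breakpoints concentrate where
    # the curvature |f''| is large. Illustrative only; the MILP formulation of PLA
    # used in the dissertation is not reproduced here.
    import numpy as np

    f = lambda x: np.exp(x) * np.sin(3 * x)        # example nonlinear term
    a, b, n_break = 0.0, 2.0, 9                    # domain and number of breakpoints
    dense = np.linspace(a, b, 2001)                # fine grid for error measurement

    # Uniform breakpoints
    xu = np.linspace(a, b, n_break)

    # Nonuniform breakpoints: invert the CDF of a curvature-based density
    # (the interpolation error on a subinterval scales roughly with h^2 |f''|).
    curv = np.abs(np.gradient(np.gradient(f(dense), dense), dense))
    cdf = np.cumsum(np.sqrt(curv) + 1e-9)
    cdf /= cdf[-1]
    xn = np.interp(np.linspace(0.0, 1.0, n_break), cdf, dense)
    xn[0], xn[-1] = a, b                           # keep the endpoints exact

    def max_error(breaks):
        """Worst-case gap between f and its piecewise linear interpolant."""
        return np.max(np.abs(f(dense) - np.interp(dense, breaks, f(breaks))))

    print("uniform partition    max error:", max_error(xu))
    print("nonuniform partition max error:", max_error(xn))

With the same breakpoint budget, the curvature-driven partition typically yields a noticeably smaller worst-case error than the uniform one, which is the intuition behind nonuniform PLA.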
This dissertation develops a second order accurate approximation to the magnetic resonance (MR) signal model used in the PARSE (Parameter Assessment by Retrieval from Single Encoding) method to recover information about the reciprocal of the spin-spin relaxation time function (R2*) and frequency offset function (w) in addition to the typical steady-state transverse magnetization (M) from single-shot magnetic resonance imaging (MRI) scans. Sparse regularization on an approximation to the edge map is used to solve the associated inverse problem. Several studies are carried out for both one- and two-dimensional test problems, including comparisons to the first order approximation method, as well as the first order approximation method with joint sparsity across multiple time windows enforced. The second order accurate model provides increased accuracy while reducing the amount of data required to reconstruct an image when compared to piecewise constant in time models. A key component of the proposed technique is the use of fast transforms for the forward evaluation. It is determined that the second order model is capable of providing accurate single-shot MRI reconstructions, but requires an adequate coverage of k-space to do so. Alternative data sampling schemes are investigated in an attempt to improve reconstruction with single-shot data, as current trajectories do not provide ideal k-space coverage for the proposed method.
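For orientation only, one common way to write a single-shot MR signal model of this kind, together with a second-order-in-time expansion of its decay and off-resonance factor, is shown below; the precise model and time-windowing used in this dissertation may differ, so the expressions should be read as a schematic rather than the method's exact formulation.

    % Schematic single-shot MR signal model and a second-order expansion in time
    % of its decay/off-resonance factor (the exact PARSE formulation may differ).
    S(t) = \int M(\mathbf{x})\,
           e^{\left(-R_2^*(\mathbf{x}) + i\,\omega(\mathbf{x})\right) t}\,
           e^{-2\pi i\,\mathbf{k}(t)\cdot\mathbf{x}}\, d\mathbf{x},
    \qquad
    e^{\left(-R_2^* + i\,\omega\right) t}
      \approx 1 + \left(-R_2^* + i\,\omega\right) t
                + \tfrac{1}{2}\left(-R_2^* + i\,\omega\right)^{2} t^{2}.

Keeping the quadratic term is what distinguishes a second-order-accurate model from the piecewise-constant-in-time and first-order models referenced above.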
Background: The binding of peptide fragments of extracellular proteins to class II MHC is a crucial event in the adaptive immune response. Each MHC allotype generally binds a distinct subset of peptides, and the enormous number of possible peptide epitopes prevents their complete experimental characterization. Computational methods can utilize the limited experimental data to predict the binding affinities of peptides to class II MHC.
Results: We have developed the Regularized Thermodynamic Average, or RTA, method for predicting the affinities of peptides binding to class II MHC. RTA accounts for all possible peptide binding conformations using a thermodynamic average and includes a parameter constraint for regularization to improve accuracy on novel data. RTA was shown to achieve higher accuracy, as measured by AUC, than SMM-align on the same data for all 17 MHC allotypes examined. RTA also gave the highest accuracy on all but three allotypes when compared with results from 9 different prediction methods applied to the same data. In addition, the method correctly predicted the peptide binding register of 17 out of 18 peptide-MHC complexes. Finally, we found that suboptimal peptide binding registers, which are often ignored in other prediction methods, made significant contributions of at least 50% of the total binding energy for approximately 20% of the peptides.
Conclusions: The RTA method accurately predicts peptide binding affinities to class II MHC and accounts for multiple peptide binding registers while reducing overfitting through regularization. The method has potential applications in vaccine design and in understanding autoimmune disorders. A web server implementing the RTA prediction method is available at http://bordnerlab.org/RTA/.
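The thermodynamic average over binding registers mentioned above can be pictured, in very reduced form, as a Boltzmann-weighted combination of per-register binding energies; in the sketch below the per-register energy is a placeholder position-specific scoring scheme, and the example scores and peptide are invented for illustration, so this is not the fitted RTA model.

    # Simplified sketch of a thermodynamic average over peptide binding registers:
    # the predicted binding free energy combines the energies of all 9-residue
    # cores (registers) via a Boltzmann-weighted sum. The per-register energy is
    # a placeholder scoring scheme, not the fitted RTA model.
    import numpy as np
    from scipy.special import logsumexp

    RT = 0.6       # kcal/mol at roughly 300 K
    CORE_LEN = 9   # length of the class II MHC binding core

    def register_energy(core, pocket_scores):
        """Placeholder energy: sum of position-specific scores for a 9-mer core."""
        return sum(pocket_scores[i].get(aa, 0.0) for i, aa in enumerate(core))

    def predicted_binding_energy(peptide, pocket_scores):
        """Thermodynamic average over registers: -RT * log(sum_r exp(-E_r / RT))."""
        energies = [register_energy(peptide[r:r + CORE_LEN], pocket_scores)
                    for r in range(len(peptide) - CORE_LEN + 1)]
        return -RT * logsumexp(-np.array(energies) / RT)

    # Invented position-specific scores (favorable = negative); only pocket 1 is scored.
    pocket_scores = [{"F": -1.5, "Y": -1.2, "W": -1.0}] + [{} for _ in range(8)]
    print(predicted_binding_energy("GFKAYVATISPRQLE", pocket_scores))

Because every register contributes to the sum, suboptimal registers with energies close to the best one can still carry an appreciable share of the predicted affinity, which is the behavior quantified in the Results above.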
Background: The binding of peptide fragments of antigens to class II MHC is a crucial step in initiating a helper T cell immune response. The identification of such peptide epitopes has potential applications in vaccine design and in better understanding autoimmune diseases and allergies. However, comprehensive experimental determination of peptide-MHC binding affinities is infeasible due to MHC diversity and the large number of possible peptide sequences. Computational methods trained on the limited experimental binding data can address this challenge. We present the MultiRTA method, an extension of our previous single-type RTA prediction method, which allows the prediction of peptide binding affinities for multiple MHC allotypes not used to train the model. Thus predictions can be made for many MHC allotypes for which experimental binding data is unavailable.
Results: We fit MultiRTA models for both HLA-DR and HLA-DP using large experimental binding data sets. The performance in predicting binding affinities for novel MHC allotypes, not in the training set, was tested in two different ways. First, we performed leave-one-allele-out cross-validation, in which predictions are made for one allotype using a model fit to binding data for the remaining MHC allotypes. Comparison of the HLA-DR results with those of two other prediction methods applied to the same data sets showed that MultiRTA achieved performance comparable to NetMHCIIpan and better than the earlier TEPITOPE method. We also directly tested model transferability by making leave-one-allele-out predictions for additional experimentally characterized sets of overlapping peptide epitopes binding to multiple MHC allotypes. In addition, we determined the applicability of prediction methods like MultiRTA to other MHC allotypes by examining the degree of MHC variation accounted for in the training set. An examination of predictions for the promiscuous binding CLIP peptide revealed variations in binding affinity among alleles as well as potentially distinct binding registers for HLA-DR and HLA-DP. Finally, we analyzed the optimal MultiRTA parameters to discover the most important peptide residues for promiscuous and allele-specific binding to HLA-DR and HLA-DP allotypes.
Conclusions: The MultiRTA method yields competitive performance but with a significantly simpler and physically interpretable model compared with previous prediction methods. A MultiRTA prediction webserver is available at http://bordnerlab.org/MultiRTA.
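The leave-one-allele-out cross-validation used above follows a simple pattern that can be sketched generically; fit_model, predict, and auc below are hypothetical helpers, and binding_data is an assumed per-allotype data layout, so this is only an outline of the evaluation protocol rather than the MultiRTA code.

    # Leave-one-allele-out cross-validation: for each MHC allotype, fit the model
    # on the binding data of all remaining allotypes and evaluate on the held-out
    # one. fit_model, predict, and auc are hypothetical helpers standing in for
    # the actual fitting, prediction, and accuracy computation.
    def leave_one_allele_out(binding_data, fit_model, predict, auc):
        """binding_data: {allotype: (peptides, measured_affinities)}."""
        results = {}
        for held_out in binding_data:
            train = {a: d for a, d in binding_data.items() if a != held_out}
            model = fit_model(train)                       # fit without the held-out allotype
            peptides, measured = binding_data[held_out]
            predicted = predict(model, held_out, peptides)
            results[held_out] = auc(measured, predicted)
        return results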
In this dissertation, two problems are addressed in the verification and control of Cyber-Physical Systems (CPS):
1) Falsification: given a CPS, and a property of interest that the CPS must satisfy under all allowed operating conditions, does the CPS violate, i.e. falsify, the property?
2) Conformance testing: given a model of a CPS, and an implementation of that CPS on an embedded platform, how can we characterize the properties satisfied by the implementation, given the properties satisfied by the model?
Both problems arise in the context of Model-Based Design (MBD) of CPS: in MBD, the designers start from a set of formal requirements that the system-to-be-designed must satisfy, and a first model of the system is created. Because it may not be possible to formally verify the CPS model against the requirements, falsification instead searches for behaviors of the model that violate them.
In the first part of this dissertation, I present improved methods for finding falsifying behaviors of CPS when properties are expressed in Metric Temporal Logic (MTL). These methods leverage the notion of robust semantics of MTL formulae: if a falsifying behavior exists, it lies in the neighborhood of local minimizers of the robustness function. The proposed algorithms compute descent directions of the robustness function in the space of initial conditions and input signals, and provably converge to local minima of the robustness function.
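As a stripped-down picture of falsification via robustness minimization, the sketch below minimizes the robustness of a simple "always x(t) <= 2.5" requirement for a toy first-order system with a piecewise-constant input; the system, the input parameterization, and the generic optimizer are all illustrative stand-ins rather than the descent-direction methods developed in this dissertation.

    # Falsification as robustness minimization (illustrative). A toy system and a
    # simple "always x(t) <= 2.5" requirement stand in for a real CPS model and an
    # MTL property; a generic global optimizer replaces the descent-direction
    # machinery of the dissertation. Negative robustness means falsification.
    import numpy as np
    from scipy.optimize import differential_evolution

    def simulate(x0, u_params, T=200, dt=0.05):
        """Toy first-order system driven by a piecewise-constant input signal."""
        u = np.repeat(u_params, T // len(u_params))    # parameterized input signal
        x = np.empty(T); x[0] = x0
        for k in range(T - 1):
            x[k + 1] = x[k] + dt * (-0.5 * x[k] + u[k])
        return x

    def robustness(theta):
        """Robustness of 'always x(t) <= 2.5': min over time of (2.5 - x(t))."""
        x0, u_params = theta[0], theta[1:]
        return np.min(2.5 - simulate(x0, u_params))

    bounds = [(-1.0, 1.0)] + [(-2.0, 2.0)] * 4         # initial condition + 4 input pieces
    res = differential_evolution(robustness, bounds, seed=1, maxiter=50)
    print("min robustness:", res.fun,
          "-> falsified" if res.fun < 0 else "-> not falsified")

The search variables are exactly the initial condition and the input-signal parameters, matching the space over which the descent directions above are computed.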
The initial model of the CPS is then iteratively refined by modeling previously ignored phenomena, adding more functionality, and so on, with each refinement resulting in a new model. Many of the refinements in the MBD process described above do not provide an a priori guaranteed relation between successive models. Thus, the second problem above arises: how do we quantify the distance between two successive models M_n and M_{n+1}? If M_n has been verified to satisfy the specification, can it be guaranteed that M_{n+1} also satisfies the same, or some closely related, specification? This dissertation answers both questions for a general class of CPS and for properties expressed in MTL.
We study the problem of controlling multiple 2-D directional sensors while maximizing an objective function based on the information gain corresponding to multiple target locations. We assume a joint prior Gaussian distribution for the target locations. A sensor generates a (noisy) measurement of a target only if the target lies within the field of view of the sensor, where the statistical properties of the measurement error depend on the location of the target with respect to the sensor and on the direction of the sensor. The measurements from the sensors are fused to form global estimates of the target locations. This problem is combinatorial in nature: the computation time increases exponentially with the number of sensors. We develop heuristic methods to solve the problem approximately, and provide analytical results on performance guarantees. We then improve the performance of our heuristic approaches by applying an approximate dynamic programming approach called rollout. In addition, we address a variant of the above problem, where the goal is to map the sensors to the targets while maximizing the aforementioned objective function. This mapping problem also turns out to be combinatorial in nature, so we extend one of the above heuristics to solve it approximately. We compare the performance of these heuristic approaches analytically and empirically.
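One greedy flavor of the heuristic idea can be sketched as follows: each sensor, in turn, commits to the candidate direction with the largest marginal gain given the directions already chosen; the gain function below is a generic placeholder (prior-variance-weighted coverage of targets inside a field of view), not the Gaussian-fusion information-gain objective analyzed in this work.

    # Greedy heuristic for choosing 2-D sensor directions (illustrative). Each
    # sensor, in turn, commits to the candidate direction with the largest
    # marginal gain given earlier choices. The gain function is a placeholder:
    # prior-variance-weighted coverage of targets inside the field of view.
    import numpy as np

    FOV = np.deg2rad(60.0)                              # field of view (full angular width)

    def covered(sensor_xy, direction, target_xy):
        """True if the target lies within the sensor's angular field of view."""
        bearing = np.arctan2(target_xy[1] - sensor_xy[1], target_xy[0] - sensor_xy[0])
        diff = np.angle(np.exp(1j * (bearing - direction)))   # wrap to (-pi, pi]
        return abs(diff) <= FOV / 2

    def gain(directions, sensors, targets, prior_var):
        """Placeholder objective: each covered target counts once, weighted by prior variance."""
        total = 0.0
        for t, var in zip(targets, prior_var):
            if any(covered(s, d, t) for s, d in zip(sensors, directions) if d is not None):
                total += var
        return total

    def greedy_directions(sensors, targets, prior_var, n_candidates=36):
        candidates = np.linspace(0, 2 * np.pi, n_candidates, endpoint=False)
        chosen = [None] * len(sensors)
        for i in range(len(sensors)):                   # one pass, sensor by sensor
            chosen[i] = max(candidates,
                            key=lambda d: gain(chosen[:i] + [d] + chosen[i + 1:],
                                               sensors, targets, prior_var))
        return chosen

    sensors = [np.array([0.0, 0.0]), np.array([5.0, 0.0])]
    targets = [np.array([1.0, 2.0]), np.array([4.0, -1.0]), np.array([6.0, 3.0])]
    prior_var = [1.0, 2.0, 0.5]
    print([round(float(d), 2) for d in greedy_directions(sensors, targets, prior_var)])

A rollout-style improvement, in the spirit described above, would replace the myopic evaluation inside the loop with a lookahead that completes the remaining choices using this same greedy rule as the base heuristic.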