Composition of Geographic-Based Component Simulation Models

Description
Component simulation models, such as agent-based models, may depend on spatial data associated with geographic locations. Composition of such models can be achieved using a Geographic Knowledge Interchange Broker (GeoKIB) enabled with spatial-temporal data transformation functions, each of which is responsible for a set of interactions between two independent models. The use of autonomous interaction models allows model composition without alteration of the composed component models. An interaction model must handle differences in the spatial resolutions between models, in addition to differences in their temporal input/output data types and resolutions.

A generalized GeoKIB was designed that regulates unidirectional, spatially based interactions between composed models. Different input and output data types are used for the interaction model, depending on whether data transfer should be passive or active. Synchronization of time-tagged input/output values is achieved by making the interaction models dependent on a discrete simulation clock. An algorithm supporting spatial conversion was developed to transform any two-dimensional geographic data map between different region specifications. Maps belonging to the composed models can have different regions, map cell sizes, or boundaries. The GeoKIB can be extended based on the specifications of the models to be composed and the target application domain.
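The abstract does not spell out the conversion algorithm, so the following is only an illustrative sketch of the kind of transformation such an interaction model must perform: resampling a two-dimensional map between two region specifications by area-weighted averaging of overlapping cells. The names (Region, regrid) and the choice of averaging are assumptions for illustration, not the GeoKIB design itself.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """Axis-aligned region discretized into a uniform cell grid (hypothetical)."""
    x0: float   # lower-left corner
    y0: float
    x1: float   # upper-right corner
    y1: float
    nx: int     # number of cells along x
    ny: int     # number of cells along y

    def edges(self, axis: int):
        """Cell edge coordinates along axis 0 (x) or 1 (y)."""
        lo, hi, n = ((self.x0, self.x1, self.nx) if axis == 0
                     else (self.y0, self.y1, self.ny))
        step = (hi - lo) / n
        return [lo + i * step for i in range(n + 1)]

def overlap(a0, a1, b0, b1):
    """Length of the overlap between intervals [a0, a1] and [b0, b1]."""
    return max(0.0, min(a1, b1) - max(a0, b0))

def regrid(src_map, src: Region, dst: Region):
    """Resample src_map (list of src.ny rows of src.nx values) onto dst's
    grid, averaging source cells weighted by overlap area. Destination
    cells that fall entirely outside the source region are left at 0.0."""
    sx, sy, dx, dy = src.edges(0), src.edges(1), dst.edges(0), dst.edges(1)
    out = [[0.0] * dst.nx for _ in range(dst.ny)]
    for j in range(dst.ny):
        for i in range(dst.nx):
            acc = wsum = 0.0
            for sj in range(src.ny):
                wy = overlap(sy[sj], sy[sj + 1], dy[j], dy[j + 1])
                if wy == 0.0:
                    continue
                for si in range(src.nx):
                    w = wy * overlap(sx[si], sx[si + 1], dx[i], dx[i + 1])
                    if w > 0.0:
                        acc += w * src_map[sj][si]
                        wsum += w
            if wsum > 0.0:
                out[j][i] = acc / wsum
    return out

# A 2x2 map over [0,2]x[0,2] resampled onto a shifted, finer 3x3 grid:
print(regrid([[1, 2], [3, 4]], Region(0, 0, 2, 2, 2, 2),
             Region(0.5, 0.5, 2, 2, 3, 3)))
```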

Two simple, separate models were created to demonstrate model composition via the GeoKIB. An interaction model was created for each of the two directions in which the composed models interact. This exemplar demonstrates the composition and simulation of geographic-based component models.
Date Created
2019

Activity Specification for Time-based Discrete Event Simulation Models

Description
Computational models for relatively complex systems are subject to many difficulties, among which is making the models discretely understandable and applicable to specific problem types and their solutions. This demands the specification of a dynamic system as a collection of models, including metamodels. In this context, new modeling approaches and tools can help provide a richer understanding and, therefore, the development of sophisticated behavior in system dynamics. From this vantage point, an activity specification is proposed as a modeling approach based on a time-based discrete event system abstraction. Such models are founded upon set-theoretic principles and methods for modeling and simulation, with the intent of making them subject to specific and profound questions for user-defined experiments.

Because developing models is becoming more time-consuming and expensive, some research has focused on the acquisition of concrete means targeted at the early stages of component-based system analysis and design. The model-driven architecture (MDA) framework provides some means for the behavioral modeling of discrete systems. The development of models can benefit from simplifications and elaborations enabled by the MDA meta-layers, which is essential for managing model complexity. Although metamodels pose difficulties, especially for developing complex behavior, as opposed to structure, they are advantageous and complementary to formal models and concrete implementations in programming languages.

The developed approach is focused on action and control concepts across the MDA meta-layers and is proposed for the parallel Discrete Event System Specification (P-DEVS) formalism. The Unified Modeling Language (UML) activity meta-models are used with syntax and semantics that conform to the DEVS formalism and its execution protocol. The notions of the DEVS component and state are used together according to their underlying system-theoretic foundation. A prototype tool supporting activity modeling was developed to demonstrate the degree to which action-based behavior can be modeled using the MDA and DEVS. The parallel DEVS, as a formal approach, supports identifying the semantics of the UML activities. Another prototype was developed to create activity models and support their execution with the DEVS-Suite simulator, and a set of prototypical multiprocessor architecture model specifications was designed, simulated, and analyzed.
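For readers unfamiliar with the formalism, the sketch below outlines the shape of a P-DEVS atomic model: a state, a time-advance function, an output function, and internal/external/confluent transition functions. It is a minimal, hypothetical illustration of the abstraction, not the dissertation's prototype tool or the DEVS-Suite API.

```python
class Processor:
    """Minimal P-DEVS-style atomic model: a processor that takes
    `busy_time` units to finish each job (illustrative sketch)."""
    INFINITY = float("inf")

    def __init__(self, busy_time=5.0):
        self.busy_time = busy_time
        self.phase = "idle"      # sequential state
        self.job = None

    def time_advance(self):
        """ta(s): how long the model stays in the current state."""
        return self.busy_time if self.phase == "busy" else self.INFINITY

    def output(self):
        """lambda(s): emitted just before an internal transition."""
        return {"done": self.job} if self.phase == "busy" else {}

    def internal(self):
        """delta_int(s): the job finished, go back to idle."""
        self.phase, self.job = "idle", None

    def external(self, elapsed, bag):
        """delta_ext(s, e, X^b): react to a bag of inputs (P-DEVS)."""
        if self.phase == "idle" and "job" in bag:
            self.phase, self.job = "busy", bag["job"]

    def confluent(self, bag):
        """delta_con(s, X^b): input and internal event coincide;
        here, finish the old job first, then accept the new one."""
        self.internal()
        self.external(0.0, bag)
```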
Date Created
2019

Application-aware Performance Optimization for Software Managed Manycore Architectures

Description
One of the main goals of computer architecture design is to improve performance without much increase in power consumption. This cannot be achieved by adding increasingly complex intelligent schemes to the hardware, since they become increasingly less power-efficient. Therefore, parallelism comes up as the solution. In fact, the irrevocable trend of computer design in the near future is still to keep increasing the number of cores while reducing the operating frequency. However, scaling the number of cores is not easy. One important challenge is that existing cores consume too much power. Another challenge is that the cache-based memory hierarchy poses a serious limitation due to the rapidly increasing area and power demands of coherence maintenance.

In this dissertation, opportunities to resolve the aforementioned issues were explored along two directions.

Firstly, the possibility of removing the hardware cache altogether and replacing it with software-managed scratchpad memory was explored. Scratchpad memory consumes much less power than caches. However, since the data management logic is shifted entirely to software, reducing the software overhead is challenging. This thesis presents techniques to manage scratchpad memory judiciously by exploiting application semantics and knowledge of data access patterns, thereby enabling optimization of data movement across the memory hierarchy. Experimental results show that the optimizations reduced stack data management overhead by 13X, produced better code mapping in more than 80% of the cases, and improved heap management performance by 83%.
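The dissertation's techniques exploit richer application semantics than can be shown here; as a loose illustration of software-managed stack data, the toy model below spills the oldest stack frames from a fixed-size scratchpad to main memory when a new frame does not fit, and fetches them back on return, counting transfers as the management overhead. All names and the spill policy are assumptions.

```python
from collections import deque

class ScratchpadStack:
    """Toy software-managed stack: frames live in a small scratchpad;
    when space runs out, the oldest frames are spilled to main memory
    (an illustrative sketch, not the dissertation's scheme)."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.spm = deque()        # frame sizes resident in scratchpad
        self.main_memory = []     # spilled frames (stack order)
        self.dma_transfers = 0    # software-management overhead metric

    def call(self, frame_size):
        """On function call: make room, then place the new frame."""
        while self.used + frame_size > self.capacity and self.spm:
            old = self.spm.popleft()          # evict the oldest frame
            self.main_memory.append(old)
            self.used -= old
            self.dma_transfers += 1           # spill = one DMA write
        assert frame_size <= self.capacity, "frame larger than scratchpad"
        self.spm.append(frame_size)
        self.used += frame_size

    def ret(self):
        """On return: pop the top frame; refill from main memory if the
        caller's frame was previously spilled."""
        self.used -= self.spm.pop()
        if not self.spm and self.main_memory:
            frame = self.main_memory.pop()    # fetch the caller's frame back
            self.spm.append(frame)
            self.used += frame
            self.dma_transfers += 1           # refill = one DMA read

spm = ScratchpadStack(capacity_bytes=1024)
for size in (400, 400, 400):   # a deep call chain forces a spill
    spm.call(size)
print(spm.dma_transfers)       # the overhead such techniques try to cut
```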

Secondly, the possibility of using software branch hinting in place of hardware branch prediction, thereby completely eliminating the power consumed by the corresponding hardware components, was explored. With the branch predictor removed from the hardware, software logic becomes responsible for reducing the branch penalty. Techniques to minimize the branch penalty by optimizing branch hint placement were proposed, reducing the branch penalty by 35.4% over the state-of-the-art.
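As a toy illustration of the trade-off such placement optimizes (not the proposed technique itself), the sketch below hints a branch only when its expected misprediction cost exceeds the hint's cost and the hint can be placed far enough ahead of the branch. The cost parameters and the greedy rule are assumptions.

```python
def place_hints(branches, hint_lead=8, miss_penalty=18, hint_cost=1):
    """Greedy hint placement under a toy cost model.
    Each branch is (taken_probability, slack), where `slack` is how many
    instructions before the branch a hint could be inserted.
    Returns (hinted branch indices, expected penalty in cycles)."""
    hinted, penalty = [], 0.0
    for i, (p_taken, slack) in enumerate(branches):
        # Without a hint, a taken branch pays the full misprediction penalty.
        no_hint = p_taken * miss_penalty
        # A hint removes the penalty only if placed >= hint_lead early;
        # it always costs one extra instruction slot.
        with_hint = hint_cost if slack >= hint_lead else no_hint + hint_cost
        if with_hint < no_hint:
            hinted.append(i)
            penalty += with_hint
        else:
            penalty += no_hint
    return hinted, penalty

# A hot loop back-edge (usually taken, plenty of slack) gets a hint;
# a rarely taken error check and a branch with too little slack do not.
print(place_hints([(0.95, 20), (0.05, 20), (0.9, 3)]))
```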
Date Created
2019

Visualizing Network Structures in the Food, Energy, and Water Nexus

Description
In recent years, the food, energy, and water (FEW) nexus has become a topic of considerable importance and has spurred research in many scientific and technical fields. This increased interest stems from the high level and broad area of impact that could occur in the long term if the interactions between these complex FEW sectors are incorrectly or only partially defined. For this reason, a significant amount of interdisciplinary collaboration is needed to accurately define these interactions and produce viable solutions to help sustain and secure resources within these sectors. Providing tools that effectively promote interdisciplinary collaboration would allow for the development of a better understanding of FEW nexus interactions, support FEW policy-making under uncertainty, facilitate identification of critical design requirements for FEW visualizations, and encourage proactive FEW visualization design.

The goal of this research is the completion of three primary objectives: (i) specify visualization design requirements relating to the FEW nexus; (ii) develop visualization approaches for the FEW nexus; and (iii) compare current FEW visualization approaches against the proposed approach. These objectives are accomplished by reviewing graph-based visualization, network evolution, and visual analysis of volume data tasks; discussing with domain experts; examining the visualization methods currently used in FEW research; and conducting a user study. This work provides a more thorough and representative depiction of the FEW nexus, as well as a basis for further research in the area of FEW visualization. It enhances collaboration between policymakers and domain experts in an attempt to encourage in-depth nexus research that will help support informed policy-making and promote future resource security.
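As a trivial example of the graph-based view this research builds on, the snippet below encodes a few FEW-sector interdependencies as a directed graph and draws it as a node-link diagram with networkx; the nodes, edges, and weights are placeholders, not data from the study.

```python
import networkx as nx
import matplotlib.pyplot as plt

# Placeholder FEW-nexus interactions (sector -> sector, illustrative weights).
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("Water", "Food", 0.7),      # irrigation
    ("Water", "Energy", 0.4),    # hydropower, cooling
    ("Energy", "Water", 0.5),    # pumping, treatment
    ("Energy", "Food", 0.6),     # fertilizer, machinery
    ("Food", "Energy", 0.2),     # biofuels
])

pos = nx.circular_layout(G)
widths = [5 * G[u][v]["weight"] for u, v in G.edges]
nx.draw_networkx(G, pos, node_color="lightsteelblue",
                 node_size=2200, width=widths,
                 connectionstyle="arc3,rad=0.15")  # separate opposing edges
plt.axis("off")
plt.show()
```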
Date Created
2019

An Approach to QoS-based Task Distribution in Edge Computing Networks for IoT Applications

Description
The Internet of Things (IoT) is emerging as part of the infrastructure for advancing a large variety of applications involving connections of many intelligent devices, leading to smart communities. Due to the severe limitation of the computing resources of IoT devices, it is common to offload tasks that require substantial computing resources to systems with sufficient resources, such as servers, cloud systems, and data centers, for processing. However, this offloading method suffers from both high latency and network congestion in the IoT infrastructure.

Recently, edge computing has emerged to reduce the negative impacts of offloading tasks to remote computing systems. As edge computing is in close proximity to IoT devices, it can reduce the latency of task offloading and reduce network congestion. Yet, edge computing has its own drawbacks, such as the limited computing resources of some edge computing devices and the unbalanced loads among these devices. To effectively explore the potential of edge computing to support IoT applications, efficient task management and load balancing in edge computing networks are necessary.

In this dissertation research, an approach is presented for periodically distributing tasks within the edge computing network while satisfying the quality-of-service (QoS) requirements of the tasks. The QoS requirements include task completion deadlines and security requirements. The approach aims to maximize the number of tasks that can be accommodated in the edge computing network, with consideration of task priorities. This goal is achieved through the joint optimization of computing resource allocation and network bandwidth provisioning. Evaluation results show the approach's improvement in increasing the number of tasks accommodated in the edge computing network and its efficiency in resource utilization.
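The dissertation formulates this as a joint optimization; as a much simpler stand-in that conveys the idea, the sketch below admits tasks in priority order onto edge nodes only when both the compute capacity and the link bandwidth needed to meet each task's deadline are still available. The data structures and the admission rule are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Task:
    priority: int      # higher is admitted first
    cycles: float      # compute demand
    data_mb: float     # input data to ship to the node
    deadline_s: float  # completion deadline

@dataclass
class EdgeNode:
    cps: float         # cycles per second still available
    bw_mbps: float     # bandwidth to this node still available

def admit(tasks, nodes):
    """Greedy QoS-aware placement: highest-priority tasks first,
    each on the first node that can still meet its deadline."""
    placed = []
    for t in sorted(tasks, key=lambda t: -t.priority):
        for n in nodes:
            if n.cps <= 0 or n.bw_mbps <= 0:
                continue
            finish = t.data_mb / n.bw_mbps + t.cycles / n.cps
            if finish <= t.deadline_s:
                # Reserve a share of the node (crude model: dedicate
                # resources proportional to the task's demand over time).
                n.cps -= t.cycles / t.deadline_s
                n.bw_mbps -= t.data_mb / t.deadline_s
                placed.append((t, n))
                break
    return placed

tasks = [Task(2, 4e9, 80, 1.0), Task(1, 1e9, 10, 0.5), Task(3, 8e9, 5, 2.0)]
nodes = [EdgeNode(cps=10e9, bw_mbps=400), EdgeNode(cps=5e9, bw_mbps=100)]
print(len(admit(tasks, nodes)), "of", len(tasks), "tasks admitted")
```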
Date Created
2018

Gene Network Inference via Sequence Alignment and Rectification

Description
While techniques for reading DNA in some capacity have been possible for decades, the ability to accurately edit genomes at scale has remained elusive. Novel techniques have recently been introduced to aid in the writing of DNA sequences. While writing DNA is more accessible, it remains expensive, justifying the increased interest in in silico predictions of cell behavior. In order to accurately predict the behavior of cells, it is necessary to extensively model the cell environment, including gene-to-gene interactions, as completely as possible.

Significant algorithmic advances have been made for identifying these interactions, but despite these improvements, current techniques fail to infer some edges and fail to capture some complexities in the network. Much of this limitation is due to heavily underdetermined problems, whereby tens of thousands of variables are to be inferred using datasets with the power to resolve only a small fraction of the variables. Additionally, failure to correctly resolve gene isoforms using short reads contributes significantly to noise in gene quantification measures.

This dissertation introduces novel mathematical models, machine learning techniques, and biological techniques to solve the problems described above. Mathematical models are proposed for the simulation of gene network motifs and for raw read simulation. Machine learning techniques are shown for DNA sequence matching and DNA sequence correction.
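As a small example of the kind of motif model involved (the dissertation's models are not reproduced here), the following simulates negative autoregulation, a common gene network motif, using forward-Euler integration of a Hill-repression rate equation; all rate constants are illustrative.

```python
def simulate_negative_autoregulation(beta=10.0, K=1.0, n=2, alpha=1.0,
                                     x0=0.0, dt=0.01, t_end=10.0):
    """dx/dt = beta / (1 + (x/K)**n) - alpha * x
    A repressor inhibiting its own transcription (Hill repression)
    plus first-order degradation, integrated with forward Euler."""
    x, trace = x0, []
    for step in range(int(t_end / dt)):
        production = beta / (1.0 + (x / K) ** n)
        x += dt * (production - alpha * x)
        trace.append((step * dt, x))
    return trace

trace = simulate_negative_autoregulation()
print("steady state ~", round(trace[-1][1], 3))  # rises fast, levels off near 2
```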

Results provide novel insights into the low-level functionality of gene networks. Also shown is the ability to use normalization techniques to aggregate data for gene network inference, leading to larger data sets while minimizing increases in inter-experimental noise. Results also demonstrate that the high error rates experienced by third-generation sequencing are significantly different from previous error profiles, and that these errors can be modeled, simulated, and rectified. Finally, techniques are provided for amending this DNA error while preserving the benefits of third-generation sequencing.
Date Created
2017

Hybrid Multiresolution Simulation & Model Checking: Network-On-Chip Systems

Description
Designers employ a variety of modeling theories and methodologies to create functional models of discrete network systems. These dynamical models are evaluated using verification and validation techniques throughout incremental design stages. Models created for these systems should directly represent their growing complexity with respect to composition and heterogeneity. As in software engineering practice, incremental model design is required for complex system design. As a result, models at early increments are significantly simpler than the real systems. While experimenting (verification or validation) on models at early increments is computationally less demanding, the results of these experiments are less trustworthy and less rewarding. At any increment of design, a set of tools and techniques is required for controlling the complexity of models and experimentation.

A complex system such as a Network-on-Chip (NoC) may benefit from incremental design stages. Current design methods for NoC rely on multiple models developed using various modeling frameworks. It is useful to develop frameworks that can formalize the relationships among these models, so that fine-grain models can be derived from their coarse-grain counterparts. Moreover, validation and verification capability at various design stages, enabled through disciplined model conversion, is very beneficial.

In this research, Multiresolution Modeling (MRM) is used for the system-level design of NoC. MRM aids in creating a family of models at different levels of scale and complexity with well-formed relationships. In addition, a variant of the Discrete Event System Specification (DEVS) formalism is proposed that supports model checking. Hierarchical models of Network-on-Chip components may be created at different resolutions, while each model can be validated using discrete-event simulation and verified via state exploration. System property expressions are defined in the DEVS language and developed as Transducers, which can be applied seamlessly for model checking and simulation purposes.
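To make the transducer idea concrete, the sketch below shows a hypothetical observer that consumes time-stamped events from a simulation run and checks a bounded-response property (every request acknowledged within a deadline). It illustrates how one artifact can serve both simulation-time validation and trace checking; it is not the DEVS-Suite Transducer API.

```python
class ResponseTransducer:
    """Hypothetical observer checking: every 'req' event is followed by
    an 'ack' within `deadline` time units. Usable both while simulating
    (feed events online) and for checking a recorded trace."""

    def __init__(self, deadline):
        self.deadline = deadline
        self.pending = []          # times of unanswered requests
        self.violations = []

    def observe(self, time, event):
        # Expire overdue requests before processing the new event.
        while self.pending and time - self.pending[0] > self.deadline:
            self.violations.append(self.pending.pop(0))
        if event == "req":
            self.pending.append(time)
        elif event == "ack" and self.pending:
            self.pending.pop(0)    # acknowledge the oldest request

    def holds(self, end_time):
        self.observe(end_time, None)   # flush requests already overdue
        return not self.violations and not any(
            end_time - t > self.deadline for t in self.pending)

trace = [(0.0, "req"), (1.5, "ack"), (2.0, "req"), (9.0, "ack")]
t = ResponseTransducer(deadline=5.0)
for time, ev in trace:
    t.observe(time, ev)
print(t.holds(end_time=10.0))   # False: the second ack arrives too late
```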

The Multiresolution Modeling and the verification and validation capabilities of this framework complement one another: MRM manages the scale and complexity of models, which in turn can reduce V&V time and effort, and conversely, V&V helps ensure the correctness of models at multiple resolutions. This framework is realized by extending the DEVS-Suite simulator, and its applicability is demonstrated for exemplar NoC models.
Date Created
2017

From Formal Requirement Analysis to Testing and Monitoring of Cyber-Physical Systems

Description
Cyber-Physical Systems (CPS) are being used in many safety-critical applications. Due to their important role in virtually every aspect of human life, it is crucial to make sure that a CPS works properly before its deployment. However, formal verification of CPS is a computationally hard problem. Therefore, lightweight verification methods such as testing and monitoring of the CPS are considered in industry. The formal representation of CPS requirements is a challenging task. In addition, checking system outputs against requirements is a computationally complex problem. In this dissertation, these problems for the verification of CPS are addressed. The first method provides a formal requirement analysis framework that can find logical issues in the requirements and help engineers correct them. Also, a method is provided to detect tests that vacuously satisfy a requirement because of the requirement's structure; this method is used to improve the test generation framework for CPS. Finally, two runtime verification algorithms are developed for off-line/on-line monitoring with respect to real-time requirements. These monitoring algorithms are computationally efficient, and they can be used in practical applications for monitoring CPS with low runtime overhead.
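As a minimal example of off-line monitoring against a real-time requirement (the dissertation's algorithms handle a full temporal logic and are far more efficient), the sketch below computes a robustness value for "always within [0, T], the signal stays below c" over a sampled trace: a positive result means the requirement is satisfied, and its magnitude says by how much.

```python
def robustness_always_below(trace, c, T):
    """Robustness of G_[0,T](x < c) over a time-stamped trace
    [(t0, x0), (t1, x1), ...]: the minimum of (c - x) over samples
    in [0, T]. Positive => satisfied with margin; negative => violated."""
    margins = [c - x for t, x in trace if 0.0 <= t <= T]
    if not margins:
        raise ValueError("no samples in [0, T]")
    return min(margins)

trace = [(0.0, 1.2), (0.5, 3.8), (1.0, 4.9), (1.5, 5.3), (2.0, 2.0)]
print(robustness_always_below(trace, c=5.0, T=1.0))   #  0.1 -> satisfied
print(robustness_always_below(trace, c=5.0, T=2.0))   # -0.3 -> violated
```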
Date Created
2017

Formal Requirements-Driven Analysis of Cyber Physical Systems

Description
Testing and verification of Cyber-Physical Systems (CPS) is a challenging problem. The challenge arises as a result of the complex interactions between the components of these systems: the digital control and the physical environment. Furthermore, the software complexity that governs the high-level control logic in these systems is increasing day by day. As a result, in recent years, both the academic community and industry have invested heavily in developing tools and methodologies for the development of safety-critical systems. One scalable approach to testing and verification of these systems is guided system simulation using stochastic optimization techniques. The goal of the stochastic optimizer is to find system behavior that does not meet the intended specifications.
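A bare-bones version of this guided search is sketched below: random sampling over an input parameter space, keeping the candidate with the lowest robustness, and stopping once robustness goes negative (a falsifying behavior has been found). The system model and requirement here are toy stand-ins.

```python
import random

def simulate(gain):
    """Toy stand-in for a CPS simulation: overshoot grows with the gain."""
    return 1.0 + 0.8 * gain          # peak value of some signal

def robustness(gain, limit=2.0):
    """Requirement: the peak stays below `limit`. Positive = satisfied."""
    return limit - simulate(gain)

def falsify(lo, hi, budget=200, seed=0):
    """Random-search falsification: minimize robustness over [lo, hi]."""
    rng = random.Random(seed)
    best_gain, best_rob = None, float("inf")
    for _ in range(budget):
        gain = rng.uniform(lo, hi)
        rob = robustness(gain)
        if rob < best_rob:
            best_gain, best_rob = gain, rob
        if best_rob < 0:             # counterexample found, stop early
            break
    return best_gain, best_rob

gain, rob = falsify(0.0, 2.0)
print(f"worst gain={gain:.3f}, robustness={rob:.3f}")
# A negative robustness exposes an input on which the requirement fails.
```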

In this dissertation, three methods that facilitate the testing and verification process for CPS are presented:

1. A graphical formalism and tool which enables the elicitation of formal requirements. To evaluate the performance of the tool, a usability study is conducted.

2. A parameter mining method to infer, analyze, and visually represent falsifying ranges for parametrized system specifications.

3. A notion of conformance between a CPS model and implementation along with a testing framework.

The methods are evaluated over high-fidelity case studies from the industry.
Date Created
2017

Multi-Tenancy and Sub-Tenancy Architecture in Software-as-a-Service (SaaS)

Description
Multi-tenancy architecture (MTA) is often used in Software-as-a-Service (SaaS); the central idea is that multiple tenant applications can be developed using components stored in the SaaS infrastructure. Recently, MTA has been extended so that a tenant application can have its own sub-tenants, with the tenant application acting like a SaaS infrastructure. In other words, MTA is extended to Sub-Tenancy Architecture (STA). In STA, each tenant application not only needs to develop its own functionalities, but also needs to prepare an infrastructure to allow its sub-tenants to develop customized applications. This dissertation formulates eight models for STA and proposes a Variant Point based customization model to help tenants and sub-tenants customize tenant and sub-tenant applications. In addition, this dissertation introduces crowdsourcing as the core of the STA component development life cycle. To discover suitable tenant developers or components to help build and compose new components, dynamic and static ranking models are proposed. Further, a rank computation architecture is presented to deal with the case when the number of tenants and components becomes huge. Finally, an experiment is performed to show that the ranking models and the rank computation architecture work as designed.
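The abstract does not specify the ranking models, so the sketch below is only a generic illustration of combining static and dynamic evidence: components are scored by blending static quality indicators with usage-derived dynamic signals, then sorted. The fields and weights are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    doc_quality: float    # static evidence, e.g. a review score in [0, 1]
    test_coverage: float  # static evidence in [0, 1]
    reuse_count: int      # dynamic evidence: times composed by tenants
    defect_reports: int   # dynamic evidence: issues filed against it

def rank(components, w_static=0.4, w_dynamic=0.6):
    """Blend static and dynamic scores; higher is better (illustrative)."""
    max_reuse = max(c.reuse_count for c in components) or 1
    def score(c):
        static = (c.doc_quality + c.test_coverage) / 2
        dynamic = c.reuse_count / max_reuse - 0.05 * c.defect_reports
        return w_static * static + w_dynamic * dynamic
    return sorted(components, key=score, reverse=True)

catalog = [
    Component("auth", 0.9, 0.8, reuse_count=40, defect_reports=2),
    Component("billing", 0.7, 0.9, reuse_count=55, defect_reports=9),
    Component("search", 0.6, 0.5, reuse_count=10, defect_reports=0),
]
for c in rank(catalog):
    print(c.name)   # auth, billing, search
```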
Date Created
2017