Design and Fabrication of a Low-Cost Gripper for a Swarm Robotic Platform

Description
This thesis details the design and construction of a torque-controlled robotic gripper for use with the Pheeno swarm robotics platform. The project required expertise from several fields of study, including robotic design, programming, rapid prototyping, and control theory. An electronic inertial measurement unit (IMU) and a DC motor were used together with 3D-printed plastic components and an electronic motor control board to develop a functional open-loop-controlled gripper for use in collective transportation experiments. Code was developed to acquire and filter rate-of-rotation data, along with code that allows straightforward control of the DC motor through experimentally derived relationships between the voltage applied to the motor and its torque output. Additionally, several versions of the physical components are described through their development.
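The open-loop control approach described above lends itself to a short illustration. The sketch below uses assumed coefficient values and function names, not the thesis code, to show how an experimentally derived linear voltage-torque relationship and a simple low-pass filter on IMU rate-of-rotation data might be combined.

```python
# Hypothetical sketch: open-loop torque control of a gripper DC motor.
# The linear voltage-torque coefficients (K_T, V_OFFSET) are placeholders for
# the experimentally derived relationship described in the thesis.

K_T = 2.4        # assumed volts per N*m, from a hypothetical calibration fit
V_OFFSET = 0.6   # assumed volts needed to overcome static friction
V_MAX = 6.0      # assumed supply voltage limit of the motor driver

def voltage_for_torque(torque_nm: float) -> float:
    """Map a desired gripper torque to a motor voltage (open loop)."""
    v = K_T * torque_nm + V_OFFSET
    return max(0.0, min(v, V_MAX))

def low_pass(prev_filtered: float, raw_rate: float, alpha: float = 0.2) -> float:
    """Exponential moving-average filter for IMU rate-of-rotation samples."""
    return alpha * raw_rate + (1.0 - alpha) * prev_filtered
```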
Date Created
2019-05
Agent

Data-Augmented Structure-Property Mapping for Accelerating Computational Design of Advanced Material Systems

Description
Advanced material systems refer to materials that are composed of multiple traditional constituents but have complex microstructure morphologies, which lead to properties superior to those of conventional materials. This dissertation is motivated by the grand challenge of accelerating the design of advanced material systems through systematic optimization with respect to material microstructures or processing settings. While optimization techniques have mature applications across a large range of engineering systems, their application to material design faces unique challenges due to the high dimensionality of microstructures and the high cost of computing process-structure-property (PSP) mappings. The key to addressing these challenges is learning material representations and predictive PSP mappings while managing a small data-acquisition budget. This dissertation thus focuses on developing learning mechanisms that leverage context-specific meta-data and physics-based theories. Two research tasks are conducted. In the first, we develop a statistical generative model that learns to characterize high-dimensional microstructure samples using low-dimensional features. We improve the data efficiency of a variational autoencoder by introducing a morphology loss to the training. We demonstrate that the resultant microstructure generator is morphology-aware when trained on a small set of material samples and can effectively constrain the microstructure space during material design. In the second task, we investigate an active learning mechanism where new samples are acquired based on their violation of a theory-driven constraint on the physics-based model. We demonstrate, using a topology optimization case, that while data acquisition through the physics-based model is often expensive (e.g., obtaining microstructures through simulation or optimization processes), the evaluation of the constraint can be far more affordable (e.g., checking whether a solution is optimal or at equilibrium). We show that this theory-driven learning algorithm can lead to much improved learning efficiency and generalization performance when such constraints can be derived. The outcome of this research is a better understanding of how physics knowledge about material systems can be integrated into machine learning frameworks in order to achieve more cost-effective and reliable learning of material representations and predictive models, which are essential to accelerating computational material design.
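The morphology-aware training idea can be sketched as a modified VAE objective. The snippet below is an illustrative PyTorch-style loss in which a volume-fraction penalty stands in for the dissertation's morphology loss; the weights beta and gamma and all names are hypothetical.

```python
# Hypothetical sketch of augmenting a VAE objective with a morphology term.
# The actual morphology loss used in the dissertation is not reproduced here;
# as a stand-in, the volume fraction of the binary microstructure is matched.
import torch
import torch.nn.functional as F

def vae_loss_with_morphology(recon, target, mu, logvar, beta=1.0, gamma=10.0):
    """recon/target: (N, 1, H, W) microstructure images with values in [0, 1]."""
    recon_loss = F.binary_cross_entropy(recon, target, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # Morphology surrogate (assumption): penalize volume-fraction mismatch.
    vf_recon = recon.mean(dim=(1, 2, 3))
    vf_target = target.mean(dim=(1, 2, 3))
    morphology = torch.sum((vf_recon - vf_target) ** 2)
    return recon_loss + beta * kl + gamma * morphology
```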
Date Created
2018
Agent

Multi-Agent Coordination and Control under Information Asymmetry with Applications to Collective Load Transport

Description
Coordination and control of intelligent agents as a team is considered in this thesis. Intelligent agents learn from experience and, in times of uncertainty, use the acquired knowledge to make decisions and accomplish their individual or team objectives. Agent objectives are defined using cost functions designed uniquely for the collective task being performed. Individual agent costs are coupled in such a way that the group objective is attained while individual costs are minimized. Information asymmetry refers to situations where interacting agents have no knowledge, or only partial knowledge, of the cost functions of other agents. By virtue of their intelligence, i.e., by learning from past experience, agents learn the cost functions of other agents, predict their responses, and act adaptively to accomplish the team's goal. Algorithms that agents use for learning others' cost functions are called learning algorithms, and algorithms agents use for computing the actuation (control) that drives them toward their goal and minimizes their cost functions are called control algorithms. Typically, knowledge acquired using learning algorithms is used in control algorithms for computing control signals. Learning and control algorithms are designed so that the multi-agent system as a whole remains stable during learning and later at an equilibrium. An equilibrium is defined as the point where the cost functions of all agents are optimized simultaneously. Cost functions are designed so that the equilibrium coincides with the goal state the multi-agent system as a whole is trying to reach. In collective load transport, two or more agents (robots) carry a load from point A to point B in space. Robots may have different control preferences, for example, different actuation abilities, but are still required to coordinate and perform load transport. Control preferences for each robot are characterized by a scalar parameter θi that is unique to the robot being considered and unknown to other robots. With the aid of state and control input observations, agents learn the control preferences of other agents, optimize individual costs, and drive the multi-agent system to a goal state. Two learning and control algorithms are presented. In the first algorithm (LCA-1), an existing work, each agent optimizes a cost function similar to a 1-step receding horizon optimal control problem for control. LCA-1 uses recursive least squares as the learning algorithm and guarantees complete learning in two time steps. LCA-1 is experimentally verified as part of this thesis. A novel learning and control algorithm (LCA-2) is proposed and verified in simulations and on hardware. In LCA-2, each agent solves an infinite-horizon linear quadratic regulator (LQR) problem for computing control. LCA-2 uses a learning algorithm similar to line search methods and guarantees learning convergence to true values asymptotically. Simulations and hardware implementation show that LCA-2 is stable for a variety of systems. Load transport is demonstrated using both algorithms. Experiments running algorithm LCA-2 are able to resist disturbances and balance the assumed load better compared to LCA-1.
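The parameter-learning step in LCA-1 can be illustrated with a small sketch. Assuming, purely for illustration, that each robot's observed input follows u = θ·φ(x) for some regressor φ built from the state, a scalar recursive least squares estimator of θ looks like the following; the names and the regressor model are assumptions, not the thesis formulation.

```python
# Hypothetical sketch of scalar recursive least squares (RLS), the kind of
# estimator LCA-1 uses to learn another robot's control-preference parameter
# theta from observed states and control inputs. The regressor model
# u_obs = theta * phi(x) is an illustrative assumption, not the thesis model.

class ScalarRLS:
    def __init__(self, theta0=0.0, p0=1e3):
        self.theta = theta0   # current estimate of the other agent's theta
        self.p = p0           # estimate covariance (scalar)

    def update(self, phi, u_obs):
        """phi: regressor built from the observed state; u_obs: observed input."""
        k = self.p * phi / (1.0 + phi * self.p * phi)    # gain
        self.theta += k * (u_obs - phi * self.theta)     # innovation correction
        self.p = (1.0 - k * phi) * self.p
        return self.theta

# Usage: feed (phi, u) pairs as they are observed during load transport.
est = ScalarRLS()
for phi, u in [(1.0, 0.8), (0.5, 0.4)]:
    est.update(phi, u)
```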
Date Created
2018
Agent

Statistical models for prediction of mechanical property and manufacturing process parameters for gas pipeline steels

Description
Pipeline infrastructure forms a vital aspect of the United States economy and standard of living. A majority of the current pipeline systems were installed in the early 1900s and often lack a reliable database reporting their mechanical properties and information about manufacturing and installation, raising concerns about their safety and integrity. Estimating the strength and toughness of aging pipe without interrupting transmission and operations therefore becomes important. State-of-the-art techniques tend to focus on single-modality, deterministic estimation of pipe strength and do not account for inhomogeneity and uncertainty, while many others rely on destructive means. These gaps provide an impetus for novel methods to better characterize pipe material properties. The focus of this study is the design of a Bayesian network information fusion model for the prediction of accurate probabilistic pipe strength and, consequently, the maximum allowable operating pressure. A multimodal diagnosis is performed by assessing the mechanical property variation within the pipe in terms of material property measurements, such as microstructure, composition, hardness, and other mechanical properties, through experimental analysis; these measurements are then integrated with the Bayesian network model, which uses a Markov chain Monte Carlo (MCMC) algorithm. Prototype testing is carried out for model verification, validation, and demonstration, and training of the model on data is employed to obtain a more accurate measure of the probabilistic pipe strength. With a view to providing a holistic measure of material performance in service, the fatigue properties of the pipe steel are investigated. The variation in the fatigue crack growth rate (da/dN) along the pipe wall thickness is studied in relation to the microstructure, and the material constants for crack growth are reported. A combination of imaging and composition analysis is incorporated to study the fracture surface of the fatigue specimens. Finally, some well-known statistical inference models are employed for prediction of manufacturing process parameters for steel pipelines. The suitability of the small datasets for accurate prediction outcomes is discussed, and the models are compared for their performance.
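To make the probabilistic strength estimation concrete, the following is a minimal Metropolis-Hastings sketch of the kind of MCMC inference the fusion model relies on. The prior, the hardness-strength relation, and all numeric values are illustrative placeholders, not results from this work.

```python
# Hypothetical sketch of a Metropolis-Hastings sampler of the kind used inside
# a Bayesian fusion model: here a pipe yield strength S is inferred from
# hardness measurements via an assumed linear hardness-strength relation.
import numpy as np

rng = np.random.default_rng(0)
hardness = np.array([152.0, 149.0, 155.0, 151.0])  # placeholder HV readings

def log_posterior(strength_mpa):
    prior = -0.5 * ((strength_mpa - 350.0) / 60.0) ** 2          # vague prior
    predicted_hv = strength_mpa / 3.0 + 35.0                      # assumed relation
    likelihood = -0.5 * np.sum(((hardness - predicted_hv) / 5.0) ** 2)
    return prior + likelihood

samples, s = [], 350.0
for _ in range(5000):
    proposal = s + rng.normal(0.0, 5.0)
    if np.log(rng.random()) < log_posterior(proposal) - log_posterior(s):
        s = proposal
    samples.append(s)
# Posterior mean and credible interval characterize the probabilistic strength.
print(np.mean(samples[1000:]), np.percentile(samples[1000:], [2.5, 97.5]))
```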
Date Created
2018
Agent

Modeling Complex Material Systems Using Stochastic Reconstruction and Lattice Particle Simulation

Description
In this dissertation, three complex material systems, including a novel class of hyperuniform composite materials, cellularized collagen gel, and a low melting point alloy (LMPA) composite, are investigated using statistical pattern characterization, stochastic microstructure reconstruction, and micromechanical analysis. In Chapter 1, an introduction to this report is provided, including a brief review of these three material systems. In Chapter 2, a detailed discussion of the statistical morphological descriptors and a stochastic optimization approach for microstructure reconstruction is presented. In Chapter 3, the lattice particle method for micromechanical analysis of complex heterogeneous materials is introduced. In Chapter 4, a new class of hyperuniform heterogeneous material with superior mechanical properties is investigated. In Chapter 5, a bio-material system, i.e., cellularized collagen gel, is modeled using correlation functions and stochastic reconstruction to study the collective dynamic behavior of the embedded tumor cells. In Chapter 6, an LMPA soft robotic system is generated by generalizing the correlation functions, and the rigidity tunability of this smart composite is discussed. In Chapter 7, a plan for future work is presented.
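As a concrete example of the statistical descriptors used for characterization, the sketch below estimates an isotropic two-point correlation function S2(r) of a binary microstructure by random pair sampling; the estimator and its parameters are illustrative, not the dissertation's implementation.

```python
# Hypothetical sketch of an isotropic two-point correlation function S2(r)
# for a binary microstructure, estimated by sampling random pixel pairs at
# separation r. Names and the sampling scheme are illustrative placeholders.
import numpy as np

def two_point_correlation(img, r_values, n_samples=20000, rng=None):
    """img: 2D binary array (1 = phase of interest); returns S2 at each r."""
    rng = rng or np.random.default_rng(0)
    h, w = img.shape
    s2 = []
    for r in r_values:
        theta = rng.uniform(0.0, 2.0 * np.pi, n_samples)
        x0 = rng.integers(0, w, n_samples)
        y0 = rng.integers(0, h, n_samples)
        x1 = np.clip((x0 + r * np.cos(theta)).astype(int), 0, w - 1)
        y1 = np.clip((y0 + r * np.sin(theta)).astype(int), 0, h - 1)
        s2.append(np.mean(img[y0, x0] * img[y1, x1]))
    return np.array(s2)
```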
Date Created
2018
Agent

Evaluation of an Original Design for a Cost-Effective Wheel-Mounted Dynamometer for Road Vehicles

Description
This thesis evaluates the viability of an original design for a cost-effective wheel-mounted dynamometer for road vehicles. The goal is to show whether or not a device that generates torque and horsepower curves by processing accelerometer data collected at the edge of a wheel can yield results that are comparable to results obtained using a conventional chassis dynamometer. Torque curves were generated via the experimental method under a variety of circumstances and also obtained professionally by a precision engine testing company. Metrics were created to measure the precision of the experimental device's ability to consistently generate torque curves and also to compare the similarity of these curves to the professionally obtained torque curves. The results revealed that although the test device does not quite provide the same level of precision as the professional chassis dynamometer, it does create torque curves that closely resemble the chassis dynamometer torque curves and exhibit a consistency between trials comparable to the professional results, even on rough road surfaces. The results suggest that the test device provides enough accuracy and precision to satisfy the needs of most consumers interested in measuring their vehicle's engine performance but probably lacks the level of accuracy and precision needed to appeal to professionals.
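The core processing idea, recovering torque and power from acceleration measured at the wheel edge, can be sketched as follows. The vehicle mass, wheel radius, and no-slip assumption are illustrative placeholders rather than the device's actual calibration or pipeline.

```python
# Hypothetical sketch of how torque and power curves might be derived from
# wheel-edge accelerometer data under a no-slip assumption. Vehicle mass,
# wheel radius, and losses are placeholders; the thesis device's exact
# processing is not reproduced here.
import numpy as np

MASS_KG = 1450.0       # assumed vehicle test mass
WHEEL_RADIUS_M = 0.31  # assumed tire rolling radius

def torque_and_power(time_s, tangential_accel):
    """time_s, tangential_accel: 1D arrays sampled at the wheel edge."""
    accel = np.asarray(tangential_accel)                 # vehicle accel (no slip)
    dt = np.gradient(np.asarray(time_s))                 # per-sample time steps
    speed = np.cumsum(dt * accel)                        # integrate a(t) for v(t)
    force = MASS_KG * accel                              # tractive force at the road
    wheel_torque = force * WHEEL_RADIUS_M                # torque at the wheels
    power_w = force * speed                              # power delivered to the road
    return wheel_torque, power_w
```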
Date Created
2018-05
Agent

Optimal Co-Design of Structure Topology and Sensor Deployment for Balanced System Performance and Observability

Description
As technology increases in capability, its purposes can become multifaceted, meaning it must accomplish multiple requirements as opposed to just one. An example of such technology is high-speed aircraft wings, which must be strong enough to withstand high loads, light enough to enable the aircraft to fly, and have enough thermal conductivity to withstand high temperatures. Two objectives in particular, topology and sensor deployment, are important for designing structures such as robots that require accurate sensor readings, a capability known as observability. To show how these two dissimilar objectives interact, a project was created around the idea of finding an optimal balance of both. This balanced state would allow the structure not only to remain strong and light but also to be monitored via sensors with a high degree of accuracy. The main focus of the project was to compare the effects on observability of two known sources of input estimation error. The first system involves a structure that has been topologically optimized for compliance minimization, which increases input estimation error. The second system produces structures with random placements of sensors within the structure, where input estimation error grows as the average distance from load to sensor increases. These two changes in observability were compared to see which had the more direct effect. The main finding was that changes in topology had a much more direct effect on observability than changes in sensor placement. Results also show that theoretical input estimation time is significantly reduced compared to previous systems.
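One standard way to quantify observability for a linear structural model is the finite-horizon observability Gramian; the sketch below uses its smallest eigenvalue as a metric. The A and C matrices here are placeholders; in the project they would come from the optimized topology and the chosen sensor layout.

```python
# Hypothetical sketch of an observability metric: the smallest eigenvalue of a
# finite-horizon observability Gramian for a linear model
# x[k+1] = A x[k], y[k] = C x[k]. A and C below are illustrative placeholders.
import numpy as np

def observability_metric(A, C, horizon=50):
    gram = np.zeros((A.shape[0], A.shape[0]))
    Ak = np.eye(A.shape[0])
    for _ in range(horizon):
        gram += Ak.T @ C.T @ C @ Ak
        Ak = A @ Ak
    return np.min(np.linalg.eigvalsh(gram))   # larger = better observable

A = np.array([[0.9, 0.1], [0.0, 0.95]])       # placeholder structural dynamics
C = np.array([[1.0, 0.0]])                     # placeholder sensor placement
print(observability_metric(A, C))
```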
Date Created
2018-05
Agent

Big Data Analytics for Pipe Damage and Risk Identification

Description
In this thesis, Inception V3, a convolutional neural network model from Google, was partially retrained to categorize pipeline images based on their damage modes. Images of different pipeline damage modes were simulated in MATLAB to represent image data collected from in-line pipe inspection. The final convolutional layer of the model was retrained with the simulated pipeline images using TensorFlow as the base platform. First, a small-scale retraining was done with real images and simulated images to compare the differences in performance. Then, using simulated images, a 2^5 full factorial design of experiments and individual parametric studies were performed on five chosen parameters: training steps, learning rate, batch size, training data size, and image noise. The effect of each parameter on the performance of the model was evaluated and analyzed. It is important to note that, due to the nature of the experiment, these findings may or may not apply to neural network models trained for other tasks. After analyzing the results, the effects and trade-offs of each parameter are discussed in detail. In addition, a method of predicting the training time was proposed. Based on the findings, an optimized model was proposed for this training exercise, with 1180 training steps, a learning rate of 0.01, a batch size of 100, and a training data set of 200 images. The optimized model reached 87.2% accuracy with a training time of 2 minutes and 6 seconds. This study enhances our understanding of applying machine learning techniques to damage and risk identification.
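A present-day tf.keras equivalent of retraining only the classification head of Inception V3 is sketched below. This is analogous to, not identical to, the TensorFlow retraining workflow used in the thesis; the class count and data pipeline are assumptions, while the learning rate (0.01) and batch size (100) mirror the reported optimized settings.

```python
# Hypothetical sketch of final-layer transfer learning on Inception V3 with
# tf.keras. The number of damage classes and the dataset are placeholders.
import tensorflow as tf

NUM_DAMAGE_MODES = 4  # placeholder count of simulated damage classes

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # retrain only the new classification head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_DAMAGE_MODES, activation="softmax"),
])
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, epochs=..., batch_size=100) once a labeled dataset of
# simulated pipeline damage images has been prepared.
```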
Date Created
2018-05
Agent

Squeezing Out Electricity: Computer-Aided Design and Optimization of Electrodes of Solid Oxide Fuel Cells

Description
Solid oxide fuel cells have become a promising candidate in the development of high-density clean energy sources for the rapidly increasing demands of energy and global sustainability. An important step toward understanding solid oxide fuel cells is understanding how to model heterogeneous materials. Heterogeneous materials are abundant in nature and are also created in various processes. The diverse properties exhibited by these materials result from their complex microstructures, which also make the materials hard to model. Microstructure modeling and reconstruction at the meso-scale is needed to produce heterogeneous models without having to physically section and image every slice of the material, a destructive and irreversible process. Yeong and Torquato [1] introduced a stochastic optimization technique that enables the generation of a model of the material with the use of correlation functions. Spatial correlation functions of each of the phases within the heterogeneous structure are collected computationally from a two-dimensional micrograph representing a slice of a solid oxide fuel cell. The assumption is that two-dimensional images contain key structural information representative of the associated full three-dimensional microstructure. The collected spatial correlation functions, a combination of one-point and two-point correlation functions, are then output as a statistical representation of the material. In the reconstruction process, the characteristic two-point correlation functions are fed through a series of computational modeling codes and software to generate a three-dimensional visual model that is statistically similar to the original two-dimensional micrograph. Furthermore, the parameters governing the temperature cooling stages and the number of pixel exchanges per temperature stage are varied to observe which parameter has a greater impact on the reconstruction results. Stochastic optimization techniques that produce three-dimensional visual models from two-dimensional micrographs are therefore a statistically reliable route to understanding heterogeneous materials.
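The reconstruction loop itself follows the Yeong-Torquato simulated annealing scheme: trial pixel exchanges between phases are accepted or rejected based on how they change the match to a target correlation function. The sketch below simplifies the correlation estimate to the row direction and cools the temperature after every exchange; both are illustrative choices, not the cooling stages and exchange counts studied here.

```python
# Hypothetical sketch of a Yeong-Torquato style annealing reconstruction:
# pixel exchanges that worsen the match to a target two-point correlation
# function are accepted with a Boltzmann probability, otherwise always.
import numpy as np

rng = np.random.default_rng(0)

def s2_rows(img, r_max=20):
    """Two-point correlation along image rows for separations 0..r_max-1."""
    return np.array([np.mean(img * img) if r == 0 else np.mean(img[:, :-r] * img[:, r:])
                     for r in range(r_max)])

def reconstruct(target_s2, shape=(64, 64), vf=0.4, steps=20000, t0=1e-4, cool=0.9999):
    img = (rng.random(shape) < vf).astype(float)
    energy, temp = np.sum((s2_rows(img) - target_s2) ** 2), t0
    for _ in range(steps):
        ones, zeros = np.argwhere(img == 1), np.argwhere(img == 0)
        a, b = ones[rng.integers(len(ones))], zeros[rng.integers(len(zeros))]
        img[tuple(a)], img[tuple(b)] = 0.0, 1.0               # trial pixel exchange
        new_energy = np.sum((s2_rows(img) - target_s2) ** 2)
        if new_energy > energy and rng.random() >= np.exp((energy - new_energy) / temp):
            img[tuple(a)], img[tuple(b)] = 1.0, 0.0           # reject: undo the exchange
        else:
            energy = new_energy                               # accept the exchange
        temp *= cool                                          # simple cooling schedule
    return img
```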
Date Created
2016-05
Agent

Large-Scale Rapid Prototyping Utilizing Adaptive Slicing Techniques

Description
A method has been developed that employs both procedural and optimization algorithms to adaptively slice CAD models for large-scale additive manufacturing (AM) applications. AM, the process of joining material layer by layer to create parts based on 3D model data, has been shown to be an effective method for quickly producing parts of high geometric complexity in small quantities. 3D printing, a popular and successful implementation of this method, is well suited to creating small-scale parts that require a fine layer resolution. However, it starts to become impractical for large-scale objects due to build volume and print speed limitations. The proposed layered manufacturing technique builds up models from layers of much thicker sheets of material that can be cut on three-axis CNC machines and assembled manually. Adaptive slicing techniques were utilized to vary layer thickness based on surface complexity to minimize both the cost and error of the layered model. This was formulated as a multi-objective optimization problem in which the number of layers used represented the cost and the geometric difference between the sliced model and the CAD model defined the error. This problem was approached with two different methods, one of which was a procedural process of placing layers from a set of discrete thicknesses based on the Boolean exclusive OR (XOR) area difference between adjacent layers. The other method implemented an optimization solver to calculate the precise thickness of each layer to minimize the overall volumetric XOR difference between the sliced and original models. Both methods produced results that help validate the efficiency and practicality of the proposed layered manufacturing technique over existing AM technologies for large-scale applications.
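The procedural XOR-based layer placement can be illustrated with a short sketch. Here shapely polygons stand in for layer cross-sections; the slicing helper, available sheet thicknesses, and tolerance are hypothetical placeholders rather than the implemented pipeline.

```python
# Hypothetical sketch of procedural thickness selection: a thicker sheet is
# accepted only while the XOR (symmetric-difference) area between the current
# layer's cross-section and the candidate layer's cross-section stays within
# a tolerance. All names and numeric values are illustrative placeholders.
from shapely.geometry import Polygon

SHEET_THICKNESSES = [6.0, 12.0, 25.0]   # assumed available stock, in mm
XOR_TOLERANCE_MM2 = 150.0               # assumed acceptable per-layer error

def choose_thickness(slice_at, z):
    """slice_at(z) -> Polygon cross-section of the CAD model at height z."""
    base = slice_at(z)
    chosen = SHEET_THICKNESSES[0]
    for t in SHEET_THICKNESSES[1:]:
        xor_area = base.symmetric_difference(slice_at(z + t)).area
        if xor_area <= XOR_TOLERANCE_MM2:
            chosen = t                   # thicker sheet still within tolerance
        else:
            break
    return chosen

# Usage with a toy model whose cross-section shrinks with height:
toy = lambda z: Polygon([(0, 0), (100 - z, 0), (100 - z, 50), (0, 50)])
print(choose_thickness(toy, z=10.0))
```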
Date Created
2016-05
Agent