Computational Challenges in Non-parametric Prediction of Bradycardia in Preterm Infants

Description

Infants born before 37 weeks of pregnancy are considered preterm. Preterm infants typically have to be monitored closely, since they are highly susceptible to health problems such as hypoxemia (low blood oxygen level), apnea, respiratory issues, cardiac problems, and neurological problems, as well as an increased chance of long-term health issues such as cerebral palsy, asthma, and sudden infant death syndrome. One of the leading health complications in preterm infants is bradycardia, defined as a slower than expected heart rate, generally below 60 beats per minute. Bradycardia is often accompanied by low oxygen levels and can cause additional long-term health problems in the premature infant. The implementation of a non-parametric method to predict the onset of bradycardia is presented. This method assumes no prior knowledge of the data and uses kernel density estimation to predict the future onset of bradycardia events. The data is preprocessed and then analyzed to detect the peaks in the ECG signals, after which different kernels are used to estimate the shared underlying distribution of the data. The performance of the algorithm is evaluated using various metrics, and the computational challenges and methods to overcome them are also discussed.
It is observed that the performance of the algorithm with regard to the kernels used is consistent with the theoretical performance of each kernel as presented in previous work. The theoretical approach has also been automated in this work, and the various implementation challenges have been addressed.
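
As an illustration of the core estimation step, the sketch below fits a Gaussian kernel density to the inter-beat (R-R) intervals obtained from detected ECG peaks; the helper names and sampling rate are illustrative assumptions, while the below-60-beats-per-minute event definition follows the abstract.

# Minimal sketch of kernel density estimation over inter-beat (R-R)
# intervals, assuming peaks are first detected in the ECG signal.
# Names and the sampling rate are illustrative, not the thesis code.
import numpy as np
from scipy.signal import find_peaks
from scipy.stats import gaussian_kde

def rr_density(ecg, fs):
    """Estimate the density of R-R intervals (seconds) from raw ECG."""
    peaks, _ = find_peaks(ecg, distance=int(0.25 * fs))  # crude QRS peaks
    rr = np.diff(peaks) / fs                             # inter-beat intervals
    return gaussian_kde(rr), rr

# Probability that the heart rate falls below 60 bpm, i.e. R-R > 1 s:
# kde, rr = rr_density(ecg, fs=250)
# p_brady = kde.integrate_box_1d(1.0, np.inf)
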
Date Created
2020
Agent

Robust Target Detection Methods: Performance Analysis and Experimental Validation

Description

Constant false alarm rate (CFAR) detection is one of the essential algorithms in a RADAR detection system. It allows the RADAR system to dynamically set thresholds based on the data power level to distinguish targets from interfering noise and clutter.

To better assess the performance of CFAR approaches, three clutter models, Gamma, Weibull, and Log-normal, are introduced to evaluate the detection capability of each CFAR algorithm.

The order statistic CFAR approach outperforms the other conventional CFAR methods, especially in challenging clutter environments. However, it comes at a high power cost because of repeated sorting.
In an automotive RADAR system, the computational complexity of the algorithms is critical because the system operates in real time. The algorithms must therefore be fast and efficient to keep power consumption and processing time low.

Reduced-complexity implementations of the cell-averaging and order statistic CFAR detectors were explored, lowering both their big-O complexity and their processing time.
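
As a concrete illustration, a minimal sketch of the cell-averaging variant over a one-dimensional power profile follows; the window sizes and threshold scaling are illustrative assumptions, and a comment notes where the order statistic variant would differ.

# Minimal cell-averaging CFAR sketch over a 1-D power profile.
# Window sizes and threshold scaling are illustrative assumptions.
import numpy as np

def ca_cfar(power, num_train=16, num_guard=2, pfa=1e-3):
    n = len(power)
    alpha = num_train * (pfa ** (-1.0 / num_train) - 1)  # CA-CFAR scaling
    detections = np.zeros(n, dtype=bool)
    half = num_train // 2 + num_guard
    for i in range(half, n - half):
        # training cells on both sides, excluding guard cells and the CUT
        left = power[i - half : i - num_guard]
        right = power[i + num_guard + 1 : i + half + 1]
        noise = np.mean(np.concatenate((left, right)))
        detections[i] = power[i] > alpha * noise
    return detections

# An order statistic (OS) CFAR would instead sort the training cells and
# compare the cell under test against the k-th order statistic, which is
# the repeated sorting responsible for the power cost noted above.
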
Date Created
2020
Agent

Electroencephalographic Signal Source Estimation Using Power Dissipation and Interface Surface Charge

Description

The inverse problem in electroencephalography (EEG) is the determination of the form and location of neural activity associated with EEG recordings. This determination is of interest in evoked potential experiments, where the activity is elicited by an external stimulus. This work investigates three aspects of this problem: the use of forward methods in its solution, the elimination of artifacts that complicate the accurate determination of sources, and the construction of physical models that capture the electrical properties of the human head.

Results from this work aim to increase the accuracy and performance of the inverse solution process.

The inverse problem can be approached by constructing forward solutions in which, for a known source, the scalp potentials are determined. This work demonstrates that the use of two variables, the dissipated power and the accumulated charge at interfaces, leads to a new solution method for the forward problem. The accumulated charge satisfies a boundary integral equation. Consideration of dissipated power determines bounds on the range of eigenvalues of the integral operators that appear in this formulation. The new method uses this eigenvalue structure to regularize singular integral operators, thus allowing unambiguous solutions to the forward problem.
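
For reference, one standard charge-based formulation of this kind (given here as textbook background, not as the thesis's exact equations) writes the accumulated surface charge density \mu on an interface with inner and outer conductivities \sigma^- and \sigma^+ as a Fredholm integral equation of the second kind:

\mu(\mathbf{r}) = 2\epsilon_0\,\beta(\mathbf{r})\left[\mathbf{E}_p(\mathbf{r})\cdot\hat{\mathbf{n}}(\mathbf{r}) + \frac{1}{4\pi\epsilon_0}\,\mathrm{p.v.}\!\oint_S \mu(\mathbf{r}')\,\frac{(\mathbf{r}-\mathbf{r}')\cdot\hat{\mathbf{n}}(\mathbf{r})}{|\mathbf{r}-\mathbf{r}'|^{3}}\,dS'\right], \qquad \beta(\mathbf{r}) = \frac{\sigma^- - \sigma^+}{\sigma^- + \sigma^+},

where \mathbf{E}_p is the primary field of the neural source, \hat{\mathbf{n}} is the outward normal, and the integral is taken in the principal-value sense; eigenvalue bounds of the kind described above apply to the integral operator on the right-hand side.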

A major problem in the estimation of properties of neural sources is the presence of artifacts that corrupt EEG recordings. A method is proposed for the determination of inverse solutions that integrates sequential Bayesian estimation with probabilistic data association in order to suppress artifacts before estimating neural activity. This method improves the tracking of neural activity in a dynamic setting in the presence of artifacts.

Solution of the inverse problem requires the use of models of the human head. The electrical properties of biological tissues are best described by frequency dependent complex conductivities. Head models in EEG analysis, however, usually consider head regions as having only constant real conductivities. This work presents a model for tissues as composed of confined electrolytes that predicts complex conductivities for macroscopic measurements. These results indicate ways in which EEG models can be improved.
Date Created
2020
Agent

Theoretical Receiver Operating Characteristics of Two-Stage Change Detector for Synthetic Aperture Radar Images

Description

Detecting areas of change between two synthetic aperture radar (SAR) images of the same scene, taken at different times, is generally performed using two approaches. Non-coherent change detection is performed using the sample variance ratio detector and displays good performance in detecting areas of significant change. Coherent change detection can be implemented using the classical coherence estimator, which does better at detecting subtle changes, such as vehicle tracks. A two-stage detector was proposed by Cha et al., in which the sample variance ratio forms the first stage and the second stage comprises Berger's alternative coherence estimator.

A modification to the first stage of the two-stage detector is proposed in this study, which significantly simplifies the analysis of the detector. Cha et al. used a heuristic approach to determine the thresholds for the two-stage detector. In this study, the probability density function of the modified two-stage detector is derived, and, using this probability density function, an approach for determining the thresholds for this two-dimensional detection problem is proposed. The proposed method of threshold selection reveals an interesting behavior of the two-stage detector. With the help of theoretical receiver operating characteristic analysis, it is shown that the two-stage detector gives better detection performance than the other three detectors. However, Berger's estimator proves to be a simpler alternative, since it gives only slightly poorer performance than the two-stage detector. All four detectors have also been implemented on a SAR data set, and it is shown that the two-stage detector and Berger's estimator generate images in which the areas showing change are easily visible.
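
To make the detection statistics concrete, the sketch below states the two classical per-pixel quantities in their textbook forms; the modified first stage and Berger's alternative estimator are not reproduced here.

# Textbook per-pixel change statistics for two co-registered complex SAR
# image patches x and y (local windows of N pixels each).
import numpy as np

def variance_ratio(x, y):
    """Non-coherent sample variance (power) ratio over a local window."""
    px, py = np.mean(np.abs(x) ** 2), np.mean(np.abs(y) ** 2)
    return max(px, py) / min(px, py)  # >= 1; large values indicate change

def sample_coherence(x, y):
    """Classical coherence estimator; low values indicate subtle change."""
    num = np.abs(np.sum(x * np.conj(y)))
    den = np.sqrt(np.sum(np.abs(x) ** 2) * np.sum(np.abs(y) ** 2))
    return num / den
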
Date Created
2020
Agent

Predicting and Controlling Complex Dynamical Systems

Description

Complex dynamical systems are systems with many interacting components that usually have nonlinear dynamics. Such systems arise in a wide range of disciplines, including the physical, biological, and social sciences. Due to the large number of interacting components, they tend to possess very high dimensionality. Additionally, due to their intrinsic nonlinear dynamics, they exhibit tremendously rich behavior, such as bifurcations, synchronization, chaos, and solitons. Developing methods to predict and control such systems has always been a challenge and an active research area.

My research concentrates mainly on predicting and controlling tipping points (saddle-node bifurcations) in complex ecological systems and on comparing linear and nonlinear control methods in complex dynamical systems. Moreover, I use advanced artificial neural networks to predict chaotic spatiotemporal dynamical systems. Complex networked systems can exhibit a tipping point (a "point of no return") at which a total collapse occurs. Using complex mutualistic networks in ecology as a prototype class of systems, I carry out a dimension reduction process to arrive at an effective two-dimensional (2D) system with the two dynamical variables corresponding to the average pollinator and plant abundances, respectively. Using 59 empirical mutualistic networks extracted from real data, I demonstrate that the 2D model can accurately predict the occurrence of a tipping point even in the presence of stochastic disturbances. I also develop an ecologically feasible strategy to manage/control the tipping point by maintaining the abundance of a particular pollinator species at a constant level, which essentially removes the hysteresis associated with tipping points.

In addition, I find that the nodal importance ranking for nonlinear and linear control exhibits opposite trends: for the former, large-degree nodes are more important, while for the latter the importance scale is tilted towards the small-degree nodes, strongly suggesting the irrelevance of linear controllability to these systems. Focusing on reservoir computing, a class of recurrent neural networks that has recently been exploited for model-free prediction of nonlinear dynamical systems, I uncover a surprising phenomenon: the emergence of an interval in the spectral radius of the neural network in which the prediction error is minimized.
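
As a minimal illustration of the reservoir computing setup, the sketch below builds an echo state network whose reservoir matrix is rescaled to a chosen spectral radius, the quantity whose error-minimizing interval is studied; the network sizes, sparsity, and leak-free update rule are illustrative assumptions.

# Minimal echo-state-network sketch; the reservoir matrix is rescaled to
# a chosen spectral radius rho, the quantity scanned in the study above.
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n=500, rho=1.1, density=0.02):
    W = rng.standard_normal((n, n)) * (rng.random((n, n)) < density)
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius
    return W

def run_reservoir(W, W_in, u):
    """Drive the reservoir with input sequence u (shape T x m); collect
    states for ridge-regression training of the readout (not shown)."""
    r = np.zeros(W.shape[0])
    states = []
    for ut in u:
        r = np.tanh(W @ r + W_in @ ut)
        states.append(r.copy())
    return np.array(states)

# Usage sketch: W = make_reservoir(rho=1.1)
#               W_in = 0.1 * rng.standard_normal((500, 1))
#               states = run_reservoir(W, W_in, u)   # u: (T, 1) inputs
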
Date Created
2020
Agent

Data-Efficient Reinforcement Learning Control of Robotic Lower-Limb Prosthesis With Human in the Loop

Description

Robotic lower limb prostheses provide new opportunities to help transfemoral amputees regain mobility. However, their application is impeded by the fact that the impedance control parameters need to be tuned and optimized manually by prosthetists for each individual user and task environment. Reinforcement learning (RL) is capable of automatically learning from interaction with the environment, which makes it a natural candidate to replace human prosthetists in customizing the control parameters. However, neither traditional RL approaches nor the popular deep RL approaches are readily suitable for learning with a limited number of samples or with samples that vary greatly. This dissertation aims to explore new RL-based adaptive solutions that are data-efficient for controlling robotic prostheses.

This dissertation begins by proposing a new flexible policy iteration (FPI) framework. To improve sample efficiency, FPI can utilize either an on-policy or an off-policy learning strategy, can learn from either online or offline data, and can even adopt existing knowledge from an external critic. Approximate convergence to Bellman optimal solutions is guaranteed under mild conditions. Simulation studies validated that FPI was data-efficient compared to several established RL methods. Furthermore, a simplified version of FPI was implemented to learn from offline data, and the learned policy was then successfully tested for tuning the control parameters online on a human subject.
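
For orientation, a minimal tabular policy iteration skeleton of the classical kind that FPI generalizes is sketched below; this is the standard textbook algorithm, not the FPI method itself, and the transition-model format is an illustrative assumption.

# Generic tabular policy iteration (the classical scheme that FPI
# generalizes); P[s][a] is a list of (prob, next_state, reward) triples.
import numpy as np

def policy_iteration(P, n_states, n_actions, gamma=0.9, tol=1e-8):
    policy = np.zeros(n_states, dtype=int)
    V = np.zeros(n_states)
    while True:
        # policy evaluation: iterate the Bellman expectation backup
        while True:
            delta = 0.0
            for s in range(n_states):
                v = sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][policy[s]])
                delta = max(delta, abs(v - V[s]))
                V[s] = v
            if delta < tol:
                break
        # policy improvement: act greedily with respect to V
        stable = True
        for s in range(n_states):
            q = [sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                 for a in range(n_actions)]
            best = int(np.argmax(q))
            if best != policy[s]:
                policy[s], stable = best, False
        if stable:
            return policy, V
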

Next, the dissertation discusses RL control with information transfer (RL-IT), or knowledge-guided RL (KG-RL), which is motivated by the benefit of transferring knowledge acquired from one subject to another. To explore its feasibility, knowledge was extracted from data measurements of able-bodied (AB) subjects and transferred to guide Q-learning control for an amputee in OpenSim simulations. This result again demonstrated that data and time efficiency were improved by using previous knowledge.

While the present study is new and promising, many open questions remain to be addressed in future research. To account for human adaptation, the learning control objective function may be designed to incorporate human-prosthesis performance feedback such as symmetry, user comfort level and satisfaction, and user energy consumption. To make RL-based control parameter tuning practical in real life, it should be further developed and tested in different use environments, such as from level-ground walking to stair ascending or descending, and from walking to running.
Date Created
2020
Agent

Advanced Processing of Multispectral Satellite Data for Detecting and Learning Knowledge-based Features of Planetary Surface Anomalies

Description

The marked increase in the inflow of remotely sensed data from satellites has transformed the Earth and Space Sciences into a data-rich domain, creating a rich repository for domain experts to analyze. These observations shed light on a diverse array of disciplines, ranging from monitoring Earth system components to planetary exploration, by highlighting the expected trends and patterns in the data. However, the complexity of these patterns, from local to global scales, coupled with the volume of this ever-growing repository, necessitates advanced techniques to sequentially process the datasets and determine the underlying trends. Such techniques essentially model the observations to learn characteristic parameters of the data-generating processes and highlight anomalous planetary surface observations to help domain scientists make informed decisions. The primary challenge in defining such models arises from the spatio-temporal variability of these processes.

This dissertation introduces models of multispectral satellite observations that sequentially learn the expected trend from the data by extracting salient features of planetary surface observations. The main objectives are to learn the temporal variability for modeling dynamic processes and to build representations of features of interest that are learned over the lifespan of an instrument. The estimated model parameters are then exploited to detect anomalies due to changes in land surface reflectance as well as novelties in planetary surface landforms. A model-switching approach is proposed that allows the selection of the best-matched representation given the observations and is designed to account for the rate of time variability of the land surface. The estimated parameters are exploited to design a change detector, analyze the separability of change events, and form an expert-guided representation of planetary landforms for prioritizing the retrieval of scientifically relevant observations in both onboard and post-downlink applications.
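
As a much-simplified illustration of sequentially learning an expected trend and flagging departures from it (the models in the dissertation are spatio-temporal and far richer), consider a per-pixel recursive estimate with a k-sigma anomaly test:

# Simplified per-pixel sketch: recursively track expected reflectance and
# flag strong deviations; illustrative only, not the dissertation's model.
import numpy as np

def sequential_anomalies(x, alpha=0.05, k=4.0):
    """Exponentially weighted mean/variance with a k-sigma anomaly test."""
    mu, var = x[0], 1e-3
    flags = []
    for xt in x[1:]:
        z = (xt - mu) / np.sqrt(var)
        flags.append(abs(z) > k)            # anomalous departure from trend
        mu = (1 - alpha) * mu + alpha * xt  # update expected trend
        var = (1 - alpha) * var + alpha * (xt - mu) ** 2
    return np.array(flags)
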
Date Created
2019
Agent

Algorithm and Hardware Design for High Volume Rate 3-D Medical Ultrasound Imaging

Description

Ultrasound B-mode imaging is an increasingly significant medical imaging modality for clinical applications. Compared to other imaging modalities such as computed tomography (CT) or magnetic resonance imaging (MRI), ultrasound imaging has the advantage of being safe, inexpensive, and portable. While two dimensional (2-D) ultrasound imaging is very popular, three dimensional (3-D) ultrasound imaging provides distinct advantages over its 2-D counterpart by providing volumetric imaging, which leads to more accurate analysis of tumors and cysts. However, the amount of received data at the front end of a 3-D system is extremely large, making it impractical for power-constrained portable systems.

In this thesis, algorithm and hardware design techniques to support a hand-held 3-D ultrasound imaging system are proposed. Synthetic aperture sequential beamforming (SASB) is chosen since its computations can be split into two stages, where the output generated by Stage 1 is significantly smaller than the input. This characteristic enables Stage 1 to be performed in the front end while the Stage 2 processing can be sent out to be performed elsewhere.
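
For background, the sketch below shows a generic delay-and-sum beamformer for a single scan line, illustrating the Stage 1 style of data reduction in which many receive channels collapse into one beamformed line; the actual SASB first stage uses a fixed-focus beamformer, and the geometry and names here are illustrative assumptions.

# Generic delay-and-sum sketch for one scan line at lateral position x = 0,
# illustrating how (n_elements x n_samples) channel data reduce to one line.
import numpy as np

def das_line(rf, elem_x, fs, c=1540.0):
    """rf: (n_elements, n_samples) received echoes; elem_x: element
    x-positions (m). Returns one beamformed line versus depth."""
    n_elem, n_samp = rf.shape
    line = np.zeros(n_samp)
    for i in range(n_samp):
        z = i * c / (2 * fs)                              # depth of sample i
        for e in range(n_elem):
            t = (z + np.sqrt(z**2 + elem_x[e]**2)) / c    # two-way path delay
            j = int(round(t * fs))
            if j < n_samp:
                line[i] += rf[e, j]                       # apodization omitted
    return line
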

The contributions of this thesis are as follows. First, 2-D SASB is extended to 3-D. Techniques to increase the volume rate of 3-D SASB through a new multi-line firing scheme and the use of a linear chirp as the excitation waveform are presented. A new sparse array design is proposed that not only reduces the number of active transducers but also avoids the imaging degradation caused by grating lobes. A combination of these techniques increases the volume rate of 3-D SASB by 4× without introducing extra computations at the front end.

Next, algorithmic techniques to further reduce the Stage 1 computations in the front end are presented. These include reducing the number of distinct apodization coefficients and operating on narrow-bit-width fixed-point data. A 3-D die-stacked architecture is designed for the front end. This highly parallel architecture enables the signals received by 961 active transducers to be digitized, routed by a network-on-chip, and processed in parallel. The processed data are accumulated through a bus-based structure. This architecture is synthesized using the TSMC 28 nm technology node, and the estimated power consumption of the front end is less than 2 W.

Finally, the Stage 2 computations are mapped onto a reconfigurable multi-core architecture, TRANSFORMER, which supports different types of on-chip memory banks and run-time reconfigurable connections between general processing elements and memory banks. The matched filtering step and the beamforming step in Stage 2 are mapped onto TRANSFORMER with different memory configurations. gem5 simulations show that the private cache mode achieves shorter execution time and higher computation efficiency than the other cache modes. The overall execution time for Stage 2 is 14.73 ms. The average power consumption and the average giga-operations-per-second per watt in the 14 nm technology node are 0.14 W and 103.84, respectively.
Date Created
2019
Agent

Bayesian Framework for Sparse Vector Recovery and Parameter Bounds with Application to Compressive Sensing

Description

Signals compressed using classical compression methods can be recovered by brute force (i.e., searching component-wise for non-zero entries); however, such sparse solutions require combinatorial searches that are computationally expensive. In this thesis, two Bayesian approaches are instead considered to recover a sparse vector from underdetermined noisy measurements. The first is constructed using a Bernoulli-Gaussian (BG) prior distribution and is assumed to be the true generative model. The second is constructed using a Gamma-Normal (GN) prior distribution and is therefore a different (i.e., misspecified) model. To estimate the posterior distribution in the correctly specified scenario, an algorithm based on generalized approximate message passing (GAMP) is constructed, while an algorithm based on sparse Bayesian learning (SBL) is used for the misspecified scenario. Recovering a sparse signal within a Bayesian framework is one class of algorithms for solving the sparse problem; all classes of algorithms aim to avoid the high computational cost of combinatorial searches. Compressive sensing (CS) is the widely used term for optimizing the sparse problem and its applications, such as magnetic resonance imaging (MRI), image acquisition in radar imaging, and facial recognition. In the CS literature, the target vector can be recovered either by optimizing an objective function using point estimation or by recovering a distribution of the sparse vector using Bayesian estimation. Although the Bayesian framework provides an extra degree of freedom to assume a distribution that is directly applicable to the problem of interest, it is hard to find a theoretical guarantee of convergence. This limitation has shifted some research toward non-Bayesian frameworks. This thesis tries to close this gap by proposing a Bayesian framework with a suggested theoretical bound for the assumed, not necessarily correct, distribution. In the simulation study, a general lower Bayesian Cramér-Rao bound (BCRB) is derived along with the misspecified Bayesian Cramér-Rao bound (MBCRB) for the GN model. Both bounds are validated using the mean square error (MSE) performance of the aforementioned algorithms. A quantification of the performance in terms of gains versus losses is also introduced as one main finding of this report.
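
For concreteness, the two prior families in their standard textbook parameterizations (the thesis's exact parameterizations may differ) are the Bernoulli-Gaussian spike-and-slab density

p_{\mathrm{BG}}(x_i) = (1-\lambda)\,\delta(x_i) + \lambda\,\mathcal{N}(x_i;\,0,\,\sigma_x^2),

where \lambda is the probability that entry x_i is active, and the hierarchical Gamma-Normal prior used in SBL-style models,

x_i \mid \alpha_i \sim \mathcal{N}(0,\,\alpha_i^{-1}), \qquad \alpha_i \sim \mathrm{Gamma}(a,\,b),

whose marginal over x_i is a heavy-tailed Student-t type density that promotes sparsity.
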
Date Created
2019
Agent

Driver Assistance System and Feedback for Hybrid Electric Vehicles Using Sensor Fusion

Description

Transportation plays a significant role in every human's life. Numerous factors, such as cost of living, available amenities, and work style, play a vital role in determining the amount of time spent traveling. Such factors, among others, led in part to an increased need for private transportation and, consequently, to an increase in the purchase of private cars. Road safety has also been affected by factors such as driving under the influence (DUI) and driver distraction due to the increased use of mobile devices while driving. These factors have led to an increasing need for an Advanced Driver Assistance System (ADAS) to help the driver stay aware of the environment and to improve road safety.

EcoCAR3 is one of the Advanced Vehicle Technology Competitions, sponsored by the United States Department of Energy (DoE) and managed by Argonne National Laboratory in partnership with the North American automotive industry. Students are challenged beyond the traditional classroom environment in these competitions, where they redesign a donated production vehicle to improve energy efficiency and to meet emission standards while maintaining the features that are attractive to the customer, including but not limited to performance, consumer acceptability, safety, and cost.

This thesis presents a driver assistance system interface that was implemented as part of EcoCAR3, including the adopted sensors, hardware and software components, system implementation, validation, and testing. The implemented driver assistance system uses a combination of range measurement sensors to determine the distance, relative location, and relative velocity of obstacles and surrounding objects, together with a computer vision algorithm for obstacle detection and classification. The sensor system and vision system were tested individually and then combined within the overall system. A visual and audio feedback system was also designed and implemented to provide timely feedback to the driver in an attempt to enhance situational awareness and improve safety.
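
As a toy sketch of such a fusion step, the following associates range-sensor tracks with camera detections by bearing and derives a time-to-collision for the feedback system; all field names and thresholds are illustrative assumptions rather than the competition implementation.

# Toy fusion sketch: match range-sensor tracks to camera detections by
# bearing, then derive a time-to-collision (TTC) for driver feedback.
# All fields and thresholds are illustrative assumptions.
import math

def fuse(radar_tracks, camera_objs, max_bearing_err=0.05):
    """radar_tracks: dicts with 'range', 'range_rate', 'bearing' (rad);
    camera_objs: dicts with 'bearing' (rad) and 'label'."""
    fused = []
    for t in radar_tracks:
        best = min(camera_objs,
                   key=lambda o: abs(o['bearing'] - t['bearing']),
                   default=None)
        if best and abs(best['bearing'] - t['bearing']) < max_bearing_err:
            # negative range rate means the object is closing
            ttc = -t['range'] / t['range_rate'] if t['range_rate'] < 0 else math.inf
            fused.append({'label': best['label'], 'range': t['range'], 'ttc': ttc})
    return fused

# Feedback policy (illustrative): warn the driver whenever any fused
# object has a TTC below, say, 2 seconds.
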

Since the driver assistance system was designed and developed as part of a DoE-sponsored competition, the system needed to satisfy competition requirements and rules. This work attempted to optimize the system in terms of performance, robustness, and cost while satisfying these constraints.
Date Created
2019
Agent