EEG-Based Estimation of Human Reaction Time Corresponding to Change of Visual Event.

Description
The human brain controls a person's actions and reactions. The main objective of this study is to quantify reaction time to a change in a visual event and to uncover the relationship between response time and the corresponding brain activity. Which parts of the brain are responsible for the reaction time is also of interest. Because electroencephalogram (EEG) signals track changes in brain function over time, EEG signals from different locations on the scalp are used as indicators of brain activity. Since each channel records from a different part of the brain, identifying the most relevant channels points to the brain regions involved. In this study, response time is estimated using EEG signal features from the time, frequency, and time-frequency domains. Regression-based estimation on the full dataset yields a root mean square error (RMSE) of 99.5 milliseconds and a correlation of 0.57, while adding non-EEG features to the existing features gives an RMSE of 101.7 ms and a correlation of 0.58. The same analysis on a custom dataset yields an RMSE of 135.7 milliseconds and a correlation of 0.69. Classification-based estimation provides 79% and 72% accuracy for binary and 3-class classification, respectively, and classification of the extremes (high versus low) reaches 95% accuracy. Combining recursive feature elimination, tree-based feature importance, and mutual information, the most important channels and features are isolated based on the best result. Because human response time does not depend solely on brain activity, additional information about the subject is required to further improve reaction time estimation.
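As a rough illustration of the feature-ranking combination described above, the following Python sketch merges recursive feature elimination, tree-based importance, and mutual information rankings. The synthetic trial-by-feature matrix, the ridge and random-forest estimators, and the simple rank-averaging rule are illustrative assumptions, not the thesis's exact procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE, mutual_info_regression
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))                                      # EEG features per trial (synthetic)
y = X[:, :5] @ rng.normal(size=5) + 0.3 * rng.normal(size=300)      # reaction times (arbitrary units)

# Three independent feature rankings (1 = most important).
rfe_rank = RFE(Ridge(), n_features_to_select=1).fit(X, y).ranking_
tree_rank = np.argsort(np.argsort(-RandomForestRegressor(
    n_estimators=200, random_state=0).fit(X, y).feature_importances_)) + 1
mi_rank = np.argsort(np.argsort(-mutual_info_regression(X, y, random_state=0))) + 1

# Merge the rankings by simple averaging and keep the best-ranked features.
combined = (rfe_rank + tree_rank + mi_rank) / 3.0
top_features = np.argsort(combined)[:10]     # indices of the ten best-ranked features/channels
```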
Date Created
2019
Agent

Design of Signal Processing Algorithms and Development of a Real-Time System for Mapping Audio to Haptics for Cochlear Implant Users

Description
In the field of electronic music, haptic feedback is a crucial feature of digital musical instruments (DMIs) because it gives the musician a more immersive experience. This feedback might come in the form of a wearable haptic device that vibrates in response to music. Such advancements in the electronic music field are applicable to the field of speech and hearing. More specifically, wearable haptic feedback devices can enhance the musical listening experience for people who use cochlear implant (CI) devices.
This Honors Thesis is a continuation of Prof. Lauren Hayes's and Dr. Xin Luo's research initiative, Haptic Electronic Audio Research into Musical Experience (HEAR-ME), which investigates how to enhance the musical listening experience for CI users using a wearable haptic system. The goals of this Honors Thesis are to adapt Prof. Hayes's system code from the Max visual programming language into the C++ object-oriented programming language and to study the results of the developed C++ code. This adaptation allows the system to operate in real time and independently of a computer.
Towards these goals, two signal processing algorithms were developed and programmed in C++. The first algorithm is a thresholding method, which outputs a pulse of a predefined width when the input signal falls below some threshold in amplitude. The second algorithm is a root-mean-square (RMS) method, which outputs a pulse-width modulation signal with a fixed period and with a duty cycle dependent on the RMS of the input signal. The thresholding method was found to work best with speech, and the RMS method was found to work best with music. Future work entails the design of adaptive signal processing algorithms to allow the system to work more effectively on speech in a noisy environment and to emphasize a variety of elements in music.
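For concreteness, here is a minimal Python sketch of the two algorithms described above (the thesis implementation is in C++). The pulse amplitude of 1.0, the frame length, and the assumption that the input is normalized to [-1, 1] are illustrative choices, not taken from the thesis.

```python
import numpy as np

def threshold_pulses(x, fs, threshold, pulse_width_s):
    """Emit a fixed-width pulse each time the signal amplitude falls below the threshold."""
    x = np.asarray(x, dtype=float)
    pulse_len = int(pulse_width_s * fs)
    out = np.zeros_like(x)
    i = 0
    while i < len(x):
        if np.abs(x[i]) < threshold:
            out[i:i + pulse_len] = 1.0      # fixed-width pulse
            i += pulse_len                  # skip ahead so pulses do not overlap
        else:
            i += 1
    return out

def rms_pwm(x, fs, frame_s, pwm_period_s):
    """Per frame, compute the RMS and emit PWM cycles whose duty cycle tracks the RMS."""
    x = np.asarray(x, dtype=float)
    frame_len = int(frame_s * fs)
    period_len = int(pwm_period_s * fs)
    out = []
    for start in range(0, len(x) - frame_len + 1, frame_len):
        frame = x[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))
        duty = np.clip(rms, 0.0, 1.0)       # assumes the signal is normalized to [-1, 1]
        for _ in range(max(frame_len // period_len, 1)):
            high = int(duty * period_len)
            out.extend([1.0] * high + [0.0] * (period_len - high))
    return np.asarray(out)
```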
Date Created
2019-12
Agent

Bayesian nonparametric modeling and inference for multiple object tracking

Description
The problem of multiple object tracking seeks to jointly estimate the time-varying cardinality and trajectory of each object. Tracking multiple objects involves numerous challenges, including a time-varying number of measurements, varying constraints, and changing environmental conditions. In this thesis, the proposed statistical methods integrate physics-based models with Bayesian nonparametric methods to address the main challenges of the tracking problem. In particular, Bayesian nonparametric methods are exploited to efficiently and robustly infer object identity and learn the time-dependent cardinality; together with Bayesian inference methods, they are also used to associate measurements with objects and estimate object trajectories. These methods differ fundamentally from existing approaches, which are mainly based on random finite set theory.

The first contribution proposes dependent nonparametric models such as the dependent Dirichlet process and the dependent Pitman-Yor process to capture the inherent time-dependency in the problem at hand. These processes are used as priors for object state distributions to learn dependent information between previous and current time steps. Markov chain Monte Carlo sampling methods exploit the learned information to sample from posterior distributions and update the estimated object parameters.

The second contribution proposes a novel, robust, and fast nonparametric approach based on a diffusion process over infinite random trees to infer information on object cardinality and trajectory. This method follows the hierarchy induced by objects entering and leaving a scene and the time-dependency between unknown object parameters. Markov chain Monte Carlo sampling methods integrate the prior distributions over the infinite random trees with time-dependent diffusion processes to update object states.

The third contribution develops the use of hierarchical models to form a prior for statistically dependent measurements in a single-object tracking setup. Dependency among the sensor measurements provides extra information, which is incorporated to achieve optimal tracking performance. The hierarchical Dirichlet process used as a prior provides the required flexibility for inference, and a Bayesian tracker is integrated with this prior to accurately estimate the object trajectory.

The fourth contribution proposes an approach to model both the multiple dependent objects and multiple dependent measurements. This approach integrates the dependent Dirichlet process modeling over the dependent object with the hierarchical Dirichlet process modeling of the measurements to fully capture the dependency among both object and measurements. Bayesian nonparametric models can successfully associate each measurement to the corresponding object and exploit dependency among them to more accurately infer the trajectory of objects. Markov chain Monte Carlo methods amalgamate the dependent Dirichlet process with the hierarchical Dirichlet process to infer the object identity and object cardinality.

Simulations are exploited to demonstrate the improvement in multiple object tracking performance when compared to approaches that are developed based on random finite set theory.
Date Created
2019
Agent

Structured disentangling networks for learning deformation invariant latent spaces

Description
Disentangling latent spaces is an important research direction in the interpretability of unsupervised machine learning. Several recent works using deep learning are very effective at producing disentangled representations. However, in the unsupervised setting, there is no way to pre-specify which part of the latent space captures specific factors of variation. While this is generally a hard problem because analytical expressions do not exist to capture most such variations, certain factors, like geometric transforms, can be expressed analytically. Furthermore, in existing frameworks, the disentangled values are not interpretable. The focus of this work is to disentangle these geometric factors of variation (which turn out to be nuisance factors for many applications) from the semantic content of the signal in an interpretable manner, which in turn makes the features more discriminative. Experiments are designed to show the modularity of the approach with respect to other disentangling strategies, as well as on multiple one-dimensional (1D) and two-dimensional (2D) datasets, clearly indicating the efficacy of the proposed approach.
Date Created
2019
Agent

Modeling and Parameter Estimation of Sea Clutter Intensity in Thermal Noise

Description
A critical problem for airborne, shipboard, and land-based radars operating in maritime or littoral environments is the detection, identification, and tracking of targets against backscattering caused by the roughness of the sea surface. Statistical models, such as the compound K-distribution (CKD), have been shown to accurately describe two separate structures of the sea clutter intensity fluctuations. The first structure is the texture, which is associated with long sea waves and exhibits a long temporal decorrelation period. The second structure is the speckle, which accounts for reflections from multiple scatterers and exhibits a short temporal decorrelation period from pulse to pulse. Existing methods for estimating the CKD model parameters do not include the thermal noise power, which is critical for processing real sea clutter. Estimation methods that do include the noise power are either computationally intensive or require very large data records.

This work proposes two new approaches for accurately estimating all three CKD model parameters, including the noise power. The first method iteratively integrates the estimation of the noise power, using one-dimensional nonlinear curve fitting, with the estimation of the shape and scale parameters, using closed-form solutions in terms of the CKD intensity moments. The second method is similar to the first, except that it replaces the integer-order intensity moments with fractional moments, which have been shown to achieve more accurate estimates of the shape parameter. These new methods can be implemented in real time without requiring large data records, and they achieve accurate estimation performance, as demonstrated with simulated and real sea clutter observation datasets. The work also investigates the numerically computed Cramér-Rao lower bound (CRLB) on the variance of the shape parameter estimate using intensity observations in thermal noise with unknown power. Using the CRLB, the asymptotic estimation performance of the new estimators is studied and compared to that of other estimators.
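As a sketch of the general moment-matching structure described above (not the thesis's exact estimator), the following assumes gamma-distributed texture with shape nu and mean mu, exponentially distributed speckle intensity given the texture, and additive thermal noise of unknown power pn. For a candidate noise power, the shape and scale follow in closed form from the first two sample intensity moments, and a one-dimensional search over the noise power matches the third moment.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def shape_scale_given_noise(m1, m2, pn):
    mu = m1 - pn                                   # mean texture power
    c = m2 / 2.0 - 2.0 * mu * pn - pn ** 2         # equals mu^2 * (nu + 1) / nu under the model
    nu = mu ** 2 / (c - mu ** 2)
    return nu, mu

def model_third_moment(nu, mu, pn):
    et2 = mu ** 2 * (nu + 1) / nu                  # E[tau^2] for gamma texture
    et3 = mu ** 3 * (nu + 1) * (nu + 2) / nu ** 2  # E[tau^3]
    return 6.0 * (et3 + 3.0 * et2 * pn + 3.0 * mu * pn ** 2 + pn ** 3)

def estimate_ckd(intensity):
    """Return (shape, scale, noise power) estimated from intensity samples."""
    m1, m2, m3 = (np.mean(np.asarray(intensity, dtype=float) ** k) for k in (1, 2, 3))
    def cost(pn):
        nu, mu = shape_scale_given_noise(m1, m2, pn)
        if not np.isfinite(nu) or nu <= 0 or mu <= 0:
            return np.inf
        return (model_third_moment(nu, mu, pn) - m3) ** 2
    pn = minimize_scalar(cost, bounds=(1e-9, 0.99 * m1), method='bounded').x
    nu, mu = shape_scale_given_noise(m1, m2, pn)
    return nu, mu, pn
```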
Date Created
2019
Agent

Health management and prognostics of complex structures and systems

Description
This dissertation presents the development of structural health monitoring and prognostic health management methodologies for complex structures and systems in the field of mechanical engineering. To overcome challenges historically associated with complex structures and systems, such as complicated sensing mechanisms, noisy information, and large datasets, a hybrid monitoring framework comprising solid mechanics concepts and data mining technologies is developed. In this framework, the solid mechanics simulations provide additional intuition to the data mining techniques, reducing the dependence of accuracy on the training set, while the data mining approaches fuse and interpret information from the targeted system, enabling real-time monitoring with efficient computation.

In the case of structural health monitoring, ultrasonic guided waves are utilized for damage identification and localization in complex composite structures. Signal processing and data mining techniques are integrated into the damage localization framework, and the converted wave modes, which are induced by the thickness variation due to the presence of delamination, are used as damage indicators. This framework has been validated through experiments and has shown sufficient accuracy in locating delamination in X-COR sandwich composites without the need for baseline information. Beyond localizing internal damage, the Gaussian process machine learning technique is integrated with the finite element method as an online-offline prediction model to predict crack propagation with overloads under biaxial loading conditions; this probabilistic prognosis model, with a limited number of training examples, has shown increased accuracy over state-of-the-art techniques in predicting crack retardation behaviors induced by overloads. In the case of system-level management, a monitoring framework built on a multivariate Gaussian model is developed to evaluate the anomalous condition of commercial aircraft. This method has been validated using commercial airline data and has shown high sensitivity to variations in aircraft dynamics and pilot operations. Moreover, the framework was also tested on simulated aircraft faults, and its feasibility for real-time monitoring was demonstrated with sufficient computational efficiency.
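A minimal sketch of the multivariate-Gaussian anomaly monitoring idea: a Gaussian is fit to feature vectors from nominal flight segments and new segments are flagged by Mahalanobis distance. The feature dimension, the percentile threshold, and the synthetic data are illustrative assumptions, not the dissertation's configuration.

```python
import numpy as np

def fit_gaussian(X):
    """Fit mean and (pseudo-)inverse covariance of a multivariate Gaussian to nominal data."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))   # pseudo-inverse guards against singularity
    return mu, cov_inv

def mahalanobis_scores(X, mu, cov_inv):
    d = X - mu
    return np.sqrt(np.einsum('ij,jk,ik->i', d, cov_inv, d))

# Flag any segment whose score exceeds the 99th percentile of the nominal training scores.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 6))          # 6 aircraft/pilot features per segment (synthetic)
mu, cov_inv = fit_gaussian(X_train)
threshold = np.percentile(mahalanobis_scores(X_train, mu, cov_inv), 99)

X_new = rng.normal(size=(10, 6))
anomalous = mahalanobis_scores(X_new, mu, cov_inv) > threshold
```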

This research is expected to serve as a practical addition to the existing literature while possessing the potential to be adopted in realistic engineering applications.
Date Created
2019
Agent

Model Based Automatic and Robust Spike Sorting for Large Volumes of Multi-channel Extracellular Data

Description
Spike sorting is a critical step for single-unit-based analysis of neural activities recorded extracellularly and simultaneously using multi-channel electrodes. When dealing with recordings from very large numbers of neurons, existing methods, which are mostly semiautomatic in nature, become inadequate.

This dissertation aims at automating the spike sorting process. A high performance, automatic and computationally efficient spike detection and clustering system, namely, the M-Sorter2 is presented. The M-Sorter2 employs the modified multiscale correlation of wavelet coefficients (MCWC) for neural spike detection. At the center of the proposed M-Sorter2 are two automatic spike clustering methods. They share a common hierarchical agglomerative modeling (HAM) model search procedure to strategically form a sequence of mixture models, and a new model selection criterion called difference of model evidence (DoME) to automatically determine the number of clusters. The M-Sorter2 employs two methods differing by how they perform clustering to infer model parameters: one uses robust variational Bayes (RVB) and the other uses robust Expectation-Maximization (REM) for Student’s 𝑡-mixture modeling. The M-Sorter2 is thus a significantly improved approach to sorting as an automatic procedure.

M-Sorter2 was evaluated and benchmarked against popular algorithms using simulated, artificial, and real datasets with ground truth that are openly available to researchers. Simulated datasets with known statistical distributions were first used to illustrate how the clustering algorithms, namely REMHAM and RVBHAM, provide robust clustering results under commonly experienced performance-degrading conditions, such as random initialization of parameters, high dimensionality of data, low signal-to-noise ratio (SNR), ambiguous clusters, and asymmetry in cluster sizes. For the artificial dataset from single-channel recordings, the proposed sorter outperformed Wave_Clus, Plexon's Offline Sorter, and Klusta in most of the comparison cases. For the real datasets from multi-channel electrodes, tetrodes, and polytrodes, the proposed sorter outperformed all comparison algorithms in terms of false positive and false negative rates. The software package presented in this dissertation is available for open access.
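The model-search idea can be illustrated with a simplified stand-in: the thesis selects the number of clusters with its DoME criterion over Student's t mixtures, whereas the sketch below scores ordinary Gaussian mixtures with BIC and keeps the best model. The feature construction and candidate range are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_num_clusters(features, k_max=8, seed=0):
    """Fit mixtures with 1..k_max components and keep the one with the lowest BIC."""
    best_k, best_bic, best_model = None, np.inf, None
    for k in range(1, k_max + 1):
        gmm = GaussianMixture(n_components=k, covariance_type='full',
                              n_init=3, random_state=seed).fit(features)
        bic = gmm.bic(features)
        if bic < best_bic:
            best_k, best_bic, best_model = k, bic, gmm
    return best_k, best_model

# features: one row per detected spike (e.g., a few principal components of the waveform)
rng = np.random.default_rng(0)
features = np.vstack([rng.normal(loc=c, scale=0.3, size=(200, 3)) for c in (-2, 0, 2)])
k, model = select_num_clusters(features)
labels = model.predict(features)   # cluster (putative unit) assignment per spike
```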
Date Created
2019
Agent

Structural Health Monitoring: Acoustic Emissions

Description
Non-Destructive Testing (NDT) is integral to preserving the structural health of materials. Techniques that fall under the NDT category are able to evaluate the integrity and condition of a material without permanently altering any of its properties. Additionally, they can typically be used while the material is in active use, instead of requiring downtime for inspection.
The two general categories of structural health monitoring (SHM) systems include passive and active monitoring. Active SHM systems utilize an input of energy to monitor the health of a structure (such as sound waves in ultrasonics), while passive systems do not. As such, passive SHM tends to be more desirable. A system could be permanently fixed to a critical location, passively accepting signals until it records a damage event, then localize and characterize the damage. This is the goal of acoustic emissions testing.
When certain types of damage occur, such as matrix cracking or delamination in composites, the corresponding release of energy creates sound waves, or acoustic emissions, that propagate through the material. Audio sensors fixed to the surface can pick up data from both the time and frequency domains of the wave. With proper data analysis, a time of arrival (TOA) can be calculated for each sensor allowing for localization of the damage event. The frequency data can be used to characterize the damage.
In traditional acoustic emissions testing, the TOA combined with wave velocity and information about signal attenuation in the material is used to localize events. However, in instances of complex geometries or anisotropic materials (such as carbon fibre composites), velocity and attenuation can vary widely based on the direction of interest. In these cases, localization can instead be based on the differences in time of arrival for each sensor pair. This technique is called Delta T mapping, and it is the main focus of this study.
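A minimal sketch of the Delta T mapping idea: arrival-time differences for every sensor pair are recorded at known training locations, and a new event is assigned to the grid point whose stored differences best match the measured ones. The grid, sensor count, and least-squares matching rule here are illustrative assumptions.

```python
import numpy as np

def build_delta_t_map(grid_points, training_toas):
    """training_toas[i] holds the arrival times at each sensor for known grid point i."""
    toas = np.asarray(training_toas)                       # shape (n_points, n_sensors)
    n = toas.shape[1]
    pairs = [(j, k) for j in range(n) for k in range(j + 1, n)]
    # stored arrival-time difference for every sensor pair at every training point
    delta_map = np.stack([toas[:, j] - toas[:, k] for j, k in pairs], axis=1)
    return np.asarray(grid_points), pairs, delta_map

def localize(event_toas, grid_points, pairs, delta_map):
    """Assign a new event to the grid point whose stored differences best match."""
    event_toas = np.asarray(event_toas)
    measured = np.array([event_toas[j] - event_toas[k] for j, k in pairs])
    errors = np.sum((delta_map - measured) ** 2, axis=1)   # squared mismatch per grid point
    return grid_points[np.argmin(errors)]                  # best-matching training location
```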
Date Created
2019-05
Agent

Time-Varying Modeling of Glottal Source and Vocal Tract and Sequential Bayesian Estimation of Model Parameters for Speech Synthesis

Description
Speech is generated by articulators acting on a phonatory source. Identification of this phonatory source and the articulatory geometry are individually challenging and ill-posed problems, called speech separation and articulatory inversion, respectively. There exists a trade-off between the decomposition and the recovered articulatory geometry due to the multiple possible mappings between an articulatory configuration and the speech produced. However, if measurements are obtained only from a microphone sensor, they lack any invasive insight and add an additional challenge to an already difficult problem.

A joint non-invasive estimation strategy that couples articulatory and phonatory knowledge would lead to better articulatory speech synthesis. In this thesis, a joint estimation strategy for speech separation and articulatory geometry recovery is studied. Unlike previous periodic/aperiodic decomposition methods that use stationary speech models within a frame, the proposed model presents a non-stationary speech decomposition method. A parametric glottal source model and an articulatory vocal tract response are represented in a dynamic state-space formulation. The unknown parameters of the speech generation components are estimated using sequential Monte Carlo methods under some specific assumptions. The proposed approach is compared with other glottal inverse filtering methods, including iterative adaptive inverse filtering, state-space inverse filtering, and the quasi-closed phase method.
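To illustrate the sequential Monte Carlo machinery behind such an estimator, here is a minimal bootstrap particle filter on a scalar random-walk state with Gaussian observations. The state and observation models are placeholders, far simpler than the thesis's glottal-source and vocal-tract state space.

```python
import numpy as np

def particle_filter(observations, n_particles=500, proc_std=0.1, obs_std=0.5, seed=0):
    """Track a scalar hidden state with a bootstrap particle filter."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)        # initial state hypotheses
    estimates = []
    for y in observations:
        # propagate particles through the (assumed) random-walk transition model
        particles = particles + rng.normal(0.0, proc_std, n_particles)
        # weight each particle by the likelihood of the new observation
        weights = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))     # posterior-mean estimate
        # resample to concentrate particles in high-likelihood regions
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
    return np.asarray(estimates)

# usage: track a slowly drifting signal observed in noise
rng = np.random.default_rng(1)
true_state = np.cumsum(rng.normal(0, 0.1, 200))
obs = true_state + rng.normal(0, 0.5, 200)
est = particle_filter(obs)
```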
Date Created
2018
Agent

Separation of Agile Waveform Time-Frequency Signatures from Coexisting Multimodal Systems

Description
As the demand for wireless systems increases exponentially, it has become necessary for different wireless modalities, like radar and communication systems, to share the available bandwidth. One approach to realize coexistence successfully is for each system to adopt a transmit waveform with a unique nonlinear time-varying phase function. At the receiver of the system of interest, the received waveform may still suffer from a low signal-to-interference-plus-noise ratio (SINR) due to the presence of the waveforms that are matched to the other coexisting systems. This thesis uses a time-frequency based approach to increase the SINR of a system by estimating the unique nonlinear instantaneous frequency (IF) of the waveform matched to the system. Specifically, the IF is estimated using the synchrosqueezing transform, a highly localized time-frequency representation that also enables reconstruction of individual waveform components. As the IF estimate is biased, modified versions of the transform are investigated to obtain estimators that are both unbiased and matched to the unique nonlinear phase function of a given waveform. Simulations using transmit waveforms of coexisting wireless systems are provided to demonstrate the performance of the proposed approach using both biased and unbiased IF estimators.
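As a simplified illustration of instantaneous-frequency ridge extraction, the following sketch uses an ordinary short-time Fourier transform rather than the synchrosqueezing transform. The cubic-phase test waveform, sampling rate, and window parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

# Generate a waveform with a nonlinear (cubic) phase function, so its IF is quadratic in time.
fs = 8000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * (500 * t + 1000 * t ** 3))

# Extract the IF ridge as the peak frequency of each STFT time slice.
f, tau, Z = stft(x, fs=fs, nperseg=256, noverlap=192)
ridge = f[np.argmax(np.abs(Z), axis=0)]

# Compare against the true IF (derivative of the phase divided by 2*pi) as a coarse error measure.
true_if = 500 + 3000 * tau ** 2
rmse = np.sqrt(np.mean((ridge - true_if) ** 2))
```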
Date Created
2018
Agent