RADAR-Based Non-Stationary and Stationary Human Presence Detection

Description

Human presence detection is essential for a variety of applications, including defense and healthcare. Radars can accurately measure the distances and relative velocities of humans and other objects. They are largely impervious to external factors such as smoke, dust, or rain, and they work under varied lighting conditions in indoor environments. This report explores the analysis of real captured data and the application of different detection algorithms. Adaptive thresholding suppresses stationary backgrounds while maintaining detection thresholds that keep false alarm rates low. Using different Constant False Alarm Rate (CFAR) approaches, namely cell averaging, smallest-of cell averaging, greatest-of cell averaging, and order statistic, this report shows their performance in detecting humans in an indoor environment using real-time collected data. The objective of this project is to explain the signal processing chain of presence detection using a small-scale radar.
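As context for the CFAR variants named above, a minimal cell-averaging CFAR sketch can be written as follows. This is illustrative only, not the report's implementation; the window sizes, false-alarm rate, and synthetic exponential clutter are assumed values.

```python
import numpy as np

def ca_cfar(power, num_train=8, num_guard=2, pfa=1e-3):
    """Cell-averaging CFAR over a 1-D power profile: for each cell
    under test, average the training cells on both sides (excluding
    guard cells) and scale by alpha to hold the false-alarm rate."""
    n = len(power)
    num_cells = 2 * num_train                          # total training cells
    alpha = num_cells * (pfa ** (-1.0 / num_cells) - 1)  # CA-CFAR scale factor
    half = num_train + num_guard
    hits = np.zeros(n, dtype=bool)
    for i in range(half, n - half):
        train = np.concatenate([power[i - half : i - num_guard],
                                power[i + num_guard + 1 : i + half + 1]])
        hits[i] = power[i] > alpha * train.mean()
    return hits

rng = np.random.default_rng(0)
profile = rng.exponential(scale=1.0, size=200)  # clutter-plus-noise power
profile[100] += 40.0                            # strong target at cell 100
hits = ca_cfar(profile)                         # flags cell 100
```

The smallest-of and greatest-of variants replace the combined mean with the minimum or maximum of the leading and trailing window means, and order-statistic CFAR ranks the training cells instead of averaging them.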
Date Created
2024

Differential Privacy Protection via Inexact Data Cloning

Description

With the advent of new advanced analysis tools and access to related published data, it is getting more difficult for data owners to suppress private information from published data while still providing useful information. This dual problem of providing useful, accurate information and protecting it at the same time has been challenging, especially in healthcare. Data owners lack an automated resource that provides layers of protection on a published dataset with validated statistical values for usability. Differential privacy (DP) has gained a lot of attention in the past few years as a solution to this dual problem. DP is a statistical anonymity model that can protect the data from adversarial observation while still permitting the intended usage. This dissertation introduces a novel DP protection mechanism called Inexact Data Cloning (IDC), which simultaneously protects and preserves information in published data while conveying the source data's intent. IDC preserves the privacy of the records by converting the raw data records into clonesets. The clonesets then pass through a classifier that removes potentially compromising clonesets, retaining only good inexact clonesets. The IDC mechanism depends on a set of privacy protection metrics, called differential privacy protection metrics (DPPM), which represent the overall protection level. IDC uses two novel performance values, the differential privacy protection score (DPPS) and the clone classifier selection percentage (CCSP), to estimate the privacy level of protected data. In support of using IDC as a viable data security product, a software tool chain prototype, the differential privacy protection architecture (DPPA), was developed to utilize IDC. DPPA embeds the engineered security mechanism of IDC and serves as a hub that facilitates a market for DP data security mechanisms.
DPPA works by incorporating standalone IDC mechanisms and provides automation, IDC-protected published datasets, and a statistically verified IDC dataset diagnostic report. DPPA currently performs functional and operational benchmark processes that quantify the DP protection of a given published dataset. The DPPA tool was recently used to test two health datasets, and the test results further validate the feasibility of the IDC mechanism.
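IDC itself is the dissertation's novel mechanism and is not reproduced here. As background, the textbook Laplace mechanism illustrates the formal ε-DP guarantee that the abstract builds on; the count and parameter values below are purely illustrative.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release a query answer with epsilon-differential privacy by
    adding Laplace noise with scale = sensitivity / epsilon."""
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(42)
true_count = 120                 # e.g. patients matching a condition
# A counting query changes by at most 1 when one record changes,
# so its sensitivity is 1.
noisy = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
```

In practice an overall ε budget is split across queries: smaller ε means stronger protection and noisier released answers, which is exactly the utility-versus-privacy tension the dissertation addresses.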
Date Created
2023

In-Band Full Duplex Analog Control and Analysis

Description

In-Band Full-Duplex (IBFD) operation can maximize spectral resources and enable new types of technology, but it generates self-interference (SI) that must be mitigated to enable practical applications. Analog-domain SI cancellation (SIC), usually implemented as a digitally controlled adaptive filter, is one technique necessary to mitigate the interference below the noise floor. To maximize the efficiency and performance of the adaptive filter, this thesis studies how key design choices impact performance so that device designers can make better tradeoff decisions. Additionally, algorithms that incorporate the hardware constraints are introduced to maximize the SIC. The provided simulations show up to 45 dB of SIC with 7 bits of precision at 100 MHz bandwidth.
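As an illustrative sketch of a digitally controlled adaptive canceller with limited tap precision (not the thesis's design; the SI channel, step size, and tap count are assumed), an LMS filter can adapt in full precision while applying 7-bit quantized taps:

```python
import numpy as np

def quantize(w, bits, w_max=1.0):
    """Round taps to a signed fixed-point grid with `bits` bits."""
    step = w_max / 2 ** (bits - 1)
    return np.clip(np.round(w / step) * step, -w_max, w_max - step)

def lms_sic(tx, rx, num_taps=8, mu=0.02, bits=7):
    """LMS canceller: a full-precision accumulator adapts the taps,
    but cancellation is applied with quantized taps, mimicking a
    digitally controlled analog filter."""
    w = np.zeros(num_taps)
    buf = np.zeros(num_taps)
    residual = np.zeros(len(rx))
    for n in range(len(rx)):
        buf = np.roll(buf, 1)
        buf[0] = tx[n]
        y = quantize(w, bits) @ buf       # reconstructed self-interference
        e = rx[n] - y                     # residual after cancellation
        w = w + mu * e * buf              # LMS update (full precision)
        residual[n] = e
    return residual, w

rng = np.random.default_rng(1)
tx = rng.standard_normal(5000)
h = np.array([0.8, -0.3, 0.1])                    # assumed SI channel
rx = np.convolve(tx, h)[: len(tx)] + 0.01 * rng.standard_normal(len(tx))
residual, taps = lms_sic(tx, rx)
sic_db = 10 * np.log10(np.mean(rx**2) / np.mean(residual[1000:] ** 2))
```

The achievable cancellation in this toy model is bounded by the tap quantization step, which is one way finite hardware precision constrains SIC.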
Date Created
2023

Distributed Coherent Mesh Beamforming: Algorithms and Implementation

Description

In this dissertation, I implement and demonstrate a distributed coherent mesh beamforming system for wireless communications that provides increased range, data rate, and robustness to interference. By using one or multiple distributed, locally coherent meshes as antenna arrays, I develop an approach that realizes a performance improvement in signal-to-noise ratio, related to the number of mesh elements, over a traditional single-antenna to single-antenna link without interference. I further demonstrate that in the presence of interference, the signal-to-interference-plus-noise ratio improvement is significantly greater for a wide range of environments. I also discuss key performance bounds that drive system design decisions, as well as techniques for robust distributed adaptive beamformer construction. I develop and implement an over-the-air distributed time and frequency synchronization algorithm to enable distributed coherence on software-defined radios. Finally, I implement the distributed coherent mesh beamforming system over the air on a network of software-defined radios and demonstrate simulated and experimental results, with and without interference, that achieve performance approaching the theoretical bounds.
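The element-count SNR gain from coherent combining can be sketched with an idealized simulation. Perfect phase alignment is assumed here, which is precisely what the over-the-air time and frequency synchronization must provide in practice; node counts and SNR are illustrative.

```python
import numpy as np

def coherent_gain(num_nodes, snr_lin, num_samples=20000, seed=0):
    """Measured SNR after coherently averaging `num_nodes` copies of a
    common unit-power signal, each with independent receiver noise."""
    rng = np.random.default_rng(seed)
    s = rng.standard_normal(num_samples)            # unit-power signal
    sigma = np.sqrt(1.0 / snr_lin)                  # per-node noise std
    rx = s + sigma * rng.standard_normal((num_nodes, num_samples))
    combined = rx.mean(axis=0)                      # coherent average
    noise = combined - s
    return np.mean(s**2) / np.mean(noise**2)

g1 = coherent_gain(1, snr_lin=1.0)
g8 = coherent_gain(8, snr_lin=1.0)   # roughly 8x (9 dB) SNR improvement
```

Averaging N aligned copies leaves the signal intact while the independent noise power drops by N, giving the N-fold SNR improvement the abstract attributes to the number of mesh elements.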
Date Created
2023

Bayesian Filtering and Smoothing for Tracking in High Noise and Clutter Environments

Description

Object tracking refers to the problem of estimating a moving object's time-varying parameters that are indirectly observed in measurements at each time step. Increased noise and clutter in the measurements reduce estimation accuracy as they increase the uncertainty of tracking in the field of view. While tracking is performed using a Bayesian filter, a Bayesian smoother can be utilized to refine estimates of parameter states that occurred before the current time. In practice, smoothing is widely used to improve state estimation or correct data association errors, and it can lead to significantly better estimation performance as it reduces the impact of noise and clutter. In this work, a single-object tracking method is proposed based on integrating Kalman filtering and smoothing with thresholding to remove unreliable measurements. As the new method is effective when the noise and clutter in the measurements are high, the main goal is to find these measurements using a moving average filter and a thresholding method to improve estimation. The proposed method is thus designed to reduce estimation errors that result from measurements corrupted with high noise and clutter. Simulations are provided to demonstrate the improved performance of the new method when compared to smoothing without thresholding. The root-mean-square error in estimating the object state parameters is shown to be especially reduced under high noise conditions.
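The idea of discarding unreliable measurements can be illustrated with a constant-velocity Kalman filter that gates each measurement by its normalized innovation. This is a generic sketch, not the thesis's exact method (which uses a moving-average filter before thresholding); noise levels, gate value, and the clutter spike are assumed.

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=0.01, r=1.0, gate=9.0):
    """1-D constant-velocity Kalman filter that skips the update for
    measurements whose normalized innovation squared exceeds `gate`."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x = np.array([zs[0], 0.0])
    P = np.eye(2)
    est = []
    for z in zs:
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                    # innovation covariance
        nu = z - (H @ x)[0]                    # innovation
        if nu**2 / S[0, 0] <= gate:            # accept only plausible hits
            K = (P @ H.T) / S[0, 0]
            x = x + K[:, 0] * nu
            P = (np.eye(2) - K @ H) @ P
        est.append(x[0])
    return np.array(est)

rng = np.random.default_rng(3)
truth = 0.5 * np.arange(100)                   # constant-velocity track
zs = truth + rng.standard_normal(100)
zs[40] += 25.0                                 # clutter spike
est = kalman_track(zs)
err = np.abs(est - truth)                      # spike at index 40 is gated out
```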
Date Created
2022

Theoretical Receiver Operating Characteristics of Two-Stage Change Detector for Synthetic Aperture Radar Images

Description

Detecting areas of change between two synthetic aperture radar (SAR) images of the same scene, taken at different times, is generally performed using two approaches. Non-coherent change detection uses the sample variance ratio detector and performs well at detecting areas of significant change. Coherent change detection can be implemented using the classical coherence estimator, which does better at detecting subtle changes, such as vehicle tracks. A two-stage detector was proposed by Cha et al., where the sample variance ratio forms the first stage and the second stage comprises Berger's alternative coherence estimator.

A modification to the first stage of the two-stage detector is proposed in this study, which significantly simplifies the analysis of this detector. Cha et al. used a heuristic approach to determine the thresholds for the two-stage detector. In this study, the probability density function of the modified two-stage detector is derived, and using it, an approach for determining the thresholds of this two-dimensional detection problem is proposed. The proposed method of threshold selection reveals an interesting behavior of the two-stage detector. With the help of theoretical receiver operating characteristic analysis, the two-stage detector is shown to give better detection performance than the other three detectors. However, Berger's estimator proves to be a simpler alternative, since it performs only slightly worse than the two-stage detector. All four detectors have also been implemented on a SAR data set, and it is shown that the two-stage detector and Berger's estimator generate images in which the areas showing change are easily visible.
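The two classical single-stage statistics can be sketched as follows on synthetic complex patches (Berger's alternative estimator and the two-stage combination are not reproduced here; patch size and noise level are assumed):

```python
import numpy as np

def variance_ratio(x, y):
    """Non-coherent change statistic: ratio of sample powers of two
    co-registered patches (a ratio far from 1 indicates change)."""
    vx = np.mean(np.abs(x) ** 2)
    vy = np.mean(np.abs(y) ** 2)
    return max(vx, vy) / min(vx, vy)

def sample_coherence(x, y):
    """Classical coherence estimate between two complex patches
    (near 1 means unchanged, near 0 means changed)."""
    num = np.abs(np.sum(x * np.conj(y)))
    den = np.sqrt(np.sum(np.abs(x) ** 2) * np.sum(np.abs(y) ** 2))
    return num / den

rng = np.random.default_rng(7)
n = 512
ref = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
unchanged = ref + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
changed = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

coh_same = sample_coherence(ref, unchanged)   # close to 1
coh_diff = sample_coherence(ref, changed)     # close to 0
vr_diff = variance_ratio(ref, changed)        # near 1: equal-power change
```

On these patches the variance ratio stays near 1 for the equal-power "changed" patch while the coherence collapses, which is exactly why coherent detection catches subtle changes that the non-coherent first stage misses.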
Date Created
2020

Anticipating Postoperative Delirium During Cardiac Surgeries Involving Deep Hypothermia Circulatory Arrest

Description

Aortic aneurysms and dissections are life-threatening conditions addressed by replacing damaged sections of the aorta. Blood circulation must be halted to facilitate repairs. Ischemia places the body, especially the brain, at risk of damage. Deep hypothermia circulatory arrest (DHCA) is employed to protect patients and provide time for surgeons to complete repairs, on the basis that reducing body temperature suppresses the metabolic rate. Supplementary surgical techniques can be employed to reinforce the brain's protection and increase the duration for which circulation can be suspended. Even then, protection is not completely guaranteed. A medical condition that can arise early in recovery is postoperative delirium, which is correlated with poor long-term outcomes. This study develops a methodology to intraoperatively monitor neurophysiology through electroencephalography (EEG) and anticipate postoperative delirium. The earliest opportunity to detect complications through EEG is immediately following DHCA, during warming. The first observable electrophysiological activity after complete suppression is a phenomenon known as burst suppression, which is related to the brain's metabolic state and the recovery of nominal neurological function. A metric termed burst suppression duty cycle (BSDC) is developed to characterize the changing electrophysiological dynamics. Predictions of postoperative delirium incidences are made by identifying deviations in the way these dynamics evolve. Sixteen cases are examined in this study. Accurate predictions can be made, where on average 89.74% of cases are correctly classified when burst suppression concludes and 78.10% when burst suppression begins. The best-case receiver operating characteristic curve has an area under its convex hull of 0.8988, whereas the worst-case area under the hull is 0.7889.
These results demonstrate the feasibility of monitoring BSDC to anticipate postoperative delirium during burst suppression. They also motivate further analysis to identify footprints of causal mechanisms of neural injury within BSDC. Raising warning signs of postoperative delirium early provides an opportunity to intervene and potentially avert neurological complications. Doing so would improve the success rate and quality of life after surgery.
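The BSDC metric is defined in the dissertation itself; one plausible realization, assumed here purely for illustration, is a sliding-window fraction of suppressed samples, with a toy square-wave mask standing in for thresholded EEG amplitude:

```python
import numpy as np

def burst_suppression_duty_cycle(suppressed, win):
    """Sliding-window fraction of samples flagged as suppressed.
    `suppressed` is a boolean mask (True while the EEG amplitude is
    below a suppression threshold); `win` is the window length."""
    kernel = np.ones(win) / win
    return np.convolve(suppressed.astype(float), kernel, mode="same")

fs = 100                                    # assumed sampling rate, Hz
t = np.arange(60 * fs)                      # one minute of samples
period = 5 * fs                             # toy pattern: 2 s burst, 3 s suppression
mask = (t % period) >= 2 * fs               # True while suppressed
bsdc = burst_suppression_duty_cycle(mask, win=10 * fs)   # ~0.6 mid-signal
```

Tracking how this duty cycle evolves during warming, rather than its value at one instant, is the kind of dynamic the study uses to flag deviating recoveries.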
Date Created
2020

Control and Estimation Theory in Ranging Applications

Description

For the last 50 years, oscillator modeling in ranging systems has received considerable attention. Many components in a navigation system, such as the master oscillator driving the receiver system as well as the master oscillator in the transmitting system, contribute significantly to timing errors. Algorithms in the navigation processor must be able to predict and compensate for such errors to achieve a specified accuracy. While much work has been done on the fundamentals of these problems, the thinking on them has not progressed. On the hardware end, the designers of local oscillators focus on synthesized frequency and loop noise bandwidth. This does nothing to mitigate or reduce in-band frequency stability degradation. Similarly, there are no systematic methods to accommodate phase and frequency anomalies such as clock jumps. Phase-locked loops are fundamentally control systems, and while control theory has advanced significantly over the last 30 years, the design of timekeeping sources has not advanced beyond classical control. On the software end, single- or two-state oscillator models are typically embedded in a Kalman filter to alleviate time errors between the transmitter and receiver clocks. Such models are appropriate for short-term time accuracy, but insufficient for long-term time accuracy. Additionally, flicker frequency noise may be present in oscillators, and it presents mathematical modeling complications. This work proposes novel H∞ control methods to address the shortcomings in the standard design of timekeeping phase-locked loops. Such methods allow the designer to address frequency stability degradation as well as high phase/frequency dynamics. Additionally, finite-dimensional approximants of flicker frequency noise that are more representative of the truth system than the traditional Gauss-Markov approach are derived. Last, to maintain timing accuracy in a wide variety of operating environments, novel banks of adaptive extended Kalman filters are used to address both stochastic and dynamic uncertainty.
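The two-state oscillator model mentioned above is standard; a minimal sketch (with assumed white-FM and random-walk-FM noise intensities, not values from this work) shows how clock uncertainty grows between Kalman filter measurement updates:

```python
import numpy as np

def two_state_clock_model(dt, s1, s2):
    """Discrete two-state clock model with state [phase error,
    fractional frequency error]; s1 and s2 are white-FM and
    random-walk-FM process-noise intensities."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = np.array([[s1 * dt + s2 * dt**3 / 3, s2 * dt**2 / 2],
                  [s2 * dt**2 / 2,           s2 * dt]])
    return F, Q

def propagate(x, P, F, Q):
    """Kalman time update (no measurement)."""
    return F @ x, F @ P @ F.T + Q

dt = 1.0                                       # one-second steps
F, Q = two_state_clock_model(dt, s1=1e-21, s2=1e-24)  # assumed intensities
x = np.zeros(2)
P = np.zeros((2, 2))
for _ in range(3600):                          # free-run for one hour
    x, P = propagate(x, P, F, Q)
phase_std = np.sqrt(P[0, 0])                   # phase uncertainty, seconds
```

The cubic growth of the phase variance with elapsed time is why such models hold short-term accuracy but degrade over long intervals, motivating the richer flicker-noise approximants developed in the work.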
Date Created
2020

Simultaneous Positioning and Communications: Hybrid Radio Architecture, Estimation Techniques, and Experimental Validation

Description

Limited spectral access motivates technologies that adapt to diminishing resources and increasingly cluttered environments. A joint positioning-communications system is designed and implemented on commercial off-the-shelf (COTS) hardware. This system enables simultaneous positioning of, and communications between, nodes in a distributed network of base stations and unmanned aerial systems (UASs). This technology offers extreme ranging precision (< 5 cm) with minimal bandwidth (10 MHz), a secure communications link to protect against cyberattacks, a small form factor that enables integration into numerous platforms, and minimal resource consumption that supports high-density networks. The positioning and communications tasks are performed simultaneously with a single co-use waveform, which efficiently utilizes limited resources and supports higher user densities. The positioning task uses a cooperative, point-to-point synchronization protocol to estimate the relative position and orientation of all users within the network. The communications task distributes positioning information between users and secures the positioning task against cyberattacks. This high-performance system is enabled by advanced time-of-arrival estimation techniques and a modern phase-accurate distributed coherence synchronization algorithm. This technology may be installed in ground stations, ground vehicles, unmanned aerial systems, and airborne vehicles, enabling a highly mobile, reconfigurable network with numerous applications.
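A coarse time-of-arrival estimate can be sketched with a cross-correlation peak search. This is a baseline illustration, not the system's estimator; the sample rate, waveform, and delay are assumed values.

```python
import numpy as np

def estimate_toa(tx, rx, fs):
    """Coarse time-of-arrival estimate from the cross-correlation peak
    between the known transmit waveform and the received samples."""
    corr = np.correlate(rx, tx, mode="full")
    lag = np.argmax(np.abs(corr)) - (len(tx) - 1)
    return lag / fs

fs = 10e6                                  # assumed 10 MHz sample rate
rng = np.random.default_rng(5)
tx = rng.standard_normal(1024)             # stand-in for the co-use waveform
delay = 37                                 # true propagation delay, samples
rx = np.concatenate([np.zeros(delay), tx])
rx = rx + 0.1 * rng.standard_normal(len(rx))
toa = estimate_toa(tx, rx, fs)             # about 3.7 microseconds
```

At 10 MHz one sample spans 100 ns (about 30 m of propagation), so the centimeter-level precision quoted above demands sub-sample methods, such as interpolating the correlation peak, together with the advanced estimators and phase-accurate synchronization developed in the work.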
Date Created
2019

EEG-Based Estimation of Human Reaction Time Corresponding to Change of Visual Event.

Description

The human brain controls a person's actions and reactions. The main objective of this study is to quantify reaction time to a change in a visual event and to determine the inherent relationship between response time and the corresponding brain activity. Which parts of the human brain are responsible for the reaction time is also of interest. As electroencephalogram (EEG) signals track changes in brain function over time, EEG signals from different locations are used as indicators of brain activity. Because different channels correspond to different parts of the brain, identifying the most relevant channels can indicate the responsible brain regions. In this study, response time is estimated using EEG signal features from the time, frequency, and time-frequency domains. Regression-based estimation using the full data set yields a root mean square error (RMSE) of 99.5 milliseconds and a correlation value of 0.57. However, adding non-EEG features to the existing features gives an RMSE of 101.7 ms and a correlation value of 0.58. The same analysis with a custom data set yields an RMSE of 135.7 milliseconds and a correlation value of 0.69. Classification-based estimation provides 79% and 72% accuracy for binary and 3-class classification, respectively. Classification of extremes (high vs. low) reaches 95% accuracy. Combining recursive feature elimination, tree-based feature importance, and mutual information methods, important channels and features are isolated based on the best result. As human response time is not solely dependent on brain activity, additional information about the subject is required to improve the reaction time estimation.
Date Created
2019