Covid-19 Hotspot Estimation Using Consensus Methods, SEIR Models and ML Algorithms

Description

The primary objective of this thesis is to identify locations or regions where COVID-19 transmission is more prevalent, termed “hotspots,” assess the likelihood of contracting the virus after visiting crowded areas or potential hotspots, and make predictions on confirmed COVID-19 cases and recoveries. A consensus algorithm is used to identify such hotspots; the SEIR epidemiological model tracks COVID-19 cases, allowing for a better understanding of the disease dynamics and enabling informed decision-making in public health strategies. Consensus-based distributed methodologies have been developed to estimate the magnitude, density, and locations of COVID-19 hotspots to provide well-informed alerts based on continuous data risk assessments. Assuming each agent owns a mobile device, transmission hotspots are identified using information from user devices equipped with Bluetooth and WiFi. In a consensus-based distributed clustering algorithm, users are divided into smaller groups, and the number of users in each group is then estimated. This process allows for the determination of the population of an outdoor site and the distances between individuals. The proposed algorithm demonstrates versatility by being applicable not only in outdoor environments but also in indoor settings. To adapt to indoor environments, considerations are made for signal attenuation caused by walls and other barriers, and a wall detection algorithm is employed for this purpose. The clustering mechanism is designed to dynamically choose the appropriate clustering technique based on data-dependent patterns, ensuring that every node undergoes proper clustering. After networks have been established and clustered, the output of the consensus algorithm is fed as one of many inputs into the SEIR model. SEIR, representing Susceptible, Exposed, Infectious, and Removed, forms the basis of a model designed to assess the probability of infection at a Point of Interest (POI).
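The distributed estimation idea above can be illustrated with a minimal average-consensus sketch: each agent repeatedly mixes its value with its neighbors' values and all agents converge to the network-wide average without a central coordinator. The ring topology, step size, and local counts below are hypothetical placeholders, not the thesis's actual algorithm.

```python
import numpy as np

def consensus(values, neighbors, eps=0.2, iters=200):
    """Iterate x_i <- x_i + eps * sum_{j in N(i)} (x_j - x_i) until agreement."""
    x = np.array(values, dtype=float)
    for _ in range(iters):
        x = x + eps * np.array([sum(x[j] - x[i] for j in neighbors[i])
                                for i in range(len(x))])
    return x

# 6 agents on a ring, each starting from a local (noisy) user count.
local_counts = [3.0, 5.0, 4.0, 6.0, 2.0, 4.0]
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
final = consensus(local_counts, ring)
# Every agent ends up holding the network-wide average count.
```

Because each update moves mass symmetrically along edges, the sum (and hence the average) of the agents' values is preserved at every iteration, which is what makes the common limit the true mean.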
The SEIR model utilizes calculated parameters such as β (contact), σ (latency), γ (recovery), and ω (loss of immunity), along with current COVID-19 case data, to precisely predict the spread of infection in a specific area. The SEIR model is implemented with diverse methodologies for transitioning populations between compartments. Hence, the model identifies optimal parameter values under different conditions and scenarios and forecasts the number of infected and recovered cases for the upcoming days.
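The compartment transitions described above can be sketched as a discrete-time SEIRS update (S → E → I → R, with ω feeding R back into S). All parameter values and initial conditions below are hypothetical placeholders, not the calibrated values from the thesis.

```python
def seirs_step(s, e, i, r, beta, sigma, gamma, omega, dt=1.0):
    """Advance the normalized compartments by one Euler step."""
    n = s + e + i + r
    new_exposed = beta * s * i / n      # contact: S -> E
    new_infectious = sigma * e          # latency: E -> I
    new_removed = gamma * i             # recovery: I -> R
    lost_immunity = omega * r           # loss of immunity: R -> S
    s += dt * (lost_immunity - new_exposed)
    e += dt * (new_exposed - new_infectious)
    i += dt * (new_infectious - new_removed)
    r += dt * (new_removed - lost_immunity)
    return s, e, i, r

def simulate(days, s=0.99, e=0.0, i=0.01, r=0.0,
             beta=0.3, sigma=0.2, gamma=0.1, omega=0.01):
    """Return the trajectory of (S, E, I, R) fractions over the horizon."""
    traj = [(s, e, i, r)]
    for _ in range(days):
        s, e, i, r = seirs_step(s, e, i, r, beta, sigma, gamma, omega)
        traj.append((s, e, i, r))
    return traj

traj = simulate(160)
peak_day = max(range(len(traj)), key=lambda t: traj[t][2])
```

Note that the four flows cancel pairwise, so the total population fraction is conserved at every step; forecasting "the upcoming days" amounts to continuing this recursion from the current case data.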
Date Created
2024
Agent

Bayesian Inference and Information Learning for Switching Nonlinear Gene Regulatory Networks

Description

This dissertation centers on the development of Bayesian methods for learning different types of variation in switching nonlinear gene regulatory networks (GRNs). A new nonlinear and dynamic multivariate GRN model is introduced to account for different sources of variability in GRNs. The new model is aimed at more precisely capturing the complexity of GRN interactions through the introduction of time-varying kinetic order parameters, while allowing for variability in multiple model parameters. This model is used as the drift function in the development of several stochastic GRN models based on Langevin dynamics. Six models are introduced which capture intrinsic and extrinsic noise in GRNs, thereby providing a full characterization of a stochastic regulatory system. A Bayesian hierarchical approach is developed for learning the Langevin model which best describes the noise dynamics at each time step. The trajectory of the state, which comprises the gene expression values, as well as the indicator corresponding to the correct noise model, are estimated via sequential Monte Carlo (SMC) with a high degree of accuracy. To address the problem of time-varying regulatory interactions, a Bayesian hierarchical model is introduced for learning variation in switching GRN architectures with unknown measurement noise covariance. The trajectory of the state and the indicator corresponding to the network configuration at each time point are estimated using SMC. This work is extended to a fully Bayesian hierarchical model to account for uncertainty in the process noise covariance associated with each network architecture. An SMC algorithm with local Gibbs sampling is developed to estimate the trajectory of the state and the indicator corresponding to the network configuration at each time point with a high degree of accuracy. The results demonstrate the efficacy of Bayesian methods for learning information in switching nonlinear GRNs.
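The Langevin-dynamics construction above can be sketched with a one-gene Euler-Maruyama simulation. The drift (a toy saturating production term with kinetic order k, plus linear decay) and the two noise forms (state-dependent for intrinsic, additive for extrinsic) are illustrative choices, not the dissertation's six models.

```python
import numpy as np

rng = np.random.default_rng(0)

def drift(x, k):
    """Toy regulation: saturating production with kinetic order k, linear decay."""
    return 2.0 * x ** k / (1.0 + x ** k) - 0.5 * x

def langevin_trajectory(x0, k, noise="intrinsic", dt=0.01, steps=2000):
    """Euler-Maruyama integration of dx = f(x) dt + g(x) dW."""
    x = np.empty(steps + 1)
    x[0] = x0
    for t in range(steps):
        if noise == "intrinsic":
            g = 0.1 * np.sqrt(max(x[t], 0.0))   # state-dependent (intrinsic) noise
        else:
            g = 0.1                             # additive (extrinsic) noise
        step = drift(x[t], k) * dt + g * np.sqrt(dt) * rng.standard_normal()
        x[t + 1] = max(x[t] + step, 0.0)        # expression levels stay nonnegative
    return x

traj = langevin_trajectory(x0=0.5, k=2.0)
```

In the dissertation's setting, an SMC filter would compare candidate noise models of this kind against observed expression data and estimate an indicator for whichever best explains each time step.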
Date Created
2023
Agent

Development of Signal Analysis Synthesis Methods: Quantum Fourier Transforms and Quantum Linear Prediction Algorithms

Description

Quantum computing has the potential to revolutionize the signal-processing field by providing more efficient methods for analyzing signals. This thesis explores the application of quantum computing in signal analysis synthesis for compression applications. More specifically, the study focuses on two key approaches: the quantum Fourier transform (QFT) and quantum linear prediction (QLP). The research is motivated by the potential advantages offered by quantum computing in massive signal processing tasks and presents novel quantum circuit designs for the QFT, quantum autocorrelation, and QLP, enabling signal analysis synthesis using quantum algorithms. The two approaches are explained as follows. The QFT demonstrates the potential for improved speed in quantum computing compared to classical methods. This thesis focuses on quantum encoding of signals and designing quantum algorithms for signal analysis synthesis and signal compression using QFTs. Comparative studies are conducted to evaluate quantum computations for Fourier transform applications, considering signal-to-noise ratio (SNR) results. The effects of qubit precision and quantum noise are also analyzed. The QFT algorithm is also developed in the J-DSP simulation environment, providing hands-on laboratory experiences for signal-processing students. User-friendly simulation programs on QFT-based signal analysis synthesis using peak picking and perceptual selection based on psychoacoustics are developed in J-DSP. Further, this research is extended to analyze the autocorrelation of the signal using QFTs and to develop a quantum linear prediction (QLP) algorithm for speech processing applications. QFTs and inverse QFTs (IQFTs) are used to compute the quantum autocorrelation of the signal, and the HHL algorithm is modified and used to compute the solutions of the linear equations using quantum computing.
The performance of the QLP algorithm is evaluated for system identification, spectral estimation, and speech analysis synthesis, and comparisons are performed between QLP and classical linear prediction (CLP) results. The results demonstrate the following: effective quantum circuits for accurate QFT-based speech analysis synthesis, evaluation of performance with quantum noise, design of accurate quantum autocorrelation, and development of a modified HHL algorithm for efficient QLP. Overall, this thesis contributes to the research on quantum computing for signal processing applications and provides a foundation for further exploration of quantum algorithms for signal analysis synthesis.
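The QFT at the core of these methods can be checked numerically without a quantum simulator: as a matrix on n qubits it is the unitary F[j,k] = exp(2πijk/N)/√N, which coincides with the classical inverse DFT up to a √N normalization. The small sketch below (a matrix computation, not a circuit design) verifies both properties.

```python
import numpy as np

def qft_matrix(n_qubits):
    """Unitary QFT matrix on n qubits: F[j,k] = exp(2*pi*i*j*k/N) / sqrt(N)."""
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(3)                       # 3 qubits -> 8-dimensional state space

# Amplitude-encode a toy signal as a normalized state vector.
x = np.random.default_rng(1).standard_normal(8)
state = x / np.linalg.norm(x)
transformed = F @ state
```

The unitarity of F is what lets a quantum circuit apply it reversibly and norm-preservingly, and the sign/normalization convention here matches the standard QFT definition (so QFT(x) = √N · ifft(x) under NumPy's conventions).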
Date Created
2023
Agent

Distributed Learning and Data Collection with Strategic Agents

Description

The presence of strategic agents can pose unique challenges to data collection and distributed learning. This dissertation first explores the social network dimension of data collection markets, and then focuses on how strategic agents can be efficiently and effectively incentivized to cooperate in distributed machine learning frameworks. The first problem explores the impact of social learning in collecting and trading unverifiable information where a data collector purchases data from users through a payment mechanism. Each user starts with a personal signal which represents the knowledge about the underlying state the data collector desires to learn. Through social interactions, each user also acquires additional information from his neighbors in the social network. It is revealed that both the data collector and the users can benefit from social learning, which drives down the privacy costs and helps to improve the state estimation for a given total payment budget. In the second half, a federated learning scheme to train a global learning model with strategic agents, who are not bound to contribute their resources unconditionally, is considered. Since the agents are not obliged to provide their true stochastic gradient updates and the server is not capable of directly validating the authenticity of reported updates, the learning process may reach a noncooperative equilibrium. First, the actions of the agents are assumed to be binary: cooperative or defective. If the cooperative action is taken, the agent sends a privacy-preserved version of its stochastic gradient signal. If the defective action is taken, the agent sends an arbitrary uninformative noise signal. Furthermore, this setup is extended to scenarios with more general action spaces where the quality of the stochastic gradient updates has a range of discrete levels.
The proposed methodology evaluates each agent's stochastic gradient according to a reference gradient estimate which is constructed from the gradients provided by other agents, and rewards the agent based on that evaluation.
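The reference-gradient idea can be sketched as follows: each agent's reported update is scored against a reference built from the *other* agents' updates, so a defecting agent that sends uninformative noise scores poorly. The leave-one-out mean reference and the cosine-alignment score used here are illustrative choices, not the dissertation's exact reward mechanism.

```python
import numpy as np

def score_agents(updates):
    """updates: (n_agents, dim) array of reported stochastic gradients.

    Returns each agent's cosine alignment with the leave-one-out mean of
    the other agents' reported gradients."""
    n = updates.shape[0]
    total = updates.sum(axis=0)
    scores = []
    for i in range(n):
        reference = (total - updates[i]) / (n - 1)     # reference gradient estimate
        num = float(updates[i] @ reference)
        den = np.linalg.norm(updates[i]) * np.linalg.norm(reference) + 1e-12
        scores.append(num / den)                       # in [-1, 1]
    return np.array(scores)

rng = np.random.default_rng(0)
true_grad = rng.standard_normal(50)
honest = true_grad + 0.1 * rng.standard_normal((4, 50))   # cooperative agents
defector = rng.standard_normal((1, 50))                   # pure-noise agent
scores = score_agents(np.vstack([honest, defector]))
```

Rewarding agents in proportion to such a score makes cooperation (reporting an informative gradient) the profitable action, since a noise signal is almost orthogonal to the honest majority's reference in high dimensions.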
Date Created
2023
Agent

Methodologies to Improve Fidelity and Reliability of Deep Learning Models for Real-World Deployment

Description

The past decade witnessed the success of deep learning models in various applications of computer vision and natural language processing. This success can be predominantly attributed to (i) the availability of large amounts of training data; (ii) access to domain-aware knowledge; (iii) the i.i.d. assumption between the train and target distributions; and (iv) belief in existing metrics as reliable indicators of performance. When any of these assumptions is violated, the models exhibit brittleness, producing adversely varied behavior. This dissertation focuses on methods for accurate model design and characterization that enhance process reliability when certain assumptions are not met. With the need to safely adopt artificial intelligence tools in practice, it is vital to build reliable failure detectors that indicate regimes where the model must not be invoked. To that end, an error predictor trained with a self-calibration objective is developed to estimate loss consistent with the underlying model. The properties of the error predictor are described, and their utility in supporting introspection via feature importances and counterfactual explanations is elucidated. While such an approach can signal data regime changes, it is critical to calibrate models using regimes of inlier (training) and outlier data to prevent under- and over-generalization in models, i.e., incorrectly identifying inliers as outliers and vice versa. By identifying the space for specifying inliers and outliers, an anomaly detector that can effectively flag data of varying semantic complexities in medical imaging is next developed. Uncertainty quantification in deep learning models involves identifying sources of failure and characterizing model confidence to enable actionability. A training strategy is developed that allows the accurate estimation of model uncertainties, and its benefits are demonstrated for active learning and generalization gap prediction.
This helps identify insufficiently sampled regimes and representation insufficiency in models. In addition, the task of deep inversion under data scarce scenarios is considered, which in practice requires a prior to control the optimization. By identifying limitations in existing work, data priors powered by generative models and deep model priors are designed for audio restoration. With relevant empirical studies on a variety of benchmarks, the need for such design strategies is demonstrated.
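The failure-detector idea can be sketched in miniature: after fitting a base model, a second "error predictor" is trained to estimate the base model's per-sample loss from the inputs, and inputs with high predicted loss are flagged as regimes where the model should not be invoked. The heteroscedastic data, the linear base model, and the squared-feature error predictor below are all toy choices, not the dissertation's deep-learning formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
noise_scale = 0.1 + np.abs(X[:, 0])              # errors grow with |x0|
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 1.0]) + noise_scale * rng.standard_normal(200)

# Base model: ordinary least squares regression.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
residual_sq = (y - X @ w) ** 2                   # per-sample loss of the base model

# Error predictor: least squares on squared features, targeting the loss.
Z = np.hstack([X ** 2, np.ones((200, 1))])
v, *_ = np.linalg.lstsq(Z, residual_sq, rcond=None)
predicted_loss = Z @ v

# Flag inputs whose predicted loss falls in the top 5%.
flagged = predicted_loss > np.quantile(predicted_loss, 0.95)
```

Because the error predictor operates on the inputs alone, the same flag can be raised at deployment time, before the true label (and hence the true loss) is available.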
Date Created
2023
Agent

Reconfigurable Intelligent Surfaces for Next-Generation Communication and Sensing Systems

Description

With the rapid development of reflect-arrays and software-defined meta-surfaces, reconfigurable intelligent surfaces (RISs) have been envisioned as promising technologies for next-generation wireless communication and sensing systems. These surfaces comprise massive numbers of nearly-passive elements that interact with the incident signals in a smart way to improve the performance of such systems. In RIS-aided communication systems, designing this smart interaction, however, requires acquiring large-dimensional channel knowledge between the RIS and the transmitter/receiver. Acquiring this knowledge is one of the most crucial challenges in RISs, as it is associated with large computational and hardware complexity. For RIS-aided sensing systems, it is interesting to first investigate scene depth perception based on millimeter wave (mmWave) multiple-input multiple-output (MIMO) sensing. While mmWave MIMO sensing systems address some critical limitations suffered by optical sensors, realizing these systems poses several key challenges: communication-constrained sensing framework design, beam codebook design, and scene depth estimation. Given the high spatial resolution provided by RISs, RIS-aided mmWave sensing systems have the potential to improve scene depth perception, while introducing some key challenges of their own. In this dissertation, for RIS-aided communication systems, efficient RIS interaction design solutions are proposed by leveraging tools from compressive sensing and deep learning. The achievable rates of these solutions approach the upper bound, which assumes perfect channel knowledge, with negligible training overhead. For RIS-aided sensing systems, a mmWave MIMO based sensing framework is first developed for building accurate depth maps under the constraints imposed by the communication transceivers. Then, a scene depth estimation framework based on RIS-aided sensing is developed for building high-resolution, accurate depth maps.
Numerical simulations illustrate the promising performance of the proposed solutions, highlighting their potential for next-generation communication and sensing systems.
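Why the RIS interaction design matters, and why perfect channel knowledge gives an upper bound, can be seen in a minimal co-phasing sketch: with known channels, the N nearly-passive elements can be phased so every reflected path adds coherently, whereas random phases do not. The Rayleigh channel draws below are toy examples, not the dissertation's channel models.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # number of RIS elements
h_tx = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # Tx -> RIS
h_rx = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # RIS -> Rx

def effective_channel(theta):
    """Cascaded channel through the RIS for per-element phase shifts theta."""
    return np.sum(h_rx * np.exp(1j * theta) * h_tx)

optimal = -np.angle(h_tx * h_rx)          # co-phase every reflected path
random_phases = rng.uniform(0, 2 * np.pi, N)

gain_opt = np.abs(effective_channel(optimal))      # = sum of per-path magnitudes
gain_rand = np.abs(effective_channel(random_phases))
```

The compressive sensing and deep learning designs in the dissertation aim to approach this co-phased performance while estimating far fewer quantities than the full large-dimensional channel.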
Date Created
2023
Agent

Effective Prior Selection and Knowledge Transfer for Deep Learning Applications

Description

In recent years, deep learning has gained popularity for its ability to be utilized for several computer vision applications without any a priori knowledge. However, incorporating prior knowledge along with learned information is critical to introducing better inductive bias. To that end, human intervention in deep learning pipelines, including the choice of algorithm, data, and model, can be considered a prior. Thus, it is extremely important to select effective priors for a given application. This dissertation explores different aspects of a deep learning pipeline and provides insights as to why a particular prior is effective for the corresponding application. For analyzing the effect of model priors, three applications which involve sequential modeling problems are chosen: audio source separation, clinical time-series (electroencephalogram (EEG)/electrocardiogram (ECG)) based differential diagnosis, and global horizontal irradiance forecasting for photovoltaic (PV) applications. For data priors, the application of image classification is chosen, and a new algorithm titled “Invenio,” which can effectively use data semantics for both task and distribution shift scenarios, is proposed. Finally, the effectiveness of a data selection prior is shown using the application of object tracking, wherein the aim is to maintain tracking performance while prolonging the battery life of image sensors by optimizing the data selected for reading from the environment. For every research contribution of this dissertation, several empirical studies are conducted on benchmark datasets. The proposed design choices demonstrate significant performance improvements in comparison to existing application-specific state-of-the-art deep learning strategies.
Date Created
2022
Agent

Integrating Time-Frequency and Machine Learning Methods for Tracking In Radar and Communications Coexisting Systems

Description

Increased demand on bandwidth has resulted in wireless communications and radar systems sharing spectrum. As signal transmissions from both modalities coexist, methodologies must be designed to reduce the interference each system induces on the other. This work considers the problem of tracking an object using radar measurements embedded in noise and corrupted by transmissions from multiple communications users. Radar received signals in low noise can be successively processed to estimate object parameters using maximum likelihood estimation. For linear frequency-modulated (LFM) signals, such estimates can be efficiently computed by integrating the Wigner distribution along lines in the time-frequency (TF) plane. However, the presence of communications interference severely reduces estimation performance. This thesis proposes a new approach to increase radar estimation performance by integrating a highly localized TF method with data clustering. The received signal is first decomposed into highly localized Gaussians using the iterative matching pursuit decomposition (MPD). As the MPD is iterative, high noise levels can be reduced by appropriately selecting the algorithm’s stopping criteria. The decomposition also provides feature vectors of reduced dimensionality that can be used for clustering with a Gaussian mixture model (GMM). The proposed estimation method integrates along lines of a modified Wigner distribution of the Gaussian clusters in the TF plane. Using simulations, the object parameter estimation performance of the MPD is shown to improve substantially when the MPD is integrated with GMM clustering.
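The line-integration principle behind these estimators can be sketched simply: for an LFM (chirp) signal, searching over candidate chirp rates and measuring how much the dechirped signal's energy concentrates into a single frequency bin is equivalent to integrating a TF distribution along lines of different slopes. The sampling rate, chirp rate, and noise level below are illustrative, and this sketch omits the MPD/GMM stages.

```python
import numpy as np

fs = 1000.0
t = np.arange(1000) / fs
true_rate = 80.0                                   # chirp rate in Hz per second
signal = np.exp(1j * np.pi * true_rate * t ** 2)   # unit-amplitude LFM signal
rng = np.random.default_rng(0)
noisy = signal + 0.5 * (rng.standard_normal(t.size)
                        + 1j * rng.standard_normal(t.size))

def estimate_chirp_rate(x, t, candidates):
    """Pick the rate whose dechirped signal is most concentrated in frequency."""
    best_rate, best_peak = None, -np.inf
    for r in candidates:
        dechirped = x * np.exp(-1j * np.pi * r * t ** 2)
        peak = np.abs(np.fft.fft(dechirped)).max()  # energy collapses to one bin
        if peak > best_peak:
            best_rate, best_peak = r, peak
    return best_rate

candidates = np.arange(0.0, 160.0, 5.0)
est = estimate_chirp_rate(noisy, t, candidates)
```

Strong communications interference would add spurious TF energy that corrupts this peak search, which is the motivation for first isolating the radar component via the MPD and clustering the resulting Gaussian atoms.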
Date Created
2022
Agent

Temperature Dependence of PV Fault Detection Neural Networks

Description

This study measures the effect of temperature on a neural network's ability to detect and classify solar panel faults. It is well known that temperature negatively affects the power output of solar panels. This has consequences for their output data and our ability to distinguish between fault conditions via machine learning.

Date Created
2022-12
Agent

Graph Based Semi-Supervised Classification and Manifold Learning

Description

Due to their effectiveness in capturing similarities between different entities, graphical models are widely used to represent datasets that reside on irregular and complex manifolds. Graph signal processing offers support for handling such complex datasets. By extending the conceptual frame of digital signal processing from the time and frequency domains to the graph domain, operators such as the graph shift, graph filter, and graph Fourier transform are defined. In this dissertation, two novel graph filter design methods are proposed. First, a graph filter with multiple shift matrices is applied to semi-supervised classification, which can handle features with uneven qualities through an embedded feature importance evaluation process. Three optimization solutions are provided: an alternating minimization method that is simple to implement, a convex relaxation method that provides a theoretical performance benchmark, and a genetic algorithm, which is computationally efficient and better at controlling overfitting. Second, a graph filter with a splitting-and-merging scheme is proposed, which splits the graph into multiple subgraphs. The corresponding subgraph filters are trained in parallel, and the final graph filter is obtained by merging all the subgraph filters. Due to the splitting process, the redundant edges in the original graph are dropped, which saves computational cost in semi-supervised classification. At the same time, this scheme also enables the filter to represent unevenly sampled data in manifold learning. To evaluate the performance of the proposed graph filter design approaches, simulation experiments with synthetic and real datasets are conducted. The Monte Carlo cross-validation method is employed to demonstrate the need for the proposed graph filter design approaches in various application scenarios.
Criteria such as accuracy, Gini score, F1-score, and learning curves are provided to analyze the performance of the proposed methods and their competitors.
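The building block behind both designs, a polynomial graph filter, can be sketched as follows: the output is a linear combination of graph-shifted versions of the input signal, H(S)x = Σ_k h_k S^k x. The toy adjacency matrix, normalization, and filter taps below are illustrative and use a single shift matrix rather than the multiple shift matrices of the first design.

```python
import numpy as np

# Toy 4-node undirected graph; not data from the dissertation.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
S = A / np.abs(np.linalg.eigvalsh(A)).max()   # normalized graph shift operator

def graph_filter(S, x, taps):
    """Apply H(S) x = sum_k taps[k] * S^k x via repeated graph shifts."""
    shifted, out = x.copy(), np.zeros_like(x)
    for k, h in enumerate(taps):
        if k > 0:
            shifted = S @ shifted             # one more graph shift: S^k x
        out += h * shifted
    return out

x = np.array([1.0, 0.0, 0.0, 0.0])            # graph signal (e.g., one labeled node)
y = graph_filter(S, x, taps=[0.5, 0.3, 0.2])
```

In semi-supervised classification, the taps (and, in the first design, a shift matrix per feature) are learned so that the filtered labels of the known nodes match, after which the filter propagates label information to the unlabeled nodes; splitting the graph into subgraphs lets such filters be trained in parallel on smaller shift matrices.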
Date Created
2022
Agent