Identifying Sources of Anomalies in Complex Networks

Description
The problem of monitoring complex networks for the detection of anomalous behavior is well known. Sensors are usually deployed to monitor these networks for anomalies, and Sensor Placement Optimization (SPO) is the problem of determining where these sensors should be placed (deployed) in the network. Prior works have utilized the well-known Set Cover formulation to determine the locations where sensors should be placed in the network, so that anomalies can be effectively detected. However, such works cannot be utilized to address the problem when the objective is not only to detect the presence of anomalies, but also to detect (distinguish) the source(s) of the detected anomalies, i.e., to uniquely monitor the network. In this dissertation, I attempt to fill this gap by utilizing the mathematical concept of Identifying Codes and illustrating how they and their variants can not only overcome the aforementioned limitation, but also be utilized to monitor complex networks modeled from multiple domains. Over the course of this dissertation, I make key contributions which further enhance the efficacy and applicability of Identifying Codes as a monitoring strategy. First, I show how Identifying Codes are superior not only to the Set Cover formulation but also to standard graph centrality metrics for the purpose of uniquely monitoring complex networks. Second, I study novel problems such as the budget-constrained Identifying Code, the scalable Identifying Code, and the robust Identifying Code, and present algorithms and results for the respective problems. Third, I present useful Identifying Code results for restricted graph classes such as Unit Interval Bigraphs and Unit Disc Bigraphs. Finally, I show the universality of Identifying Codes by applying them to multiple domains.
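To make the notion concrete: an identifying code is a vertex subset C such that every vertex's closed neighborhood intersects C in a non-empty, unique set, so the intersection pattern pinpoints exactly which vertex (anomaly source) triggered the sensors. Below is a minimal brute-force sketch in Python (an illustration of the definition, not the dissertation's algorithms) that verifies and finds a smallest identifying code on a toy graph.

```python
from itertools import combinations

def is_identifying_code(adj, code):
    """Check that every vertex v has a non-empty, unique signature
    N[v] & code, where N[v] is the closed neighborhood of v."""
    seen = set()
    for v in adj:
        sig = (frozenset(adj[v]) | {v}) & code
        if not sig or sig in seen:
            return False
        seen.add(sig)
    return True

def minimum_identifying_code(adj):
    """Brute-force smallest identifying code; exponential, for tiny graphs.
    Returns None for graphs (e.g., with 'twin' vertices) that admit none."""
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for subset in combinations(vertices, k):
            if is_identifying_code(adj, frozenset(subset)):
                return set(subset)
    return None

# Toy example: the path 0-1-2-3; sensors at {0, 1, 2} uniquely identify
# every vertex by which subset of sensors "hears" it.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(minimum_identifying_code(adj))  # -> {0, 1, 2}
```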
Date Created
2022

Measuring and Enhancing Users' Privacy in Machine Learning

Description
With the boom of machine learning, a massive amount of data has been used in the training process of machine learning models. A tremendous amount of this data is user-generated data, which allows the machine learning models to produce accurate results and personalized services. Nevertheless, I recognize the importance of preserving the privacy of individuals by protecting their information in the training process. One privacy attack that affects individuals is the private attribute inference attack. The private attribute attack is the process of inferring individuals' information that they do not explicitly reveal, such as age, gender, location, and occupation. The impacts of this go beyond knowing the information, as individuals face potential risks. Furthermore, some applications need sensitive data to train the models and predict helpful insights, and figuring out how to build privacy-preserving machine learning models will increase the capabilities of these applications. However, improving privacy affects the data utility, which leads to a dilemma between privacy and utility. The utility of the data is measured by the quality of the data for different tasks. This trade-off between privacy and utility needs to be maintained to satisfy the privacy requirement and the result quality. To achieve more scalable privacy-preserving machine learning models, I investigate the privacy risks that affect individuals' private information in distributed machine learning. Even though distributed machine learning has been driven by privacy concerns, privacy issues have been identified in the literature that threaten individuals' privacy. In this dissertation, I investigate how to measure and protect individuals' privacy in centralized and distributed machine learning models. First, a privacy-preserving text representation learning approach is proposed to protect users' privacy that can be revealed from user-generated data. Second, a novel privacy-preserving text classification method for split learning is presented to improve users' privacy and retain high utility by defending against private attribute inference attacks.
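As an illustration of how such an attack (and a defense) can be measured, the sketch below trains a simple probe to infer a private attribute from hypothetical learned representations; probe accuracy near chance indicates better privacy. The leaking-dimension setup and all names are illustrative assumptions, not the dissertation's actual models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-in for learned text representations: embeddings that
# leak a binary private attribute (e.g., gender) through a few dimensions.
n, d = 2000, 32
attr = rng.integers(0, 2, size=n)          # private attribute labels
emb = rng.normal(size=(n, d))
emb[:, :4] += attr[:, None] * 1.5          # leaked signal

def attribute_inference_accuracy(X, z):
    """Attacker accuracy: how well a probe recovers the private attribute.
    Near 0.5 (chance) means the representation preserves privacy."""
    Xtr, Xte, ztr, zte = train_test_split(X, z, test_size=0.3, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(Xtr, ztr)
    return probe.score(Xte, zte)

print("leaky embeddings:", attribute_inference_accuracy(emb, attr))
# A privacy-preserving encoder would suppress the leaking signal;
# here we simulate that by zeroing out the leaking dimensions.
private = emb.copy()
private[:, :4] = 0.0
print("sanitized       :", attribute_inference_accuracy(private, attr))
```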
Date Created
2022

Risk-based Network Vulnerability Prioritization

Description
This dissertation investigates the problem of efficiently and effectively prioritizing vulnerability risks in a computer networking system. Vulnerability prioritization is one of the most challenging issues in vulnerability management, as it affects the allocation of preventive and defensive resources in a computer networking system. Due to the large number of identified vulnerabilities, it is very challenging to remediate them all in a timely fashion; thus, an efficient and effective vulnerability prioritization framework is required. To deal with this challenge, this dissertation proposes a novel risk-based vulnerability prioritization framework that integrates recent artificial intelligence techniques (i.e., neuro-symbolic computing and logic reasoning). The proposed work enhances the vulnerability management process by prioritizing high-risk vulnerabilities, refining the initial risk assessment with network constraints. This dissertation is organized as follows. The first part presents an overview of the proposed risk-based vulnerability prioritization framework, which contains two stages. The second part investigates vulnerability risk features in a computer networking system. The third part proposes the first stage of the framework, a vulnerability risk assessment model. The proposed assessment model captures the pattern of vulnerability risk features to provide a more comprehensive risk assessment for a vulnerability. The fourth part proposes the second stage of the framework, a vulnerability prioritization reasoning engine. This reasoning engine derives network constraints from interactions between vulnerabilities and network environment elements, based on network and system setups. The proposed framework thus assesses a vulnerability based on its actual security impact, refining the initial risk assessment with the network constraints.
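A toy sketch of the two-stage idea, with entirely hypothetical hosts, constraints, and weights (the dissertation's actual engine uses neuro-symbolic reasoning, not these hand-set factors): stage 1 supplies an initial risk score per vulnerability, and stage 2 refines it with network constraints such as reachability and asset criticality.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    base_risk: float   # stage 1: initial risk assessment in [0, 1]
    host: str

# Hypothetical network constraints derived from the environment: whether
# the vulnerable host is reachable from untrusted zones, and how critical
# the asset is to the organization.
REACHABLE_FROM_INTERNET = {"web-01": True, "db-01": False}
ASSET_CRITICALITY = {"web-01": 0.6, "db-01": 0.9}

def refined_risk(v: Vulnerability) -> float:
    """Stage 2 sketch: refine the initial assessment with network constraints.
    Unreachable hosts are down-weighted; critical assets are up-weighted."""
    reach = 1.0 if REACHABLE_FROM_INTERNET.get(v.host, True) else 0.4
    return v.base_risk * reach * (0.5 + 0.5 * ASSET_CRITICALITY.get(v.host, 0.5))

vulns = [Vulnerability("CVE-A", 0.9, "db-01"),
         Vulnerability("CVE-B", 0.7, "web-01")]
for v in sorted(vulns, key=refined_risk, reverse=True):
    print(v.cve_id, round(refined_risk(v), 3))
# CVE-B outranks CVE-A despite a lower base score, because its host is
# actually exposed -- the "actual security impact" refinement in miniature.
```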
Date Created
2022

Human-in-the-Loop Machine Learning Systems for Data Integration and Predictive Analytics

Description

Data integration involves the reconciliation of data from diverse data sources in order to obtain a unified data repository, upon which an end user such as a data analyst can run analytics sessions to explore the data and obtain useful insights. Supervised Machine Learning (ML) for data integration tasks such as ontology (schema) or entity (instance) matching requires several training examples in the form of manually curated, pre-labeled matching and non-matching schema concept or entity pairs, which are hard to obtain. Along similar lines, an analytics system without predictive capabilities about the impending workload can incur huge querying latencies, while leaving the onus of understanding the underlying database schema and writing a meaningful query at every step of a data exploration session on the user. In this dissertation, I will describe the human-in-the-loop ML systems that I have built for data integration and predictive analytics. I alleviate the need for extensive prior labeling by utilizing active learning (AL) for data integration. In each AL iteration, I detect the unlabeled entity or schema concept pairs that would strengthen the ML classifier and selectively query the human oracle for such labels in a budgeted fashion. Thus, I make use of human assistance for ML-based data integration. On the other hand, when the human is an end user exploring data through Online Analytical Processing (OLAP) queries, my goal is to proactively assist the human by predicting the top-K next queries that s/he is likely to be interested in. I will describe my proposed SQL predictor, a Business Intelligence (BI) query predictor, and a geospatial query cardinality estimator, with an emphasis on schema abstraction, query representation, and how I adapt the ML models for these tasks. For each system, I will discuss the evaluation metrics and how the proposed systems compare to state-of-the-art baselines on multiple datasets and query workloads.
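The budgeted AL loop can be sketched as follows, using uncertainty sampling on synthetic similarity features that stand in for entity pairs (the hidden labels play the human oracle; all names and thresholds are illustrative assumptions, not the dissertation's system).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical similarity features for candidate entity pairs (e.g., name
# and address similarities); hidden labels stand in for the human oracle.
X = rng.uniform(size=(1000, 5))
y = (X.mean(axis=1) > 0.55).astype(int)

# Seed with a few labels from each class, then spend the oracle budget.
labeled = list(np.where(y == 1)[0][:5]) + list(np.where(y == 0)[0][:5])
budget = 50

clf = LogisticRegression(max_iter=1000)
for _ in range(budget):
    clf.fit(X[labeled], y[labeled])
    pool = np.setdiff1d(np.arange(len(X)), labeled)
    # Uncertainty sampling: query the pair the classifier is least sure about.
    probs = clf.predict_proba(X[pool])[:, 1]
    labeled.append(int(pool[np.argmin(np.abs(probs - 0.5))]))

print("accuracy after AL:", clf.score(X, y))
```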

Date Created
2022

Computational Beltrami Coefficient Quantification of Retinotopic Maps in the Visual Processing Cascade

Description
This dissertation constructs a new computational processing framework to robustly and precisely quantify retinotopic maps based on their angle distortion properties. More generally, this framework solves the problem of how to robustly and precisely quantify (angle) distortions of noisy or incomplete (boundary-enclosed) two-dimensional surface-to-surface mappings. The framework builds upon the Beltrami Coefficient (BC) description of quasiconformal mappings, which directly quantifies local mapping (circles-to-ellipses) distortions between diffeomorphisms of boundary-enclosed plane domains homeomorphic to the unit disk. A new map called the Beltrami Coefficient Map (BCM) was constructed to describe distortions in retinotopic maps. The BCM can be used to fully reconstruct the original target surface (retinal visual field) of retinotopic maps. This dissertation also compared retinotopic maps in the visual processing cascade, which is a series of connected retinotopic maps responsible for visual data processing of physical images captured by the eyes. By comparing the BCM results from a large Human Connectome Project (HCP) retinotopic dataset (N=181), a new computational quasiconformal mapping description of the transformed retinal image as it passes through the cascade is proposed, one not present in the current literature. The description, applied to the HCP data, provides directly visible and quantifiable geometric properties of the cascade in a way that has not been observed before. Because retinotopic maps are generated from in vivo, noisy functional magnetic resonance imaging (fMRI), quantifying them comes with a certain degree of uncertainty. To quantify the uncertainties in the quantification results, it is necessary to generate statistical models of retinotopic maps from their BCMs and raw fMRI signals. Considering that estimating retinotopic maps from real noisy fMRI time series data using the population receptive field (pRF) model is a time-consuming process, a convolutional neural network (CNN) was constructed and trained to predict pRF model parameters from real noisy fMRI data.
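For a planar map f, the Beltrami coefficient is mu = f_zbar / f_z, with f_z = (f_x - i f_y)/2 and f_zbar = (f_x + i f_y)/2; |mu| < 1 for orientation-preserving quasiconformal maps, and |mu| encodes how infinitesimal circles are distorted into ellipses. Here is a minimal numerical sketch (finite differences on a regular grid, not the dissertation's pipeline):

```python
import numpy as np

def beltrami_coefficient(f, x, y):
    """Numerical Beltrami coefficient mu = f_zbar / f_z for a planar map
    sampled on a regular grid as complex values u + i*v. mu = 0 means the
    map is conformal; |mu| measures local circle-to-ellipse distortion."""
    f_y, f_x = np.gradient(f, y, x)   # axis 0 varies with y, axis 1 with x
    f_z = 0.5 * (f_x - 1j * f_y)
    f_zbar = 0.5 * (f_x + 1j * f_y)
    return f_zbar / f_z

# Sanity check: the affine stretch f(x, y) = 2x + i*y has constant mu = 1/3.
x = np.linspace(-1.0, 1.0, 64)
y = np.linspace(-1.0, 1.0, 64)
X, Y = np.meshgrid(x, y)
mu = beltrami_coefficient(2 * X + 1j * Y, x, y)
print(np.abs(mu).mean())  # ~0.3333
```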
Date Created
2022

Interpretable Features for Distinguishing Machine Generated News Articles.

Description
Social media has become a primary means of communication and a prominent source of information about day-to-day happenings in the contemporary world. The rise in the popularity of social media platforms in recent decades has empowered people with an unprecedented level of connectivity. Despite the benefits social media offers, it also comes with disadvantages. A significant downside to staying connected via social media is the susceptibility to falsified information, or Fake News. Easy access to social media and the lack of truth-verification tools have enabled miscreants on online platforms to spread false propaganda at scale, sowing chaos. The spread of misinformation on these platforms ultimately leads to mistrust and social unrest. Consequently, there is a need to counter the spread of misinformation, which could otherwise have a detrimental impact on society. A notable example of such a case is the misinformation spread during the COVID-19 pandemic, where coordinated misinformation campaigns misled the public on vaccination and health safety. The advancements in Natural Language Processing gave rise to sophisticated language generation models that can generate realistic-looking texts. Although the current Fake News generation process is manual, it is just a matter of time before this process gets automated at scale, generating Neural Fake News using language generation models like the Bidirectional Encoder Representations from Transformers (BERT) and the third-generation Generative Pre-trained Transformer (GPT-3). Moreover, given that the current state of fact verification is manual, there is an urgent need to develop reliable automated detection tools to counter Neural Fake News generated at scale. Existing tools demonstrate state-of-the-art performance in detecting Neural Fake News but exhibit black-box behavior. Incorporating explainability into the Neural Fake News classification task will build trust and acceptance amongst different communities and decision-makers. Therefore, the current study proposes a new set of interpretable discriminatory features. These features capture statistical and stylistic idiosyncrasies, achieving an accuracy of 82% on Neural Fake News classification. Furthermore, this research investigates essential dependency relations contributing to the classification process. Lastly, the study concludes by providing directions for future research in building explainable tools for Neural Fake News detection.
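As a flavor of what interpretable stylistic features look like, here is a small illustrative extractor (an example set of my own, not the study's actual features behind the 82% result); in a full pipeline such vectors would feed an inspectable classifier like logistic regression.

```python
import re
import numpy as np

def stylometric_features(text: str) -> np.ndarray:
    """A few interpretable stylistic features of the kind such studies
    build on (illustrative set, not the dissertation's exact features)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    n_tokens = max(len(tokens), 1)
    return np.array([
        len(tokens) / max(len(sentences), 1),    # avg sentence length
        len(set(tokens)) / n_tokens,             # type-token ratio (diversity)
        sum(len(t) for t in tokens) / n_tokens,  # avg word length
        text.count(",") / n_tokens,              # comma rate
    ])

human = "The committee met on Tuesday. After a long debate, it adjourned."
print(stylometric_features(human))
# Each dimension has a plain-language meaning, so a linear classifier's
# weights over these features can be read directly as an explanation.
```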
Date Created
2022

Building Vision and Language Models with Implicit Supervision and Increased Efficiency

Description
An important objective of AI is to understand real-world observations and build up interactive communication with people. The ability to interpret and react to perception reveals the necessity of developing such a system across both the modalities of Vision (V) and Language (L). Although there have been massive efforts on various VL tasks, e.g., Image/Video Captioning, Visual Question Answering, and Textual Grounding, very few of them focus on building VL models with increased efficiency under real-world scenarios. The main focus of this dissertation is to comprehensively investigate the largely uncharted area of efficient VL learning, aiming to build lightweight, data-efficient, and real-world applicable VL models. The proposed studies take three primary aspects of efficient VL into account. (1) Data efficiency: collecting task-specific annotations is prohibitively expensive, and manual labeling is not always attainable; techniques are developed to assist VL learning from implicit supervision, i.e., in a weakly-supervised fashion. (2) Continuing from that, efficient representation learning is further explored with increased scalability, leveraging a large image-text corpus without task-specific annotations. In particular, the knowledge distillation technique is studied for generic representation learning and proves to bring substantial performance gains over the regular representation learning schema. (3) Architectural efficiency: deploying VL models on edge devices is notoriously challenging due to their cumbersome architectures; to further extend these advancements to the real world, a novel efficient VL architecture is designed to tackle the inference bottleneck and the inconvenient two-stage training. Extensive discussions are conducted on several critical aspects that prominently influence the performance of compact VL models.
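The distillation component can be illustrated with the standard soft-target objective (a generic Hinton-style sketch under assumed temperature and weighting, not the dissertation's exact loss): a temperature-softened KL term against the teacher plus the usual cross-entropy against hard labels.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Knowledge distillation: KL between temperature-softened student and
    teacher distributions (scaled by T^2), blended with hard-label CE."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: batch of 8, 10 classes; in practice the teacher would be a
# large pretrained VL model and the student a compact one.
s = torch.randn(8, 10, requires_grad=True)
t = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
print(distillation_loss(s, t, y))
```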
Date Created
2022

Analyzing Failure Modes of Inscrutable Machine Learning Models

Description
Machine learning models, and neural networks in particular, are well known for being inscrutable in nature. From image classification tasks and generative techniques for data augmentation to general-purpose natural language models, neural networks are currently the algorithm of preference riding the top of the current artificial intelligence (AI) wave, having experienced the greatest boost in popularity above any other machine learning solution. However, due to their inscrutable design based on the optimization of millions of parameters, it is exceedingly difficult to understand how their decisions are influenced or why (and when) they fail. While some works aim at explaining neural network decisions or at making systems inherently interpretable, the great majority of state-of-the-art machine learning works prioritize performance over interpretability, effectively becoming black boxes. Hence, there is still uncertainty in the decision boundaries of these already deployed solutions, whose predictions should still be analyzed and taken with care. This becomes even more important when these models are used in sensitive scenarios such as medicine, criminal justice, settings with inherent social biases, or settings where egregious mispredictions can negatively impact the system or human trust down the line. Thus, the aim of this work is to provide a comprehensive analysis of the failure modes of state-of-the-art neural networks from three domains: large image classifiers and their misclassifications, generative adversarial networks when used for data augmentation, and transformer networks applied to structured representations and reasoning about actions and change.
Date Created
2022

Algorithmic Solutions for Socially Responsible AI

Description
Artificial intelligence (AI) has the potential to drive us towards a future in which all of humanity flourishes. It also comes with substantial risks of oppression and calamity. For example, social media platforms have knowingly and surreptitiously promoted harmful content, e.g., the rampant instances of disinformation and hate speech. Machine learning algorithms designed for combating hate speech were also found to be biased against underrepresented and disadvantaged groups. In response, researchers and organizations have been working to publish principles and regulations for the responsible use of AI. However, these conceptual principles also need to be turned into actionable algorithms to materialize AI for good. The broad aim of my research is to design AI systems that responsibly serve users and to develop applications with social impact. This dissertation seeks to develop algorithmic solutions for Socially Responsible AI (SRAI), a systematic framework encompassing responsible AI principles and algorithms, and the responsible use of AI. In particular, it first introduces an interdisciplinary definition of SRAI and the AI responsibility pyramid, in which four types of AI responsibilities are described. It then elucidates the purpose of SRAI: how to bridge from the conceptual definitions to responsible AI practice through the three human-centered operations -- to Protect and Inform users, and to Prevent negative consequences. These are illustrated in the social media domain, given that social media has revolutionized how people live but has also contributed to the rise of many societal issues. The representative tasks for the three dimensions are cyberbullying detection, disinformation detection and dissemination, and unintended bias mitigation. The means of SRAI is to develop responsible AI algorithms. Many issues (e.g., discrimination and poor generalization) can arise when AI systems are trained to improve accuracy without knowing the underlying causal mechanism. Causal inference, therefore, is intrinsically related to understanding and resolving these challenging issues in AI. As a result, this dissertation also seeks to gain an in-depth understanding of AI by looking into the precise relationships between causes and effects. For illustration, it introduces a recent work that applies deep learning to estimating causal effects and shows that causal learning algorithms can outperform traditional methods.
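To give one concrete flavor of causal-effect estimation, here is a minimal T-learner baseline on synthetic confounded data (an illustrative sketch with sklearn regressors, not the deep learning estimator referenced above): fit separate outcome models for treated and control units, then average the difference of their predictions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)

# Synthetic observational data: covariates X, a confounded binary
# treatment t, and outcome y with a true average treatment effect of 2.0.
n = 4000
X = rng.normal(size=(n, 3))
t = (rng.uniform(size=n) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
y = X @ np.array([1.0, -0.5, 0.3]) + 2.0 * t + rng.normal(scale=0.5, size=n)

# T-learner: one outcome model per treatment arm; the estimated effect for
# each unit is the difference of the two models' predictions.
m1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])
m0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])
ate_hat = np.mean(m1.predict(X) - m0.predict(X))
print(f"estimated ATE: {ate_hat:.2f} (true effect: 2.0)")
```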
Date Created
2022

An Exploration of Echo Chambers

Description

This paper addresses echo chambers, an online phenomenon wherein social media users can "only hear their own voice". In this paper I will examine the history and recent proliferation of online echo chambers. I will outline a comprehensive theory of echo chamber generation and maintenance, intended for educational value. I then conduct my own experiment based on previous echo chamber detection work.

Date Created
2022-05