Design and Analysis of Multi-Layer Decomposition for Resource Allocation

Description

A distributed framework is proposed for addressing resource sharing problems in communications, micro-economics, and various other network systems. The approach uses a hierarchical multi-layer decomposition for network utility maximization. This methodology uses central management and distributed computations to allocate resources, and in dynamic environments, it aims to efficiently respond to network changes. The main contributions include a comprehensive description of an exemplary unifying optimization framework to share resources across different operators and platforms, and a detailed analysis of the generalized methods under the assumption that the network changes are on the same time-scale as the convergence time of the algorithms employed for local computations. Assuming strong concavity and smoothness of the objective functions, and under some stability conditions for each layer, convergence rates and optimality bounds are presented. The effectiveness of the framework is demonstrated through numerical examples. Furthermore, a novel Federated Edge Network Utility Maximization (FEdg-NUM) architecture is proposed for solving large-scale distributed network utility maximization problems in a fully decentralized way. In FEdg-NUM, clients with private utilities communicate with a peer-to-peer network of edge servers. Convergence properties are examined both through analysis and numerical simulations, and potential applications are highlighted. Finally, problems in a complex stochastic dynamic environment, specifically motivated by resource sharing during disasters occurring in multiple areas, are studied. In a hierarchical management scenario, a method of applying a primal-dual algorithm in the higher layer along with deep reinforcement learning algorithms in the localities is presented. Analytical details as well as case studies such as pandemic and wildfire response are provided.
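
As a point of reference for the decomposition approach described above, the sketch below shows the standard single-layer dual-decomposition pattern for network utility maximization, with central price (dual) updates and local rate computations. The problem data and step size are illustrative; the dissertation's multi-layer, dynamic framework is considerably more general.

```python
import numpy as np

# Minimal sketch of dual decomposition for network utility maximization (NUM):
#   maximize sum_i w_i * log(x_i)   subject to   R @ x <= c,
# where R is a link/route incidence matrix. Link prices (dual variables) are
# updated centrally while each source computes its own rate locally; this is
# the basic single-layer pattern that hierarchical schemes build on.
# All problem data and the step size are illustrative.

R = np.array([[1, 1, 0],        # link 1 is used by sources 1 and 2
              [0, 1, 1]])       # link 2 is used by sources 2 and 3
c = np.array([1.0, 2.0])        # link capacities
w = np.array([1.0, 2.0, 1.0])   # utility weights, U_i(x) = w_i * log(x)

lam = np.ones(2)                # link prices (dual variables)
step = 0.05
for _ in range(2000):
    q = R.T @ lam                                    # aggregate price seen by each source
    x = np.minimum(w / np.maximum(q, 1e-3), 10.0)    # local maximizer of w_i*log(x) - q_i*x, capped
    lam = np.maximum(0.0, lam + step * (R @ x - c))  # price (dual subgradient) ascent

print("rates:", np.round(x, 3), "prices:", np.round(lam, 3))
```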
Date Created
2023
Agent

Stability and Security of Distribution Networks with High-Penetration Renewables

Description

Rapid increases in the installed amounts of Distributed Energy Resources are forcing a paradigm shift to guarantee stability, security, and economics of power distribution systems. This dissertation explores these challenges and proposes solutions to enable higher penetrations of grid-edge devices. The thesis shows that integrating Graph Signal Processing with the State Estimation formulation allows accurate estimation of voltage phasors for radial feeders under low-observability conditions using traditional measurements. Furthermore, the Optimal Power Flow formulation presented in this work can reduce the solution time of a bus injection-based convex relaxation formulation, as shown through numerical results. The enhanced real-time knowledge of the system state is leveraged to develop new approaches to cyber-security of a transactive energy market by introducing a blockchain-based Electron Volt Exchange framework that includes a distributed protocol for pricing and scheduling prosumers' production/consumption while keeping constraints and bids private. The distributed algorithm prevents power theft and false data injection by comparing prosumers' reported power exchanges to models of expected power exchanges, using measurements from grid sensors to estimate the system state. Necessary hardware security is described and integrated into underlying grid-edge devices to verify the provenance of messages to and from these devices. These preventive measures for securing energy transactions are accompanied by additional mitigation measures to maintain voltage stability in inverter-dominated networks, with local control actions designed through Lyapunov analysis to mitigate the effects of cyber-attacks and generation intermittency. The proposed formulation is applicable as long as the Volt-Var and Volt-Watt curves of the inverters can be characterized by Lipschitz constants. Simulation results demonstrate how smart inverters can mitigate voltage oscillations throughout the distribution network. Approaches are rigorously explored and validated using a combination of real distribution networks and synthetic test cases. Finally, to overcome the scarcity of real data to test distribution system algorithms, a framework is introduced to generate synthetic distribution feeders mapped to real geospatial topologies using available OpenStreetMap data. The methods illustrate how to create synthetic feeders across an entire ZIP Code, with minimal input data for any location. These stackable scientific findings conclude with a brief discussion of physical deployment opportunities to accelerate grid modernization efforts.
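
Since the stability condition above is stated in terms of Lipschitz constants of the inverter control curves, the sketch below shows a generic piecewise-linear Volt-Var droop curve and how its Lipschitz constant (the steepest segment slope) is obtained. The breakpoints are illustrative per-unit values, not those used in the dissertation.

```python
import numpy as np

# Illustrative piecewise-linear Volt-Var droop curve of the kind referenced above.
# The reactive-power setpoint q(v) is non-increasing in voltage, and the steepest
# segment slope serves as a Lipschitz constant for the curve, the quantity the
# stability condition is expressed in. Breakpoints are example per-unit values,
# not those used in the dissertation.

v_pts = np.array([0.90, 0.95, 1.05, 1.10])   # voltage breakpoints (p.u.)
q_pts = np.array([0.44, 0.00, 0.00, -0.44])  # reactive power setpoints (p.u.)

def volt_var(v):
    """Reactive power command for a measured voltage v (p.u.)."""
    return np.interp(v, v_pts, q_pts)

slopes = np.diff(q_pts) / np.diff(v_pts)     # slope of each segment
lipschitz_constant = np.max(np.abs(slopes))  # largest absolute slope
print(volt_var(0.97), lipschitz_constant)
```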
Date Created
2021
Agent

Power System Modeling Under Uncertainty With Controllable Demand

Description

With demand for increased efficiency and a smaller carbon footprint, power system operators are striving to improve their modeling, down to the individual consumer device, paving the way for higher production and consumption efficiencies and increased renewable generation without sacrificing system reliability. This dissertation explores two lines of research. The first part looks at stochastic continuous-time power system scheduling, where the goal is to better capture system ramping characteristics to address increased variability and uncertainty. The second part of the dissertation starts by developing aggregate population models for residential Demand Response (DR), focusing on storage devices, Electric Vehicles (EVs), Deferrable Appliances (DAs), and Thermostatically Controlled Loads (TCLs). Further, the characteristics of such a population aggregate are explored, such as the resemblance to energy storage devices, and particular attention is given to how such aggregate models can be considered approximately convex even if the individual resource model is not. Armed with an approximately convex aggregate model for DR, how to interface it with present-day energy markets is explored, looking at directions the market could take to better accommodate such devices, for the benefit of not only the prosumer itself but also the system as a whole.
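
A minimal sketch of the storage-like aggregation idea mentioned above, assuming a simple "virtual battery" summary of a device population; the device parameters and dynamics are illustrative and far simpler than the dissertation's models.

```python
import numpy as np

# Toy "virtual battery" aggregation: each flexible device i has power limits
# [p_min_i, p_max_i] (kW) and an energy capacity e_max_i (kWh). Summing these
# yields aggregate limits that behave like one large storage device, which is
# the storage-like resemblance mentioned above. All numbers are illustrative.

rng = np.random.default_rng(0)
n = 500
p_max = rng.uniform(1.0, 5.0, n)      # per-device charging limit (kW)
p_min = -rng.uniform(0.0, 3.0, n)     # per-device discharging limit (kW)
e_max = rng.uniform(2.0, 10.0, n)     # per-device energy capacity (kWh)

agg = {"p_min": p_min.sum(), "p_max": p_max.sum(), "e_max": e_max.sum()}

def step(e_agg, p_agg, dt_hours=1.0):
    """One step of the aggregate energy dynamics under dispatched power p_agg (kW)."""
    p_agg = np.clip(p_agg, agg["p_min"], agg["p_max"])
    return float(np.clip(e_agg + p_agg * dt_hours, 0.0, agg["e_max"]))

print(agg, step(e_agg=100.0, p_agg=300.0))
```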
Date Created
2020
Agent

Model-Based Machine Learning for the Power Grid

Description

The availability of data for monitoring and controlling the electrical grid has increased exponentially over the years in both resolution and quantity, leaving a large data footprint. This dissertation is motivated by the need for equivalent representations of grid data in lower-dimensional feature spaces so that machine learning algorithms can be employed for a variety of purposes. To achieve that without sacrificing the interpretation of the results, the dissertation leverages the physics behind power systems, well-known laws that underlie this man-made infrastructure, and the nature of the underlying stochastic phenomena that define the system operating conditions as the backbone for modeling data from the grid.

The first part of the dissertation introduces a new framework of graph signal processing (GSP) for the power grid, Grid-GSP, and applies it to voltage phasor measurements that characterize the overall system state of the power grid. Concepts from GSP are used in conjunction with known power system models in order to highlight the low-dimensional structure in the data and to present generative models for voltage phasor measurements. Applications such as identification of graphical communities, network inference, interpolation of missing data, detection of false data injection attacks, and data compression are explored, wherein Grid-GSP-based generative models are used.
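
As a point of reference, the sketch below shows the generic graph-signal-processing machinery such an approach builds on: a graph Laplacian for a toy bus network, its eigenvectors as a graph Fourier basis, and the projection of a bus-level signal onto the lowest graph frequencies. The graph and signal are illustrative and do not come from the dissertation.

```python
import numpy as np

# Generic graph-signal-processing machinery: build a graph Laplacian from a toy
# bus adjacency matrix, take its eigenvectors as the graph Fourier basis, and
# project a bus-level signal onto the lowest graph frequencies. Grid-GSP couples
# this viewpoint with power-system models; here both the graph and the signal
# are purely illustrative.

A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)   # toy 5-bus ring network
L = np.diag(A.sum(axis=1)) - A                 # combinatorial graph Laplacian

evals, U = np.linalg.eigh(L)                   # columns of U = graph Fourier basis
x = np.array([1.00, 1.01, 0.99, 1.02, 1.00])   # toy per-bus signal (e.g. |V| in p.u.)

x_hat = U.T @ x                                # graph Fourier transform of the signal
k = 2                                          # keep the k lowest graph frequencies
x_smooth = U[:, :k] @ x_hat[:k]                # low-dimensional reconstruction

print(np.round(evals, 3), np.round(x_smooth, 3))
```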

The second part of the dissertation develops a model for a joint statistical description of solar photo-voltaic (PV) power and the outdoor temperature, which can lead to better management of power generation resources so that electricity demand, such as air conditioning, and supply from solar power are always matched in the face of stochasticity. The low-rank structure inherent in solar PV power data is used for forecasting and to detect partial-shading faults in solar panels.
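
The low-rank structure mentioned above can be made concrete with a truncated SVD of a days-by-time matrix of daily PV generation profiles; the data below are synthetic and purely illustrative.

```python
import numpy as np

# Illustration of low-rank structure in solar PV power: stack daily generation
# profiles into a (days x time) matrix and truncate its SVD. A few singular
# vectors capture the common clear-sky shape; a large residual on a given day
# could flag anomalies such as partial shading. All data below are synthetic.

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 96)                       # 15-minute slots over one day
clear_sky = np.clip(np.sin(np.pi * t), 0, None) # idealized daily generation shape
days = np.array([a * clear_sky for a in rng.uniform(0.6, 1.0, 60)])
days += 0.02 * rng.standard_normal(days.shape)  # measurement noise

U, s, Vt = np.linalg.svd(days, full_matrices=False)
r = 2
approx = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]     # rank-r approximation
rel_residual = np.linalg.norm(days - approx) / np.linalg.norm(days)
print(np.round(s[:5], 2), round(float(rel_residual), 4))
```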
Date Created
2020
Agent

Unobservable False Data Injection Attacks on Power Systems

Description

Reliable operation of modern power systems is ensured by an intelligent cyber layer that monitors and controls the physical system. The data collection and transmission is achieved by the supervisory control and data acquisition (SCADA) system, and data processing is performed by the energy management system (EMS). In recent decades, the development of phasor measurement units (PMUs) has enabled wide-area real-time monitoring and control. However, both SCADA-based and PMU-based cyber layers are prone to cyber attacks that can impact system operation and lead to severe physical consequences.

This dissertation studies false data injection (FDI) attacks that are unobservable to bad data detectors (BDD). Prior work has shown that an attacker-defender bi-level linear program (ADBLP) can be used to determine the worst-case consequences of FDI attacks aiming to maximize the physical power flow on a target line. However, the results were only demonstrated on small systems assuming that they are operated with DC optimal power flow (OPF). This dissertation is divided into four parts to thoroughly understand the consequences of these attacks as well as develop countermeasures.
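
The unobservability referred to above can be illustrated with the classical condition for DC state estimation: an attack vector of the form a = Hc (with H the measurement Jacobian) leaves the bad-data-detection residual unchanged. The sketch below demonstrates this numerically on a toy H; it is a generic illustration, not the dissertation's ADBLP formulation.

```python
import numpy as np

# Classical unobservability condition for FDI attacks on DC state estimation
# (z = H @ theta + e): any attack of the form a = H @ c leaves the least-squares
# residual unchanged, so residual-based bad data detection cannot see it.
# The small H below is a random toy Jacobian, not a real system model.

rng = np.random.default_rng(2)
H = rng.standard_normal((8, 3))          # toy measurement Jacobian
theta = rng.standard_normal(3)           # true state
z = H @ theta + 0.01 * rng.standard_normal(8)

def residual_norm(z_meas):
    theta_hat = np.linalg.lstsq(H, z_meas, rcond=None)[0]   # state estimate
    return np.linalg.norm(z_meas - H @ theta_hat)           # BDD residual

c = np.array([0.5, -0.2, 0.1])           # attacker's chosen state perturbation
a = H @ c                                # unobservable attack vector

print(residual_norm(z), residual_norm(z + a))   # identical up to round-off
```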

The first part focuses on evaluating the vulnerability of large-scale power systems to FDI attacks. The solution technique introduced in prior work to solve the ADBLP is intractable on large-scale systems due to the large number of binary variables. Four new computationally efficient algorithms are presented to solve this problem.

The second part studies the vulnerability of N-1 reliable power systems operated by state-of-the-art EMSs commonly used in practice, specifically real-time contingency analysis (RTCA) and security-constrained economic dispatch (SCED). An ADBLP is formulated with detailed assumptions on the attacker's knowledge and system operations.

The third part considers FDI attacks on PMU measurements, which have strong temporal correlations due to their high data rate. It is shown that predictive filters can detect suddenly injected attacks, but not gradually ramping attacks.

The last part proposes a machine learning-based attack detection framework consisting of a support vector regression (SVR) load predictor that predicts loads by exploiting both spatial and temporal correlations, and a subsequent support vector machine (SVM) attack detector that determines the existence of attacks.
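
A hedged sketch of the two-stage pattern named above (an SVR load predictor followed by an SVM detector on the prediction residuals), using scikit-learn and synthetic data; the features, kernels, and residual-based detection feature are illustrative placeholders rather than the dissertation's design.

```python
import numpy as np
from sklearn.svm import SVR, SVC

# Sketch of the two-stage pattern named above: an SVR model predicts a load
# from neighboring (spatial) and lagged (temporal) features, and an SVM then
# classifies the prediction residual as normal vs. attacked. Features, data,
# kernels, and the residual feature are synthetic placeholders.

rng = np.random.default_rng(3)
n = 400
X = rng.standard_normal((n, 4))                  # toy spatio-temporal features
y = X @ np.array([0.5, 0.3, -0.2, 0.1]) + 0.05 * rng.standard_normal(n)

predictor = SVR(kernel="rbf", C=10.0).fit(X[:300], y[:300])
err = y[300:] - predictor.predict(X[300:])       # prediction residuals

attacked = rng.random(100) < 0.3                 # synthetic attack labels
err = err + attacked * rng.uniform(0.5, 1.0, 100)  # attacks bias the residual

detector = SVC(kernel="rbf").fit(np.abs(err[:70, None]), attacked[:70])
print(detector.score(np.abs(err[70:, None]), attacked[70:]))  # detection accuracy
```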
Date Created
2020
Agent

Creating, Validating, and Using Synthetic Power Flow Cases: A Statistical Approach to Power System Analysis

Description

Synthetic power system test cases offer a wealth of new data for research and development purposes, as well as an avenue through which new kinds of analyses and questions can be examined. This work provides a methodology for creating and validating synthetic test cases, as well as a few use cases showing how access to synthetic data enables otherwise impossible analyses.

First, the question of how synthetic cases may be generated in an automatic manner, and how synthetic samples should be validated to assess whether they are sufficiently "real", is considered. Transmission and distribution levels are treated separately, due to the different nature of the two systems. Distribution systems are constructed by sampling distributions observed in a dataset from the Netherlands. For transmission systems, only first-order statistics, such as generator limits or line ratings, are sampled statistically. The task of constructing an optimal power flow case from the sample sets is left to an optimization problem built on top of the optimal power flow formulation.
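
A minimal illustration of the statistical sampling step described above, assuming placeholder "observed" samples and simple parametric/bootstrap fits; the dissertation fits distributions to real datasets and assembles a consistent case via an OPF-based optimization, which is not reproduced here.

```python
import numpy as np

# Illustration of the statistical sampling step only: draw synthetic line
# ratings and generator limits from distributions fitted to observed samples.
# Here the "observed" data and the lognormal/bootstrap fits are placeholders;
# the dissertation fits distributions to real datasets and then assembles a
# consistent case through an OPF-based optimization, which is not shown.

rng = np.random.default_rng(4)
observed_ratings = rng.lognormal(mean=5.5, sigma=0.6, size=200)   # stand-in data (MVA)

mu, sigma = np.log(observed_ratings).mean(), np.log(observed_ratings).std()
synthetic_ratings = rng.lognormal(mu, sigma, size=50)             # ratings for new synthetic lines

observed_pmax = rng.gamma(shape=2.0, scale=150.0, size=80)        # stand-in data (MW)
synthetic_pmax = rng.choice(observed_pmax, size=20, replace=True) # bootstrap resampling

print(np.round(synthetic_ratings[:5], 1), np.round(synthetic_pmax[:5], 1))
```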

Secondly, attention is turned to some examples where synthetic models are used to inform analysis and modeling tasks. Co-simulation of transmission and multiple distribution systems is considered, where distribution feeders are allowed to couple transmission substations. Next, a distribution power flow method is parametrized to better account for losses. Numerical values for the parametrization can be statistically supported thanks to the ability to generate thousands of feeders on command.
Date Created
2019
Agent

Critical coupling and synchronized clusters in arbitrary networks of Kuramoto oscillators

Description

The Kuramoto model is an archetypal model for studying synchronization in groups of nonidentical oscillators, where oscillators are imbued with their own frequency and coupled with other oscillators through a network of interactions. As the coupling strength increases, there is a bifurcation to complete synchronization, where all oscillators move with the same frequency and show a collective rhythm. Kuramoto-like dynamics are considered a relevant model for instabilities of the AC power grid, which operates in synchrony under standard conditions but exhibits, in a state of failure, segmentation of the grid into desynchronized clusters.

In this dissertation the minimum coupling strength required to ensure total frequency synchronization in a Kuramoto system, called the critical coupling, is investigated. For coupling strengths below the critical coupling, clusters of oscillators form where oscillators within a cluster are on average oscillating with the same long-term frequency. A unified order-parameter-based approach is developed to create approximations of the critical coupling. Some of the new approximations provide strict lower bounds for the critical coupling. In addition, these approximations allow for predictions of the partially synchronized clusters that emerge in the bifurcation from the synchronized state.

Merging the order parameter approach with graph-theoretical concepts leads to a characterization of this bifurcation as a weighted graph partitioning problem on an arbitrary network, which then leads to an optimization problem that can efficiently estimate the partially synchronized clusters. Numerical experiments on random Kuramoto systems show the high accuracy of these methods. An interpretation of the methods in the context of power systems is provided.
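
For concreteness, a minimal simulation of the Kuramoto dynamics and the order parameter referred to above is sketched below; the network, natural frequencies, and coupling strength are illustrative choices, not those studied in the dissertation.

```python
import numpy as np

# Minimal Kuramoto simulation on an arbitrary network:
#   dtheta_i/dt = omega_i + K * sum_j A_ij * sin(theta_j - theta_i),
# with the order parameter r = |(1/N) * sum_j exp(i*theta_j)| used to quantify
# synchronization. The network, natural frequencies, and K are illustrative.

rng = np.random.default_rng(5)
N, K, dt, steps = 20, 1.5, 0.01, 5000
A = (rng.random((N, N)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                   # random undirected network
omega = rng.normal(0.0, 1.0, N)                  # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)             # initial phases

for _ in range(steps):
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (omega + K * coupling)  # forward Euler step

r = np.abs(np.exp(1j * theta).mean())            # order parameter in [0, 1]
print(round(float(r), 3))
```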
Date Created
2018
Agent

Techniques for Decentralized and Dynamic Resource Allocation

Description

This thesis investigates three different resource allocation problems, aiming to achieve two common goals: i) adaptivity to a fast-changing environment, ii) distribution of the computation tasks to achieve a favorable solution. The motivation for this work is the modern-era proliferation of sensors and devices in the Data Acquisition Systems (DAS) layer of the Internet of Things (IoT) architecture. To avoid congestion and enable low-latency services, limits have to be imposed on the number of decisions that can be centralized (i.e., solved in the "cloud") and/or the amount of control information that devices can exchange. This has been the motivation to develop i) a lightweight PHY Layer protocol for time synchronization and scheduling in Wireless Sensor Networks (WSNs), ii) an adaptive receiver that enables Sub-Nyquist sampling, for efficient spectrum sensing at high frequencies, and iii) an SDN scheme for resource sharing across different technologies and operators, to harmoniously and holistically respond to fluctuations in demands at the eNodeB layer.

The proposed solution for time synchronization and scheduling is a new protocol, called PulseSS, which is completely event-driven and is inspired by biological networks. The results on convergence and accuracy for locally connected networks, presented in this thesis, constitute the theoretical foundation for the protocol in terms of performance guarantees. The derived limits provided guidelines for ad-hoc solutions in the actual implementation of the protocol.
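
PulseSS itself is specified in the thesis; as a generic point of reference, the sketch below shows the pulse-coupled-oscillator mechanism that biologically inspired, event-driven synchronization schemes typically build on, with illustrative parameters.

```python
import numpy as np

# Generic pulse-coupled-oscillator synchronization, the firefly-inspired
# mechanism that event-driven protocols of this kind typically build on
# (a reference sketch, not the PulseSS protocol itself). Each node's phase
# increases linearly; on reaching 1 it "fires" and resets, and every receiver
# nudges its own phase forward by a small coupling amount.

rng = np.random.default_rng(6)
N, eps, dt = 10, 0.05, 0.001
phase = rng.uniform(0, 1, N)

for _ in range(20000):
    phase += dt                                  # free-running phase advance
    fired = phase >= 1.0
    if fired.any():
        phase[fired] = 0.0                       # firing nodes reset
        phase[~fired] = np.minimum(1.0, phase[~fired] * (1 + eps))  # pulse coupling

print(np.round(np.sort(phase), 3))               # phases tend to cluster over time
```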

The proposed receiver for Compressive Spectrum Sensing (CSS) aims at tackling the noise folding phenomenon, i.e., the accumulation of noise from the different sub-bands that are folded together, prior to sampling and baseband processing, when an analog front-end aliasing mixer is utilized.

The sensing phase design has been conducted via a utility maximization approach; thus, the derived scheme has been called Cognitive Utility Maximization Multiple Access (CUMMA).

The framework described in the last part of the thesis is inspired by stochastic network optimization tools and dynamics.

While convergence of the proposed approach remains an open problem, the numerical results here presented suggest the capability of the algorithm to handle traffic fluctuations across operators, while respecting different time and economic constraints.

The scheme has been named Decomposition of Infrastructure-based Dynamic Resource Allocation (DIDRA).
Date Created
2017
Agent

New and Provable Results for Network Inference Problems and Multi-agent Optimization Algorithms

Description

Our ability to understand networks is important to many applications, from the analysis and modeling of biological networks to analyzing social networks. Unveiling network dynamics allows us to make predictions and decisions. Moreover, network dynamics models have inspired new ideas for computational methods involving multi-agent cooperation, offering effective solutions for optimization tasks. This dissertation presents new theoretical results on network inference and multi-agent optimization, split into two parts.

The first part deals with modeling and identification of network dynamics. I study two types of network dynamics arising from social and gene networks. Based on the network dynamics, the proposed network identification method works like a "network RADAR", meaning that interaction strengths between agents are inferred by injecting a "signal" into the network and observing the resultant reverberation. In social networks, this is accomplished by stubborn agents whose opinions do not change throughout a discussion. In gene networks, genes are suppressed to create desired perturbations. The steady-states under these perturbations are characterized. In contrast to the common assumption of full-rank input, I take a weaker assumption in which low-rank input is used, to better model empirical network data. Importantly, a network is proven to be identifiable from low-rank data whose rank grows in proportion to the network's sparsity. The proposed method is applied to synthetic and empirical data, and is shown to offer superior performance compared to prior work.

The second part is concerned with algorithms on networks. I develop three consensus-based algorithms for multi-agent optimization. The first method is a decentralized Frank-Wolfe (DeFW) algorithm. The main advantage of DeFW lies in its projection-free nature: the costly projection step of traditional algorithms is replaced by a low-cost linear optimization step. I prove the convergence rates of DeFW for convex and non-convex problems. I also develop two consensus-based alternating optimization algorithms, one for least squares problems and one for non-convex problems. These algorithms exploit the problem structure for faster convergence and their efficacy is demonstrated by numerical simulations.
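
Below is a minimal sketch of the decentralized Frank-Wolfe pattern described above, under simplifying assumptions (a complete communication graph with uniform averaging weights, local least-squares objectives, and an l1-ball constraint whose linear minimization oracle is a signed coordinate vector). It illustrates the projection-free update, not the dissertation's exact DeFW algorithm or its convergence analysis.

```python
import numpy as np

# Minimal sketch of a decentralized Frank-Wolfe (projection-free) iteration:
# agents average their iterates and gradient estimates with neighbors
# (consensus), then each takes a Frank-Wolfe step using a cheap linear
# minimization oracle, here over an l1-ball where the oracle is simply a
# signed coordinate vector. The objective, network, and step sizes are
# illustrative simplifications.

rng = np.random.default_rng(7)
n_agents, d, radius = 5, 10, 2.0
A_list = [rng.standard_normal((20, d)) for _ in range(n_agents)]
b_list = [rng.standard_normal(20) for _ in range(n_agents)]

W = np.full((n_agents, n_agents), 1.0 / n_agents)   # doubly stochastic weights (complete graph)
X = np.zeros((n_agents, d))                         # local iterates

def local_grad(i, x):
    """Gradient of the local objective 0.5 * ||A_i x - b_i||^2."""
    return A_list[i].T @ (A_list[i] @ x - b_list[i])

for t in range(1, 200):
    X = W @ X                                       # consensus on iterates
    G = W @ np.stack([local_grad(i, X[i]) for i in range(n_agents)])
    gamma = 2.0 / (t + 2)                           # standard Frank-Wolfe step size
    for i in range(n_agents):
        k = int(np.argmax(np.abs(G[i])))
        s = np.zeros(d)
        s[k] = -radius * np.sign(G[i][k])           # l1-ball linear minimization oracle
        X[i] = (1 - gamma) * X[i] + gamma * s       # projection-free update

print(np.round(X.mean(axis=0), 3))
```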

I conclude this dissertation by describing future research directions.
Date Created
2017
Agent

Wireless Sensor Data Transport, Aggregation and Security

Description

Wireless sensor networks (WSNs) and the communication and security therein have been gaining further prominence in the tech industry recently, with the emergence of the so-called Internet of Things (IoT). The steps from acquiring data to making a reactive decision based on the acquired sensor measurements are complex and require careful execution. In many of these steps there are still technological gaps to fill, owing to the fact that several primitives desirable in a sensor network environment are bolted onto the networks as application-layer functionalities, rather than built into them. For several important functionalities that are at the core of IoT architectures, we have developed solutions that are analyzed and discussed in the following chapters.

The chain of steps from the acquisition of sensor samples until these samples reach a control center or the cloud, where the data analytics are performed, starts with the acquisition of the sensor measurements at the correct time and, importantly, synchronously among all sensors deployed. This synchronization has to be network-wide, including both the wired core network as well as the wireless edge devices. This thesis studies a decentralized and lightweight solution to synchronize and schedule IoT devices over wireless and wired networks adaptively, with very simple local signaling. Furthermore, measurement results have to be transported and aggregated over the same interface, requiring clever coordination among all nodes, as network resources are shared, keeping scalability and fail-safe operation in mind.

Furthermore, ensuring the integrity of measurements is a complicated task. On the one hand, cryptography can shield the network from outside attackers and is therefore the first step to take, but, due to the volume of sensors, it must rely on an automated key distribution mechanism. On the other hand, cryptography does not protect against exposed keys or inside attackers. One can, however, exploit statistical properties to detect and identify nodes that send false information, and exclude these attacker nodes from the network to avoid data manipulation. Furthermore, if data is supplied by a third party, one can apply an automated trust metric to each individual data source to decide which data to accept and consider for the aforementioned statistical tests in the first place. Monitoring the cyber and physical activities of an IoT infrastructure in concert is another topic that is investigated in this thesis.
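
A generic robust-statistics illustration of the idea above: when several sensors report the same physical quantity, a median/MAD test can flag implausible reports and exclude the offending nodes before aggregation. The threshold and data are illustrative; the thesis develops its own statistical tests and trust metrics.

```python
import numpy as np

# Generic robust-statistics illustration: when several sensors report the same
# physical quantity, a median/MAD test can flag reports that deviate
# implausibly and exclude the offending nodes before aggregation. The
# threshold and data are illustrative.

rng = np.random.default_rng(8)
true_value = 25.0
reports = true_value + 0.2 * rng.standard_normal(12)   # honest sensor reports
reports[3] += 4.0                                       # one node injects false data

med = np.median(reports)
mad = np.median(np.abs(reports - med)) + 1e-9
scores = np.abs(reports - med) / (1.4826 * mad)         # robust z-scores
suspected = np.where(scores > 3.5)[0]                   # flagged sensor indices

print(suspected, round(float(np.mean(np.delete(reports, suspected))), 2))
```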
Date Created
2017
Agent