Improved Bi-criteria Approximation for the All-or-Nothing Multicommodity Flow Problem in Arbitrary Networks

Description
This thesis addresses the following fundamental maximum throughput routing problem: Given an arbitrary edge-capacitated n-node directed network and a set of k commodities, with source-destination pairs (s_i, t_i) and demands d_i > 0, admit and route the largest possible number of commodities -- i.e., the maximum throughput -- to satisfy their demands.

The main contributions of this thesis are three-fold. First, a bi-criteria approximation algorithm is presented for this all-or-nothing multicommodity flow (ANF) problem. This algorithm is the first to achieve a constant approximation of the maximum throughput with an edge capacity violation ratio that is at most logarithmic in n, with high probability. The approach is based on a version of randomized rounding that keeps flows splittable, rather than approximating them by a single non-splittable path per commodity; this allows it to work on arbitrary directed edge-capacitated graphs, unlike most prior work on the ANF problem. The algorithm also supports a weighted throughput objective, where the benefit gained by fully satisfying the demand of commodity i is given by a weight w_i > 0.

Second, a derandomization of the algorithm is presented that maintains the same approximation bounds, using novel pessimistic estimators for Bernstein's inequality. In addition, it is shown how the framework can be adapted to achieve a polylogarithmic fraction of the maximum throughput while maintaining a constant edge capacity violation, provided the network capacity is large enough.

Lastly, one important aspect of the randomized and derandomized algorithms is their simplicity, which lends itself to efficient implementations in practice. Implementations of both the randomized rounding and the derandomized algorithms for the ANF problem are presented, demonstrating their efficiency in practice.
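
To make the rounding idea concrete, here is a minimal Python sketch of randomized rounding with splittable flows, under stated assumptions: the fractional LP solution is taken as given (the solver is out of scope), each commodity carries an admission value x_i and a fractional flow, and the scaling constant alpha is a hypothetical stand-in for the constant fixed by the analysis. This illustrates the general technique, not the thesis's exact algorithm.

```python
import random
from collections import defaultdict

def randomized_rounding(lp_solution, alpha=4.0, seed=None):
    """Round a fractional ANF solution to an integral set of admitted
    commodities, keeping each admitted commodity's splittable flow.

    lp_solution: list of (x_i, flow_i) pairs, where x_i in [0, 1] is the
    fractional admission value and flow_i maps edge -> flow amount.
    alpha: illustrative scaling-down factor (in the analysis it is chosen
    so that a concentration bound controls the capacity violation).
    """
    rng = random.Random(seed)
    admitted = []
    edge_load = defaultdict(float)
    for i, (x_i, flow_i) in enumerate(lp_solution):
        # Admit commodity i independently with probability x_i / alpha.
        if rng.random() < x_i / alpha:
            admitted.append(i)
            # An admitted commodity keeps its splittable flow pattern,
            # scaled up so that its full demand is routed.
            for edge, amount in flow_i.items():
                edge_load[edge] += amount / x_i
    return admitted, edge_load

# Toy usage with a hand-made fractional solution.
lp = [
    (0.8, {('s1', 'v'): 0.8, ('v', 't1'): 0.8}),   # commodity 0
    (0.3, {('s2', 'v'): 0.3, ('v', 't2'): 0.3}),   # commodity 1
]
admitted, load = randomized_rounding(lp, alpha=2.0, seed=1)
print(admitted, dict(load))
```

Admitting commodity i with probability x_i/alpha while scaling its flow by 1/x_i keeps the expected load on every edge at most a 1/alpha fraction of the fractional load; a concentration argument (Bernstein's inequality in the thesis) then bounds the capacity violation logarithmically with high probability.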
Date Created
2020

Comparison of Team Robot Localization by Input Difference for Deep Neural Network Model

Description
In a multi-robot system, locating a team robot is an important problem. If robots can infer the locations of their teammates through passive action recognition, without explicit communication, several advantages follow (e.g., improved security for military applications). Specifically, when team robots follow the same motion rule based on information about adjacent robots, associations arise between the robots' actions. If these associations can be analyzed, they provide clues to the positions of remote robots, making it possible to infer the locations of robots outside sensor range.

In this paper, a multi-robot system is constructed from Thymio II robotic platforms combined with Raspberry Pi controllers. Robots moving in chain formation act according to motion rules based on information obtained through passive action recognition. To find associations between robots, a regression model is built using a Deep Neural Network (DNN) with Long Short-Term Memory (LSTM), a state-of-the-art architecture.

The input to the regression model is divided into historical data, consisting of consecutive positions of the robot itself, and observed data, describing the observed neighbor robot. The historical data is sequence data, analyzed by the LSTM layer. Because the accuracy of a DNN regression model can vary with the quantity and quality of its input, three input configurations are compared: first, the amount of observed data is varied; second, the type of observed data is varied; and third, the history length is varied. A comparative model is constructed for each case, and prediction accuracies are compared to analyze the effect of the input data on the regression model. This exploration validates that these deep learning methods can reduce the communication demands of coordinated motion in multi-robot systems.
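
For illustration, the following Keras sketch shows the general shape of such a two-branch regression model. Every layer size, input dimension, and name here is an assumption made for the example, not the thesis's exact architecture.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

HISTORY_LEN = 10   # number of past positions (assumed)
POS_DIM = 2        # (x, y) coordinates
OBS_DIM = 4        # features of the observed robot (assumed)

# Branch 1: the robot's own past positions, analyzed by an LSTM layer.
hist_in = keras.Input(shape=(HISTORY_LEN, POS_DIM), name="history")
h = layers.LSTM(32)(hist_in)

# Branch 2: a flat vector describing the currently observed neighbor.
obs_in = keras.Input(shape=(OBS_DIM,), name="observed")
o = layers.Dense(16, activation="relu")(obs_in)

# Merge both branches and regress the remote robot's (x, y) position.
merged = layers.concatenate([h, o])
z = layers.Dense(32, activation="relu")(merged)
out = layers.Dense(POS_DIM, name="remote_position")(z)

model = keras.Model(inputs=[hist_in, obs_in], outputs=out)
model.compile(optimizer="adam", loss="mse")

# Train on synthetic data just to show the expected tensor shapes.
X_hist = np.random.rand(256, HISTORY_LEN, POS_DIM)
X_obs = np.random.rand(256, OBS_DIM)
y = np.random.rand(256, POS_DIM)
model.fit([X_hist, X_obs], y, epochs=1, verbose=0)
```

Varying HISTORY_LEN and OBS_DIM in such a model corresponds directly to the three input situations compared in the thesis.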
Date Created
2020

Maximizing Routing Throughput with Applications to Delay Tolerant Networks

Description
Many applications require efficient data routing and dissemination in Delay Tolerant Networks (DTNs) in order to maximize the throughput of data in the network, such as providing healthcare to remote communities and spreading related information in Mobile Social Networks (MSNs). In this thesis, the feasibility of using boats in the Amazon Delta riverine region as data mule nodes is investigated, and a robust data routing algorithm based on a fountain code approach is designed to ensure fast and timely data delivery in the face of unpredictable boat delays, break-downs, and high transmission failures. The scenario of providing healthcare in the Amazon Delta region is then generalized to the All-or-Nothing (Splittable) Multicommodity Flow (ANF) problem, and a polynomial-time constant approximation algorithm, based on a randomized rounding scheme, is designed for the maximum throughput routing problem, with applications to DTNs. In an MSN, message content is closely related to users' preferences and can be used to significantly impact the performance of data dissemination. An interest- and content-based algorithm is developed in which the contents of the messages, along with the network structural information, are taken into consideration when making message relay decisions, in order to maximize data throughput in an MSN. Extensive experiments show the effectiveness of the proposed data dissemination algorithm in comparison with state-of-the-art techniques.
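
To illustrate the fountain-code ingredient, here is a small Python sketch of a generic LT-style encoder and peeling decoder (a textbook construction, not the thesis's specific scheme; the uniform degree choice is a simplification of the soliton distributions used in practice). Because any sufficiently large collection of encoded packets suffices to reconstruct the data, such codes tolerate the unpredictable delays and transmission failures of a boat-based DTN.

```python
import random

def lt_encode(blocks, n_packets, seed=0):
    """Generate LT-style packets: each packet XORs a random subset of the
    integer source blocks (degree chosen uniformly here for simplicity)."""
    rng = random.Random(seed)
    k = len(blocks)
    packets = []
    for _ in range(n_packets):
        degree = rng.randint(1, k)
        idxs = frozenset(rng.sample(range(k), degree))
        payload = 0
        for i in idxs:
            payload ^= blocks[i]
        packets.append((idxs, payload))
    return packets

def lt_decode(packets, k):
    """Peeling decoder: repeatedly resolve degree-1 packets and substitute
    each recovered block into the remaining packets."""
    pending = [[set(idxs), payload] for idxs, payload in packets]
    known = {}
    progress = True
    while progress and len(known) < k:
        progress = False
        for pkt in pending:
            for i in [b for b in pkt[0] if b in known]:
                pkt[0].discard(i)          # substitute already-known blocks
                pkt[1] ^= known[i]
            if len(pkt[0]) == 1:           # degree one: a block is recovered
                i = pkt[0].pop()
                if i not in known:
                    known[i] = pkt[1]
                    progress = True
    return [known.get(i) for i in range(k)]

# Usage: encode 4 integer blocks, decode from the received packets.
blocks = [0x12, 0x34, 0x56, 0x78]
packets = lt_encode(blocks, n_packets=12, seed=7)
print(lt_decode(packets, k=4))  # None marks any block the peeling missed
```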
Date Created
2018

Covering arrays: algorithms and asymptotics

Description
Modern software and hardware systems are composed of a large number of components. Often different components of a system interact with each other in unforeseen and undesired ways to cause failures. Covering arrays are a useful mathematical tool for testing all possible t-way interactions among the components of a system.

The two major issues concerning covering arrays are explicit construction of a covering array, and exact or approximate determination of the covering array number---the minimum size of a covering array. Although these problems have been investigated extensively for the last couple of decades, in this thesis we present significant improvements on both of these questions using tools from the probabilistic method and randomized algorithms.

First, a series of improvements is developed on the previously known upper bounds on covering array numbers. An estimate for the discrete Stein-Lovász-Johnson bound is derived and the Stein-Lovász-Johnson bound is improved upon using an alteration strategy. Then group actions on the set of symbols are explored to establish two asymptotic upper bounds on covering array numbers that are tighter than any of the presently known bounds.

Second, an algorithmic paradigm, called the two-stage framework, is introduced for covering array construction. A number of concrete algorithms from this framework are analyzed, and it is shown that they outperform current methods in the range of parameter values that are of practical relevance. In some cases, a reduction in the number of tests by more than 50% is achieved.
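
As a toy illustration of the two-stage idea (not one of the thesis's analyzed algorithms), the Python sketch below samples a random array in the first stage and then patches every still-uncovered t-way interaction with a dedicated row in the second stage; practical variants pack many interactions into each patch row.

```python
import itertools, random

def uncovered(array, k, v, t):
    """Return the t-way interactions not yet covered by `array`. An
    interaction is (columns, symbols): t column indices plus one symbol
    per column."""
    need = {(cols, syms)
            for cols in itertools.combinations(range(k), t)
            for syms in itertools.product(range(v), repeat=t)}
    for row in array:
        for cols in itertools.combinations(range(k), t):
            need.discard((cols, tuple(row[c] for c in cols)))
    return need

def two_stage_ca(k, v, t, n_random, seed=0):
    """Stage 1: sample n_random random rows. Stage 2: patch each remaining
    uncovered interaction with a fresh row (one interaction per row here,
    for simplicity)."""
    rng = random.Random(seed)
    array = [[rng.randrange(v) for _ in range(k)] for _ in range(n_random)]
    for cols, syms in uncovered(array, k, v, t):
        row = [rng.randrange(v) for _ in range(k)]
        for c, s in zip(cols, syms):
            row[c] = s
        array.append(row)
    return array

ca = two_stage_ca(k=6, v=2, t=2, n_random=5)
# The patch stage covers everything, so 0 interactions remain uncovered.
print(len(ca), "rows;", len(uncovered(ca, 6, 2, 2)), "uncovered")
```

The analysis in the thesis concerns how to balance the two stages: a larger random stage leaves fewer interactions to patch, and the optimal trade-off yields arrays smaller than either stage alone would produce.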

Third, the Lovász local lemma is applied to covering perfect hash families to obtain an upper bound on covering array numbers that is the tightest of all known bounds. This bound leads to a Moser-Tardos type algorithm that employs linear algebraic computation over finite fields to construct covering arrays. In some cases, this algorithm outperforms currently used methods by more than an 80% margin.

Finally, partial covering arrays are introduced to investigate a few practically relevant relaxations of the covering requirement. Using probabilistic methods, bounds are obtained on the sizes of partial covering arrays that are significantly smaller than the corresponding bounds for full covering arrays. Also, randomized algorithms are provided that construct such arrays in expected polynomial time.
Date Created
2016

Covering arrays: generation and post-optimization

Description
Exhaustive testing is generally infeasible except in the smallest of systems. Research has shown that testing the interactions among fewer (up to 6) components is generally sufficient while retaining the capability to detect up to 99% of defects, which leads to a substantial decrease in the number of tests. Covering arrays are combinatorial objects that guarantee that every interaction is tested at least once.

In the absence of direct constructions, forming small covering arrays is generally an expensive computational task. Algorithms to generate covering arrays have been studied extensively, yet no single algorithm provides the smallest solution. More recently, research has been directed toward a new technique called post-optimization: such algorithms take an existing covering array and attempt to reduce its size.

This thesis presents a new idea for post-optimization by representing covering arrays as graphs. Some properties of these graphs are established, and the results are contrasted with existing post-optimization algorithms. The idea is then generalized to close variants of covering arrays, with surprising results that in some cases reduce the size by 30%. Applications of the method to generation and test prioritization are studied, and some interesting results are reported.
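
To make the notion of post-optimization concrete, here is a minimal Python sketch of its simplest form, redundant-row removal; the graph-based method developed in the thesis is more sophisticated, but the goal is the same: shrink an existing covering array without losing coverage.

```python
import itertools

def covers(row, cols):
    """The t-way interaction that `row` exhibits on the columns `cols`."""
    return (cols, tuple(row[c] for c in cols))

def post_optimize(array, t):
    """Repeatedly drop any row all of whose t-way interactions are also
    covered by some other row of the array."""
    k = len(array[0])
    rows = list(array)
    changed = True
    while changed:
        changed = False
        for i, row in enumerate(rows):
            others = rows[:i] + rows[i + 1:]
            elsewhere = {covers(r, cols)
                         for r in others
                         for cols in itertools.combinations(range(k), t)}
            mine = {covers(row, cols)
                    for cols in itertools.combinations(range(k), t)}
            if mine <= elsewhere:      # every interaction covered elsewhere
                rows.pop(i)
                changed = True
                break                  # restart the scan after a deletion
    return rows

# Example: a 2-way covering array with a duplicated (hence redundant) row.
ca = [[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 0]]
print(len(post_optimize(ca, t=2)))  # -> 4
```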
Date Created
2015

Infinite cacheflow: a rule-caching solution for software defined networks

Description
New OpenFlow switches support a wide range of network applications, such as firewalls, load balancers, routers, and traffic monitoring. While ternary content addressable memory (TCAM) allows switches to process packets at high speed based on multiple header fields, today's commodity switches support just thousands to tens of thousands of forwarding rules. To allow finer-grained policies on this hardware, efficient ways are needed to support the abstraction of a switch with arbitrarily large rule tables. To that end, a hardware-software hybrid switch is designed that relies on rule caching to provide large rule tables at low cost. Unlike traditional caching solutions, individual rules are neither cached in isolation (which would violate rule dependencies) nor compressed (which would destroy the per-rule traffic counts). Instead, long dependency chains are "spliced" to cache smaller groups of rules while preserving the semantics of the network policy. The proposed hybrid switch design satisfies three criteria: (1) responsiveness, allowing rapid changes to the cache with minimal effect on traffic throughput; (2) transparency, faithfully supporting native OpenFlow semantics; and (3) correctness, caching rules while preserving the semantics of the original policy. The evaluation of the hybrid switch on large rule tables suggests that it can effectively expose the benefits of both hardware and software switches to the controller and to the applications running on top of it.
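
The following Python sketch illustrates the dependency bookkeeping that such a rule cache must respect (an illustration of the problem setting, not the splicing algorithm itself): rules are ternary matches ordered by priority, a rule depends on every higher-priority rule whose match overlaps its own, and a naive cache must pull in a rule's dependencies wholesale, which is exactly the blow-up that splicing with small cover rules avoids.

```python
import itertools

def overlaps(m1, m2):
    """Two ternary matches overlap if no bit position forces them apart:
    '*' matches anything; '0' vs '1' is a conflict."""
    return all(a == b or a == '*' or b == '*' for a, b in zip(m1, m2))

def dependency_graph(rules):
    """rules: list of (match, action) pairs, highest priority first.
    Rule j depends on each higher-priority rule i whose match overlaps its
    own: caching j without i would let j answer packets meant for i."""
    deps = {j: [] for j in range(len(rules))}
    for i, j in itertools.combinations(range(len(rules)), 2):
        if overlaps(rules[i][0], rules[j][0]):
            deps[j].append(i)
    return deps

def naive_cache(rules, deps, weights, capacity):
    """Greedy baseline: cache rules by descending traffic weight, pulling
    in each rule's direct dependencies (a full solution would close these
    transitively). Splicing improves on exactly this dependency blow-up."""
    cached = set()
    for j in sorted(range(len(rules)), key=lambda j: -weights[j]):
        group = set(deps[j]) | {j}
        if len(cached | group) <= capacity:
            cached |= group
    return cached

# Three rules over 3-bit headers; rule 1 depends on more specific rule 0.
rules = [("01*", "fwd1"), ("0**", "fwd2"), ("1**", "drop")]
deps = dependency_graph(rules)
print(naive_cache(rules, deps, weights=[5, 20, 1], capacity=2))  # {0, 1}
```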
Date Created
2014

Robust and efficient medium access despite jamming

Description
Interference constitutes a major challenge for communication networks operating over a shared medium where availability is imperative. This dissertation studies the problem of designing and analyzing efficient medium access (MAC) protocols that are robust against strong adversarial jamming. More specifically, four MAC protocols (JADE, ANTIJAM, COMAC, and SINRMAC), which aim to achieve high throughput despite jamming activity under a variety of network and adversary models, are presented. We also propose a self-stabilizing leader election protocol, SELECT, that can effectively elect a leader in the network in the presence of a strong adversary. Our protocols not only deal with internal interference without exact knowledge of the number of participants in the network, but are also robust to unintentional or intentional external interference, e.g., due to co-existing networks or jammers. We model the external interference by a powerful adaptive and/or reactive adversary that can jam a (1 − ε)-portion of the time steps, where 0 < ε ≤ 1 is an arbitrary constant. The adversary is adaptive and has complete knowledge of the entire protocol history; if it is also reactive, it additionally uses carrier sensing to make informed decisions about when to disrupt communications. Among the proposed protocols, JADE, ANTIJAM, and COMAC achieve Θ(1)-competitive throughput in the presence of this strong adversary, while SINRMAC is the first attempt to apply the SINR (Signal-to-Interference-plus-Noise Ratio) model to the design of robust medium access protocols. The derived principles are also useful for building applications on top of the MAC layer; as an exemplary such study we present SELECT, which addresses leader election, one of the most fundamental tasks in distributed computing.
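
A core mechanism behind jamming-resistant MAC protocols of this kind is a multiplicative update of each node's access probability, driven only by what the node senses on the channel. The sketch below shows that loop in simplified form; the constants, the simulation scaffolding, and the update rule details are illustrative assumptions, not any of the four protocols verbatim.

```python
import random

GAMMA = 1.05        # multiplicative step (assumed)
P_MAX = 0.25        # cap on an individual access probability (assumed)

def mac_round(nodes, jammed, rng):
    """One time step: each node transmits with its own probability, then
    updates that probability from the sensed channel state alone."""
    senders = [n for n in nodes if rng.random() < n["p"]]
    busy = jammed or len(senders) > 0
    success = (not jammed) and len(senders) == 1
    for n in nodes:
        if not busy:                   # idle channel: be more aggressive
            n["p"] = min(P_MAX, n["p"] * GAMMA)
        elif not success:              # collision or jamming: back off
            n["p"] /= GAMMA
    return success

rng = random.Random(0)
nodes = [{"p": 0.05} for _ in range(20)]
eps = 0.2                              # adversary jams a (1-eps)-portion
successes = sum(
    mac_round(nodes, jammed=(rng.random() > eps), rng=rng)
    for _ in range(10_000))
print("successful transmissions:", successes)
```

Raising probabilities on idle rounds and lowering them on busy ones keeps the aggregate access probability near a constant, which is what lets the unjammed ε-fraction of steps be used at Θ(1)-competitive efficiency.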
Date Created
2012

Detecting sybil nodes in static and dynamic networks

Description
Peer-to-peer systems are known to be vulnerable to the Sybil attack. The lack of a central authority allows a malicious user to create many fake identities (called Sybil nodes) that pretend to be independent honest nodes, with the goal of influencing the system on the malicious user's behalf. In order to detect the Sybil nodes and prevent the attack, a reputation system is maintained for the nodes, built by observing their interactions with their peers. This construction makes every node part of a distributed authority that keeps records on the reputation and behavior of the nodes. Records of interactions between nodes are broadcast by the interacting nodes, and honest reporting proves to be a Nash equilibrium for correct (non-Sybil) nodes. This research argues that, in realistic communication-schedule scenarios, simple graph-theoretic queries, such as the computation of strongly connected components and densest subgraphs, help expose the nodes most likely to be Sybil, which are then proved to be Sybil or not through a direct test executed by some peers.
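
Here is a small Python sketch of the graph-theoretic screening step (illustrative only; the reputation mechanics and the direct test are outside its scope, and it assumes the networkx library): build a directed graph from the broadcast interaction records, compute its strongly connected components, and flag the nodes of any unusually dense component as candidates for direct Sybil testing.

```python
import networkx as nx  # assumed available

def sybil_candidates(interactions, density_threshold=2.0):
    """interactions: iterable of (reporter, reported) directed edges from
    broadcast interaction records. Flags every node in a strongly connected
    component whose internal edge density (edges per node) exceeds the
    threshold -- a heuristic screen, to be confirmed by the direct test."""
    g = nx.DiGraph(interactions)
    flagged = set()
    for comp in nx.strongly_connected_components(g):
        if len(comp) < 2:
            continue
        sub = g.subgraph(comp)
        if sub.number_of_edges() / len(comp) > density_threshold:
            flagged |= comp
    return flagged

# Toy example: identities a-d interact in a dense clique, as colluding
# Sybil identities controlled by one user tend to; x and y are ordinary.
edges = [(u, v) for u in "abcd" for v in "abcd" if u != v]
edges += [("a", "x"), ("x", "y")]
print(sybil_candidates(edges))  # -> {'a', 'b', 'c', 'd'}
```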
Date Created
2010