Transparent Integration of IoT devices in a 5G ORAN Network

Description

The fifth generation (5G) of cellular communication is migrating towards higher frequencies to cater to the demand for higher data rate applications. However, in higher frequency ranges, like mmWave and terahertz, physical blockage poses a significant challenge to the large-scale deployment of this new technology. Reconfigurable Intelligent Surfaces (RISs) have shown promising potential in extending signal coverage and overcoming signal blockages in wireless communications. However, RIS integration in networks requires tight coordination between network nodes, resulting in barriers to the wide adoption of RISs and similar IoT devices. To this end, this work introduces a practical study of integrating a remotely controlled RIS into an Open RAN (ORAN) compliant 5G private network with minimal software stack modifications. This thesis proposes using cloud technologies and ORAN features, such as the Radio Intelligent Controller (RIC) and eXternal Applications (xApps), to coordinate the RIS transparently with the operation of a 5G base station. The proposed framework has been integrated into a proof-of-concept hardware prototype with a 5.8 GHz RIS. Experimental results demonstrate that the framework can accurately control RIS beam steering from within the network. The proposed framework shows promising potential for near real-time RIS beamforming control with minimal power consumption overhead.
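As a rough illustration of the control path described above, the sketch below shows how an xApp-style controller might translate a desired steering angle into quantized RIS element states and push them to a remote RIS controller. The element count, phase resolution, REST endpoint, and codebook mapping are illustrative assumptions, not the interface used in this work.

```python
import json
import math
import urllib.request

# Hypothetical parameters: a 16-element RIS row at 5.8 GHz with half-wavelength
# spacing and 2-bit phase quantization. None of these values come from the thesis.
NUM_ELEMENTS = 16
WAVELENGTH = 3e8 / 5.8e9
SPACING = WAVELENGTH / 2
PHASE_BITS = 2

def steering_phases(angle_deg):
    """Per-element phase shifts (radians) approximating the linear phase gradient
    that steers the reflected beam toward angle_deg."""
    k = 2 * math.pi / WAVELENGTH
    return [(-k * n * SPACING * math.sin(math.radians(angle_deg))) % (2 * math.pi)
            for n in range(NUM_ELEMENTS)]

def quantize(phases):
    """Map continuous phases onto the nearest discrete RIS element state."""
    step = 2 * math.pi / (2 ** PHASE_BITS)
    return [int(round(p / step)) % (2 ** PHASE_BITS) for p in phases]

def push_config(states, ris_url="http://ris-controller.local/config"):
    """POST the element states to a (hypothetical) RIS controller endpoint;
    in the ORAN setting this call would live inside an xApp's control loop."""
    req = urllib.request.Request(
        ris_url,
        data=json.dumps({"states": states}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req, timeout=1.0)

if __name__ == "__main__":
    print(quantize(steering_phases(30.0)))   # states that steer toward 30 degrees
```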
Date Created
2024

R^2IM: Reliable and Robust Intersection Manager Robust to Rogue Vehicles

Description

At modern-day intersections, traffic lights and stop signs assist human drivers in crossing the intersection safely. Traffic congestion in urban road networks is a costly problem that affects all major cities. Efficiently operating intersections depends largely on the accuracy and precision of human drivers, engendering a lingering uncertainty about attaining safety and high throughput. To improve the efficiency of the existing traffic network and mitigate the effects of human error at the intersection, many studies have proposed autonomous, intelligent transportation systems. These studies often involve utilizing connected autonomous vehicles, implementing a supervisory system, or both. Implementing a supervisory system is relatively more popular due to the security concerns of vehicle-to-vehicle communication. Even though supervisory systems are a step in the right direction for security, many supervisory systems' safe operation relies solely on the promise that connected data are correct, making system reliability difficult to achieve. To increase fault tolerance and decrease the effects of position uncertainty, this thesis proposes the Reliable and Robust Intersection Manager (R2IM), a supervisory system that uses a separate surveillance system to dependably detect vehicles present in the intersection, creating data redundancy for more accurate scheduling of connected autonomous vehicles. Adding the surveillance system ensures that the temporal safety buffers between arrival times of connected autonomous vehicles are maintained. This guarantees that connected autonomous vehicles can traverse the intersection safely in the event of large vehicle controller error, a single rogue car entering the intersection, or a Sybil attack. To test the proposed system against these fault models, MATLAB® simulations were created to compare the functionality of R2IM with the state-of-the-art supervisory system, the Robust Intersection Manager. Though R2IM is less efficient than the Robust Intersection Manager, it considers more fault models: the Robust Intersection Manager failed to maintain safety in the event of large vehicle controller errors and rogue cars, whereas R2IM resulted in zero collisions.
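A minimal sketch of the redundancy idea follows: claimed arrivals are cross-checked against surveillance detections before the temporal safety buffer is verified. The data layout, buffer value, and checks are illustrative assumptions rather than the actual R2IM scheduler.

```python
from dataclasses import dataclass

SAFETY_BUFFER_S = 1.0  # hypothetical temporal buffer between scheduled arrivals

@dataclass
class Claim:
    vehicle_id: str
    arrival_time: float   # time the vehicle reports it will enter the intersection

def verified_claims(claims, detected_ids):
    """Keep only claims whose vehicle is also seen by the surveillance system,
    discarding rogue or spoofed (Sybil) entries with no physical detection."""
    return [c for c in claims if c.vehicle_id in detected_ids]

def schedule_is_safe(claims):
    """Check that consecutive scheduled arrivals are separated by the safety buffer."""
    times = sorted(c.arrival_time for c in claims)
    return all(t2 - t1 >= SAFETY_BUFFER_S for t1, t2 in zip(times, times[1:]))

if __name__ == "__main__":
    claims = [Claim("a", 10.0), Claim("b", 11.2), Claim("ghost", 11.3)]
    seen = {"a", "b"}                      # vehicles the surveillance system detects
    ok = schedule_is_safe(verified_claims(claims, seen))
    print("safe" if ok else "reschedule required")
```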
Date Created
2019

Constructing Locating Arrays with Constraints using Constraint Satisfaction

Description

When designing screening experiments for many factors, two problems quickly arise. The first is that testing all the different combinations of the factors and interactions creates an experiment that is too large to conduct in a practical amount of time. One way this problem is solved is with a combinatorial design called a locating array (LA), which can efficiently identify the factors and interactions most influential on a response. The second problem is how to ensure that prohibited combinations of factor levels never appear in any test, a requirement that is common in real-world systems. This research proposes a solution to the second problem using constraint satisfaction.
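A small sketch of the constraint-satisfaction view of the second problem: prohibited combinations are expressed as partial assignments, and any candidate test that matches one is rejected. The factors, levels, and forbidden rules below are hypothetical examples, not taken from the thesis.

```python
from itertools import product

# Hypothetical example: three factors, each with its levels.
FACTORS = {"os": ["linux", "windows"], "db": ["sqlite", "postgres"], "cache": ["on", "off"]}

# Prohibited combinations (constraints) expressed as partial assignments.
FORBIDDEN = [{"os": "windows", "db": "postgres"}]

def satisfies_constraints(test):
    """A test (full assignment) is admissible if it matches no forbidden pattern."""
    return not any(all(test[f] == v for f, v in rule.items()) for rule in FORBIDDEN)

def admissible_tests():
    """Enumerate only the factor-level combinations that respect the constraints."""
    keys = list(FACTORS)
    for combo in product(*(FACTORS[k] for k in keys)):
        test = dict(zip(keys, combo))
        if satisfies_constraints(test):
            yield test

if __name__ == "__main__":
    for t in admissible_tests():
        print(t)
```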
Date Created
2019-05

Smoothed Airtime Linear Tuning and Optimized REACT with Multi-hop Extensions

Description

Medium access control (MAC) is a fundamental problem in wireless networks. In ad-hoc wireless networks especially, many of the performance and scaling issues these networks face can be attributed to their use of the core IEEE 802.11 MAC protocol: the distributed coordination function (DCF). Smoothed Airtime Linear Tuning (SALT) is a new contention window tuning algorithm proposed to address some of the deficiencies of DCF in 802.11 ad-hoc networks. SALT works alongside a new, optimized user-level implementation of REACT, a distributed resource allocation protocol, to ensure that each node secures the amount of airtime allocated to it by REACT. The algorithm accomplishes this by tuning the contention window size parameter that is part of the 802.11 backoff process. SALT converges more tightly on airtime allocations than a contention window tuning algorithm from previous work, which increases fairness in transmission opportunities and reduces jitter more than either 802.11 DCF or the other tuning algorithm. REACT and SALT were also extended to the multi-hop flow scenario with the introduction of a new airtime reservation algorithm. With a reservation in place, multi-hop TCP throughput actually increased when running SALT and REACT compared to 802.11 DCF, and the combination of protocols still maintained its fairness and jitter advantages. All experiments were performed on a wireless testbed, not in simulation.
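The sketch below illustrates the general shape of such a tuning step: smooth the measured airtime, compare it with the REACT allocation, and move the contention window linearly in proportion to the error. The smoothing factor, gain, and bounds are assumed values for illustration and are not the actual SALT update rule.

```python
CW_MIN, CW_MAX = 15, 1023     # 802.11-style contention window bounds
ALPHA = 0.8                   # smoothing factor for the airtime estimate (assumed)
GAIN = 200.0                  # linear gain mapping airtime error to CW change (assumed)

def tune_cw(cw, smoothed_airtime, measured_airtime, allocated_airtime):
    """One tuning step: smooth the airtime measurement, then move the contention
    window linearly in proportion to the error against the REACT allocation.
    Using less airtime than allocated -> shrink CW (contend more aggressively);
    using more -> grow CW (back off)."""
    smoothed = ALPHA * smoothed_airtime + (1 - ALPHA) * measured_airtime
    error = smoothed - allocated_airtime          # positive if over-using airtime
    new_cw = int(cw + GAIN * error)
    return max(CW_MIN, min(CW_MAX, new_cw)), smoothed

if __name__ == "__main__":
    cw, smoothed = 63, 0.25
    for measured in (0.30, 0.28, 0.22, 0.20):     # fraction of airtime used
        cw, smoothed = tune_cw(cw, smoothed, measured, allocated_airtime=0.25)
        print(cw, round(smoothed, 3))
```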
Date Created
2018

Graph Analysis of Arctic Ice

Description

Polar ice masses can be valuable indicators of trends in global climate. In an effort to better understand the dynamics of Arctic ice, this project analyzes sea ice concentration anomaly data collected over gridded regions (cells) and builds graphs based upon high correlations between cells. These graphs offer the opportunity to use metrics such as clustering coefficients and connected components to isolate representative trends in ice masses. Based upon this analysis, the structure of sea ice graphs differs at a statistically significant level from random graphs, and several regions show erratically decreasing trends in sea ice concentration.
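A compact sketch of the graph construction and the two metrics mentioned above, assuming the anomaly data is arranged as one time series per grid cell; the array shape, correlation threshold, and use of networkx are illustrative assumptions.

```python
import numpy as np
import networkx as nx

def build_ice_graph(anomalies, threshold=0.7):
    """anomalies: (num_cells, num_weeks) array of sea ice concentration anomalies.
    Connect two cells when their Pearson correlation exceeds the threshold."""
    corr = np.corrcoef(anomalies)
    g = nx.Graph()
    g.add_nodes_from(range(anomalies.shape[0]))
    rows, cols = np.where(np.triu(corr, k=1) > threshold)   # upper triangle only
    g.add_edges_from(zip(rows.tolist(), cols.tolist()))
    return g

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.standard_normal((100, 520))        # placeholder anomaly series
    g = build_ice_graph(data)
    print(nx.average_clustering(g), nx.number_connected_components(g))
```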
Date Created
2013-05

Exploration of Sea Ice Concentrations using Graph Metrics

Description

As an example of "big data," we consider a repository of Arctic sea ice concentration data collected from satellites over the years 1979-2005. The data is represented by a graph, where vertices correspond to measurement points, and an edge is inserted between two vertices if the Pearson correlation coefficient between them exceeds a threshold. We investigate new questions about the structure of the graph related to betweenness, closeness centrality, vertex degrees, and characteristic path length. We also investigate whether an offset of weeks and years in graph generation results in a cosine similarity value that differs significantly from expected values. Finally, we relate the computational results to trends in Arctic ice.
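The following sketch shows how the listed metrics and the cosine-similarity comparison between two graphs over the same cell set might be computed with networkx; the node alignment and the random graphs in the usage example are assumptions for illustration only.

```python
import numpy as np
import networkx as nx

def graph_metrics(g):
    """Betweenness, closeness, degree, and characteristic path length summaries."""
    comp = g.subgraph(max(nx.connected_components(g), key=len))   # largest component
    return {
        "mean_betweenness": float(np.mean(list(nx.betweenness_centrality(g).values()))),
        "mean_closeness": float(np.mean(list(nx.closeness_centrality(g).values()))),
        "mean_degree": float(np.mean([d for _, d in g.degree()])),
        "char_path_length": nx.average_shortest_path_length(comp),
    }

def adjacency_cosine(g1, g2):
    """Cosine similarity between the flattened adjacency matrices of two graphs
    built over the same node set (e.g., generated with an offset in weeks or years)."""
    nodes = sorted(g1.nodes())
    a = nx.to_numpy_array(g1, nodelist=nodes).ravel()
    b = nx.to_numpy_array(g2, nodelist=nodes).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

if __name__ == "__main__":
    g1 = nx.erdos_renyi_graph(50, 0.1, seed=1)    # stand-ins for two ice graphs
    g2 = nx.erdos_renyi_graph(50, 0.1, seed=2)
    print(graph_metrics(g1))
    print(adjacency_cosine(g1, g2))
```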
Date Created
2015-05

Testbed Implementation of the Meta-MAC Protocol

Description

The meta-MAC protocol is a systematic and automatic method to dynamically combine any set of existing Medium Access Control (MAC) protocols into a single higher level MAC protocol. The meta-MAC concept was proposed more than a decade ago, but until now has not been implemented in a testbed environment due to a lack of suitable hardware. This thesis presents a proof-of-concept implementation of the meta-MAC protocol by utilizing a programmable radio platform, the Wireless MAC Processor (WMP), in combination with a host-level software module. The implementation of this host module, and the requirements and challenges faced therein, is the primary subject of this thesis. This implementation can combine, with certain constraints, a set of protocols each represented as an extended finite state machine for easy programmability. To illustrate the combination principle, protocols of the same type but with varying parameters are combined in a testbed environment, in what is termed parameter optimization. Specifically, a set of TDMA protocols with differing slot assignments are experimentally combined. This experiment demonstrates that the meta-MAC implementation rapidly converges to non-conflicting TDMA slot assignments for the nodes, with similar results to those in simulation. This both validates that the presented implementation properly implements the meta-MAC protocol, and verifies that the meta-MAC protocol can be as effective on real wireless hardware as it is in simulation.
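A highly simplified sketch of the combination principle: each component protocol votes on whether to transmit in a slot, and weights are multiplicatively penalized when a component's decision disagrees with the decision the channel feedback suggests was ideal. The update rule, learning rate, and the toy TDMA components are illustrative assumptions, not the exact meta-MAC formulation or the WMP implementation.

```python
import math

class MetaMAC:
    """Simplified weighted combination of component MAC protocols (illustrative)."""

    def __init__(self, protocols, eta=0.5):
        self.protocols = protocols          # each: callable slot -> True (transmit) / False
        self.weights = [1.0] * len(protocols)
        self.eta = eta                      # learning rate (assumed value)

    def decide(self, slot):
        """Transmit if the weighted vote of component decisions favors transmitting."""
        votes = [p(slot) for p in self.protocols]
        mass = sum(w for w, v in zip(self.weights, votes) if v)
        return mass >= 0.5 * sum(self.weights), votes

    def feedback(self, votes, ideal):
        """Penalize protocols whose decision differed from the ideal decision
        inferred from channel feedback (success / idle / collision)."""
        self.weights = [w * math.exp(-self.eta * (v != ideal))
                        for w, v in zip(self.weights, votes)]

if __name__ == "__main__":
    # Two hypothetical TDMA components with different slot assignments in a 4-slot frame.
    tdma = lambda offset: (lambda slot: slot % 4 == offset)
    meta = MetaMAC([tdma(0), tdma(2)])
    for slot in range(8):
        decision, votes = meta.decide(slot)
        meta.feedback(votes, ideal=(slot % 4 == 2))   # pretend slot 2 is the free slot
        print(slot, decision, [round(w, 2) for w in meta.weights])
```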
Date Created
2016-05

Policy Conflict Management in Distributed SDN Environments

Description

The ease of programmability in Software-Defined Networking (SDN) makes it a great platform for the implementation of various initiatives that involve application deployment, dynamic topology changes, and decentralized network management in a multi-tenant data center environment. However, implementing security solutions in such an environment is fraught with policy conflicts and consistency issues, and the hardness of this problem is affected by the distribution scheme used for the SDN controllers.

In this dissertation, a formalism for flow rule conflicts in SDN environments is introduced. This formalism is realized in Brew, a security policy analysis framework implemented on an OpenDaylight SDN controller. Brew has comprehensive conflict detection and resolution modules to ensure that no two flow rules in a distributed SDN-based cloud environment conflict at any layer, thereby assuring consistent, conflict-free security policy implementation and preventing information leakage. Techniques for global prioritization of flow rules in a decentralized environment are presented, using which all SDN flow rule conflicts are recognized and classified. Strategies for unassisted resolution of these conflicts are also detailed. Alternatively, if administrator input is desired to resolve conflicts, a novel visualization scheme is implemented to help administrators view the conflicts in an aesthetically pleasing manner. The correctness, feasibility, and scalability of the Brew proof-of-concept prototype are demonstrated. Flow rule conflict avoidance using a buddy address space management technique is studied as an alternative to conflict detection and resolution in highly dynamic cloud systems attempting to implement SDN-based Moving Target Defense (MTD) countermeasures.
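A minimal sketch of pairwise flow rule conflict detection follows, flagging rule pairs whose match fields overlap while their actions differ; the rule representation and the exact-match treatment of fields are simplifications and do not reflect Brew's full formalism or conflict taxonomy.

```python
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    priority: int
    action: str                                   # e.g. "allow" or "drop"
    match: dict = field(default_factory=dict)     # field -> required value (wildcard if absent)

def matches_overlap(a, b):
    """Two match sets overlap if no shared field demands different values."""
    return all(a.match[f] == b.match[f] for f in a.match.keys() & b.match.keys())

def find_conflicts(rules):
    """Report rule pairs whose matches overlap but whose actions differ."""
    conflicts = []
    for i, r1 in enumerate(rules):
        for r2 in rules[i + 1:]:
            if matches_overlap(r1, r2) and r1.action != r2.action:
                conflicts.append((r1, r2))
    return conflicts

if __name__ == "__main__":
    rules = [
        FlowRule(10, "allow", {"src": "10.0.0.0/24", "dport": 80}),
        FlowRule(5,  "drop",  {"dport": 80}),
    ]
    for a, b in find_conflicts(rules):
        print("conflict:", a, "vs", b)
```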
Date Created
2017

Pingo: A Framework for the Management of Storage of Intermediate Outputs of Computational Workflows

Description

Scientific workflows allow scientists to easily model and express an entire set of data processing steps, typically as a directed acyclic graph (DAG). These scientific workflows consist of a collection of tasks that usually take a long time to compute and that produce a considerable amount of intermediate datasets. Because of the nature of scientific exploration, a scientific workflow can be modified and re-run multiple times, or new scientific workflows may be created that make use of past intermediate datasets. Storing intermediate datasets has the potential to save computation time. Since storage is limited, one main problem that needs a solution is determining which intermediate datasets should be saved at creation time in order to minimize the computational time of the workflows to be run in the future. This thesis proposes the design and implementation of Pingo, a system that is capable of managing the computations of scientific workflows as well as the storage, provenance, and deletion of intermediate datasets. Pingo uses the history of workflows submitted to the system to predict the datasets most likely to be needed in the future, and it subjects the decision of dataset deletion to the optimization of the computational time of future workflows.
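One way such a retention decision could be scored is sketched below: rank intermediate datasets by expected recomputation time saved per unit of storage, with the reuse probability estimated from past workflow submissions. The scoring formula, fields, and greedy selection are assumptions for illustration, not Pingo's actual policy.

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    size_gb: float          # storage cost if retained
    compute_hours: float    # time needed to regenerate it from its inputs
    reuse_prob: float       # estimated from past workflow submissions

def retention_score(d):
    """Expected compute hours saved per GB of storage spent."""
    return (d.reuse_prob * d.compute_hours) / d.size_gb

def select_for_storage(datasets, capacity_gb):
    """Greedily keep the highest-scoring datasets that fit within capacity."""
    kept, used = [], 0.0
    for d in sorted(datasets, key=retention_score, reverse=True):
        if used + d.size_gb <= capacity_gb:
            kept.append(d)
            used += d.size_gb
    return kept

if __name__ == "__main__":
    pool = [Dataset("aligned_reads", 120, 8.0, 0.7),
            Dataset("qc_report", 1, 0.2, 0.9),
            Dataset("raw_merge", 300, 3.0, 0.2)]
    print([d.name for d in select_for_storage(pool, capacity_gb=200)])
```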
Date Created
2017

Analysis and design of native file system enhancements for storage class memory

Description

As persistent non-volatile memory solutions become integrated in the computing ecosystem and landscape, traditional commodity file systems architected and developed for traditional block-I/O-based memory solutions must be reevaluated. A majority of commodity file systems have been architected and designed with the goal of managing data on non-volatile storage devices such as hard disk drives (HDDs) and solid state drives (SSDs). HDDs and SSDs are attached to a computing system via a controller or I/O hub, often referred to as the southbridge. The point of HDD and SSD attachment creates multiple levels of translation for any data managed by the CPU that must be stored in non-volatile memory (NVM) on an HDD or SSD. Storage Class Memory (SCM) devices provide the ability to store data at the CPU and DRAM level of a computing system. A novel set of modifications to the ext2 and ext4 commodity file systems to address the needs of SCM will be presented and discussed. An in-depth analysis of many existing file systems, from multiple sources, will be presented, along with an analysis to identify key modifications and extensions that would be necessary to execute a file system on SCM devices. From this analysis, modifications and extensions have been applied to the FAT commodity file system, and key functional tests will be presented to demonstrate the operation and execution of the file system extensions.
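To make the contrast concrete, the sketch below compares a conventional write that traverses the file system and block layer with a memory-mapped store of the kind an SCM-aware file system path would expose; Python's mmap stands in for byte-addressable persistent memory, and the example is not the thesis's FAT/ext modifications.

```python
import mmap
import os
import tempfile

def write_via_block_io(path, payload):
    """Conventional path: data passes through the file system and block layer."""
    with open(path, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())     # force write-back to the block device

def write_via_mapping(path, payload):
    """SCM-style path: map the file and store bytes directly, byte-addressable,
    as an SCM-aware file system would expose persistent memory to applications."""
    with open(path, "r+b") as f:
        f.truncate(len(payload))
        m = mmap.mmap(f.fileno(), len(payload))
        m[:] = payload
        m.flush()                # analogous to flushing CPU caches to the persistence domain
        m.close()

if __name__ == "__main__":
    data = b"hello, storage class memory"
    block_path = os.path.join(tempfile.gettempdir(), "block_demo.bin")
    map_path = os.path.join(tempfile.gettempdir(), "map_demo.bin")
    write_via_block_io(block_path, data)
    open(map_path, "wb").close()          # the file must exist before it can be mapped
    write_via_mapping(map_path, data)
    print(open(map_path, "rb").read())
```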
Date Created
2016