MADM-Based Smart Parking Guidance Algorithm

Description

In smart parking environments, choosing suitable parking facilities with various attributes to satisfy certain criteria is an important decision problem. Based on multiple attribute decision making (MADM) theory, this study proposed a smart parking guidance algorithm that considers three representative decision factors (i.e., walk duration, parking fee, and the number of vacant parking spaces) together with the varied preferences of drivers. In this paper, the expected number of vacant parking spaces is regarded as an important attribute reflecting how difficult it is to find an available parking space, and a queueing theory-based method was proposed to estimate this expected number for candidate parking facilities with different capacities, arrival rates, and service rates. The effectiveness of the MADM-based parking guidance algorithm was investigated and compared with a blind search-based approach in comprehensive scenarios with various distributions of parking facilities, traffic intensities, and user preferences. Experimental results show that the proposed MADM-based algorithm is effective in choosing suitable parking resources that satisfy users' preferences. Furthermore, the newly proposed Markov chain-based availability attribute proved more effective at representing the availability of parking spaces than the arrival rate-based availability attribute proposed in existing research.
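As an illustration of how such an algorithm can be assembled, the sketch below estimates the expected number of vacant spaces with an Erlang-loss (M/M/c/c) queueing model, in which every parking space is a server, and ranks candidate facilities by simple additive weighting. The abstract does not specify the exact queueing model or MADM aggregation method, so both choices, along with all attribute names and figures, are illustrative assumptions.

```python
import math

def expected_vacant(capacity, arrival_rate, service_rate):
    # Treat the facility as an M/M/c/c (Erlang-loss) queue: with offered
    # load a = lambda/mu, the stationary occupancy distribution is
    # p_n proportional to a**n / n!, truncated at n = capacity.
    a = arrival_rate / service_rate
    w = [a ** n / math.factorial(n) for n in range(capacity + 1)]
    occupied = sum(n * p for n, p in enumerate(w)) / sum(w)
    return capacity - occupied

def rank_facilities(facilities, prefs):
    # Simple additive weighting (SAW): scale each attribute to [0, 1],
    # invert the cost attributes (walk duration, fee), and combine the
    # scaled values with the driver's preference weights.
    def scaled(values, cost):
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0
        return [(hi - v) / span if cost else (v - lo) / span for v in values]

    walk = scaled([f["walk_min"] for f in facilities], cost=True)
    fee = scaled([f["fee_per_h"] for f in facilities], cost=True)
    vac = scaled([expected_vacant(f["capacity"], f["arrivals_per_h"],
                                  f["departures_per_space_h"])
                  for f in facilities], cost=False)
    scores = [prefs["walk"] * w + prefs["fee"] * c + prefs["vacancy"] * v
              for w, c, v in zip(walk, fee, vac)]
    return max(zip(scores, (f["name"] for f in facilities)))

facilities = [
    {"name": "Garage A", "walk_min": 3, "fee_per_h": 4.0,
     "capacity": 120, "arrivals_per_h": 50, "departures_per_space_h": 0.5},
    {"name": "Lot B", "walk_min": 8, "fee_per_h": 1.5,
     "capacity": 60, "arrivals_per_h": 20, "departures_per_space_h": 0.4},
]
print(rank_facilities(facilities, {"walk": 0.4, "fee": 0.3, "vacancy": 0.3}))
```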

Date Created
2017-12-13
Agent

An Approach to Recovery of Critical Data of Smart Cities Using Blockchain

Description

Smart cities are the next wave of rapid expansion of the Internet of Things (IoT). A smart city is a designation given to a city that incorporates information and communication technologies (ICT) to enhance the quality and performance of urban services, such as energy, transportation, healthcare, communications, entertainment, education, e-commerce, business, city management, and utilities, and to reduce resource consumption, wastage, and overall costs. The overarching aim of a smart city is to enhance the quality of living for its residents and businesses through technology. In a large ecosystem like a smart city, many organizations and companies collaborate with the smart city government to improve it, and these entities may need to store and share critical data with each other. A smart city has several thousand smart devices and sensors deployed across the city, so storing critical data in a secure and scalable manner is an important issue. While current cloud-based services, such as Splunk and ELK (Elasticsearch-Logstash-Kibana), offer a centralized view of and control over the IT operations of these smart devices, they are still prone to insider attacks, data tampering, and rogue administrator problems. In this thesis, we present an approach that uses blockchain to recover critical data from unauthorized modifications. We evaluate the approach with extensive simulations based on complex adaptive system theory, and we prove mathematically that it always detects an unauthorized modification of critical data.
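The thesis's concrete design is not spelled out in the abstract; a minimal sketch of the underlying idea, assuming the approach anchors a hash chain over critical records so that recomputing the chain exposes and localizes tampering, might look like this:

```python
import hashlib, json

def record_hash(record, prev_hash):
    # chain each record to its predecessor so no record can be altered
    # without changing every later hash
    payload = json.dumps(record, sort_keys=True).encode() + prev_hash
    return hashlib.sha256(payload).digest()

class CriticalDataLedger:
    # Append-only hash chain over critical records; the hashes stand in
    # for state the approach would anchor on a blockchain, out of reach
    # of a rogue administrator.
    def __init__(self):
        self.hashes = [b"genesis"]

    def append(self, record):
        self.hashes.append(record_hash(record, self.hashes[-1]))

    def first_tampered(self, stored_records):
        # Recompute the chain over the (possibly modified) local store and
        # compare against the trusted hashes; the first mismatch localizes
        # the unauthorized modification and identifies what to recover.
        h = b"genesis"
        for i, rec in enumerate(stored_records):
            h = record_hash(rec, h)
            if h != self.hashes[i + 1]:
                return i
        return None

ledger = CriticalDataLedger()
store = [{"sensor": "s1", "reading": 21.5}, {"sensor": "s2", "reading": 19.0}]
for rec in store:
    ledger.append(rec)
store[1]["reading"] = 99.9            # unauthorized modification
print(ledger.first_tampered(store))   # -> 1
```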
Date Created
2017
Agent

From Understanding Telephone Scams to Implementing Authenticated Caller ID Transmission

Description

The telephone network is used by almost every person in the modern world. With the rise of Internet access to the PSTN, the telephone network today is rife with telephone spam and scams. Spam calls are significant annoyances for telephone users; unlike email spam, they demand immediate attention. They are not only significant annoyances but also result in significant financial losses in the economy. According to complaint data from the FTC, complaints about illegal calls have reached record numbers in recent years. Americans lose billions to fraud due to malicious telephone communication, despite various efforts to subdue telephone spam, scams, and robocalls.

In this dissertation, a study of what causes users to fall victim to telephone scams is presented, and it demonstrates that impersonation is at the heart of the problem. Most solutions today rely primarily on gathering offending caller IDs; however, they do not work effectively when the caller ID has been spoofed. Due to the lack of authentication in the PSTN caller ID transmission scheme, fraudsters can manipulate the caller ID to impersonate a trusted entity and further a variety of scams. To address this fundamental problem, a novel architecture and method to authenticate the transmission of the caller ID are proposed. The solution enables a security indicator that can provide an early warning to help users stay vigilant against telephone impersonation scams, as well as a foundation for existing and future defenses to stop unwanted telephone communication based on caller ID information.
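The dissertation's actual transmission scheme is not described in the abstract; as a toy illustration of authenticated caller ID, the sketch below binds the calling number to a timestamp with a shared-key HMAC. A deployed design would more plausibly use certificates, and the key and number here are hypothetical.

```python
import hashlib, hmac, time

SHARED_KEY = b"provisioned-out-of-band"   # hypothetical key material

def attest_caller_id(number, key=SHARED_KEY):
    # the originating side binds the caller ID to a timestamp and signs it
    ts = int(time.time())
    tag = hmac.new(key, f"{number}|{ts}".encode(), hashlib.sha256).hexdigest()
    return number, ts, tag

def verify_caller_id(number, ts, tag, key=SHARED_KEY, max_age_s=30):
    # the terminating side rejects stale or forged attestations; a passing
    # check can drive a "verified caller" security indicator for the callee
    if time.time() - ts > max_age_s:
        return False
    expected = hmac.new(key, f"{number}|{ts}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

print(verify_caller_id(*attest_caller_id("+14805551234")))   # -> True
```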
Date Created
2017
Agent

Evaluation of Storage Systems for Big Data Analytics

Description

Recent trends in big data storage systems show a shift from disk-centric models to memory-centric models. The primary challenges faced by these systems are speed, scalability, and fault tolerance. It is interesting to investigate the performance of these two models with respect to some big data applications. This thesis studies the performance of Ceph (a disk-centric model) and Alluxio (a memory-centric model) and evaluates whether a hybrid model provides any performance benefits for big data applications. To this end, an application, TechTalk, is created that uses Ceph to store data and Alluxio to perform data analytics. The functionalities of the application include offline lecture storage, live recording of classes, content analysis, and reference generation. The knowledge base of videos is constructed by analyzing the offline data using machine learning techniques. This training dataset provides knowledge to construct the index of an online stream. The indexed metadata enables students to search, view, and access the relevant content. The performance of the application is benchmarked in different use cases to demonstrate the benefits of the hybrid model.
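At its core, the hybrid model pairs a durable disk tier with a fast memory tier. Below is a minimal sketch of that read-through pattern, with `disk_tier` standing in for any object exposing hypothetical `read`/`write` methods; the thesis's actual integration of Ceph and Alluxio is not detailed in the abstract.

```python
class HybridStore:
    # Read-through pattern behind the hybrid model: analytics reads are
    # served from the memory tier (Alluxio's role) and fall back to the
    # persistent disk tier (Ceph's role) on a miss; writes go to disk
    # first for durability, then warm the cache.
    def __init__(self, disk_tier):
        self.memory = {}
        self.disk = disk_tier

    def get(self, key):
        if key not in self.memory:
            self.memory[key] = self.disk.read(key)   # cache miss
        return self.memory[key]

    def put(self, key, value):
        self.disk.write(key, value)
        self.memory[key] = value
```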
Date Created
2017
Agent

MobiVPN: Towards a Reliable and Efficient Mobile VPN

Description

A Virtual Private Network (VPN) is the traditional approach for an end-to-end secure connection between two endpoints. Most existing VPN solutions are intended for wired networks with reliable connections. In a mobile environment, network connections are less reliable, and devices experience intermittent network disconnections due to either switching from one network to another or hitting a gap in coverage while roaming. These disruptive events degrade traditional VPN performance, resulting in possible termination of applications, data loss, and reduced productivity. Mobile VPNs bridge the gap between what users and applications expect from a wired network and the realities of mobile computing.

In this dissertation, MobiVPN was designed and developed by modifying the widely used OpenVPN to meet the requirements of a mobile VPN. The aim was for MobiVPN to be a reliable and efficient VPN for mobile environments. To achieve these objectives, MobiVPN introduces the following features: 1) Fast and lightweight VPN session resumption, with which MobiVPN decreases the time it takes to resume a VPN tunnel after a mobility event by an average of 97.19% compared to OpenVPN. 2) Persistence of the TCP sessions of tunneled applications, allowing them to survive VPN tunnel disruptions caused by a gap in network coverage no matter how long the gap is. MobiVPN also has mechanisms to suspend and resume TCP flows during and after a network disconnection, with a packet buffering option to maintain the TCP sending rate. MobiVPN provides fast resumption of TCP flows after reconnection, with improved TCP performance when multiple disconnections occur: an average 30.08% increase in throughput in the experiments where buffering was used, and an average 20.93% increase in throughput for flows that were not buffered. 3) A fine-grained, flow-based adaptive compression that allows MobiVPN to treat each tunneled flow independently, so that compression can be turned on for compressible flows and turned off for incompressible ones. The experiments showed that the flow-based adaptive compression outperformed OpenVPN's compression options in terms of effective throughput, data reduction, and fewer compression operations.
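Feature 3 lends itself to a short sketch. The per-flow policy below compresses a flow only while its packets actually shrink and periodically re-probes flows judged incompressible; the threshold, probe interval, and 1-byte framing header are illustrative assumptions, not MobiVPN's actual wire format (a matching decoder would key off that header byte).

```python
import zlib

class FlowCompressor:
    # Per-flow adaptive compression in the spirit of MobiVPN's mechanism:
    # keep compressing a flow only while its packets actually shrink, and
    # periodically re-probe flows that were judged incompressible.
    def __init__(self, ratio_threshold=0.9, probe_every=64):
        self.threshold = ratio_threshold
        self.probe_every = probe_every
        self.enabled = True
        self.sent = 0

    def encode(self, payload: bytes) -> bytes:
        self.sent += 1
        if self.enabled or self.sent % self.probe_every == 0:
            compressed = zlib.compress(payload)
            ratio = len(compressed) / max(len(payload), 1)
            self.enabled = ratio < self.threshold  # re-evaluate per packet
            if self.enabled:
                return b"\x01" + compressed        # header: compressed
        return b"\x00" + payload                   # header: passthrough

flow = FlowCompressor()
print(len(flow.encode(b"aaaa" * 500)))   # compressible: far below 2000 bytes
```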
Date Created
2017
Agent

Moving Target Defense Using Live Migration of Docker Containers

Description

Today's information technology systems keep addresses, software stacks, and other configuration unchanged for long periods of time. This paves the way for malicious attacks on the system through unknown vulnerabilities, and attackers can take advantage of the situation to plan their attacks with sufficient time. To protect a system from this threat, Moving Target Defense is required, in which the attack surface is dynamically changed, making the system difficult to strike.

In this thesis, I use live migration of Docker containers via CRIU (Checkpoint/Restore In Userspace) for moving target defense. There are 460K Dockerized applications, a 3100% growth over two years [1], and over 4 billion containers have been pulled so far from Docker Hub. Docker is supported by a large and fast-growing community of contributors and users; as an example, there are 125K Docker Meetup members worldwide. As industry rapidly adopts Docker, a moving target defense solution involving containers is beneficial for being robust and fast. A proof-of-concept implementation is included for studying the performance attributes of Docker migration.
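The thesis's exact migration pipeline is not given in the abstract; below is a minimal sketch of checkpoint-based container migration, assuming Docker's experimental `checkpoint` support with CRIU installed on both (hypothetical) hosts, and a container created from the same image already present on the destination.

```python
import subprocess

def migrate(container, ckpt, src, dst, ckpt_dir="/tmp/ckpt"):
    # Freeze the container's process state with CRIU on the source host,
    # ship the checkpoint image, and resume on the destination.
    ssh = lambda host, *cmd: subprocess.run(("ssh", host) + cmd, check=True)
    # checkpoint on the source (requires Docker's experimental mode + CRIU)
    ssh(src, "docker", "checkpoint", "create",
        "--checkpoint-dir", ckpt_dir, container, ckpt)
    # copy the checkpoint image to the destination host
    subprocess.run(("rsync", "-a", f"{src}:{ckpt_dir}/", f"{dst}:{ckpt_dir}/"),
                   check=True)
    # resume from the checkpoint; an identically named container created
    # from the same image is assumed to exist on the destination
    ssh(dst, "docker", "start",
        "--checkpoint-dir", ckpt_dir, "--checkpoint", ckpt, container)
```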

Attack detection uses a scenario built on definitions of normal events on servers. By defining normal system activities and collecting syslog messages on a centralized server, attacks can be detected by extracting abnormal activities, and such a detection can serve as the trigger for the Docker migration.
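A minimal sketch of that trigger, with a hypothetical whitelist standing in for the thesis's definitions of normal events:

```python
def should_migrate(syslog_lines, normal_events):
    # flag any event outside the whitelist of defined normal activities;
    # a hit is the trigger for the container migration sketched above
    return any(not any(ev in line for ev in normal_events)
               for line in syslog_lines)

normal = ("session opened for user deploy", "CRON", "systemd")  # hypothetical
print(should_migrate(["sshd: session opened for user deploy"], normal))  # False
print(should_migrate(["kernel: unexpected module loaded"], normal))      # True
```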
Date Created
2017
Agent

Policy Conflict Management in Distributed SDN Environments

Description

The ease of programmability in Software-Defined Networking (SDN) makes it a great platform for implementing initiatives that involve application deployment, dynamic topology changes, and decentralized network management in a multi-tenant data center environment. However, implementing security solutions in such an environment is fraught with policy conflicts and consistency issues, and the hardness of this problem depends on the distribution scheme used for the SDN controllers.

In this dissertation, a formalism for flow rule conflicts in SDN environments is introduced. This formalism is realized in Brew, a security policy analysis framework implemented on an OpenDaylight SDN controller. Brew has comprehensive conflict detection and resolution modules to ensure that no two flow rules in a distributed SDN-based cloud environment conflict at any layer, thereby assuring consistent, conflict-free security policy implementation and preventing information leakage. Techniques for global prioritization of flow rules in a decentralized environment are presented, with which all SDN flow rule conflicts are recognized and classified, and strategies for unassisted resolution of these conflicts are detailed. Alternatively, if administrator input is desired to resolve conflicts, a novel visualization scheme helps administrators view the conflicts in an aesthetic manner. The correctness, feasibility, and scalability of the Brew proof-of-concept prototype are demonstrated. Finally, flow rule conflict avoidance using a buddy address space management technique is studied as an alternative to conflict detection and resolution in highly dynamic cloud systems implementing SDN-based Moving Target Defense (MTD) countermeasures.
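Brew's actual formalism is richer than the abstract can show; as a toy illustration of the detection step, the sketch below flags pairs of rules whose heavily simplified, two-field match spaces intersect while their actions differ:

```python
from dataclasses import dataclass

@dataclass
class FlowRule:
    priority: int
    src: str      # exact value or "*" wildcard; a simplification of
    dst: str      # real OpenFlow match fields
    action: str   # "allow" or "deny"

def overlap(a: FlowRule, b: FlowRule) -> bool:
    # two match fields intersect if either is a wildcard or they are equal
    return all(x == "*" or y == "*" or x == y
               for x, y in ((a.src, b.src), (a.dst, b.dst)))

def conflicts(rules):
    # flag rule pairs whose match spaces intersect but whose actions
    # differ, the shadowing/correlation-style conflicts a framework like
    # Brew must detect before resolving them by global priority
    return [(r1, r2)
            for i, r1 in enumerate(rules)
            for r2 in rules[i + 1:]
            if overlap(r1, r2) and r1.action != r2.action]

rules = [FlowRule(10, "10.0.0.1", "*", "deny"),
         FlowRule(5, "*", "10.0.0.9", "allow")]
print(conflicts(rules))   # the two rules intersect with opposite actions
```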
Date Created
2017
Agent

Multi-Tenancy and Sub-Tenancy Architecture in Software-as-a-Service (SaaS)

Description

Multi-tenancy architecture (MTA) is often used in Software-as-a-Service (SaaS), and the central idea is that multiple tenant applications can be developed using components stored in the SaaS infrastructure. Recently, MTA has been extended so that a tenant application can have its own sub-tenants, with the tenant application acting like a SaaS infrastructure; in other words, MTA is extended to STA (Sub-Tenancy Architecture). In STA, each tenant application not only needs to develop its own functionalities, but also needs to prepare an infrastructure that allows its sub-tenants to develop customized applications. This dissertation formulates eight models for STA and proposes a Variant Point-based customization model to help tenants and sub-tenants customize tenant and sub-tenant applications. In addition, this dissertation introduces crowdsourcing as the core of the STA component development life cycle. To discover fit tenant developers or components to help build and compose new components, dynamic and static ranking models are proposed. Further, a rank computation architecture is presented to deal with the case where the number of tenants and components becomes huge. Finally, an experiment is performed to show that the ranking models and the rank computation architecture work as designed.
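The abstract does not define the ranking models; one plausible reading, sketched below, blends a static rank (the long-run quality of a component or tenant developer) with a dynamic rank derived from recent outcomes. The weighting and field names are assumptions.

```python
def blended_rank(static_score, recent_uses, recent_successes, alpha=0.6):
    # Hypothetical blend of a static rank with a dynamic rank computed
    # from recent outcomes; alpha sets how much long-run history
    # outweighs recent behavior.
    dynamic = recent_successes / recent_uses if recent_uses else 0.0
    return alpha * static_score + (1 - alpha) * dynamic

# a component with strong history but poor recent fit loses rank
print(blended_rank(0.9, recent_uses=10, recent_successes=3))   # -> 0.66
```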
Date Created
2017
Agent

Pingo: A Framework for the Management of Storage of Intermediate Outputs of Computational Workflows

Description

Scientific workflows allow scientists to easily model and express all the steps of a data processing pipeline, typically as a directed acyclic graph (DAG). These workflows are made up of a collection of tasks that usually take a long time to compute and produce a considerable amount of intermediate datasets. Because of the nature of scientific exploration, a workflow can be modified and re-run multiple times, and new workflows may be created that make use of past intermediate datasets. Storing intermediate datasets therefore has the potential to save computation time. Since storage is limited, one main problem that needs a solution is determining which intermediate datasets to save at creation time in order to minimize the computational time of workflows run in the future. This thesis proposes the design and implementation of Pingo, a system that manages the computation of scientific workflows as well as the storage, provenance, and deletion of intermediate datasets. Pingo uses the history of workflows submitted to the system to predict the datasets most likely to be needed in the future, and it subjects dataset-deletion decisions to the optimization of the computational time of future workflows.
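Pingo's actual prediction and optimization machinery is not given in the abstract; below is a greedy sketch of the retention decision it motivates, where `reuse_prob` stands in for a probability predicted from workflow history, and all names and numbers are illustrative.

```python
def select_datasets(candidates, budget_bytes):
    # Greedy retention in the spirit of Pingo's goal: keep the
    # intermediate datasets with the highest expected compute time saved
    # per byte stored, subject to the storage budget.
    ranked = sorted(candidates, reverse=True,
                    key=lambda d: d["reuse_prob"] * d["compute_s"] / d["bytes"])
    kept, used = [], 0
    for d in ranked:
        if used + d["bytes"] <= budget_bytes:
            kept.append(d["name"])
            used += d["bytes"]
    return kept

candidates = [
    {"name": "aligned_reads", "reuse_prob": 0.8, "compute_s": 7200, "bytes": 5e9},
    {"name": "tmp_join",      "reuse_prob": 0.1, "compute_s": 600,  "bytes": 4e9},
]
print(select_datasets(candidates, budget_bytes=6e9))   # -> ['aligned_reads']
```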
Date Created
2017
Agent

An Investigation of Machine Learning for Password Evaluation

Description

Passwords are ubiquitous and are poised to stay that way due to their relative usability, security, and deployability compared with alternative authentication schemes. Unfortunately, humans struggle with some of the assumptions and requirements that are necessary for truly strong passwords. As administrators push users towards password complexity and diversity, users still end up applying predictable mangling patterns to old passwords and reusing the same passwords across services; users even inadvertently converge on the same patterns to a surprising degree, making an attacker's job easier. This work explores using machine learning techniques to pick out strong passwords from weak ones in a dataset of 10 million passwords, based on how structurally similar each password is to the rest of the set.
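The thesis's exact features and models are not given in the abstract; one simple notion of structural similarity, sketched below, collapses each password to its character-class template and scores how common that template is within the set:

```python
import math
from collections import Counter

def structure(pw):
    # collapse a password to its character-class template,
    # e.g. "Passw0rd!" -> "ULLLLDLLS"
    def cls(c):
        if c.islower(): return "L"
        if c.isupper(): return "U"
        if c.isdigit(): return "D"
        return "S"
    return "".join(cls(c) for c in pw)

def rarity(passwords):
    # score each password by how unusual its structure is within the set;
    # low scores mark the structurally common (predictable) passwords
    counts = Counter(structure(p) for p in passwords)
    n = len(passwords)
    return {p: -math.log(counts[structure(p)] / n) for p in passwords}

scores = rarity(["Passw0rd!", "Summer19!", "Winter19!", "tr0ub4dor&3"])
print(min(scores, key=scores.get))   # "Summer19!"/"Winter19!" share a template
```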
Date Created
2016
Agent