A Novel Historical Safety Metric for Evaluating Road Networks

Description
37,461 automobile accident fatalities occurred in the United States in 2016 ("Quick Facts 2016", 2017). Improving the safety of roads has traditionally been approached by governmental agencies, including the National Highway Traffic Safety Administration and state Departments of Transportation. In past literature, automobile crash data is analyzed using time-series prediction techniques to identify road segments and/or intersections likely to experience future crashes (Lord & Mannering, 2010). After dangerous zones have been identified, road modifications can be implemented, improving public safety. This project introduces a historical safety metric for evaluating the relative danger of roads in a road network. The historical safety metric can be used to update the routing choices of individual drivers, improving public safety by avoiding historically more dangerous routes. The metric is constructed using crash frequency, severity, location, and traffic information. An analysis of publicly available crash and traffic data in Allegheny County, Pennsylvania is used to generate the historical safety metric for a specific road network. Methods for evaluating routes based on the presented historical safety metric are included, using the Mann-Whitney U test to evaluate the significance of routing decisions. The evaluation method presented requires that routes have at least 20 crashes to be compared with significance testing. The safety of the road network is visualized using a heatmap to present the distribution of the metric throughout Allegheny County.
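
As a rough illustration of the evaluation method, the sketch below applies SciPy's Mann-Whitney U test to per-crash metric values from two candidate routes; the function name, the severity-weighted scores, and the guard enforcing the 20-crash minimum are illustrative stand-ins, not code or data from the thesis.

```python
# A minimal sketch, assuming per-crash values of a historical safety metric
# are available for each route. Scores below are made up, not Allegheny data.
from scipy.stats import mannwhitneyu

def compare_routes(route_a_scores, route_b_scores, alpha=0.05, min_crashes=20):
    """Two-sided Mann-Whitney U test on per-crash metric values for two routes."""
    if min(len(route_a_scores), len(route_b_scores)) < min_crashes:
        raise ValueError("each route needs at least 20 crashes for significance testing")
    stat, p_value = mannwhitneyu(route_a_scores, route_b_scores, alternative="two-sided")
    return stat, p_value, p_value < alpha

# Hypothetical severity-weighted crash scores for two candidate routes.
route_a = [0.8, 1.2, 0.5, 2.1, 0.9] * 4   # 20 crashes
route_b = [1.9, 2.4, 1.7, 3.0, 2.2] * 4   # 20 crashes
u_stat, p, significant = compare_routes(route_a, route_b)
print(f"U = {u_stat:.1f}, p = {p:.4f}, significant difference: {significant}")
```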
Date Created
2017-12
Agent

Maroon and Gold: Mobile Application

Description
Currently, students at Arizona State University are restricted to cards when using their college's local currency. This currency, Maroon and Gold dollars (M&G), is a primary source of meal plans for many students. Relying on card readers costs students both security and convenience. Security is at risk because a student's identification number never changes and is printed on each card; if the student loses the card, their account information is permanently compromised. Convenience is an issue because, currently, students must make a purchase in order to see their current account balance. Another major issue is that businesses must purchase external hardware in order to use the M&G system. An online or mobile system would eliminate the need for a physical card and allow businesses to function without external card readers. Such a system would have access to the financial information of businesses and students at ASU, so it would require rigorous scrutiny by a well-trusted team of professionals before being implemented. My objective was to help bring such a system to life. To do this, I decided to make a mobile application prototype to serve as a baseline and to demonstrate the features of such a system. As a baseline, it needed to have a realistic, professional appearance and the ability to accurately demonstrate feature functionality. Before developing the app, I set out to determine the user interaction and user experience (UI/UX) designs by conducting a series of informal interviews with local students and businesses. After the designs were finalized, I started implementation of the actual application in Android Studio. This creative project consists of a mobile application, a contained database, a GUI (graphical user interface) prototype, and a technical document.
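
One way the static-ID weakness described above could be avoided in a card-free system is with short-lived tokens; the sketch below is a hypothetical illustration using an HMAC over a 30-second time window, not the scheme used in the prototype.

```python
# Hypothetical sketch: derive a short-lived payment token from a per-device
# secret so the static student ID never crosses the reader or the network.
import hashlib
import hmac
import time

TOKEN_WINDOW_SECONDS = 30

def payment_token(device_secret: bytes, student_id: str, now=None) -> str:
    """Return a token valid only for the current 30-second window."""
    window = int((now if now is not None else time.time()) // TOKEN_WINDOW_SECONDS)
    message = f"{student_id}:{window}".encode()
    return hmac.new(device_secret, message, hashlib.sha256).hexdigest()[:12]

# The server, holding the same per-device secret, recomputes the token to
# verify a purchase; a skimmed token expires within seconds instead of
# permanently compromising the account.
secret = b"per-device-secret-provisioned-at-enrollment"
print(payment_token(secret, "1201234567"))
```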
Date Created
2016-05
Agent

Investigation in Prolog-based Machine Translation with English-Hungarian Case Study

Description
This undergraduate thesis explores the efficacy of developing a translator generator in the Prolog programming language using Lexical Functional Grammars. A bidirectional machine translator between English and Hungarian, developed as a proof-of-concept case study, is discussed and assessed. The benefits and drawbacks of this approach as generalized to Machine Translation systems are also discussed, along with possible areas of future work.
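
To make the bidirectional idea concrete, the toy sketch below consults a single English-Hungarian lexicon in both directions; it is a word-by-word caricature, and the case marking it loses (Hungarian requires the accusative "macskát") is exactly the kind of grammatical information the Lexical Functional Grammar approach in the thesis is designed to carry.

```python
# Toy sketch: one English-Hungarian lexicon consulted in both directions, so
# a single resource drives translation both ways. The words are illustrative.
EN_HU = {"the": "a", "dog": "kutya", "sees": "látja", "cat": "macska"}
HU_EN = {hu: en for en, hu in EN_HU.items()}  # invert for the reverse direction

def translate(sentence: str, lexicon: dict) -> str:
    # Unknown words are passed through in angle brackets.
    return " ".join(lexicon.get(word, f"<{word}>") for word in sentence.split())

print(translate("the dog sees the cat", EN_HU))
print(translate("a kutya látja a macska", HU_EN))
```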
Date Created
2015-05
Agent

A comparative analysis of graph vs relational database for instructional module development system

Description
In today's data-driven world, every datum is connected to a large amount of other data. Relational databases have proven themselves pioneers in the field of data storage and manipulation since the 1970s, but more recently they have been challenged by NoSQL graph databases in handling data models that have an inherent graphical representation. Graph databases, with the ability to store physical relationships between nodes and native graph processing techniques, have been doing exceptionally well in graph data storage and management for applications like recommendation engines, biological modeling, network modeling, and social media applications.

Instructional Module Development System (IMODS) is a web-based software system that guides STEM instructors through the complex task of curriculum design, ensures tight alignment between various components of a course (i.e., learning objectives, content, assessments), and provides relevant information about research-based pedagogical and assessment strategies. The data model of IMODS is highly connected and has an inherent graphical representation among all its entities, with numerous relationships between them. This thesis focuses on developing an algorithm to determine the completeness of a course design developed using IMODS. As part of this research objective, the study also analyzes the data model to find the best-fit database for running these algorithms. As part of this thesis, two separate applications abstracting the data model of IMODS have been developed: one with Neo4j (a graph database) and another with PostgreSQL (a relational database). The research objectives of the thesis are as follows: (i) evaluate the performance of Neo4j and PostgreSQL in handling the complex queries that will be fired throughout the life cycle of the course design process; (ii) devise an algorithm to determine the completeness of a course design developed using IMODS. This thesis presents the process of creating a data model for PostgreSQL and converting it into a graph data model to be abstracted by Neo4j, creating SQL and Cypher scripts for the experiments on both platforms, testing, detailed analysis of the results, and an evaluation of the two databases in the context of IMODS.
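
To give a flavor of the paired experiments, the snippet below poses one hypothetical IMODS-style completeness question ("which learning objectives of a course lack an assessment?") in both query languages; the table, label, relationship, and property names are assumptions for illustration, not the actual IMODS schema.

```python
# The same hypothetical question expressed once in SQL for PostgreSQL and
# once in Cypher for Neo4j; both are held as strings for the respective
# drivers to execute and time.
SQL_QUERY = """
SELECT lo.id, lo.description
FROM learning_objective lo
LEFT JOIN assessment_objective ao ON ao.objective_id = lo.id
WHERE lo.course_id = %(course_id)s
  AND ao.objective_id IS NULL;
"""

CYPHER_QUERY = """
MATCH (c:Course {id: $course_id})-[:HAS_OBJECTIVE]->(lo:LearningObjective)
WHERE NOT (lo)<-[:ASSESSES]-(:Assessment)
RETURN lo.id, lo.description
"""
```

In the graph form the relationship is traversed natively, while the relational form must reconstruct it through a join; that contrast is the crux of the performance comparison described above.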
Date Created
2017
Agent

Minimizing Dataset Size Requirements for Machine Learning

Description
Machine learning methodologies are widely used in almost all aspects of software engineering. An effective machine learning model requires large amounts of data to achieve high accuracy. The data used for classification is mostly labeled, and such data is difficult to obtain: accurately labeling a dataset into different classes demands both high cost and effort. With an abundance of data, it becomes necessary that all the data be labeled for its proper utilization, and this work focuses on reducing the labeling effort for large datasets. The thesis presents a comparison of the performance of different classifiers to test whether a small set of labeled data can be used to build accurate models with a high prediction rate. The use of a small dataset for classification is then extended to an active machine learning methodology in which a one-class classifier first predicts the outliers in the data, and the outlier samples are then added to the training set of a support vector machine classifier that labels the unlabeled data. The labeling of the dataset can thus be scaled up, avoiding manual labeling and enabling more robust machine learning methodologies.
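
A minimal sketch of the two-stage methodology, assuming scikit-learn and synthetic data in place of the thesis's dataset: a one-class classifier flags outliers in the unlabeled pool, those samples receive labels (an oracle stand-in here) and join the training set, and a support vector machine trained on the result labels the rest.

```python
import numpy as np
from sklearn.svm import OneClassSVM, SVC

rng = np.random.default_rng(0)
X_labeled = rng.normal(0, 1, (40, 5))          # small labeled seed set
y_labeled = (X_labeled[:, 0] > 0).astype(int)  # toy labels
X_unlabeled = rng.normal(0, 1, (400, 5))       # large unlabeled pool

# Stage 1: a one-class classifier predicts outliers in the unlabeled pool.
outlier_detector = OneClassSVM(nu=0.05).fit(X_labeled)
is_outlier = outlier_detector.predict(X_unlabeled) == -1

# Outlier samples receive (oracle) labels and join the training set.
X_queried = X_unlabeled[is_outlier]
y_queried = (X_queried[:, 0] > 0).astype(int)  # stand-in for manual labeling
X_train = np.vstack([X_labeled, X_queried])
y_train = np.concatenate([y_labeled, y_queried])

# Stage 2: an SVM trained on the augmented set labels the remaining data.
svm = SVC(kernel="rbf").fit(X_train, y_train)
pseudo_labels = svm.predict(X_unlabeled[~is_outlier])
print(f"queried {is_outlier.sum()} outliers, auto-labeled {len(pseudo_labels)} samples")
```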
Date Created
2017
Agent

3D Patch-Based Machine Learning Systems for Alzheimer's Disease Classification via 18F-FDG PET Analysis

Description
Alzheimer’s disease (AD) is a chronic neurodegenerative disease that usually starts slowly and gets worse over time. It is the cause of 60% to 70% of cases of dementia. There is growing interest in identifying brain image biomarkers that help evaluate AD risk pre-symptomatically. High-dimensional non-linear pattern classification methods have been applied to structural magnetic resonance images (MRIs) and used to discriminate between clinical groups across the progression of Alzheimer’s. Using fluorodeoxyglucose (FDG) positron emission tomography (PET) as the preferred imaging modality, this thesis develops two independent machine learning based patch analysis methods and uses them to perform six binary classification experiments across different AD diagnostic categories. Specifically, features were extracted and learned using dimensionality reduction and dictionary learning & sparse coding, taking overlapping patches in and around the cerebral cortex as features. Using AdaBoost as the preferred choice of classifier, both methods try to utilize 18F-FDG PET as a biological marker in the early diagnosis of Alzheimer’s. Additionally, we investigate the contribution of rich demographic features (ApoE3, ApoE4, and Functional Activities Questionnaire (FAQ) scores) to classification. The experimental results on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset demonstrate the effectiveness of both proposed systems. The use of 18F-FDG PET may offer a new sensitive biomarker and enrich the brain imaging analysis toolset for studying the diagnosis and prognosis of AD.
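
A minimal sketch of the patch-based pipeline, assuming scikit-learn and random arrays in place of real 18F-FDG PET volumes: overlapping 3D patches are flattened into one feature vector per subject, reduced with PCA (standing in for the dimensionality reduction route; dictionary learning & sparse coding is the other), and classified with AdaBoost. All shapes and parameters are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier

def patch_features(volume, size=8, stride=4):
    """Flatten overlapping size^3 patches of a 3D volume into one long vector."""
    d, h, w = volume.shape
    patches = [volume[z:z+size, y:y+size, x:x+size].ravel()
               for z in range(0, d - size + 1, stride)
               for y in range(0, h - size + 1, stride)
               for x in range(0, w - size + 1, stride)]
    return np.concatenate(patches)

rng = np.random.default_rng(0)
volumes = rng.normal(0, 1, (20, 24, 24, 24))   # stand-ins for PET volumes
labels = rng.integers(0, 2, 20)                # stand-ins for AD vs. control

X = np.stack([patch_features(v) for v in volumes])
X_reduced = PCA(n_components=10).fit_transform(X)  # dimensionality reduction
clf = AdaBoostClassifier(n_estimators=50).fit(X_reduced, labels)
print("training accuracy:", clf.score(X_reduced, labels))
```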
Date Created
2017
Agent

Optimizing Performance Measures in Classification Using Ensemble Learning Methods

Description
Ensemble learning methods like bagging, boosting, adaptive boosting, and stacking have traditionally shown promising results in improving predictive accuracy in classification. These techniques have recently been widely used in various domains and applications owing to improvements in computational efficiency and advances in distributed computing. However, with the advent of a wide variety of applications of machine learning techniques to class imbalance problems, further focus is needed to evaluate, improve, and optimize other performance measures, such as sensitivity (true positive rate) and specificity (true negative rate), in classification. This thesis demonstrates a novel approach to evaluating and optimizing these performance measures (specifically sensitivity and specificity) using ensemble learning methods for classification, which can be especially useful on class-imbalanced datasets. In this thesis, ensemble learning methods (specifically bagging and boosting) are used to optimize sensitivity and specificity on a UC Irvine (UCI) 130-hospital diabetes dataset to predict whether a patient will be readmitted to the hospital based on various feature vectors. From the experiments conducted, it can be empirically concluded that, by using ensemble learning methods, although accuracy improves by some margin, both sensitivity and specificity are optimized significantly and consistently across different cross-validation approaches. The implementation and evaluation were done on a subset of the large UCI 130-hospital diabetes dataset. The performance measures of the ensemble learners are compared to base machine learning classification algorithms such as Naive Bayes, Logistic Regression, k-Nearest Neighbors, Decision Trees, and Support Vector Machines.
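
The evaluation described above can be sketched with scikit-learn, using synthetic imbalanced data in place of the UCI diabetes subset: bagging and boosting ensembles are scored on sensitivity and specificity derived from the confusion matrix rather than on accuracy alone.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic 90/10 class-imbalanced data standing in for the readmission task.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, clf in [("bagging", BaggingClassifier(random_state=0)),
                  ("boosting", AdaBoostClassifier(random_state=0))]:
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    print(f"{name}: sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
```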
Date Created
2017
Agent

Analysis and Performance Optimization of a GPGPU Implementation of Image Quality Assessment (IQA) Algorithm VSNR

Description
Image processing has changed the way we store, view, and share images. One important component of sharing images over networks is image compression. Lossy image compression techniques compromise the quality of images to reduce their size. To ensure that the distortion introduced by compression is not readily detectable by humans, the perceived quality of an image needs to be maintained above a certain threshold. Determining this threshold is best done using human subjects, but that is impractical in real-world scenarios. As a solution to this issue, image quality assessment (IQA) algorithms are used to automatically compute a fidelity score for an image.

However, IQA algorithms have been observed to perform poorly because of the complex statistical computations involved. General-Purpose Graphics Processing Unit (GPGPU) programming is one of the solutions proposed to optimize the performance of these algorithms.

This thesis presents a Compute Unified Device Architecture (CUDA) based optimized implementation of the full-reference IQA algorithm Visual Signal-to-Noise Ratio (VSNR), which uses an M-level 2D Discrete Wavelet Transform (DWT) with 9/7 biorthogonal filters, among other statistical computations. The presented implementation is tested on four different image quality databases containing images with multiple distortions and sizes ranging from 512 x 512 to 1600 x 1280. The CUDA implementation of VSNR shows a speedup of over 32x for 1600 x 1280 images, and the speedup is observed to scale with image size. The results show that the implementation is fast enough to run VSNR on high-definition video at a frame rate of 60 fps. This work presents the optimizations gained from using the GPU’s constant memory and from reusing allocated memory on the GPU, and it shows the performance improvement achieved through profiler-driven GPGPU development in CUDA. The presented implementation can be deployed in production alongside existing applications.
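
For reference, the wavelet stage at the core of VSNR can be sketched on the CPU with PyWavelets, whose 'bior4.4' wavelet implements the 9/7 biorthogonal filters; the thesis implements this decomposition and the surrounding statistics in CUDA rather than with this library.

```python
import numpy as np
import pywt

image = np.random.rand(512, 512)           # stand-in for a database image
M = 5                                      # number of decomposition levels
coeffs = pywt.wavedec2(image, wavelet="bior4.4", level=M)

print("level-M approximation subband:", coeffs[0].shape)
# coeffs[1:] holds (horizontal, vertical, diagonal) detail subbands from the
# coarsest level down to the finest; per-subband statistics like the energies
# below feed VSNR's visibility thresholds.
for cH, cV, cD in coeffs[1:]:
    energy = np.sum(cH**2) + np.sum(cV**2) + np.sum(cD**2)
    print(cH.shape, f"energy = {energy:.2f}")
```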
Date Created
2017
Agent

A semantic framework for integrating and publishing linked data on the Web

Description
The Semantic Web is the web of data, providing a common framework and technologies for sharing and reusing data in various applications. In Semantic Web terminology, linked data is the term used to describe a method of exposing and connecting data on the web from different sources. The purpose of linked data and the Semantic Web is to publish data in an open, standard format and to link this data with existing data on the Linked Open Data Cloud. The goal of this thesis is to develop a semantic framework for integrating and publishing linked data on the web. Traditionally, integrating data from multiple sources involves an Extract-Transform-Load (ETL) framework that generates datasets for analytics and visualization; this thesis proposes introducing a semantic component into the ETL framework to semi-automate the generation and publishing of linked data. In this thesis, various existing ETL tools and data integration techniques have been analyzed and their deficiencies identified. The thesis derives a set of requirements for the semantic ETL framework by conducting a manual process to integrate data from various sources, such as weather, holidays, airports, and flight arrivals, departures, and delays. The research questions addressed are: (i) to what extent can the integration, generation, and publishing of linked data to the cloud using a semantic ETL framework be automated; (ii) does the use of semantic technologies produce a richer data model and better-integrated data. Details of the methodology, data collection, and an application that uses the generated linked data are presented. Evaluation is done by comparing the traditional data integration approach with the semantic ETL approach in terms of the effort involved in integration, the data model generated, and querying of the resulting data.
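
A minimal sketch of the semantic step in such an ETL pipeline, assuming the rdflib package: one transformed record (a hard-coded flight-delay row here) is mapped to RDF triples under an illustrative namespace and serialized for publishing as linked data. The vocabulary below is an assumption, not the framework's actual ontology.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/flights/")  # illustrative namespace

g = Graph()
g.bind("ex", EX)

# One transformed record becomes a set of triples about a flight resource.
flight = EX["AA100-2016-01-15"]
g.add((flight, RDF.type, EX.Flight))
g.add((flight, EX.departureAirport, EX["PHX"]))
g.add((flight, EX.arrivalDelayMinutes, Literal(42, datatype=XSD.integer)))

# Serialize in Turtle, ready to publish and link against other datasets.
print(g.serialize(format="turtle"))
```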
Date Created
2016
Agent

A composite natural language processing and information retrieval approach to question answering against a structured knowledge base

Description
With the inception of the World Wide Web, the amount of data present on the internet has become tremendous. This makes the task of navigating through this enormous amount of data quite difficult for the user. As users struggle to navigate through this wealth of information, the need for an automated system that can extract the required information becomes urgent. The aim of this thesis is to develop a Question Answering system to ease the process of information retrieval.

Question Answering systems have been around for quite some time and form a sub-field of information retrieval and natural language processing. The task of any Question Answering system is to seek an answer to a free-form factual question. The difficulty of pinpointing and verifying the precise answer makes question answering more challenging than the simple information retrieval done by search engines. The Text REtrieval Conference (TREC) is a yearly conference that provides large-scale infrastructure and resources to support research in the information retrieval domain. TREC has had a question answering track since 1999, whose questions dataset contains a list of factual questions (Voorhees & Tice, 1999). DBpedia (Bizer et al., 2009) is a community-driven effort to extract and structure the data present in Wikipedia.

The research objective of this thesis is to develop a novel approach to Question Answering based on a composition of conventional approaches from Information Retrieval and Natural Language Processing. The focus is also on exploring the use of a structured and annotated knowledge base as opposed to an unstructured one. The knowledge base used here is DBpedia, and the final system is evaluated on the TREC 2004 questions dataset.
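
The retrieval step against the structured knowledge base can be sketched with the SPARQLWrapper package and DBpedia's public endpoint; the hand-written query below stands in for one the NLP front end would produce from a factual question such as "Who wrote The Hobbit?".

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?author WHERE {
      <http://dbpedia.org/resource/The_Hobbit> dbo:author ?author .
    }
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["author"]["value"])  # expected: the J. R. R. Tolkien resource
```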
Date Created
2016
Agent