Exploring the Virtual Reality Threshold with the Oculus Rift

Description

This paper will explore what makes ‘good’ virtual reality, that is, what constitutes the virtual reality threshold. It will explain what this has to do with the temporary death of virtual reality, and it will argue that this threshold has now been crossed and true virtual reality is now possible, as evidenced by the current wave of virtual reality catalyzed by the Oculus Rift. The Rift will be used as a case study for examining specific aspects of the virtual reality threshold.
Date Created
2015-05

Post-Mortem of a Tactical Strategy Game

Description

With the increasing popularity of video games and the emergence of game streaming brought about by platforms such as YouTube and Twitch, combined with the multitude of ways to learn how to code through schools and online resources such as Codecademy and Treehouse, game development has become incredibly approachable. Yet that does not mean it is simple. Developing a game requires a substantial amount of work, even before a design is considered worth making into a complete game. Over the course of this thesis, I created eight designs with accompanying prototypes. Only one was made into a fully functional release. I sought to make a game with a great design while increasing my understanding of game development and the code needed to finish a game. I came out realizing that I was in over my head. Given the amount of work involved in creating an entire game, iteration is key to finding an idea that is capable of becoming a game that feels complete and enjoyable. A game's design must be fleshed out before technical work can truly begin, yet the design can take nearly as much time and effort as the code. In this thesis, each design is detailed, along with why it seemed promising and why it was replaced, with extra focus on the final design and how players felt about it. These designs are followed by what I learned about game development over the course of the thesis, including both the technical and emotional sides of developing a video game.
Date Created
2015-12

Keyboard Input Biometric Authentication Spoofing

Description

Keyboard input biometric authentication systems are software systems which record keystroke information and use it to identify a typist. The primary statistics used to determine the accuracy of a keyboard biometric authentication system are the false acceptance rate (FAR) and false rejection rate (FRR), both of which should be as low as possible [1]. However, even if a system has a low FAR and FRR, nothing stops an attacker from monitoring an individual's typing habits in the same way a legitimate authentication system would. The attacker can then use that knowledge of the victim's habits to recreate virtual keyboard events for typing arbitrary text, with precise timing mimicking those habits, which would theoretically spoof a legitimate keyboard biometric authentication system into believing the intended user is doing the typing. A proof of concept of this attack, called keyboard input biometric authentication spoofing, is the focus of this paper; the purpose is to show that even if a biometric authentication system is reasonably accurate, with a low FAR and FRR, it can still be very vulnerable to a well-crafted spoofing system. A rudimentary keyboard input biometric authentication system was written in C and C++, drawing on existing methods and attempting new methods of authentication as well. A spoofing system was then built which exploited the authentication system's statistical representation of a user's typing habits to recreate keyboard events as described above. This proof of concept is aimed at raising doubts about relying too heavily on keyboard input based biometric authentication, since the user's typing input can demonstrably be spoofed in this way if an attacker has full access to the system, even if the system itself is accurate. The authentication system built for this study, when run on a database of typing event logs recorded from 15 users in 4 sessions, had a 0% FAR and FRR (a more detailed analysis of FAR and FRR is also presented), yet it was still very susceptible to being spoofed, with a 44% to 71% spoofing rate in some instances.
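To make the attack concrete, the following is a minimal Python sketch (not the author's C/C++ implementation) of the core idea: build a statistical profile of a victim's key hold and flight times from a captured keystroke log, then schedule synthetic key events whose timings are drawn from that profile. The function names, the simple mean-based timing model, and the default timings are illustrative assumptions.

```python
import random

# Hypothetical sketch of keystroke-dynamics spoofing: profile a typist's
# timings, then replay arbitrary text with timings drawn from that profile.

def build_profile(events):
    """events: list of (key, press_time, release_time) from a captured log."""
    profile = {}
    prev_key, prev_release = None, None
    for key, press, release in events:
        stats = profile.setdefault(key, {"holds": [], "flights": {}})
        stats["holds"].append(release - press)          # how long the key was held
        if prev_key is not None:
            stats["flights"].setdefault(prev_key, []).append(press - prev_release)
        prev_key, prev_release = key, release
    return profile

def mean(xs, default):
    return sum(xs) / len(xs) if xs else default

def spoof_schedule(text, profile):
    """Return (key, press_time, release_time) events mimicking the profile."""
    schedule, t, prev = [], 0.0, None
    for key in text:
        stats = profile.get(key, {"holds": [], "flights": {}})
        flight = mean(stats["flights"].get(prev, []), default=0.12)
        hold = mean(stats["holds"], default=0.08)
        t += flight * random.uniform(0.9, 1.1)           # small jitter per run
        schedule.append((key, t, t + hold * random.uniform(0.9, 1.1)))
        t += hold
        prev = key
    return schedule
```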
Date Created
2016-05

Enhancing Student Learning Through Adaptive Sentence Generation

Description

Education in any skill-based subject, such as mathematics or language, involves a significant amount of repetition and practice. According to the National Survey of Student Engagement, students spend on average 17 hours per week reviewing and practicing material previously learned in a classroom, with higher-performing students showing a tendency to spend more time practicing. As such, learning software has emerged over the past several decades that focuses on providing a wide range of examples, practice problems, and situations for users to exercise their skills. Notably, math students have benefited from software that procedurally generates a virtually infinite number of practice problems and their corresponding solutions. This allows for instantaneous feedback and automatic generation of tests and quizzes. Of course, this is only possible because software is capable of generating and verifying a virtually endless supply of sample problems across a wide range of topics within mathematics. While English learning software has progressed in a similar manner, it faces a series of hurdles distinctly different from those of mathematics. In particular, English grammar contains a wide range of exception cases. Some words have unique spellings for their plural forms, some words have identical spellings in the singular and plural, and some words are conjugated differently for only one particular tense or grammatical person. Together these issues make the problem of generating grammatically correct sentences complicated. Compounding this problem, the grammar rules of English are vast and often depend on the context in which they are used. Verb-tense agreement (e.g. "I eat" vs. "he eats") and conjugation of irregular verbs (e.g. swim -> swam) are common examples. This thesis presents an algorithm designed to randomly generate a virtually infinite number of practice problems for students of English as a second language. The approach differs from other generation approaches by generating from a context set by educators, so that problems can be generated in the context of what students are currently learning. The algorithm is validated through a study in which over 35,000 sentences generated by the algorithm are verified by multiple grammar checking algorithms, and a subset of the sentences is validated against 3 education standards by a subject matter expert in the field. The study found that this approach has a significantly reduced grammar error ratio compared to other generation algorithms and shows potential where context specification is concerned.
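A minimal sketch of this kind of context-driven generation might look like the following; the thesis algorithm, its grammar rules, and its context format are not reproduced here, and the CONTEXT dictionary, word lists, and irregular-verb tables below are hypothetical.

```python
import random

# Toy generator: sentences are drawn from an educator-supplied context while
# handling verb-tense agreement and a few irregular verbs.

CONTEXT = {  # hypothetical "context" an educator might supply
    "subjects": ["I", "he", "she", "the students"],
    "verbs": ["swim", "eat", "go"],
    "objects": ["in the pool", "after class", "to school"],
}

IRREGULAR_PAST = {"swim": "swam", "eat": "ate", "go": "went"}
THIRD_PERSON = {"go": "goes"}  # default third-person rule: append -s

def conjugate(verb, subject, tense):
    if tense == "past":
        return IRREGULAR_PAST.get(verb, verb + "ed")
    if subject in ("he", "she", "it"):
        return THIRD_PERSON.get(verb, verb + "s")
    return verb

def generate_sentence(context, tense="present"):
    subject = random.choice(context["subjects"])
    verb = random.choice(context["verbs"])
    obj = random.choice(context["objects"])
    return f"{subject.capitalize()} {conjugate(verb, subject, tense)} {obj}."

print(generate_sentence(CONTEXT, tense="past"))
```

A real system would also constrain which verbs and objects may combine, which is part of what keeps the grammar error ratio low.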
Date Created
2016-05

A Person-Centric Design Framework for At-Home Motor Learning in Serious Games

Description

In motor learning, real-time multi-modal feedback is a critical element in guided training. Serious games have been introduced as a platform for at-home motor training due to their highly interactive and multi-modal nature. This dissertation explores the design of a multimodal environment for at-home training in which an autonomous system observes and guides the user in the place of a live trainer, providing real-time assessment, feedback and difficulty adaptation as the subject masters a motor skill. After an in-depth review of the latest solutions in this field, this dissertation proposes a person-centric approach to the design of this environment, in contrast to the standard techniques implemented in related work, to address many of the limitations of these approaches. The unique advantages and restrictions of this approach are presented in the form of a case study in which a system entitled the "Autonomous Training Assistant" consisting of both hardware and software for guided at-home motor learning is designed and adapted for a specific individual and trainer.

In this work, the design of an autonomous motor learning environment is approached from three areas: motor assessment, multimodal feedback, and serious game design. For motor assessment, a three-dimensional assessment framework is proposed, comprising two spatial domains (posture, progression) and one temporal domain (pacing) of real-time motor assessment. For multimodal feedback, a rod-shaped device called the "Intelligent Stick" is combined with an audio-visual interface to provide feedback to the subject in three modalities (audio, visual, haptic). Assessment domains are mapped to feedback modalities, and feedback is provided whenever the user's performance deviates from the ideal performance level by an adaptive threshold. Approaches for multi-modal integration and feedback fading are discussed. Finally, a novel approach for stealth adaptation in serious game design is presented. This approach allows serious games to incorporate motor tasks in a more natural way, facilitating self-assessment by the subject. Three different stealth adaptation approaches are presented and evaluated using the flow-state ratio metric. The dissertation concludes with directions for future work in the integration of stealth adaptation techniques across the field of exergames.
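The adaptive-threshold feedback loop described above could be sketched as follows; the domain-to-modality mapping, the threshold update factors, and the normalization are illustrative assumptions, not the Autonomous Training Assistant's actual interfaces.

```python
# Sketch: each assessment domain has its own adaptive tolerance; feedback is
# routed to one modality whenever the deviation exceeds that tolerance.

FEEDBACK_CHANNEL = {"posture": "haptic", "progression": "visual", "pacing": "audio"}

class AdaptiveThreshold:
    def __init__(self, initial=0.3, tighten=0.95, relax=1.05, floor=0.05, ceil=0.6):
        self.value, self.tighten, self.relax = initial, tighten, relax
        self.floor, self.ceil = floor, ceil

    def update(self, deviation):
        """Shrink the tolerance as the user improves, widen it when they struggle."""
        factor = self.tighten if deviation <= self.value else self.relax
        self.value = min(self.ceil, max(self.floor, self.value * factor))

def give_feedback(deviations, thresholds):
    """deviations: dict domain -> |measured - ideal|, normalized to [0, 1]."""
    cues = []
    for domain, deviation in deviations.items():
        threshold = thresholds[domain]
        if deviation > threshold.value:
            cues.append((FEEDBACK_CHANNEL[domain], domain, deviation))
        threshold.update(deviation)
    return cues  # e.g. [("haptic", "posture", 0.42)] -> vibrate the stick

thresholds = {d: AdaptiveThreshold() for d in FEEDBACK_CHANNEL}
print(give_feedback({"posture": 0.42, "progression": 0.10, "pacing": 0.20}, thresholds))
```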
Date Created
2017

Minimizing Dataset Size Requirements for Machine Learning

Description

Machine learning methodologies are widely used in almost all aspects of software engineering. An effective machine learning model requires large amounts of data to achieve high accuracy. The data used for classification is mostly labeled data, which is difficult to obtain: accurately labeling a dataset into different classes requires both high cost and effort. With an abundance of data available, it becomes necessary to label the data for its proper utilization, and this work focuses on reducing the labeling effort for large datasets. The thesis presents a comparison of the performance of different classifiers to test whether a small set of labeled data can be used to build accurate models with a high prediction rate. The use of a small dataset for classification is then extended to an active machine learning methodology in which a one-class classifier first predicts the outliers in the data, and the outlier samples are then added to the training set of a support vector machine classifier that labels the remaining unlabeled data. Labeling of the dataset can thus be scaled up, avoiding manual labeling and supporting more robust machine learning methodologies.
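A rough sketch of that pipeline, using scikit-learn components as stand-ins (the thesis' datasets, parameters, and exact workflow are not reproduced), might look like:

```python
import numpy as np
from sklearn.svm import OneClassSVM, SVC

rng = np.random.default_rng(0)
X_labeled = rng.normal(0, 1, (40, 2))
y_labeled = (X_labeled[:, 0] > 0).astype(int)      # small labeled seed set
X_unlabeled = rng.normal(0, 1.5, (400, 2))         # large unlabeled pool

# Step 1: a one-class classifier flags the unlabeled samples that look least
# like the data seen so far -- the "outliers" worth labeling.
detector = OneClassSVM(nu=0.05, gamma="scale").fit(X_labeled)
outlier_mask = detector.predict(X_unlabeled) == -1
X_queried = X_unlabeled[outlier_mask]
y_queried = (X_queried[:, 0] > 0).astype(int)      # stand-in for manually obtained labels

# Step 2: the queried outliers join the SVM's training set, and the retrained
# SVM labels the remaining unlabeled data automatically.
svm = SVC(kernel="rbf", gamma="scale")
svm.fit(np.vstack([X_labeled, X_queried]), np.concatenate([y_labeled, y_queried]))
pseudo_labels = svm.predict(X_unlabeled[~outlier_mask])
print(f"queried {outlier_mask.sum()} samples, auto-labeled {len(pseudo_labels)}")
```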
Date Created
2017

Feature Adaptive Ray Tracing of Subdivision Surfaces

Description

Subdivision surfaces have gained more and more traction since they became the standard surface representation in the movie industry years ago, and the Catmull-Clark subdivision scheme is the most popular one for handling polygonal meshes. Since its introduction, Catmull-Clark subdivision has been extended in several important ways, including the handling of boundaries, infinitely sharp creases, semi-sharp creases, and hierarchically defined detail. For ray tracing of subdivision surfaces, a common approach is to construct spatial bounding volume hierarchies on top of the input control mesh. However, a highly refined subdivision surface not only requires a substantial amount of memory storage, but also causes slow and inefficient ray tracing. This thesis presents a new way to improve the efficiency of ray tracing of subdivision surfaces, although the quality is not as good as that of general methods.
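For reference, one textbook Catmull-Clark refinement step for a closed quad mesh (no boundaries or creases) can be sketched as below; it only computes the new point positions, not the refined connectivity, and it is not the thesis' feature adaptive ray tracing pipeline.

```python
from collections import defaultdict

def catmull_clark_points(verts, faces):
    """verts: list of (x, y, z); faces: list of 4-tuples of vertex indices."""
    def avg(points):
        n = len(points)
        return tuple(sum(c) / n for c in zip(*points))

    # Face points: average of each face's corners.
    face_pts = [avg([verts[i] for i in f]) for f in faces]

    # Edge points: average of the edge's endpoints and the two adjacent face points.
    edge_faces = defaultdict(list)
    for fi, f in enumerate(faces):
        for a, b in zip(f, f[1:] + f[:1]):
            edge_faces[frozenset((a, b))].append(fi)
    edge_pts = {e: avg([verts[v] for v in e] + [face_pts[fi] for fi in fis])
                for e, fis in edge_faces.items()}

    # Vertex points: (F + 2R + (n - 3)P) / n for a vertex of valence n.
    vert_faces, vert_edges = defaultdict(list), defaultdict(set)
    for fi, f in enumerate(faces):
        for a, b in zip(f, f[1:] + f[:1]):
            vert_faces[a].append(fi)
            vert_edges[a].add(frozenset((a, b)))
            vert_edges[b].add(frozenset((a, b)))
    new_verts = []
    for vi, p in enumerate(verts):
        n = len(vert_edges[vi])
        F = avg([face_pts[fi] for fi in vert_faces[vi]])
        R = avg([avg([verts[v] for v in e]) for e in vert_edges[vi]])
        new_verts.append(tuple((F[c] + 2 * R[c] + (n - 3) * p[c]) / n for c in range(3)))
    return new_verts, face_pts, edge_pts
```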
Date Created
2017

An Architecture for Designing Content Agnostic Game Mechanics for Educational Burst Games

Description

Currently, educational games are designed with the educational content as the primary factor driving the design of the game. While this may seem to be the optimal approach, this design paradigm causes multiple issues. For one, the games themselves are often not engaging, as game design principles were set aside in favor of increasing the educational value of the game. The other issue is that the code base of the game is mostly or completely unusable for any other games, as the game mechanics are too strongly coupled to the educational content being taught. This means that the mechanics are impossible to reuse in future projects without major revisions, and starting over is often more time- and cost-efficient.

This thesis presents the Content Agnostic Game Engineering (CAGE) model for designing educational games. CAGE is a way to separate the educational content from the game mechanics without compromising the educational value of the game. This is done by designing mechanics that can have multiple educational contents layered on top of them, and those contents can be switched out at any time. CAGE allows games to be designed with a game-design-first approach, which allows them to maintain higher engagement levels. In addition, since the mechanics are not tied to the educational content, several different educational topics can reuse the same set of mechanics without requiring major revisions to the existing code.
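A minimal sketch of the separation CAGE argues for, with a toy matching mechanic and hypothetical content packs (this is not the CAGE codebase itself), might look like:

```python
from dataclasses import dataclass
import random

@dataclass
class Content:
    """An educational content pack: prompt/answer pairs for one topic."""
    name: str
    pairs: dict

CHEMISTRY = Content("chemistry", {"H2O": "water", "NaCl": "salt", "CO2": "carbon dioxide"})
SPANISH = Content("spanish", {"gato": "cat", "perro": "dog", "libro": "book"})

class MatchingMechanic:
    """The game mechanic knows nothing about the subject being taught."""
    def __init__(self, content: Content):
        self.content = content

    def swap_content(self, content: Content):
        self.content = content          # switch topic without touching the mechanic

    def next_round(self):
        prompt, answer = random.choice(list(self.content.pairs.items()))
        choices = random.sample(list(self.content.pairs.values()),
                                k=min(3, len(self.content.pairs)))
        if answer not in choices:
            choices[0] = answer
        random.shuffle(choices)
        return prompt, choices, answer

game = MatchingMechanic(CHEMISTRY)
print(game.next_round())
game.swap_content(SPANISH)              # same mechanic, different topic
print(game.next_round())
```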

Results show that CAGE greatly reduces the amount of code needed to make additional versions of educational games and speeds up the development process. The CAGE model is also shown not to induce high levels of cognitive load, allowing for more in-depth topic work than was attempted in this thesis. However, engagement was low, and switching the active content does interrupt the game flow considerably. Altering the difficulty of the game in real time in response to the affective state of the player was not shown to increase engagement. Potential causes of the issues with CAGE games and potential fixes are discussed.
Date Created
2017

Optimizing Performance Measures in Classification Using Ensemble Learning Methods

Description

Ensemble learning methods such as bagging, boosting, adaptive boosting, and stacking have traditionally shown promising results in improving predictive accuracy in classification. These techniques have recently been widely used in various domains and applications owing to improvements in computational efficiency and advances in distributed computing. However, with the advent of a wide variety of applications of machine learning techniques to class imbalance problems, further focus is needed to evaluate, improve, and optimize other performance measures such as sensitivity (true positive rate) and specificity (true negative rate) in classification. This thesis demonstrates a novel approach to evaluating and optimizing these performance measures (specifically sensitivity and specificity) using ensemble learning methods for classification, which can be especially useful for class-imbalanced datasets. In this thesis, ensemble learning methods (specifically bagging and boosting) are used to optimize sensitivity and specificity on the UC Irvine (UCI) 130-hospital diabetes dataset to predict whether a patient will be readmitted to the hospital based on various feature vectors. From the experiments conducted, it can be empirically concluded that by using ensemble learning methods, although accuracy improves only to some margin, both sensitivity and specificity are optimized significantly and consistently across different cross-validation approaches. The implementation and evaluation were done on a subset of the large UCI 130-hospital diabetes dataset. The performance measures of the ensemble learners are compared to base machine learning classification algorithms such as Naive Bayes, Logistic Regression, k-Nearest Neighbor, Decision Trees, and Support Vector Machines.
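A generic scikit-learn sketch of measuring sensitivity and specificity for a bagged ensemble on an imbalanced dataset is shown below; the UCI diabetes data and the thesis' exact experimental setup are not reproduced, and the synthetic dataset and parameters are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

# Synthetic imbalanced dataset standing in for the hospital readmission data.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

# Bagged decision trees evaluated with cross-validated predictions.
model = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)
y_pred = cross_val_predict(model, X, y, cv=5)

tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f}")
```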
Date Created
2017

Analysis and Performance Optimization of a GPGPU Implementation of Image Quality Assessment (IQA) Algorithm VSNR

Description

Image processing has changed the way we store, view and share images. One important component of sharing images over networks is image compression. Lossy image compression techniques compromise the quality of images to reduce their size. To ensure that the distortion introduced by compression is not easily detectable by humans, the perceived quality of an image needs to be maintained above a certain threshold. Determining this threshold is best done using human subjects, but that is impractical in real-world scenarios. As a solution to this issue, image quality assessment (IQA) algorithms are used to automatically compute a fidelity score for an image.

However, IQA algorithms often perform poorly due to the complex statistical computations involved. General Purpose Graphics Processing Unit (GPGPU) programming is one of the solutions proposed to optimize the performance of these algorithms.

This thesis presents a Compute Unified Device Architecture (CUDA) based optimized implementation of the full-reference IQA algorithm Visual Signal-to-Noise Ratio (VSNR), which uses an M-level 2D Discrete Wavelet Transform (DWT) with 9/7 biorthogonal filters among other statistical computations. The presented implementation is tested on four different image quality databases containing images with multiple distortions and sizes ranging from 512 x 512 to 1600 x 1280. The CUDA implementation of VSNR shows a speedup of over 32x for 1600 x 1280 images, and the speedup is observed to scale with image size. The results show that the implementation is fast enough to apply VSNR to high-definition video at a frame rate of 60 fps. This work presents the optimizations made possible by the use of the GPU's constant memory and by reuse of allocated memory on the GPU, and it shows the performance improvement obtained through profiler-driven GPGPU development in CUDA. The presented implementation can be deployed in production combined with existing applications.
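As a CPU-side reference for the decomposition step only (the thesis implementation itself is in CUDA), an M-level 2D DWT with 9/7 biorthogonal filters can be sketched with PyWavelets; treating PyWavelets' 'bior4.4' wavelet as a matching 9/7 filter pair, and the image sizes used here, are assumptions of this sketch.

```python
import numpy as np
import pywt

def multilevel_dwt(image: np.ndarray, levels: int = 5):
    """Multi-level 2D wavelet decomposition of the kind VSNR-style IQA operates on."""
    return pywt.wavedec2(image, wavelet="bior4.4", level=levels)

reference = np.random.rand(512, 512)
distorted = reference + 0.05 * np.random.rand(512, 512)

# VSNR compares visible distortion contrast per subband; only the subband
# decomposition step is shown here, not the contrast thresholds or pooling.
ref_coeffs = multilevel_dwt(reference)
dst_coeffs = multilevel_dwt(distorted)
print(len(ref_coeffs) - 1, "detail levels computed")
```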
Date Created
2017