Real Time Cross Platform Collaboration Between Virtual Reality & Mixed Reality

Description
Virtual Reality (hereafter VR) and Mixed Reality (hereafter MR) have opened a new line of applications and possibilities. Amidst a vast network of potential applications, little research has been done to provide real-time collaboration capability between users of VR and MR. The idea of this thesis study is to develop and test a real-time collaboration system between VR and MR. The system works similarly to a Google document, where two or more users can see what the others are doing, i.e., writing, modifying, or viewing. In the same way, the system developed during this study enables users in VR and MR to collaborate in real time.

The real-time cross-platform collaboration system developed in this study considers a scenario in which users of multiple devices are connected to a multiplayer network and are guided to perform various tasks concurrently.
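
The abstract does not name a specific networking stack, so the following Python sketch only illustrates the shared-scene idea under assumed names: each client (VR or MR) registers for updates, and any client's change to an object's transform is broadcast to all others, much like edits in a shared document.

```python
# Minimal sketch (not the thesis implementation): a shared scene where any
# connected client's update to an object's transform is broadcast to every
# other participant. Object and client names below are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

Transform = Tuple[float, float, float]  # x, y, z position; rotation omitted for brevity

@dataclass
class SharedScene:
    objects: Dict[str, Transform] = field(default_factory=dict)
    listeners: List[Callable[[str, str, Transform], None]] = field(default_factory=list)

    def connect(self, on_update: Callable[[str, str, Transform], None]) -> None:
        """Register a client (VR or MR) that wants to observe scene changes."""
        self.listeners.append(on_update)

    def move(self, client_id: str, obj: str, transform: Transform) -> None:
        """Apply one client's change and notify every connected client."""
        self.objects[obj] = transform
        for notify in self.listeners:
            notify(client_id, obj, transform)

scene = SharedScene()
scene.connect(lambda who, obj, t: print(f"[VR view] {who} moved {obj} to {t}"))
scene.connect(lambda who, obj, t: print(f"[MR view] {who} moved {obj} to {t}"))
scene.move("mr_user", "chair_leg_1", (0.2, 0.0, 1.5))  # one turn in the chair-assembly task
```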

Usability testing was conducted to evaluate participant perceptions of the system. Users were asked to assemble a chair in alternating turns; thereafter they completed a survey and gave an audio interview. Results collected from the participants showed positive feedback toward using VR and MR for collaboration. However, several limitations of the current generation of devices hinder mass adoption. Devices with better performance characteristics will lead to wider adoption.
Date Created
2017
Agent

What Predicts Student Comprehension in Language Learning? Augmenting Student Action with Elapsed Time in an Educational Data Mining Approach

Description
Reading comprehension is a critical aspect of life in America, but many English language learners struggle with this skill. Enhanced Moved by Reading to Accelerate Comprehension in English (EMBRACE) is a tablet-based interactive learning environment designed to improve reading comprehension. During use of EMBRACE, all interactions with the system are logged, including correct and incorrect behaviors and help requests. These interactions could potentially be used to predict the child’s reading comprehension, providing an online measure of understanding. In addition, time-related features have been used by educational data mining models to predict learning in mathematics and science, and they may be relevant in this context. This project investigated the predictive value of data mining models based on user actions for reading comprehension, with and without timing information. The investigation produced contradictory results: the KNN and SVM models indicated that elapsed time is an important feature, but the linear regression models indicated that it is not. Finally, a new statistical test performed on the KNN algorithm indicated that the feature selection process may have caused overfitting, where features were chosen due to coincidental alignment with the participants’ performance. These results provide important insights that will aid in the development of a reading comprehension predictor, improving the EMBRACE system’s ability to better serve ELLs.
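
As a rough illustration of the comparison described above (the actual EMBRACE features and data are not reproduced here), the following sketch trains KNN, SVM, and linear regression models on a synthetic feature matrix with and without an elapsed-time column:

```python
# Synthetic stand-in for the EMBRACE log features; only the with/without-time
# comparison is illustrated, not the thesis's actual feature set or results.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 40
actions = rng.integers(5, 50, size=n)        # e.g., counts of correct/incorrect moves, help requests
elapsed = rng.uniform(30, 600, size=n)       # hypothetical seconds spent per chapter
comprehension = 0.4 * actions - 0.01 * elapsed + rng.normal(0, 2, size=n)  # fake target scores

X_actions_only = actions.reshape(-1, 1)
X_with_time = np.column_stack([actions, elapsed])

for name, model in [("KNN", KNeighborsRegressor(n_neighbors=5)),
                    ("SVM", SVR()),
                    ("Linear", LinearRegression())]:
    for label, X in [("actions only", X_actions_only), ("actions + time", X_with_time)]:
        r2 = cross_val_score(model, X, comprehension, cv=5).mean()
        print(f"{name:6s} {label:15s} mean R^2 = {r2:.2f}")
```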
Date Created
2017
Agent

Computer Science Education: A Game to Teach Children about Programming

Description
Computational thinking, the fundamental way of thinking in computer science, including information sourcing and the problem solving behind programming, is considered vital to children who live in a digital era. Most current educational games designed to teach children about coding either rely on external curricular materials or are too complicated to work well with young children. In this thesis project, Guardy, an iOS tower defense game, was developed to help children over 8 years old learn about and practice using basic concepts in programming. The game is built with SpriteKit, a graphics rendering and animation infrastructure used in Apple’s integrated development environment, Xcode; it simplifies switching among game scenes and animating game sprites during development. In a typical game, a sequence of operations is arranged by players to destroy incoming enemy minions. Basic coding concepts like looping, sequencing, conditionals, and classification are integrated into different levels. In later levels, players are required to type in commands and arrange them in order to keep playing the game. To reduce the difficulty of the usability testing, a method combining questionnaires and observation was used with two groups of college students who either had no programming experience or were familiar with coding. The results show that Guardy has the potential to help children learn programming and practice computational thinking.
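
Guardy itself is written in Swift with SpriteKit; purely as an illustration of the idea of executing a player's ordered commands (sequencing, looping, and conditionals), here is a small Python sketch with made-up command names and a simplified minion model:

```python
# Illustration only: Guardy is built in Swift with SpriteKit. This sketch just
# interprets a player's ordered command list; command names and the minion
# model (a list of hit points) are made up.
def run_commands(commands, minions):
    """Execute the player's program against a list of minion hit points."""
    for cmd in commands:
        if cmd["op"] == "shoot":                        # sequencing
            if minions:
                minions[0] -= cmd.get("damage", 1)
        elif cmd["op"] == "repeat":                     # looping
            for _ in range(cmd["times"]):
                run_commands(cmd["body"], minions)
        elif cmd["op"] == "if_low_health":              # conditional
            if minions and minions[0] <= cmd["threshold"]:
                run_commands(cmd["body"], minions)
        minions[:] = [hp for hp in minions if hp > 0]   # destroyed minions leave the field
    return minions

program = [{"op": "repeat", "times": 3, "body": [{"op": "shoot", "damage": 2}]},
           {"op": "if_low_health", "threshold": 2, "body": [{"op": "shoot"}]}]
print(run_commands(program, [5, 3]))   # -> hit points of minions still standing
```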
Date Created
2017
Agent

Building adaptation and error feedback in an intelligent tutoring system for reading comprehension of English language learners

Description
Many English Language Learner (ELL) children struggle with knowledge of vocabulary and syntax. Enhanced Moved by Reading to Accelerate Comprehension in English (EMBRACE) is an interactive storybook application that teaches children to read by moving pictures on the screen to act out the sentences in the text. However, EMBRACE presents the same level of text to all users, and it is limited in its ability to provide error feedback, as it can only determine whether a user action is right or wrong. EMBRACE could help readers learn more effectively if it personalized its instruction with texts that fit their current reading level and feedback that addresses ways to correct their mistakes. Improvements were made to the system by applying design principles of intelligent tutoring systems (ITSs). The new system added features to track the student’s reading comprehension skills, including vocabulary, syntax, and usability, based on various user actions, as well as features that use these skills to adapt text complexity and provide more specific error feedback. A pilot study was conducted with 7 non-ELL students to evaluate the functionality and effectiveness of these features. The results revealed both strengths and weaknesses of the ITS. While skill updates appeared most accurate when users made particular kinds of vocabulary and syntax errors, the system was not able to correctly identify other kinds of syntax errors or provide feedback when skill values became too high. Additionally, vocabulary error feedback and adapting the complexity of syntax were helpful, but syntax error feedback and adapting the complexity of vocabulary were not. Overall, children enjoyed using EMBRACE, and building an intelligent tutoring system into the application presents a promising approach to making reading both a fun and an effective experience.
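
As a hedged sketch of the adaptation and feedback ideas (the skill names, thresholds, and messages below are illustrative, not the EMBRACE implementation), tracked skill values might drive text selection and error messages like this:

```python
# Illustrative only (not the EMBRACE code): tracked skill values drive the
# choice of text complexity and the wording of error feedback.
skills = {"vocabulary": 0.35, "syntax": 0.72}   # hypothetical mastery estimates in [0, 1]

def next_text_level(skill_value):
    """Map a mastery estimate to one of three hypothetical text versions."""
    if skill_value < 0.4:
        return "simplified"
    if skill_value < 0.75:
        return "standard"
    return "enriched"

def error_feedback(error_kind, word=None):
    """Return feedback that names the mistake instead of just 'try again'."""
    if error_kind == "vocabulary":
        return f"'{word}' names a different object; look at the picture again."
    if error_kind == "syntax":
        return "Check who is doing the action in this sentence."
    return "Try that step again."

print(next_text_level(skills["vocabulary"]))      # -> simplified
print(error_feedback("vocabulary", word="cart"))  # hypothetical misused word
```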
Date Created
2017
Agent

Supporting self-experimentation of behavior change strategies

Description
Desirable outcomes such as health and wellbeing are tightly linked to people’s behaviors, thus inspiring research on technologies that support productively changing those behaviors. Many behavior change technologies are designed by Human-Computer Interaction experts, but this approach makes it difficult to personalize support to each user’s unique goals and needs. As an alternative to providing expert-developed, pre-fabricated behavior change solutions, the present study aims to empower users’ self-experimentation for behavior change. To this end, two levels of support were explored. First, interactive digital materials were developed to support users’ creation of behavioral plans. As an initial step, a fully scripted tutorial on self-experimentation for behavior change, illustrated with a sequence of images, was created. The tutorial focuses on helping users learn and apply behavior change techniques. Second, users were equipped with a tool to support their implementation of context-aware, just-in-time interventions. This tool enables prototyping of sensor-based responsive systems for home environments, integrating simple sensors (two-state magnetic sensors, etc.) and media event components (wireless sound, etc.).
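
A minimal sketch of the kind of sensor-triggered, just-in-time rule such a tool could prototype is given below; the sensor, timing, and sound cue are assumptions for illustration, not the tool's actual API:

```python
# Hypothetical rule: a two-state magnetic sensor on the bedroom door triggers
# a wireless sound cue when the door opens after a user-chosen bedtime.
import datetime
from typing import Optional

def door_rule(sensor_state: str, now: datetime.time,
              bedtime: datetime.time) -> Optional[str]:
    """Return a media event to fire (or None) for the latest sensor reading."""
    if sensor_state == "open" and now >= bedtime:
        return "play:wind_down_reminder.mp3"   # made-up sound cue name
    return None

event = door_rule("open", now=datetime.time(23, 15), bedtime=datetime.time(22, 30))
print(event)  # -> play:wind_down_reminder.mp3
```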

To evaluate the effectiveness of these two approaches, a between-subjects trial comparing them to a sleep education control was conducted with 27 participants over 7 weeks. Although the results did not reveal a significant difference in sleep quality improvement between the conditions, trends indicating greater effectiveness in the two treatment groups were observed. Analysis of the plans participants created and their revision performance also indicated that the two treatment groups developed more specific and personalized plans than the control group.
Date Created
2016
Agent

Towards building cyber-human systems for individuals with visual impairment

Description
Significant strides have been made in enabling technologies that help individuals with visual impairment live independent lives. The advent of smart devices and the participatory web has especially opened the possibility of new interactions to aid everyday tasks. Current systems, however, tend to be complex and require multiple cumbersome devices that invariably come with steep learning curves. Building new cyber-human systems with simple, integrated interfaces, while keeping in mind the specific requirements of the target users, would help alleviate their mundane yet significant daily needs. Navigation is one such significant need that forms an integral part of everyday life and is one of the areas where individuals with visual impairment face the most discomfort. Little technology is available to help such travelers navigate new routes. A number of research prototypes have been proposed, but none of them are available to the general population. This may be because they require special equipment and expertise to deploy, need trained professionals to calibrate the devices, or simply do not scale. Another area that needs assistance is education: much of the classroom and textbook material is not readily available in alternate formats. A further area that requires attention is information delivery in the age of Web 2.0. Popular websites such as Facebook and Amazon are designed with sighted people as the target audience. While the pared-down mobile editions are easier to navigate with screen readers, there is still a long way to go in making such websites truly accessible.
Date Created
2016
Agent

Student modeling for English language learners in a moved by reading intervention

Description
EMBRACE (Enhanced Moved By Reading to Accelerate Comprehension in English) is an iPad application that uses the Moved By Reading strategy to help improve the reading comprehension skills of bilingual (Spanish-speaking) English Language Learners (ELLs). In EMBRACE, students read the text of a story and then move images corresponding to the text that they read. According to embodied cognition theory, this grounds reading comprehension in physical experiences and thus is more engaging.

In this thesis, I used the log data from 20 students in grades 2-5 to design a skill model for a student using EMBRACE. A skill model is the set of knowledge components that a student needs to master in order to comprehend the text in EMBRACE. A good skill model improves understanding of the mistakes students make and thus aids the design of useful feedback for the student. In this context, the skill model consists of the vocabulary and syntax associated with the steps that students performed. I mapped each step in EMBRACE to one or more skills (vocabulary and syntax) from the model. After every step, the skill level is updated in the model via the Bayesian Knowledge Tracing algorithm: if the student answered the step incorrectly, the corresponding skills are decremented, and if the student answered correctly, they are incremented.
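
The standard Bayesian Knowledge Tracing update takes the form sketched below; the slip, guess, and learning parameters are placeholder values, since the thesis's fitted parameters are not reproduced here.

```python
# Standard Bayesian Knowledge Tracing update (textbook form). The slip, guess,
# and learn parameters are placeholder values, not the thesis's fitted ones.
def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.15):
    """Update P(skill mastered) after observing one step's correctness."""
    if correct:
        posterior = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        posterior = p_know * slip / (p_know * slip + (1 - p_know) * (1 - guess))
    # Allow for the chance that the skill was learned during this step.
    return posterior + (1 - posterior) * learn

p = 0.3                                    # prior mastery of, e.g., one vocabulary word
for correct in [True, False, True, True]:  # observed correctness of successive steps
    p = bkt_update(p, correct)
print(round(p, 3))                         # updated mastery estimate
```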

I then evaluated the students’ predicted scores (computed from their skill levels) by correlating them with their posttest scores. The two sets of scores were not highly correlated, but the results gave insights into potential improvements that could be made to the system with respect to user interaction, posttest scores, and the modeling algorithm.
Date Created
2016
Agent

An adaptive time reduction technique for video lectures

Description
Lecture videos are a widely used resource for learning. A simple way to create videos is to record live lectures, but these videos end up being lengthy and include long pauses and repetitive words, making the viewing experience time-consuming. While pauses are useful in live learning environments where students take notes, I question the value of pauses in video lectures. Techniques and algorithms that can shorten such videos can have a huge impact in saving students’ time and reducing storage space. I study the problem of shortening videos by removing long pauses and adaptively modifying the playback rate to emphasize the most important sections of the video, and I examine the effect of this approach on students. The playback rate is designed so that uneventful sections play faster and significant sections play slower. Important and unimportant sections of a video are identified using textual analysis. I use an existing speech-to-text algorithm to extract the transcript and apply latent semantic analysis and standard information retrieval techniques to identify the relevant segments of the video. I compute relevance scores for the different segments and propose a variable playback rate for each of them. The aim is to reduce the amount of time students spend on passive learning while watching videos without harming their ability to follow the lecture. I validate the approach by conducting a user study among computer science students and measuring their engagement. The results indicate no significant difference in their engagement when this method is compared to the original unedited video.
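
A rough sketch of the segment-scoring and rate-mapping pipeline is shown below; the transcript snippets, the two-component LSA, and the 1.0x-2.0x rate range are illustrative assumptions rather than the thesis's exact choices:

```python
# Illustrative pipeline: TF-IDF + truncated SVD (LSA) over transcript segments,
# relevance as similarity to the lecture's mean topic vector, then a linear
# mapping to playback rates (fast for low relevance, slow for high relevance).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

segments = [
    "today we study binary search trees and their insertion rules",
    "um okay let me just find the right slide here",
    "deleting a node with two children requires the in-order successor",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(segments)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

topic = lsa.mean(axis=0)  # overall lecture topic vector
scores = lsa @ topic / (np.linalg.norm(lsa, axis=1) * np.linalg.norm(topic) + 1e-9)

# Low-relevance segments play fast (2.0x); high-relevance segments play slow (1.0x).
rates = np.interp(scores, (scores.min(), scores.max()), (2.0, 1.0))
for segment, rate in zip(segments, rates):
    print(f"{rate:.2f}x  {segment}")
```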
Date Created
2016
Agent

Exploring generic features for online large-scale discussion forum comments

Description
Online discussion forums have become an integral part of education and are large repositories of valuable information. They facilitate exploratory learning by allowing users to review and respond to the work of others and to approach learning in diverse ways. This research investigates different comment semantic features and the effect they have on the quality of a post in a large-scale discussion forum. We survey the relevant literature and adopt the key content-quality identification features. We then construct comment semantic features and build several regression models to explore the value of comment semantic dynamics. The results reconfirm the usefulness of several essential quality predictors, including time, reputation, length, and editorship. We also find that comment semantics help shape answer quality; specifically, the diversity of comments contributes significantly to answer quality. In addition, when searching for good-quality answers, it is important to look at global semantic dynamics (diversity) rather than local differences (disputable content). Finally, the presence of comments shepherds the community to revise posts by attracting attention to them, and it ultimately facilitates the editing process.
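
One way to operationalize the "diversity of comments" feature named above (an illustration with synthetic posts, not the thesis's feature definition) is mean pairwise cosine dissimilarity of TF-IDF comment vectors, which could then join the other predictors in the regression models:

```python
# Diversity of a post's comments as mean pairwise cosine dissimilarity of
# their TF-IDF vectors; the posts below are synthetic examples.
import numpy as np
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def comment_diversity(comments):
    """Higher values mean the comments talk about more distinct things."""
    vectors = TfidfVectorizer().fit_transform(comments)
    pairs = list(combinations(range(len(comments)), 2))
    if not pairs:
        return 0.0
    sims = [cosine_similarity(vectors[i], vectors[j])[0, 0] for i, j in pairs]
    return 1.0 - float(np.mean(sims))

posts = {
    "answer_1": ["please add a citation for the runtime claim",
                 "the second equation drops a constant factor"],
    "answer_2": ["nice answer", "nice answer indeed"],
}
for post_id, comments in posts.items():
    # This score would sit alongside time, reputation, length, and editorship
    # as one column of the quality-regression feature matrix.
    print(post_id, round(comment_diversity(comments), 2))
```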
Date Created
2016
Agent

Bridging cyber and physical programming classes: an application of semantic visual analytics for programming exams

Description
With the advent of Massive Open Online Courses (MOOCs), educators have the opportunity to collect data from students and use it to derive insightful information about them. Specifically, for programming-based courses, the ability to identify the specific areas or topics that need more attention from students can be of immense help. But the majority of traditional, non-virtual classes lack the ability to uncover such information, which could serve as feedback on the effectiveness of teaching. In the majority of schools, paper exams and assignments provide the only form of assessment to measure the success of students in achieving the course objectives. The overall grade obtained on paper exams and assignments does not necessarily present a complete picture of a student’s strengths and weaknesses. In part, this can be addressed by incorporating research-based technology into classrooms to obtain real-time updates on students' progress, but introducing technology to provide real-time, class-wide engagement involves a considerable investment, both academically and financially. This prevents the adoption of such technology and thereby prevents the ideal, technology-enabled classroom. With increasing class sizes, it is becoming impossible for teachers to keep persistent track of each student’s progress and to provide personalized feedback. What if we could provide technology support without adding more burden to the existing pedagogical approach? How can we enable semantic enrichment of exams that translates to students' understanding of the topics taught in class? Can we provide feedback to students that goes beyond mere numbers and reveals the areas that need their focus? In this research, I focus on bringing the capability of conducting insightful analysis to paper exams with a less intrusive learning analytics approach that fits generic classrooms with minimal introduction of technology. Specifically, the work focuses on automatic indexing of programming exam questions with ontological semantics. The thesis also focuses on designing and evaluating a novel semantic visual analytics suite for in-depth course monitoring. By visualizing the semantic information to illustrate the areas that need a student’s focus, and by enabling teachers to visualize class-level progress, the system provides richer feedback to both sides for improvement.
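
As a minimal sketch of the ontological indexing step (the concept fragment and keywords below are invented for illustration; the thesis's ontology is not reproduced), a question can be tagged with the concepts, and their parent topics, that its text touches:

```python
# Tiny made-up ontology fragment: concept -> trigger keywords and parent topic.
ONTOLOGY = {
    "loops": {"keywords": {"for", "while", "iterate", "loop"}, "parent": "control flow"},
    "recursion": {"keywords": {"recursive", "base case", "recursion"}, "parent": "control flow"},
    "pointers": {"keywords": {"pointer", "dereference", "address"}, "parent": "memory"},
}

def index_question(text):
    """Return the ontology concepts (and their parents) a question touches."""
    lowered = text.lower()
    hits = set()
    for concept, info in ONTOLOGY.items():
        if any(keyword in lowered for keyword in info["keywords"]):
            hits.add(concept)
            hits.add(info["parent"])
    return sorted(hits)

question = "Write a recursive function with a clear base case to sum a linked list."
print(index_question(question))   # -> ['control flow', 'recursion']
```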
Date Created
2016
Agent