Recommender System using Reinforcement Learning

Description
Currently, recommender systems are used extensively to match the right audience with the "right" content across various platforms. Recommendations generated by these systems aim to offer relevant items to users. Different approaches have been suggested to solve this problem, mainly by using the rating history of the user or by identifying the preferences of similar users. Most existing recommendation systems are formulated in an identical fashion: a model is trained to capture the underlying preferences of users over different kinds of items. Once deployed, the model is expected to produce precise, personalized recommendations, under the assumption that the historical data perfectly reflects the users' preferences. However, such user data may be limited in practice, and the characteristics of users may constantly evolve during their intensive interaction with recommendation systems.

Moreover, most of these recommender systems suffer from the cold-start problem, where insufficient data for new users or products degrades the quality of recommendations. In the current study, we built a recommender system that recommends movies to users. A biclustering algorithm is first used to cluster the users and movies simultaneously, producing explainable recommendations; these biclusters then form a gridworld in which Q-Learning is used to learn a policy for traversing the grid. The reward function uses the Jaccard Index, a measure of the overlap in users between two biclusters. Demographic details of new users are used to generate recommendations, which also addresses the cold-start problem.
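As a rough illustration of this formulation, the sketch below wires a handful of toy biclusters into a tabular Q-Learning loop whose reward is the Jaccard Index between the user sets of two biclusters. The bicluster contents, state/action encoding, and hyperparameters are placeholders for illustration, not the thesis implementation.

```python
import random
from collections import defaultdict

# Illustrative biclusters: each maps a bicluster id to the set of users it contains.
# In the thesis these come from a biclustering algorithm; here they are toy data.
biclusters = {
    0: {"u1", "u2", "u3"},
    1: {"u2", "u3", "u4"},
    2: {"u4", "u5"},
    3: {"u1", "u5", "u6"},
}

def jaccard(a, b):
    """Jaccard index |A ∩ B| / |A ∪ B| -- the reward for moving between biclusters."""
    return len(a & b) / len(a | b)

states = list(biclusters)          # each bicluster is treated as a cell of the gridworld
Q = defaultdict(float)             # Q[(state, action)] -> learned value
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state):
    """Epsilon-greedy move to another bicluster; reward is the user overlap."""
    actions = [s for s in states if s != state]
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: Q[(state, a)])
    reward = jaccard(biclusters[state], biclusters[action])
    best_next = max(Q[(action, a)] for a in states if a != action)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    return action

state = random.choice(states)
for _ in range(5000):
    state = step(state)

# Highest-valued transitions between biclusters, used to walk the grid when recommending.
print(sorted(Q.items(), key=lambda kv: -kv[1])[:3])
```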

Lastly, the implemented algorithm is evaluated on a real-world dataset against widely used recommendation algorithms, including its performance in cold-start cases.
Date Created
2020

Exploring the Efficacy of Using Augmented Reality to Alleviate Common Misconceptions about Natural Selection

Description
Evidence suggests that Augmented Reality (AR) may be a powerful tool for alleviating certain, lightly held scientific misconceptions. However, many misconceptions surrounding the theory of evolution are deeply held and resistant to change. This study examines whether AR can serve as an effective tool for alleviating these misconceptions by comparing the change in the number of misconceptions expressed by users of a tablet-based version of a well-established classroom simulation to the change in the number of misconceptions expressed by users of AR versions of the simulation.

The use of realistic representations of objects is common for many AR developers. However, this contradicts well-tested practices of multimedia design that argue against the addition of unnecessary elements. This study also compared the use of representational visualizations in AR, in this case, models of ladybug beetles, to symbolic representations, in this case, colored circles.

To address both research questions, a one-factor, between-subjects experiment was conducted with 189 participants randomly assigned to one of three conditions: non-AR, symbolic AR, and representational AR. Measures of change in the number and types of misconceptions expressed, motivation, and time on task were examined using a pair of planned orthogonal contrasts designed to test the study's two research questions.

Participants in the AR-based conditions showed a significantly smaller change in the number of total misconceptions expressed after the treatment, as well as in the number of misconceptions related to intentionality; none of the other misconceptions examined showed a significant difference. No significant differences were found in the total number of misconceptions expressed between participants in the representational and symbolic AR-based conditions, or on motivation. Contrary to the expectation that the simulation would alleviate misconceptions, the average change in the number of misconceptions expressed by participants increased. This is theorized to be due to the juxtaposition of virtual and real-world entities resulting in a reduction in assumed intentionality.
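The contrast analysis can be illustrated with a short sketch. The group values below are synthetic placeholders, not the study's data; the contrast weights follow the standard orthogonal coding implied by the two research questions (non-AR versus the pooled AR conditions, and symbolic AR versus representational AR).

```python
import numpy as np
from scipy import stats

# Placeholder data: change in number of misconceptions per participant, by condition.
non_ar    = np.random.normal(1.5, 1.0, 63)
symbolic  = np.random.normal(0.8, 1.0, 63)
represent = np.random.normal(0.7, 1.0, 63)
groups = [non_ar, symbolic, represent]

# Orthogonal contrast weights for the two research questions:
# C1: non-AR vs. the two AR conditions combined; C2: symbolic vs. representational AR.
contrasts = {"non-AR vs AR": [-2, 1, 1], "symbolic vs representational": [0, -1, 1]}

means = [g.mean() for g in groups]
ns = [len(g) for g in groups]
# Pooled within-group variance (the MSE from a one-way ANOVA).
df_error = sum(ns) - len(groups)
mse = sum((len(g) - 1) * g.var(ddof=1) for g in groups) / df_error

for name, w in contrasts.items():
    psi = sum(wi * m for wi, m in zip(w, means))                 # contrast estimate
    se = np.sqrt(mse * sum(wi ** 2 / n for wi, n in zip(w, ns)))  # its standard error
    t = psi / se
    p = 2 * stats.t.sf(abs(t), df_error)
    print(f"{name}: t({df_error}) = {t:.2f}, p = {p:.3f}")
```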
Date Created
2019

Real-Time Affective Support to Promote Learner’s Engagement

Description
Research has shown that the learning processes can be enriched and enhanced with the presence of affective interventions. The goal of this dissertation was to design, implement, and evaluate an affective agent that provides affective support in real-time in order to enrich the student’s learning experience and performance by inducing and/or maintaining a productive learning path. This work combined research and best practices from affective computing, intelligent tutoring systems, and educational technology to address the design and implementation of an affective agent and corresponding pedagogical interventions. It included the incorporation of the affective agent into an Exploratory Learning Environment (ELE) adapted for this research.

The proposed affective agent was visually represented by a gendered, three-dimensional, animated, human-like character accompanied by text- and speech-based dialogue. The agent's pedagogical interventions considered inputs from the ELE (interface, model-building, and performance events) and from the user (emotional and cognitive events). The user's emotional events, captured by biometric sensors and processed by a decision-level fusion algorithm for the multimodal system, were combined with events from the ELE to inform the production-rule-based behavior engine, which defined and triggered the pedagogical interventions. These interventions focused on affective dimensions and took the form of affective dialogue prompts and animations.
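The sketch below shows, in schematic form, how a decision-level fusion step and a production-rule behavior engine might fit together. The sensor channels, weights, emotion labels, and rules are hypothetical placeholders rather than the dissertation's actual algorithm.

```python
from collections import Counter

def fuse_decisions(channel_outputs, weights):
    """Decision-level fusion: each sensor channel votes for an affective state;
    votes are combined by weighted majority."""
    tally = Counter()
    for channel, label in channel_outputs.items():
        tally[label] += weights.get(channel, 1.0)
    return tally.most_common(1)[0][0]

# Hypothetical per-channel classifications for one time window.
channel_outputs = {"facial": "frustration", "gsr": "frustration", "eeg": "engagement"}
weights = {"facial": 0.5, "gsr": 0.3, "eeg": 0.2}
affective_state = fuse_decisions(channel_outputs, weights)

# Production-rule behavior engine: (condition, intervention) pairs evaluated in order.
rules = [
    (lambda ctx: ctx["affect"] == "frustration" and ctx["failed_attempts"] >= 2,
     "empathetic dialogue prompt + hint animation"),
    (lambda ctx: ctx["affect"] == "boredom" and ctx["idle_seconds"] > 60,
     "challenge prompt"),
    (lambda ctx: True, "no intervention"),
]

context = {"affect": affective_state, "failed_attempts": 2, "idle_seconds": 10}
intervention = next(action for cond, action in rules if cond(context))
print(affective_state, "->", intervention)
```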

An experiment was conducted to assess the impact of the affective agent, Hope, on the student’s learning experience and performance. In terms of the student’s learning experience, the effect of the agent was analyzed in four components: perception of the instructional material, perception of the usefulness of the agent, ELE usability, and the affective responses from the agent triggered by the student’s affective states.

Additionally, in terms of the student’s performance, the effect of the agent was analyzed in five components: tasks completed, time spent solving a task, planning time while solving a task, usage of the provided help, and attempts to successfully complete a task. The findings from the experiment did not provide the anticipated results related to the effect of the agent; however, the results provided insights to improve diverse components in the design of affective agents as well as for the design of the behavior engines and algorithms to detect, represent, and handle affective information.
Date Created
2018

Exploring the use of self-explanation prompts in a collaborative learning environment

Description
A recorded tutorial dialogue can produce positive learning gains when observed and used to promote discussion between a pair of learners; however, this same effect does not typically occur when a learner observes a tutorial dialogue by himself or herself. One potential approach to enhancing learning in the latter situation is to incorporate self-explanation prompts, a proven technique for encouraging students to engage in active learning and attend to the material in a meaningful way. This study examined whether learning from observing recorded tutorial dialogues could be made more effective by adding self-explanation prompts in a computer-based learning environment. The research questions in this two-experiment study were (a) Do self-explanation prompts help support student learning while watching a recorded dialogue? and (b) Does collaboratively observing (in dyads) a tutorial dialogue with self-explanation prompts help support student learning while watching a recorded dialogue? In Experiment 1, 66 participants were randomly assigned as individuals to a physics lesson (a) with self-explanation prompts (Condition 1) or (b) without self-explanation prompts (Condition 2). In Experiment 2, 20 participants were randomly assigned in 10 pairs to the same physics lesson (a) with self-explanation prompts (Condition 1) or (b) without self-explanation prompts (Condition 2). Pretests and posttests were administered, as were surveys measuring motivation and system usability. Although supplemental analyses showed some significant differences among individual scale items or factors, the primary results for neither Experiment 1 nor Experiment 2 were significant for changes from pretest to posttest scores on the learning, motivation, or system usability assessments.
Date Created
2018

The Usefulness of Multi-Sensor Affect Detection on User Experience: An Application of Biometric Measurement Systems on Online Purchasing

Description
Traditional usability methods in Human-Computer Interaction (HCI) have been used extensively to understand the usability of products. Measurements of user experience (UX) in traditional HCI studies rely mostly on task performance and observable user interactions with the product or service, such as usability tests and contextual inquiry, and on subjective self-report data, including questionnaires and interviews. However, these methods fail to directly reflect a user's psychological involvement, and they further fail to explain the underlying cognitive processing and the related emotional arousal. Thus, capturing how users think and feel when they are using a product remains a vital challenge for user experience evaluation studies. Recent research, by contrast, has revealed that sensor-based affect detection technologies, such as eye tracking, electroencephalography (EEG), galvanic skin response (GSR), and facial expression analysis, effectively capture affective states and physiological responses. These methods are efficient indicators of cognitive involvement and emotional arousal and constitute effective strategies for a comprehensive measurement of UX. The literature review shows that the impacts of sensor-based affect detection systems on UX evaluation fall into two categories: (1) confirmatory, validating the results obtained from traditional usability methods; and (2) complementary, enhancing the findings or providing more precise and valid evidence. Both provide comprehensive findings that uncover issues along mental and physiological pathways and can enhance the design of products and services. Therefore, this dissertation claims that integrating sensor-based affect detection technologies can efficiently address the current gaps and weaknesses of traditional usability methods. The dissertation revealed that a multi-sensor-based UX evaluation approach, using biometric tools and software, corroborated the user experience identified by traditional UX methods during an online purchasing task. The use of these systems enhanced the findings and provided more precise and valid evidence for predicting consumer purchasing preferences; thus, their impact on the overall UX evaluation was "complementary." The dissertation also provided information on the unique contributions of each tool and recommended ways that user experience researchers can combine sensor-based and traditional UX approaches to explain consumer purchasing preferences.
Date Created
2018

Improving Usability and Adoption of Tablet-based Electronic Health Record (EHR) Applications

Description
The technological revolution has caused the entire world to migrate to a digital environment, and health care is no exception. Electronic Health Records (EHRs), also called Electronic Medical Records (EMRs), are the digital repository for patients' health data. Nationwide efforts have been made by the federal government to promote the usage of EHRs, as they have been found to improve the quality of health services. Although EHR systems have been implemented almost everywhere, active use of EHR applications has not replaced paper documentation. Rather, they are often used to store data transcribed from paper documentation after each clinical procedure. This process is prone to errors such as data omission and incomplete documentation, and it is also time-consuming. This research aims to improve the adoption of real-time EHR use during documentation by improving the usability of an iPad-based EHR application used during the resuscitation process in the intensive care unit. Using cognitive theories and HCI frameworks, this research identified areas of improvement and customizations in the application that were required to match the workflow of the resuscitation team at the Mayo Clinic. In addition, a Handwriting Recognition Engine (HRE) was integrated into the application to support stylus-based information input into the EHR, which resembles the target users' traditional pen-and-paper documentation process. The updated EHR application was then evaluated with end users at the Mayo Clinic. The users found the application to be efficient and usable, and they preferred using it over paper-based documentation.
Date Created
2018

Experimental evaluation of DEFUSE: online de-escalation training for law enforcement intervening in mental health crises

Description
Training for law enforcement on effective ways of intervening in mental health crises is limited. What is available tends to be costly to implement, labor-intensive, and dependent on officers opting in. DEFUSE, an interactive online training program, was specifically developed to train law enforcement on mental illness and de-escalation skills. Derived from a stress inoculation framework, the curriculum provides education, skills training, and rehearsal; it is brief, cost-effective, and scalable to officers across the country. Participants were randomly assigned to either the experimental condition or a delayed-treatment control condition. A multivariate analysis of variance yielded a significant treatment-by-repeated-measures interaction, and univariate analyses confirmed improvement on all of the measures (e.g., empathy, stigma, self-efficacy, behavioral outcomes, knowledge). Replication dependent t-test analyses conducted on the control condition after it completed DEFUSE confirmed significant improvement on four of the measures and marginal significance on the fifth. Participant responses to BPAD video vignettes revealed significant differences in objective behavioral proficiency for those participants who completed the online course. DEFUSE is a powerful tool for training law enforcement on mental illness and effective strategies for intervening in mental health crises. Considerations for future study are discussed.
Date Created
2017

Analytical Methods for High Dimensional Physiological Sensors

Description
This dissertation proposes a new set of analytical methods for high-dimensional physiological sensors. The methodologies developed in this work were motivated by problems in learning science but also apply to numerous disciplines where high-dimensional signals are present. In the education field, more data is now available from traditional sources, and there is an important need for analytical methods that translate this data into improved learning. Affective Computing, the study of techniques for developing systems that recognize and model human emotions, is integrating different physiological signals, such as the electroencephalogram (EEG) and electromyogram (EMG), to detect and model emotions, which can later be used to improve these learning systems.

The first contribution proposes an event-crossover (ECO) methodology to analyze performance in learning environments. The methodology is relevant to studies where it is desired to evaluate the relationships between sentinel events in a learning environment and a physiological measurement which is provided in real time.

The second contribution introduces analytical methods to study relationships between multi-dimensional physiological signals and sentinel events in a learning environment. The proposed methodology learns physiological patterns, in the form of node activations near the time of events, using different statistical techniques.

The third contribution addresses the challenge of performance prediction from physiological signals. Features that could be computed from the sensors early in the learning activity were developed as inputs to a machine learning model. The objective is to predict success or failure of the student in the learning environment early in the activity. EEG was used as the physiological signal to train a pattern recognition algorithm to derive meta-affective states.
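A minimal sketch of this early-prediction setup is shown below, using scikit-learn and synthetic stand-ins for early-window EEG features; the feature set, window length, and classifier choice are assumptions made for illustration, not the dissertation's model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in: per-student EEG band-power features averaged over the
# first few minutes of the activity (e.g., theta, alpha, beta, an engagement index).
n_students, n_features = 120, 4
X_early = rng.normal(size=(n_students, n_features))
# Synthetic success/failure labels, loosely tied to one feature for illustration.
y = (X_early[:, 0] + rng.normal(scale=1.0, size=n_students) > 0).astype(int)

# Train a simple classifier on the early features and estimate how well
# success or failure can be predicted before the activity is finished.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X_early, y, cv=5, scoring="roc_auc")
print(f"Early-prediction AUC: {scores.mean():.2f} ± {scores.std():.2f}")
```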

The last contribution introduces a methodology to predict a learner's performance using Bayes Belief Networks (BBNs). Posterior probabilities of latent nodes were used as inputs to a predictive model in real time as evidence accumulated in the BBN.

The methodology was applied to data streams from a video game and from a Damage Control Simulator to predict and quantify performance. The proposed methods provide cognitive scientists with new tools to analyze subjects in learning environments.
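The core idea of the last contribution can be sketched with a hand-rolled posterior update for a single latent node; the network structure and probabilities below are invented for illustration and are not the dissertation's BBNs.

```python
import numpy as np

# Latent node: learner "skill" (high / low), with a uniform prior.
prior = np.array([0.5, 0.5])                      # P(skill = [high, low])
# Likelihood of succeeding on an in-game task, given skill.
p_success_given_skill = np.array([0.8, 0.3])      # P(success | high), P(success | low)

def update_posterior(posterior, success):
    """One step of Bayesian evidence accumulation for the latent skill node."""
    likelihood = p_success_given_skill if success else 1 - p_success_given_skill
    unnormalized = posterior * likelihood
    return unnormalized / unnormalized.sum()

posterior = prior
evidence_stream = [True, True, False, True]       # task outcomes arriving in real time
posterior_trace = []
for outcome in evidence_stream:
    posterior = update_posterior(posterior, outcome)
    posterior_trace.append(posterior[0])          # P(skill = high) after each event

# The running posterior is what would be fed into a downstream predictive model.
print(["%.2f" % p for p in posterior_trace])
```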
Date Created
2017

Exploring the use of augmented reality to support cognitive modeling in art education

Description
The present study explored the use of augmented reality (AR) technology to support cognitive modeling in an art-based learning environment. The AR application used in this study made visible the thought processes and observational techniques of art experts, for the learning benefit of novices, through digital annotations, overlays, and side-by-side comparisons that, when viewed on a mobile device, appear directly on works of art.

Using a 2 x 3 factorial design, this study compared learner outcomes and motivation across technologies (audio-only, video, AR) and groupings (individuals, dyads) with 182 undergraduate and graduate students who were self-identified art novices. Learner outcomes were measured by post-activity spoken responses to a painting reproduction with the pre-activity response as a moderating variable. Motivation was measured by the sum score of a reduced version of the Instructional Materials Motivational Survey (IMMS), accounting for attention, relevance, confidence, and satisfaction, with total time spent in learning activity as the moderating variable. Information on participant demographics, technology usage, and art experience was also collected.
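An analysis matching the 2 x 3 design described above might look like the following statsmodels sketch; the data frame, column names, and covariate handling are placeholders, and the study's actual models also account for time spent in the learning activity.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
n = 180  # illustrative sample size, close to the study's 182 participants

# Placeholder data: one row per participant.
df = pd.DataFrame({
    "technology": rng.choice(["audio", "video", "ar"], n),
    "grouping": rng.choice(["individual", "dyad"], n),
    "pre_score": rng.normal(10, 3, n),
})
df["post_score"] = df["pre_score"] + rng.normal(2, 3, n)

# 2 x 3 factorial analysis of post-activity outcomes, with the pre-activity
# response entered as a covariate (moderating variable).
model = ols("post_score ~ pre_score + C(technology) * C(grouping)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```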

Participants were randomly assigned to one of six conditions that differed by technology and grouping before completing a learning activity where they viewed four high-resolution, printed-to-scale painting reproductions in a gallery-like setting while listening to audio-recorded conversations of two experts discussing the actual paintings. All participants listened to expert conversations but the video and AR conditions received visual supports via mobile device.

Though no main effects were found for technology or groupings, findings did include statistically significantly higher learner outcomes on the elements of design subscale (the characteristics most represented by the visual supports of the AR application) for the AR conditions than for the audio-only conditions. When participants saw digital representations of line, shape, and color directly on the paintings, they were more likely to identify those same features in the post-activity painting. Seeing what the experts see, in a situated environment, resulted in evidence that participants began to view paintings in a manner similar to the experts. This is evidence of the value of the temporal and spatial contiguity afforded by AR in cognitive modeling learning environments.
Date Created
2016

Improving adolescent writing quality and motivation with Sparkfolio, a social media based writing tool

Description
Writing instruction poses both cognitive and affective challenges, particularly for adolescents. American teens not only fall short of national writing standards, but also tend to lack motivation for school writing, claiming it is too challenging and that they have nothing interesting to write about. Yet, teens enthusiastically immerse themselves in informal writing via text messaging, email, and social media, regularly sharing their thoughts and experiences with a real audience. While these activities are, in fact, writing, research indicates that teens instead view them as simply "communication" or "being social." Accordingly, the aim of this work was to infuse formal classroom writing with naturally engaging elements of informal social media writing to positively impact writing quality and the motivation to write, resulting in the development and implementation of Sparkfolio, an online prewriting tool that: a) addresses affective challenges by allowing students to choose personally relevant topics using their own social media data; and b) provides cognitive support with a planner that helps develop and organize ideas in preparation for writing a first draft. This tool was evaluated in a study involving 46 eleventh-grade English students writing three personal narratives each, and including three experimental conditions: a) using self-authored social media post data while planning with Sparkfolio; b) using only data from posts authored by one's friends while planning with Sparkfolio; and c) a control group that did not use Sparkfolio. The dependent variables were the change in writing motivation and the change in writing quality from before to after the intervention. A scaled pre/posttest measured writing motivation, and the first and third narratives were used as writing quality pre/posttests. A usability scale, logged Sparkfolio data, and qualitative measures were also analyzed. Results indicated that participants who used Sparkfolio had statistically significantly higher gains in writing quality than the control group, validating Sparkfolio as effective. Additionally, while nonsignificant, results suggested that planning with self-authored data provided more writing quality and motivational benefits than data authored by others. This work provides initial empirical evidence that leveraging students' own social media data (securely) holds potential for fostering meaningful personalized learning.
Date Created
2014