Human Factors Affecting Video Streaming Service Selection
- Author (aut): Wallace, Sydney
- Thesis advisor (ths): Roscoe, Rod
- Committee member: Cooke, Nancy
- Committee member: Craig, Scotty
- Publisher (pbl): Arizona State University
Online learning has become more prevalent with the rapid growth of the technology field; this paper examines whether the interactivity of an online learning website affects learning, usability, and time spent interacting. Participants were recruited from Amazon Mechanical Turk and compensated $1.00 for their time. Thirty-nine participants received one of three online learning conditions on the ideal gas law with varying levels of interactivity (video, simulation, quiz). The participants took a pretest, interacted with their condition for a set time, then completed a posttest and a usability survey. An ANOVA was conducted on time, usability, and posttest transfer scores, and a repeated measures ANOVA was conducted on pretest and posttest recall scores. No significant effects were found for learning, usability, or time spent interacting with the online learning platform. Further studies should consider exposing participants to learning materials for longer periods of time.
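As a rough, hypothetical sketch of the analyses described above (not the author's actual code), the one-way and repeated-measures ANOVAs could be run in Python along these lines; the data file and column names such as `condition`, `posttest_transfer`, and `participant_id` are assumptions made purely for illustration.

```python
# Hypothetical sketch of the analyses described in the abstract; the CSV file
# and column names are assumptions, not the author's actual data or code.
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("study_data.csv")  # one row per participant

# One-way ANOVA across the three conditions (video, simulation, quiz)
# for time on task, usability score, and posttest transfer score.
for dv in ["time_on_task", "usability", "posttest_transfer"]:
    groups = [g[dv].values for _, g in df.groupby("condition")]
    f, p = stats.f_oneway(*groups)
    print(f"{dv}: F = {f:.2f}, p = {p:.3f}")

# Repeated-measures ANOVA on recall scores (pretest vs. posttest).
long = df.melt(
    id_vars=["participant_id", "condition"],
    value_vars=["pretest_recall", "posttest_recall"],
    var_name="phase",
    value_name="recall",
)
rm = AnovaRM(long, depvar="recall", subject="participant_id", within=["phase"]).fit()
print(rm)
```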
The pandemic that began in 2020 has accelerated the growth of online learning, including the boom in Massive Open Online Courses (MOOCs). In this environment, tools that help students choose among courses and help instructors understand what students need are valuable. One such tool is an online course rating predictor. Using the predictor, online course instructors can learn which qualities the majority of course takers consider important and adjust their lesson plans to fit those qualities, while students can compare predicted ratings when choosing a course to take. This research aims to find the best way to predict the rating of online courses using machine learning (ML). To create the ML model, different combinations of the following features are used as inputs: the length of the course, the number of materials it contains, the price of the course, the number of students taking the course, the course's difficulty level, the use of jargon or technical terms in the course description, the instructors' rating, the number of reviews the instructors have received, and the number of classes the instructors have created on the same platform. The output of the model is the average rating of a course. Data from 350 courses are used, with 280 for training, 35 for testing, and the last 35 for validation. Among the machine learning models tried, the wide neural network model consistently gives the best training results, while the medium tree model gives the best testing results. However, further research is needed, as none of the results are sufficiently accurate; the tree model achieves an R-squared of only 0.51 on the test set.
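A minimal, hypothetical sketch of such a rating predictor is shown below, assuming a scikit-learn workflow; the feature column names, the `courses.csv` file, and the specific model hyperparameters are illustrative assumptions rather than the thesis's actual implementation.

```python
# Hypothetical sketch of the course-rating predictor described above; column
# names, file name, and hyperparameters are assumptions, not the thesis code.
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

FEATURES = [
    "course_length", "num_materials", "price", "num_students",
    "difficulty_level", "jargon_in_description", "instructor_rating",
    "instructor_num_reviews", "instructor_num_classes",
]

df = pd.read_csv("courses.csv")  # 350 courses
train, test, val = df.iloc[:280], df.iloc[280:315], df.iloc[315:]

X_train, y_train = train[FEATURES], train["avg_rating"]
X_test, y_test = test[FEATURES], test["avg_rating"]

# "Wide" neural network: a single large hidden layer.
nn = MLPRegressor(hidden_layer_sizes=(100,), max_iter=2000, random_state=0)
nn.fit(X_train, y_train)

# Shallow ("medium") decision tree.
tree = DecisionTreeRegressor(max_depth=4, random_state=0)
tree.fit(X_train, y_train)

for name, model in [("neural net", nn), ("tree", tree)]:
    print(name, "test R^2 =", round(r2_score(y_test, model.predict(X_test)), 2))
```

The 280/35/35 split mirrors the proportions reported in the abstract; in a full workflow the network width and tree depth would be tuned on the held-out validation set before comparing test R-squared values.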