Gesture.js: A Cloud-Deployable Framework for Building Embodied Experiences

Description

Emerging body movement detection and gesture recognition software has opened a gateway of possibilities for making technology more intuitive, engaging, and accessible. A vast area of natural user interfaces leverages body motion tracking and gesture recognition technologies, together with the body's ready expressiveness, to extend interactions with software beyond mouse clicks and scrolls. However, these interfaces have been limited by hardware and software expenses, high development time and cost, and steep learning curves. This paper explores different approaches to giving both software developers and designers easier ways to incorporate computer vision-based body and gesture detection into the development of embodied experiences without suppressing creativity. Gesture.js is a JavaScript framework as a service (FaaS): it is both a thin library on top of the Document Object Model (DOM), consisting of a collection of tools for developing embodied-enabled applications on the web, and a landmark computation and processing application programming interface (API). It wraps MediaPipe, an open-source collection of machine-learning solutions that perform inference over arbitrary sensory data, as well as additional landmark-processing frameworks such as KalidoKit, a 3D model rigging solution, and ports the necessary information through either an object-oriented or an API-oriented implementation. It also comes with its own web-based graphical interface for easily connecting Gesture.js to other application clients with little to no JavaScript code. This thesis also details a collection of example applications that demonstrate the usability, capacity, and potential of the framework.
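To give a flavor of the object-oriented usage described above, the sketch below shows how such a DOM-level wrapper might be driven from a web page. It is illustrative only: the abstract does not specify the framework's API, so the entry point, constructor options, and event names (GestureDetector, solutions, "landmarks") are hypothetical; only the landmark indexing (index 8 is the index fingertip in MediaPipe's hand model) reflects MediaPipe itself.

    // Hypothetical usage sketch; the Gesture.js names below are illustrative,
    // not taken from the actual framework.
    import { GestureDetector } from "gesture.js";

    const cursor = document.getElementById("cursor"); // a DOM element to move

    // Wrap MediaPipe inference over a webcam <video> element and re-emit
    // landmark frames as events that ordinary DOM code can consume.
    const detector = new GestureDetector({
      source: document.querySelector("video"),
      solutions: ["hands"], // which underlying MediaPipe solutions to run
    });

    detector.on("landmarks", ({ hands }) => {
      if (hands.length === 0) return;
      const indexTip = hands[0][8]; // MediaPipe hand landmark 8 = index fingertip
      // MediaPipe landmarks are normalized to [0, 1], so scale to the viewport.
      cursor.style.left = `${indexTip.x * window.innerWidth}px`;
      cursor.style.top = `${indexTip.y * window.innerHeight}px`;
    });

    detector.start();

In the API-oriented implementation, the same landmark stream would presumably be fetched from the service endpoint instead of received as in-page events, which is what would let non-JavaScript clients connect through the web-based graphical interface.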
Date Created
2022

Nurturing Open Design: Challenges and Opportunities for HCI to Support Crowd-driven Hardware Design

Description

Open Design is a crowd-driven global ecosystem that seeks to challenge and alter contemporary modes of capitalistic hardware production. It strives to build on the collective skills, expertise, and efforts of people, regardless of their educational, social, or political backgrounds, to develop and disseminate physical products, machines, and systems. In contrast to capitalistic hardware production, Open Design practitioners publicly share design files, blueprints, and know-how through various channels, including internet platforms and in-person workshops. These designs are typically replicated, modified, improved, and reshared by individuals and groups broadly referred to as ‘makers’.

This dissertation aims to expand the current scope of Open Design within human-computer interaction (HCI) research through a long-term exploration of Open Design’s socio-technical processes. I examine Open Design from three perspectives: the functional—materials, tools, and platforms that enable crowd-driven open hardware production, the critical—materially-oriented engagements within open design as a site for sociotechnical discourse, and the speculative—crowd-driven critical envisioning of future hardware.

More specifically, this dissertation first explores the growing global scene of Open Design through a long-term ethnographic study of the open science hardware (OScH) movement, a genre of Open Design. This long-term study of OScH provides a focal point for HCI to deeply understand Open Design's growing global landscape. Second, it examines the application of Critical Making within Open Design through an OScH workshop with designers, engineers, artists, and makers from local communities. This work foregrounds the role of HCI researchers as facilitators of collaborative critical engagements within Open Design. Third, this dissertation introduces the concept of crowd-driven Design Fiction through the development of a publicly accessible online Design Fiction platform named Dream Drones. Through a six-month development process and a study with drone-related practitioners, it offers several pragmatic insights into the challenges and opportunities for crowd-driven Design Fiction. Through these explorations, I highlight the broader implications and novel research pathways for HCI to shape and be shaped by the global Open Design movement.
Date Created
2020

Discoverable Free Space Gesture Sets for Walk-Up-and-Use Interactions

Description

Advances in technology are fueling a movement toward ubiquity for beyond-the-desktop systems. Novel interaction modalities, such as free-space or full-body gestures, are becoming more common, as demonstrated by the rise of systems such as the Microsoft Kinect. However, much of the interaction design research for such systems is still focused on desktop and touch interactions. Current thinking on free-space gestures is limited in capability and imagination, and most gesture studies have not attempted to identify gestures appropriate for public walk-up-and-use applications. A walk-up-and-use display must be discoverable, such that first-time users can use the system without any training; flexible; and not fatiguing, especially in the case of longer-term interactions. One mechanism for defining gesture sets for walk-up-and-use interactions is a participatory design method called gesture elicitation. This method has been used to identify several user-generated gesture sets and has shown that user-generated sets are preferred by users over those defined by system designers. However, for these studies to be successfully implemented in walk-up-and-use applications, there is a need to understand which components of these gestures are semantically meaningful (i.e., do users distinguish between using their left and right hands, or are those semantically the same thing?). Thus, defining a standardized gesture vocabulary for coding, characterizing, and evaluating gestures is critical. This dissertation presents three gesture elicitation studies for walk-up-and-use displays that employ a novel gesture elicitation methodology, alongside a novel coding scheme for gesture elicitation data that focuses on the features most important to users’ mental models. Generalizable design principles, based on the three studies, are then derived and presented (e.g., changes in speed are meaningful for scroll actions in walk-up-and-use displays, but not for paging or selection). The major contributions of this work are: (1) an elicitation methodology that aids users in overcoming biases from existing interaction modalities; (2) a better understanding of the gestural features that matter, i.e., those that capture the intent of the gestures; and (3) generalizable design principles for walk-up-and-use public displays.
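As a concrete illustration of such a coding scheme, the sketch below (in JavaScript, for consistency with the rest of this page; the feature vocabulary is hypothetical, not the dissertation's actual scheme) codes two elicited proposals for a scroll referent and compares them only on the features the studies found semantically meaningful: handedness is treated as equivalent, while speed change is kept because it matters for scrolling.

    // Illustrative sketch of coded gesture proposals; the feature names are
    // hypothetical, not the dissertation's actual coding vocabulary.
    const proposalA = {
      referent: "scroll",          // the command participants were prompted with
      handedness: "right",
      motion: "vertical-swipe",
      speedChange: "accelerating",
    };
    const proposalB = {
      referent: "scroll",
      handedness: "left",
      motion: "vertical-swipe",
      speedChange: "accelerating",
    };

    // Two proposals count as the same gesture only if they agree on every
    // feature that is semantically meaningful for the given referent.
    function sameGesture(a, b, meaningfulFeatures) {
      return meaningfulFeatures.every((feature) => a[feature] === b[feature]);
    }

    // Handedness is excluded because users treated left and right hands as
    // equivalent; speedChange is included because speed is meaningful for
    // scrolling (it would be dropped for paging or selection).
    console.log(sameGesture(proposalA, proposalB, ["motion", "speedChange"])); // true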
Date Created
2019