Adaptive mixed reality rehabilitation for stroke

Description
Millions of Americans live with motor impairments resulting from a stroke, and the best way to administer rehabilitative therapy to achieve recovery is not well understood. Adaptive mixed reality rehabilitation (AMRR) is a novel integration of motion capture technology and high-level media computing that provides precise kinematic measurements and engaging multimodal feedback for self-assessment during a therapeutic task. The AMRR system was evaluated in a small (N = 3) cohort of stroke survivors to determine best practices for administering adaptive, media-based therapy. A proof-of-concept study followed, examining changes in clinical scale and kinematic performance among a group of stroke survivors who received either a month of AMRR therapy (N = 11) or matched dosing of traditional repetitive task therapy (N = 10). Both groups demonstrated statistically significant improvements in Wolf Motor Function Test and upper-extremity Fugl-Meyer Assessment scores, indicating increased function after the therapy. However, only participants who received AMRR therapy showed a consistent improvement in their kinematic measurements, both in the trained reaching task (reaching to grasp a cone) and in an untrained reaching task (reaching to push a lighted button). These results suggest that the AMRR system can be used as a therapy tool to enhance both functionality and the reaching kinematics that quantify movement quality. Additionally, the AMRR concepts are currently being transitioned to a home-based training application. An inexpensive, easy-to-use toolkit of tangible objects has been developed to sense, assess, and provide feedback on hand function during different functional activities. These objects have been shown to accurately and consistently track hand function in people with unimpaired movement and will be tested with stroke survivors in the future.
Date Created
2012
The internal representation of arm position revealed through the spatial pattern of hand location estimation errors

Description
Our ability to estimate the position of our body parts in space, a fundamentally proprioceptive process, is crucial for interacting with the environment and for movement control. For proprioception to support these actions, the central nervous system has to rely on a stored internal representation of the body parts in space. However, relatively little is known about this internal representation of arm position. To this end, I developed a method to map proprioceptive estimates of hand location across a 2-D workspace. In this task, I moved each subject's hand to a target location while the subject's eyes were closed. After returning the hand, subjects opened their eyes and verbally reported where their fingertip had been. I then reconstructed and analyzed the spatial structure of the pattern of estimation errors. In the first two experiments, I probed the structure and stability of the pattern of errors by manipulating which hand was used and the tactile feedback provided when the hand was at each target location. I found that the resulting pattern of errors was systematically stable across conditions for each subject, subject-specific, and not uniform across the workspace. These findings suggest that the observed structure of the pattern of errors has been constructed through experience, resulting in a systematically stable internal representation of arm location that is continuously calibrated across the workspace. In the next two experiments, I aimed to probe the calibration of this structure using two perturbation paradigms: 1) a virtual reality visuomotor adaptation task to induce a local perturbation, and 2) a standard prism adaptation paradigm to induce a global perturbation. I found that the magnitude of the errors significantly increased, and to a similar extent, after each perturbation. This small effect indicates that proprioception is recalibrated to a similar extent regardless of how the perturbation is introduced, suggesting that the sensory and motor changes arising from the perturbation may be two independent processes. Moreover, I propose that the internal representation of arm location may be constructed as a global solution that is not capable of local changes.
Date Created
2012