Can Startle Elicit Sequential Movements in Highly Trained Individuals?

Description

Most daily living tasks consist of a series of sequential movements, e.g., reaching for a cup, grasping it, lifting it, and bringing it to your mouth. The process by which we control and mediate the smooth progression of these tasks is not well understood. One method we can use to further evaluate these motions is known as startle-evoked movement (SEM). SEM is an established technique for probing motor learning and planning processes by detecting activation of the sternocleidomastoid muscles of the neck within 120 ms after a startling stimulus is presented. If activation of these muscles is detected within the 120 ms window following a stimulus, the trial is classified as Startle+; if no sternocleidomastoid activation is detected in that window, the trial is classified as Startle-. For a movement to be considered SEM, movement onset in Startle+ trials must be faster than in Startle- trials. The objective of this study was to evaluate the effect of expertise on sequential movements and to determine whether startle can distinguish when the consolidation of actions, known as chunking, has occurred. We hypothesized that SEM could distinguish words that were solidified, or chunked. Specifically, SEM would be present when expert typists were asked to type a common word but not an uncommon letter combination. The results from this study indicated that the only word susceptible to SEM, in which Startle+ trials were initiated faster than Startle-, was the uncommon combination "HET," while the common words "AND" and "THE" were not. Additionally, evaluation of the differences between each keystroke for common and uncommon words showed that startle was unable to distinguish differences in motor chunking between Startle+ and Startle- trials. A possible explanation for these results is hand dominance in expert typists.
No systematic research has evaluated the susceptibility of the non-dominant hand's fingers to SEM; future studies in this area, together with the results from this study, can advance our understanding of sequential movements.
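The Startle+/Startle- classification rule and the SEM criterion described above can be sketched in code. This is a minimal illustration, not the study's protocol: the EMG threshold and onset-detection details are assumptions.

```python
import numpy as np

def classify_trial(scm_emg, fs, stim_idx, threshold, window_ms=120.0):
    """Label a trial Startle+ if rectified sternocleidomastoid (SCM) EMG
    crosses `threshold` within `window_ms` after the stimulus, else Startle-."""
    n = int(round(window_ms / 1000.0 * fs))
    post = np.abs(scm_emg[stim_idx:stim_idx + n])
    return "Startle+" if np.any(post > threshold) else "Startle-"

def is_sem(onsets_plus_ms, onsets_minus_ms):
    """A movement qualifies as SEM when Startle+ trials are initiated faster
    (shorter mean onset latency) than Startle- trials."""
    return bool(np.mean(onsets_plus_ms) < np.mean(onsets_minus_ms))
```

For example, an SCM burst 50 ms after the stimulus yields Startle+, while a flat trace yields Startle-.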
Date Created
2018-05
Agent

Soft Robotic Haptic Interface With Variable Stiffness for Rehabilitation of Neurologically Impaired Hand Function

Description


The human hand comprises complex sensorimotor functions that can be impaired by neurological diseases and traumatic injuries. Effective rehabilitation can bring the impaired hand back to a functional state because of the plasticity of the central nervous system, which can relearn and remodel the lost synapses in the brain. Current rehabilitation therapies focus on strengthening motor skills, such as grasping, and employ multiple objects of varying stiffness so that affected persons can experience a wide range of strength training. These devices have a limited range of stiffness due to the rigid mechanisms employed in their variable stiffness actuators. This paper presents a novel soft robotic haptic device for neuromuscular rehabilitation of the hand, which is designed to offer adjustable stiffness and can be utilized in both clinical and home settings. The device eliminates the need for multiple objects by employing a pneumatic soft structure made with highly compliant materials that acts as the actuator of the haptic interface. It is made with interchangeable sleeves that can be customized to include materials of varying stiffness to increase the upper limit of the stiffness range. The device is fabricated using existing 3D printing technologies, and polymer molding and casting techniques, thus keeping the cost low and throughput high. The haptic interface is linked to either an open-loop system that allows the user to increase pressure during use or a closed-loop system that regulates pressure in accordance with the stiffness the user specifies. Preliminary evaluation is performed to characterize the effective controllable region of variance in stiffness. It was found that the region of controllable stiffness was between points 3 and 7, where the stiffness appeared to plateau with each increase in pressure.
The two control systems are tested to derive relationships between internal pressure, grasping force exerted on the surface, and displacement using multiple probing points on the haptic device. Additional quantitative evaluation is performed with study participants and compared with a qualitative analysis to ensure adequate perception of compliance variance. The qualitative evaluation showed that more than 60% of the trials resulted in the correct perception of stiffness in the haptic device.
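The two control modes can be illustrated with a toy pressure regulator. The gains, step sizes, and units here are hypothetical; the paper describes the actual pneumatic hardware and control electronics.

```python
def open_loop_step(p_current, increment=0.5):
    """Open-loop mode: the user directly increases internal pressure (kPa)."""
    return p_current + increment

def closed_loop_step(p_current, p_target, k_p=0.3):
    """Closed-loop mode: one step of a proportional controller driving internal
    pressure toward the target associated with the user-specified stiffness."""
    return p_current + k_p * (p_target - p_current)

def regulate(p_start, p_target, steps=50):
    """Iterate the closed-loop controller for a fixed number of steps."""
    p = p_start
    for _ in range(steps):
        p = closed_loop_step(p, p_target)
    return p
```

With a proportional gain below 1, the closed-loop mode converges monotonically to the target pressure, which is the behavior a stiffness-regulating device needs.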

Date Created
2017-12-20
Agent

On the Role of Physical Interaction on Performance of Object Manipulation by Dyads

Description


Human physical interactions can be intrapersonal, e.g., manipulating an object bimanually, or interpersonal, e.g., transporting an object with another person. In both cases, one or two agents are required to coordinate their limbs to attain the task goal. We investigated the physical coordination of two hands during an object-balancing task performed either bimanually by one agent or jointly by two agents. The task consisted of a series of static (holding) and dynamic (moving) phases, initiated by auditory cues. We found that task performance of dyads was not affected by different pairings of dominant and non-dominant hands. However, the spatial configuration of the two agents (side-by-side vs. face-to-face) appears to play an important role, such that dyads performed better side-by-side than face-to-face. Furthermore, we demonstrated that only individuals with worse solo performance can benefit from interpersonal coordination through physical couplings, whereas the better individuals do not. The present work extends ongoing investigations on human-human physical interactions by providing new insights about factors that influence dyadic performance. Our findings could potentially impact several areas, including robotic-assisted therapies, sensorimotor learning and human performance augmentation.

Date Created
2017-11-07
Agent

On Neuromechanical Approaches for the Study of Biological and Robotic Grasp and Manipulation

Description


Biological and robotic grasp and manipulation are undeniably similar at the level of mechanical task performance. However, their underlying fundamental biological vs. engineering mechanisms are, by definition, dramatically different and can even be antithetical. Even our approach to each is diametrically opposite: inductive science for the study of biological systems vs. engineering synthesis for the design and construction of robotic systems. The past 20 years have seen several conceptual advances in both fields and the quest to unify them. Chief among them is the reluctant recognition that their underlying fundamental mechanisms may actually share limited common ground, while exhibiting many fundamental differences. This recognition is particularly liberating because it allows us to resolve and move beyond multiple paradoxes and contradictions that arose from the initial reasonable assumption of a large common ground. Here, we begin by introducing the perspective of neuromechanics, which emphasizes that real-world behavior emerges from the intimate interactions among the physical structure of the system, the mechanical requirements of a task, the feasible neural control actions to produce it, and the ability of the neuromuscular system to adapt through interactions with the environment. This allows us to articulate a succinct overview of a few salient conceptual paradoxes and contradictions regarding under-determined vs. over-determined mechanics, under- vs. over-actuated control, prescribed vs. emergent function, learning vs. implementation vs. adaptation, prescriptive vs. descriptive synergies, and optimal vs. habitual performance. We conclude by presenting open questions and suggesting directions for future research. 
We hope this frank and open-minded assessment of the state-of-the-art will encourage and guide these communities to continue to interact and make progress in these important areas at the interface of neuromechanics, neuroscience, rehabilitation and robotics.

Date Created
2017-10-09
Agent

Improving Fine Control of Grasping Force During Hand–Object Interactions for a Soft Synergy-Inspired Myoelectric Prosthetic Hand

Description


The concept of postural synergies of the human hand has been shown to potentially reduce complexity in the neuromuscular control of grasping. By merging this concept with soft robotics approaches, a multi-degree-of-freedom soft-synergy prosthetic hand [SoftHand-Pro (SHP)] was created. The mechanical innovation of the SHP enables adaptive and robust functional grasps with simple and intuitive myoelectric control from only two surface electromyogram (sEMG) channels. However, the current myoelectric controller has very limited capability for fine control of grasp forces. We addressed this challenge by designing a hybrid-gain myoelectric controller that switches control gains based on the sensorimotor state of the SHP. This controller was tested against a conventional single-gain (SG) controller, as well as against the native hand in able-bodied subjects. We used the following tasks to evaluate the performance of grasp force control: (1) picking and placing objects of different size, weight, and fragility levels using power or precision grasps and (2) squeezing objects of different stiffness. Sensory feedback of the grasp forces was provided to the user through a non-invasive, mechanotactile haptic feedback device mounted on the upper arm. We demonstrated that the novel hybrid controller enabled superior task completion speed and fine force control over the SG controller in object pick-and-place tasks. We also found that the performance of the hybrid controller qualitatively agrees with that of native human hands.
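The hybrid-gain idea, a high gain for fast hand closing in free space and a low gain for fine force control once in contact, can be sketched as follows. The gain values and the binary contact signal are illustrative assumptions, not the SHP's actual parameters.

```python
def hybrid_gain_command(emg, in_contact, g_free=1.0, g_contact=0.2):
    """Map a normalized sEMG level (0..1) to a hand-closure command,
    switching gains based on the sensorimotor state (contact vs. free motion).
    A lower gain in contact gives finer resolution over grasp force."""
    gain = g_contact if in_contact else g_free
    return gain * emg

def single_gain_command(emg, g=1.0):
    """Conventional single-gain (SG) controller, for comparison."""
    return g * emg
```

For the same muscle activation, the hybrid controller commands a much smaller closure increment after contact than the SG controller, which is what enables finer force modulation.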

Date Created
2018-01-10
Agent

Neural mechanisms of sensory integration: frequency domain analysis of spike and field potential activity during arm position maintenance with and without visual feedback

Description

Understanding where our bodies are in space is imperative for motor control, particularly for actions such as goal-directed reaching. Multisensory integration is crucial for reducing uncertainty in arm position estimates. This dissertation examines time and frequency-domain correlates of visual-proprioceptive integration during an arm-position maintenance task. Neural recordings were obtained from two different cortical areas as non-human primates performed a center-out reaching task in a virtual reality environment. Following a reach, animals maintained the end-point position of their arm under unimodal (proprioception only) and bimodal (proprioception and vision) conditions. In both areas, time domain and multi-taper spectral analysis methods were used to quantify changes in the spiking, local field potential (LFP), and spike-field coherence during arm-position maintenance.

In both areas, individual neurons were classified based on the spectrum of their spiking patterns. A large proportion of cells in the SPL exhibited sensory condition-specific oscillatory spiking in the beta (13-30 Hz) frequency band. Cells in the IPL typically had a more diverse mix of oscillatory and refractory spiking patterns during the task in response to changing sensory conditions. Contrary to the assumptions made in many modeling studies, none of the cells in the SPL or IPL exhibited Poisson spiking statistics.

Evoked LFPs in both areas exhibited greater effects of target location than visual condition, though the evoked responses in the preferred reach direction were generally suppressed in the bimodal condition relative to the unimodal condition. Significant effects of target location on evoked responses were observed during the movement period of the task as well.

In the frequency domain, LFP power in both cortical areas was enhanced in the beta band during the position-estimation epoch of the task, indicating that LFP beta oscillations may be important for maintaining the ongoing state. This was particularly evident at the population level, with a clear increase in alpha and beta power. Differences in spectral power between conditions also became apparent at the population level, with power during bimodal trials being suppressed relative to unimodal trials. The spike-field coherence yielded inconclusive results in both the SPL and IPL, with no clear correlation between the incidence of beta oscillations and significant beta coherence.
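The beta-band (13-30 Hz) power comparison can be illustrated with a simple periodogram-based estimate. The dissertation used multitaper spectral methods; this single-taper sketch only shows the band-power idea.

```python
import numpy as np

def band_power(signal, fs, band=(13.0, 30.0)):
    """Integrate periodogram power over a frequency band (e.g., beta)."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    df = freqs[1] - freqs[0]
    return float(np.sum(psd[mask]) * df)
```

A 20 Hz oscillation in a simulated LFP segment produces far more beta-band power than a 5 Hz oscillation, which is the kind of contrast used to compare task epochs and sensory conditions.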
Date Created
2017
Agent

Techniques to Assess Balance and Mobility in Lower-Limb Prosthesis Users

Description

Lower-limb prosthesis users have commonly recognized deficits in gait and posture control. However, existing methods of balance and mobility analysis fail to provide sufficient sensitivity to detect changes in prosthesis users' postural control and mobility in response to clinical interventions or experimental manipulations, and often fail to detect differences between prosthesis users and non-amputee control subjects. This lack of sensitivity limits the ability of clinicians to make informed clinical decisions and presents challenges with insurance reimbursement for comprehensive clinical care and advanced prosthetic devices. These issues have directly impacted clinical care by restricting device options, increasing the financial burden on clinics, and limiting support for research and development. This work aims to establish experimental methods and outcome measures that are more sensitive to balance and mobility changes in prosthesis users than traditional methods. Methods and analysis techniques were developed to probe aspects of balance and mobility control that may be specifically impacted by use of a prosthesis and that present challenges similar to those experienced in daily life, which could improve the detection of balance and mobility changes. Using the framework of cognitive resource allocation and dual-tasking, this work identified unique characteristics of prosthesis users' postural control and developed sensitive measures of gait variability. The results also provide broader insight into dual-task analysis and the motor-cognitive response to demanding conditions. Specifically, this work identified altered motor behavior in prosthesis users and the high cognitive demand of using a prosthesis. The residual standard deviation method was developed and demonstrated to be more effective than traditional gait variability measures at detecting the impact of dual-tasking.
Additionally, spectral analysis of the center of pressure while standing identified altered somatosensory control in prosthesis users. These findings provide a new understanding of prosthetic use and new, highly sensitive techniques to assess balance and mobility in prosthesis users.
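The residual standard deviation idea, separating slow drift in a stride-parameter series from step-to-step fluctuation, can be sketched as follows. A linear detrend is assumed here for illustration; the dissertation defines the exact procedure.

```python
import numpy as np

def residual_sd(series):
    """Fit a linear trend across strides and return the standard deviation
    of the residuals, so slow drift (e.g., gradual fatigue or pacing changes)
    does not inflate the variability estimate."""
    series = np.asarray(series, dtype=float)
    x = np.arange(len(series))
    slope, intercept = np.polyfit(x, series, 1)
    residuals = series - (slope * x + intercept)
    return float(np.std(residuals, ddof=1))
```

For a drifting stride-time series, the raw standard deviation is dominated by the trend, while the residual SD reflects only the stride-to-stride fluctuation.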
Date Created
2017
Agent

Cortical sensorimotor mechanisms for neural control of skilled manipulation

Description

The human hand is a complex biological system. Humans have evolved a unique ability to use the hand for a wide range of tasks, including activities of daily living such as grasping and manipulating objects, e.g., lifting a cup of coffee without spilling. Despite the ubiquitous nature of hand use in everyday activities involving object manipulation, there is currently an incomplete understanding of the cortical sensorimotor mechanisms underlying this important behavior. One critical aspect of natural object grasping is the coordination of where the fingers make contact with an object and how much force is applied following contact. Such force-to-position modulation is critical for successful manipulation. However, the neural mechanisms underlying these motor processes remain poorly understood, as previous experiments have utilized protocols with fixed contact points, which likely rely on different neural mechanisms from those involved in grasping at unconstrained contacts. To address this gap in the motor neuroscience field, transcranial magnetic stimulation (TMS) and electroencephalography (EEG) were used to investigate the role of primary motor cortex (M1), as well as other important cortical regions in the grasping network, during the planning and execution of object grasping and manipulation. The results of the TMS virtual-lesion and EEG experiments revealed grasp context-specific cortical mechanisms underlying digit force-to-position coordination, as well as the spatial and temporal dynamics of cortical activity during planning and execution. Together, the present findings provide the foundation for a novel framework accounting for how the central nervous system controls dexterous manipulation. This new knowledge can potentially benefit research in neuroprosthetics and improve the efficacy of neurorehabilitation techniques for patients affected by sensorimotor impairments.
Date Created
2017
Agent

Assessing Performance, Role Sharing, and Control Mechanisms in Human-Human Physical Interaction for Object Manipulation

Description

Object manipulation is a common sensorimotor task that humans perform to interact with the physical world. The first aim of this dissertation was to characterize and identify the role of feedback and feedforward mechanisms for force control in object manipulation by introducing a new feature based on force trajectories to quantify the interaction between feedback and feedforward control. This feature was applied to two grasp contexts: grasping the object at either (1) predetermined or (2) self-selected grasp locations ("constrained" and "unconstrained," respectively), where unconstrained grasping is thought to involve feedback-driven force corrections to a greater extent than constrained grasping. This proposition was confirmed by the force feature analysis. The second aim of this dissertation was to quantify whether force control mechanisms differ between the dominant and non-dominant hands. The force feature analysis demonstrated that manipulation by the dominant hand relies on feedforward control more than manipulation by the non-dominant hand. The third aim was to quantify coordination mechanisms underlying physical interaction by dyads in object manipulation. The results revealed that only individuals with worse solo performance benefit from interpersonal coordination through physical coupling, whereas better individuals do not. This work also showed naturally emerging leader-follower roles, whereby the leader in dyadic manipulation exhibits significantly greater force changes than the follower. Furthermore, brain activity measured through electroencephalography (EEG) could discriminate leader and follower roles, as indicated by power modulation in the alpha frequency band over centro-parietal areas. Lastly, this dissertation suggested that the relation between force and motion (arm impedance) could be an important means of communicating intended movement direction between biological agents.
Date Created
2017
Agent

Human-Robot Interaction Utilizing Asymmetric Cooperation and the Brain

Description

The interaction between humans and robots has become an important area of research as the diversity of robotic applications has grown. The cooperation of a human and a robot to achieve a goal is an important area within the physical human-robot interaction (pHRI) field, which is expanding toward applications in unstructured environments. When humans cooperate with each other, there are often leader and follower roles, and these roles may change during the task. This creates a need for the robotic system to be able to exchange roles with the human during a cooperative task. The unstructured nature of the new applications in the field also requires robotic systems to interact in six degrees of freedom (DOF). Moreover, in these unstructured environments, the robotic system will have incomplete information, meaning it will sometimes perform an incorrect action, and control methods need to be able to correct for this. However, the most compelling applications for robotics are those where robots have capabilities that the human does not, which also creates the need for robotic systems to correct human action when they detect an error. Activity in the brain precedes human action, and this activity can be used to classify the type of interaction desired by the human. In this dissertation, the cooperation between humans and robots is improved in two main areas. First, the ability of electroencephalogram (EEG) signals to determine the cooperation role desired by the human is demonstrated, with a correct classification rate of 65%. Second, a robotic controller is developed to allow the human and robot to cooperate in six DOF with asymmetric role exchange. This system allowed the human-robot team to perform a cooperative task with a 100% success rate.
High, medium, and low levels of robotic automation are shown to affect performance, with the human making the greatest number of errors when the robotic system has a medium level of automation.
Date Created
2017
Agent