Current joint action problems and solutions in robotics-based stroke upper limb rehabilitation

Description

Robotic rehabilitation for upper limb post-stroke recovery is a developing technology. However, there are major issues in the implementation of this type of rehabilitation that decrease its efficacy. Two of the major solutions currently being explored for the upper limb post-stroke rehabilitation problem are socially assistive rehabilitative robots (robots that interact directly with patients) and exoskeleton-based systems of rehabilitation. While both of these techniques hold great promise, they currently lack sufficient efficacy to objectively justify their costs. The overall efficacy of both techniques is about the same as that of conventional therapy, yet each has higher overhead costs than conventional therapy does. However, there are associated long-term cost savings in each case, so the current viability of either technique is somewhat nebulous. In both cases, the problems that decrease viability are largely related to joint action (the interaction between robot and human in completing specific tasks) and to issues in robot adaptability that make joint action difficult. As such, the largest part of current research into rehabilitative robotics aims to make robots behave in more "human-like" ways or to bypass the joint action problem entirely.
Date Created
2015-05

Tracking sonic flows during fast head movements of marmoset monkeys

Description

Head turning is a common sound localization strategy in primates. A novel system that can track head movement and the acoustic signals received at the entrance to the ear canal was tested to obtain binaural sound localization information during fast head movements of marmoset monkeys. Analysis of binaural information was conducted with a focus on the inter-aural level difference (ILD) and inter-aural time difference (ITD) at various head positions over time. The results showed that during fast head turns, the ITDs showed significant and clear changes in trajectory in response to low-frequency stimuli; however, significant phase ambiguity occurred at frequencies greater than 2 kHz. ITD and ILD information was also analyzed with animal vocalizations as the stimulus. The results indicated that ILDs may provide more information for understanding the dynamics of head movement in response to animal vocalizations in the environment. The primary significance of this work is the successful implementation of a system capable of simultaneously recording head movement and acoustic signals at the ear canals. The collected data provide insight into the usefulness of ITD and ILD as binaural cues during head movement.
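
As a rough illustration of the binaural cues analyzed here, the sketch below estimates ITD from the lag of the interaural cross-correlation and ILD from the RMS level ratio of the two ear signals. The sampling rate, tone stimulus, and estimation method are illustrative assumptions, not the thesis pipeline.

```python
# Hypothetical sketch of ITD/ILD estimation from two-channel ear-canal audio.
import numpy as np

def itd_ild(left, right, fs):
    """Estimate ITD (s, positive when sound reaches the left ear first) and ILD (dB)."""
    # ITD: lag of the peak of the interaural cross-correlation
    corr = np.correlate(right - right.mean(), left - left.mean(), mode="full")
    lag = np.argmax(corr) - (len(left) - 1)
    itd = lag / fs
    # ILD: ratio of RMS levels expressed in decibels
    ild = 20 * np.log10(np.sqrt(np.mean(left**2)) / np.sqrt(np.mean(right**2)))
    return itd, ild

# Synthetic example: a 1 kHz tone arriving 200 us later and ~3 dB softer at the right ear.
# Above ~2 kHz the cross-correlation has multiple peaks within the physiological ITD
# range, which is the phase ambiguity noted above.
fs = 96000
t = np.arange(0, 0.02, 1 / fs)
left = np.sin(2 * np.pi * 1000 * t)
right = 0.7 * np.sin(2 * np.pi * 1000 * (t - 200e-6))
print(itd_ild(left, right, fs))
```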
Date Created
2016-05

Visual Behavior and Planning for Object Manipulation: Gaze Patterns for Altered Center of Mass

Description

The interaction between visual fixations during planning and performance in a dexterous task was analyzed. An eye-tracking device was affixed to subjects during sequences of null (salient center of mass) and weighted (non-salient center of mass) trials with unconstrained precision grasp. Subjects experienced both expected and unexpected perturbations, with the task of minimizing object roll. Unexpected perturbations were controlled by switching weights between trials, while expected perturbations were controlled by asking subjects to rotate the object themselves. In all cases, subjects were able to minimize the roll of the object within three trials. Eye fixations were correlated with object weight for the initial context and for known shifts in center of mass. In subsequent trials with unexpected weight shifts, subjects appeared to scan areas of interest from both contexts even after learning the present orientation.
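
A minimal sketch of one way the adaptation described above could be quantified: peak absolute object roll per trial, which should shrink over the first few trials after a weight change. The roll traces and their decay rate here are synthetic and purely illustrative.

```python
# Hypothetical metric: peak absolute object roll per trial as an index of adaptation.
import numpy as np

def peak_roll(roll_deg):
    """Peak absolute roll (degrees) within one lift."""
    return float(np.max(np.abs(roll_deg)))

# Synthetic roll traces for three consecutive trials after an unexpected weight shift;
# the exponential decay across trials is an assumption for illustration.
time = np.linspace(0, np.pi, 200)
trials = [8.0 * np.exp(-0.9 * k) * np.sin(time) for k in range(3)]
print([round(peak_roll(tr), 2) for tr in trials])  # roll decreases trial by trial
```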
Date Created
2017

Neural Correlates of Learning and Trajectory Planning in the Posterior Parietal Cortex

Description

The posterior parietal cortex (PPC) is thought to play an important role in the planning of visually-guided reaching movements. However, the relative roles of the various subdivisions of the PPC in this function are still poorly understood. For example, studies of dorsal area 5 point to a representation of reaches in both extrinsic (endpoint) and intrinsic (joint or muscle) coordinates, as evidenced by partial changes in preferred directions and positional discharge with changes in arm posture. In contrast, recent findings suggest that the adjacent medial intraparietal area (MIP) is involved in more abstract representations, e.g., encoding reach target in visual coordinates. Such a representation is suitable for planning reach trajectories involving shortest distance paths to targets straight ahead. However, it is currently unclear how MIP contributes to the planning of other types of trajectories, including those with various degrees of curvature. Such curved trajectories recruit different joint excursions and might help us address whether their representation in the PPC is purely in extrinsic coordinates or in intrinsic ones as well. Here we investigated the role of the PPC in these processes during an obstacle avoidance task for which the animals had not been explicitly trained. We found that PPC planning activity was predictive of both the spatial and temporal aspects of upcoming trajectories. The same PPC neurons predicted the upcoming trajectory in both endpoint and joint coordinates. The predictive power of these neurons remained stable and accurate despite concomitant motor learning across task conditions. These findings suggest the role of the PPC can be extended from specifying abstract movement goals to expressing these plans as corresponding trajectories in both endpoint and joint coordinates. Thus, the PPC appears to contribute to reach planning and approach-avoidance arm motions at multiple levels of representation.
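
As an illustration of how planning activity can be "predictive" of upcoming trajectories, the sketch below regresses trajectory waypoints on planning-epoch firing rates with cross-validated ridge regression. The neuron and trial counts, the synthetic data, and the use of scikit-learn are assumptions; this is not the study's decoder.

```python
# Illustrative (not the study's method): decoding upcoming trajectory waypoints
# from planning-epoch firing rates with ridge regression. All data are synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons, n_waypoints = 120, 60, 10

# Planning-epoch firing rates (trials x neurons) and, for each trial, the
# x-coordinate of the upcoming hand path sampled at a few waypoints.
rates = rng.poisson(5.0, size=(n_trials, n_neurons)).astype(float)
true_weights = rng.normal(0.0, 0.2, size=(n_neurons, n_waypoints))
trajectory_x = rates @ true_weights + rng.normal(0.0, 0.5, size=(n_trials, n_waypoints))

scores = cross_val_score(Ridge(alpha=1.0), rates, trajectory_x, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())

# The same rates could equally be regressed against joint angles sampled along the
# path, which is how endpoint-coordinate and joint-coordinate codes can be compared.
```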

Date Created
2013-05-17

The Proprioceptive Map of the Arm Is Systematic and Stable, But Idiosyncratic

Description

Visual and somatosensory signals participate together in providing an estimate of the hand's spatial location. While the ability of subjects to identify the spatial location of their hand based on visual and proprioceptive signals has previously been characterized, relatively few studies have examined in detail the spatial structure of the proprioceptive map of the arm. Here, we reconstructed and analyzed the spatial structure of the estimation errors that resulted when subjects reported the location of their unseen hand across a 2D horizontal workspace. Hand position estimation was mapped under four conditions: with and without tactile feedback, and with the right and left hands. In the task, we moved each subject's hand to one of 100 targets in the workspace while their eyes were closed. Then, we either a) applied tactile stimulation to the fingertip by allowing the index finger to touch the target or b) as a control, hovered the fingertip 2 cm above the target. After returning the hand to a neutral position, subjects opened their eyes to verbally report where their fingertip had been. We measured and analyzed both the direction and magnitude of the resulting estimation errors. Tactile feedback reduced the magnitude of these estimation errors, but did not change their overall structure. In addition, the spatial structure of these errors was idiosyncratic: each subject had a unique pattern of errors that was stable between hands and over time. Finally, we found that at the population level the magnitude of the estimation errors had a characteristic distribution over the workspace: errors were smallest closer to the body. The stability of estimation errors across conditions and time suggests the brain constructs a proprioceptive map that is reliable, even if it is not necessarily accurate. The idiosyncrasy across subjects emphasizes that each individual constructs a map that is unique to their own experiences.
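
A minimal sketch of the error analysis described above, assuming actual and reported fingertip positions are available as 2D coordinates in centimeters; it returns the magnitude and direction of each estimation error. The target grid, noise level, and variable names are illustrative.

```python
# Hypothetical error-map computation: magnitude and direction of hand-position
# estimation errors for a grid of targets. All positions here are synthetic.
import numpy as np

def estimation_errors(actual_xy, reported_xy):
    """Return error magnitude (cm) and direction (degrees) for each target."""
    err = reported_xy - actual_xy
    magnitude = np.linalg.norm(err, axis=1)
    direction = np.degrees(np.arctan2(err[:, 1], err[:, 0]))
    return magnitude, direction

# Synthetic example: a 10 x 10 target grid in a horizontal workspace (cm),
# with reported locations corrupted by 2 cm of noise.
xs, ys = np.meshgrid(np.linspace(-20, 20, 10), np.linspace(20, 45, 10))
targets = np.column_stack([xs.ravel(), ys.ravel()])
rng = np.random.default_rng(1)
reports = targets + rng.normal(0.0, 2.0, size=targets.shape)
mag, ang = estimation_errors(targets, reports)
print("mean error (cm):", round(mag.mean(), 2))
```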

Date Created
2011-11-16

Multisensory Interactions Influence Neuronal Spike Train Dynamics in the Posterior Parietal Cortex

Description

Although significant progress has been made in understanding multisensory interactions at the behavioral level, their underlying neural mechanisms remain relatively poorly understood in cortical areas, particularly during the control of action. In recent experiments where animals reached to and actively maintained their arm position at multiple spatial locations while receiving either proprioceptive or visual-proprioceptive position feedback, multisensory interactions were shown to be associated with reduced spiking (i.e. subadditivity) as well as reduced intra-trial and across-trial spiking variability in the superior parietal lobule (SPL). To further explore the nature of such interaction-induced changes in spiking variability we quantified the spike train dynamics of 231 of these neurons. Neurons were classified as Poisson, bursty, refractory, or oscillatory (in the 13–30 Hz “beta-band”) based on their spike train power spectra and autocorrelograms. No neurons were classified as Poisson-like in either the proprioceptive or visual-proprioceptive conditions. Instead, oscillatory spiking was most commonly observed with many neurons exhibiting these oscillations under only one set of feedback conditions. The results suggest that the SPL may belong to a putative beta-synchronized network for arm position maintenance and that position estimation may be subserved by different subsets of neurons within this network depending on available sensory information. In addition, the nature of the observed spiking variability suggests that models of multisensory interactions in the SPL should account for both Poisson-like and non-Poisson variability.
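
For intuition about the spike-train classification described above, the sketch below bins a synthetic spike train, estimates its power spectrum with Welch's method, and checks for a 13–30 Hz peak. The bin width, spike generation, and peak criterion are assumptions, not the study's classification pipeline.

```python
# Hypothetical check for beta-band (13-30 Hz) oscillatory spiking from a binned
# spike train's power spectrum. The spike train and the 2x-baseline criterion
# are assumptions for illustration.
import numpy as np
from scipy.signal import welch

fs = 1000                      # 1 ms bins
t = np.arange(0, 10.0, 1 / fs)

# Synthetic spike train whose rate (~20 spikes/s) is modulated at 20 Hz
rate = 20.0 * (1 + 0.8 * np.sin(2 * np.pi * 20 * t))
spikes = (np.random.default_rng(2).random(t.size) < rate / fs).astype(float)

freqs, psd = welch(spikes - spikes.mean(), fs=fs, nperseg=1024)
beta_peak = psd[(freqs >= 13) & (freqs <= 30)].max()
baseline = psd[(freqs > 45) & (freqs < 100)].mean()
print("beta-band oscillation?", beta_peak > 2 * baseline)
```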

Date Created
2016-12-29

Interconnects and packaging to enable autonomous movable MEMS microelectrodes to record and stimulate neurons in deep brain structures

Description

Long-term monitoring of deep brain structures using microelectrode implants is critical for the success of emerging clinical applications, including cortical neural prostheses and deep brain stimulation, as well as neurobiology studies of disease progression, learning and memory, and brain mapping. However, current microelectrode technologies are not yet capable of reaching these clinical milestones, given their inconsistent performance and reliability in long-term studies. In all the aforementioned applications, it is important to understand the limitations and demands posed by both the technology and the underlying biological processes. Recent advances in implantable Micro Electro Mechanical Systems (MEMS) technology have tremendous potential and open opportunities for long-term studies that were not possible before. The overall goal of this project is to develop large-scale, autonomous, movable, micro-scale interfaces that can seek out and monitor or stimulate large ensembles of precisely targeted neurons and neuronal networks, and that can be applied to brain mapping in behaving animals. However, there are serious technical (fabrication) challenges related to packaging and interconnects, including the lack of industry standards for chip-scale packaging of silicon chips with movable microstructures, the absence of compatible micro-bonding techniques for extending current microelectrode lengths to reach deep brain structures, and the difficulty of achieving hermetic isolation of implantable devices from biological tissue and fluids (e.g., cerebrospinal fluid (CSF) and blood). The specific aims are to: 1) optimize and automate chip-scale packaging of MEMS devices whose unique requirements with respect to bonding, process temperature, and pressure are not amenable to conventional industry standards, in order to achieve scalability; 2) develop a novel micro-bonding technique to extend the length of current polysilicon microelectrodes so they can reach and monitor deep brain structures; and 3) design and develop a high-throughput packaging mechanism for constructing dense arrays of movable microelectrodes. Using a unique micro-bonding technique that combines conductive thermosetting epoxies with hermetically sealed support structures, together with a highly optimized, semi-automated, 90-minute flip-chip packaging process, I have extended the repertoire of previously reported movable microelectrode arrays by bonding conventional stainless steel and Pt/Ir microelectrode arrays of desired lengths to steerable polysilicon shafts. I tested scalable prototypes in rigorous benchtop tests, including impedance measurements, accelerated aging, and non-destructive testing, to assess the electrical and mechanical stability of the micro-bonds under long-term implantation. I also propose a 3D-printed packaging method that allows a wide variety of electrode configurations to be realized, such as rectangular or circular arrays or other arbitrary geometries optimal for specific regions of the brain, with inter-electrode distances as low as 25 μm and an unprecedented capability of seeking out and recording from or stimulating targeted single neurons in deep brain structures up to 10 mm deep (with 6 μm displacement resolution).
These computer-controlled, movable deep-brain electrodes offer the potential to advance past the glial sheath surrounding microelectrodes to restore neural connections, counteract variability in signal amplitudes, and enable simultaneous recording and stimulation at precisely targeted layers of the brain.
Date Created
2016

Learning joint actions in human-human interactions

Description

Understanding human-human interactions during the performance of joint motor tasks is critical for developing rehabilitation robots that could aid therapists in providing effective treatments for motor problems. However, there is a lack of understanding of the strategies (cooperative or competitive) adopted by humans when interacting with other individuals. Previous studies have investigated the cues (auditory, visual, and haptic) that support these interactions, but how such interactions unfold in the absence of those cues has yet to be explained. To address this issue, this study employed a paradigm that tests the parallel efforts of pairs of individuals (dyads) to complete a jointly performed virtual reaching task without any auditory or visual information exchange. Motion was tracked with an NDI OptoTrak 3D motion tracking system that captured each subject's movement kinematics, through which the level of synchronization between the two subjects in space and time could be measured. For the spatial analyses, movement amplitudes and direction errors at peak velocity and at the endpoints were analyzed. Significant differences in movement amplitude were found for subjects in 4 out of 6 dyads, which was expected given the lack of feedback between the subjects. Interestingly, subjects also planned their movements in different directions to counteract the visuomotor rotation applied in the test blocks, suggesting that the subjects in each dyad adopted different strategies. The level of de-adaptation was also measured in the control blocks, in which no visuomotor rotation was applied. To further validate the results of the spatial analyses, a temporal analysis was performed in which the movement times of the two subjects were compared. Together, these results were used to characterize the interaction scenarios that are possible in human joint action without feedback.
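
A minimal sketch of one of the spatial measures mentioned above: the direction error at peak velocity relative to the straight line from start to target, a standard index of adaptation to a visuomotor rotation. The sampling rate, hand path, and function name are illustrative assumptions.

```python
# Hypothetical computation of direction error at peak velocity for a 2-D reach.
import numpy as np

def direction_error_at_peak_velocity(path_xy, target_xy, fs):
    """Signed angle (degrees) between heading at peak speed and the target direction."""
    vel = np.gradient(path_xy, 1 / fs, axis=0)
    i = np.argmax(np.linalg.norm(vel, axis=1))          # sample of peak speed
    heading = np.arctan2(vel[i, 1], vel[i, 0])
    to_target = np.arctan2(target_xy[1] - path_xy[0, 1], target_xy[0] - path_xy[0, 0])
    return float(np.degrees(np.angle(np.exp(1j * (heading - to_target)))))

# Synthetic example: a reach heading ~30 degrees off a target placed straight ahead,
# as might occur early in exposure to a 30-degree visuomotor rotation.
t = np.linspace(0.0, 1.0, 200)
path = np.column_stack([10 * np.sin(np.radians(30)) * t, 10 * np.cos(np.radians(30)) * t])
print(direction_error_at_peak_velocity(path, np.array([0.0, 10.0]), fs=200))
```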
Date Created
2016

Haptic discrimination of object size using vibratory sensory substitution

Description

Humans constantly rely on a complex interaction of a variety of sensory modalities in order to complete even the simplest of daily tasks. For reaching and grasping to interact with objects, the visual, tactile, and proprioceptive senses provide the majority of the information used. While vision is often relied on for many tasks, most people are able to accomplish common daily rituals without constant visual attention, relying mainly on tactile and proprioceptive cues. However, amputees using prosthetic arms do not have access to these cues, making such tasks impossible without vision. Even tasks performed with vision can be incredibly difficult, as prosthesis users are unable to modify grip force using touch and thus tend to grip objects excessively hard to make sure they do not slip.

Methods such as vibratory sensory substitution have shown promise for providing prosthesis users with a sense of contact and have proved helpful in completing motor tasks. In this thesis, two experiments were conducted to determine whether vibratory cues could be useful in discriminating between object sizes. In the first experiment, subjects were asked to grasp a series of hidden virtual blocks of varying sizes, with vibrations on the fingertips as an indication of contact, and to compare the sizes of consecutive boxes. Vibratory haptic feedback significantly increased the accuracy of size discrimination over objects with only visual indication of contact, though accuracy was not as great as for typical grasping tasks with physical blocks. In the second experiment, subjects were asked to adjust their virtual finger position around a series of virtual boxes, with vibratory feedback on the fingertips, using either finger movement or EMG. EMG control resulted in significantly lower accuracy in size discrimination, implying that, while proprioceptive feedback alone is not enough to determine size, direct kinesthetic information about finger position is still needed.
Date Created
2016

Stress and strain propagation in soft viscoelastic tissue while tracking microscale targets

Description

Tracking microscale targets in soft tissue using implantable probes is important in clinical applications such as neurosurgery and chemotherapy, and in neurophysiological applications such as brain monitoring. In most of these applications, such tracking is done with visual feedback involving an imaging modality that helps localize the targets through images co-registered with stereotaxic coordinates. However, there are applications in brain monitoring where precise targeting of microscale targets such as single neurons needs to be done in the absence of such visual feedback. In all of the above-mentioned applications, it is important to understand the dynamics of the mechanical stress and strain induced by the movement of implantable, often microscale, probes in soft viscoelastic tissue. Propagation of such stresses and strains induces inaccuracies in positioning if it is not adequately compensated. The aim of this research is to quantitatively assess (a) the lateral propagation of stress and (b) the spatio-temporal distribution of strain induced by the movement of microscale probes in soft viscoelastic tissue. Using agarose hydrogel and a silicone derivative as two different bench-top models of brain tissue, we measured stress propagation during movement of microscale probes using a sensitive load cell. We further used a solution of microscale beads and the silicone derivative to quantitatively map the strain fields using video microscopy. The above measurements were made under two different types of microelectrode movement: first, a unidirectional movement and, second, a bidirectional (inchworm-like) movement, both with a 30 μm step size and a 3 min inter-movement interval. Results indicate that movements of microscale probes can induce significant stresses as far as 500 μm laterally from the location of the probe. The strain fields indicate substantial displacements (on the order of 100 μm) within 100 μm laterally from the surface of the probes. These measurements will allow us to build precise mechanical models of soft tissue and compensators that will enhance the accuracy of tracking microscale targets in soft tissue.
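
As a rough illustration of the bead-tracking analysis described above, the sketch below computes bead displacements between frames recorded before and after one probe step and approximates strain as the gradient of displacement with lateral distance from the probe. The synthetic displacement field and its decay constant are assumptions, not measured data.

```python
# Hypothetical bead-displacement and strain estimate from tracked bead positions
# before and after one 30 um probe step. The displacement field is synthetic.
import numpy as np

def bead_displacements(before_xy, after_xy):
    """Displacement magnitude (um) of each tracked bead."""
    return np.linalg.norm(after_xy - before_xy, axis=1)

rng = np.random.default_rng(3)
dist = np.sort(rng.uniform(10, 500, size=200))        # lateral distance from probe (um)
before = np.column_stack([dist, rng.uniform(-200, 200, size=200)])
# Assume axial displacement decays exponentially with lateral distance
after = before + np.column_stack([np.zeros(200), 100 * np.exp(-dist / 100)])

disp = bead_displacements(before, after)
strain = np.gradient(disp, dist)                      # dimensionless (um per um)
print("max displacement (um):", round(disp.max(), 1),
      "| peak |strain|:", round(np.abs(strain).max(), 3))
```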
Date Created
2015