Neurolinguistic Revelations of Logographic Scripts

Description
Language, as an abstraction, is one of the most sophisticated inventions ever devised by human beings. Reading alone is a multi-faceted problem, and understanding how the brain solves it can offer enormous benefits for scientists and language enthusiasts alike. To gain a more complete picture of how language and the brain relate, Chinese, an East Asian logographic language, and English, an alphabetic language, were compared and contrasted using the available scientific literature in both psychology and neuroimaging. Taken together, these findings are used to generalize the processing of written language. It was found that the hypothesis of a neuroplastically adaptable network that recruits brain areas based on the demands of a specific language has stronger support in current research than does the model of a fixed language network that is merely tuned for different languages. These findings reiterate the need for meticulous control of variables so that language tasks can be reasonably compared, and they also demand more precise localization and labeling of brain regions in order to determine the function of individual areas.
Date Created
2014-05

Current joint action problems and solutions in robotics-based stroke upper limb rehabilitation

Description
Robotic rehabilitation for upper limb post-stroke recovery is a developing technology. However, there are major issues in the implementation of this type of rehabilitation, issues which decrease its efficacy. Two of the major solutions currently being explored for the upper limb post-stroke rehabilitation problem are socially assistive rehabilitative robots, which directly interact with patients, and exoskeleton-based systems of rehabilitation. While both of these techniques hold great promise, they currently lack sufficient efficacy to objectively justify their costs. The overall efficacy of both techniques is about the same as that of conventional therapy, yet each carries higher overhead costs than conventional therapy does. However, there are associated long-term cost savings in each case, meaning that the current viability of either technique is somewhat nebulous. In both cases, the problems that decrease viability are largely related to joint action, the interaction between robot and human in completing specific tasks, and to issues in robot adaptability that make joint action difficult. As such, the largest part of current research into rehabilitative robotics aims to make robots behave in more "human-like" manners or to bypass the joint action problem entirely.
Date Created
2015-05

The Role of Precision Grip Aperture in Hardness Differentiation of Cube-Like Objects

Description
Determining the characteristics of an object during a grasping task requires a combination of mechanoreceptors in the muscles and fingertips. The width of a person's finger aperture during the grasp may also affect how accurately that person determines hardness. These experiments aim to investigate how an individual perceives hardness along a gradient of varying hardness levels. The trend in the responses is assumed to follow a general psychometric function. This provides information about subjects' abilities to differentiate between two very different objects, and their tendencies toward chance-level guessing when presented with two similar objects. After obtaining these data, it is then important to additionally test varying finger apertures in an object-grasping task. This gives insight into the effect of aperture on the obtained psychometric function, ultimately providing information about tactile and haptic feedback for further application in neuroprosthetic devices. Three separate experiments were performed to test the effect of finger aperture on object hardness differentiation. The first experiment tested a one-finger pressing motion on a hardness gradient of ballistic gelatin cubes. Subjects were asked to compare the hardness of one cube to another, which produced an S-curve that accurately portrayed the psychometric function. The second experiment used the Phantom haptic device in a similar setup, instead using a precision grip grasping motion. This showed a more linear curve; the percentage reported harder increased as the hardness of the second presented cube increased, which was attributed both to limitations of the experimental setup and to the scale of the general hardness gradient. The third experiment then progressed to test the effect of three finger apertures in the same experimental setup.
By providing three separate testing scenarios in the precision grip task, the experiment demonstrated that the level of finger aperture has no significant effect on an individual's ability to perceive hardness.
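The assumed psychometric relationship can be sketched as a logistic curve fit to the proportion of "harder" responses. The hardness values, grid ranges, and function form below are illustrative assumptions, not the study's actual fitting procedure:

```python
import numpy as np

def psychometric(x, pse, slope):
    """Logistic psychometric function: P(comparison judged 'harder')."""
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

def fit_psychometric(x, p_harder, pse_grid, slope_grid):
    """Least-squares fit by grid search; returns (pse, slope).
    PSE = point of subjective equality (50% 'harder' responses)."""
    best, best_err = None, np.inf
    for pse in pse_grid:
        for slope in slope_grid:
            err = np.sum((psychometric(x, pse, slope) - p_harder) ** 2)
            if err < best_err:
                best, best_err = (pse, slope), err
    return best

# Hypothetical comparison-cube hardness levels and response proportions
x = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0])
p_harder = psychometric(x, pse=40.0, slope=8.0)  # synthetic, noise-free data
pse, slope = fit_psychometric(x, p_harder,
                              np.linspace(30, 50, 41), np.linspace(2, 15, 27))
```

The slope parameter governs discriminability: a shallow slope corresponds to the more linear curve reported in the second experiment.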
Date Created
2015-05

Digit Control During Object Handover

Description
Currently, assistive robots and prostheses have a difficult time giving and receiving objects to and from humans. While many attempts have been made to program handover scenarios into robotic control algorithms, the algorithms typically lack at least one important feature: intuitiveness, safety, or efficiency. By performing a study to better understand human-to-human handovers, we observe trends that could inspire controllers for object handovers with robots. Ten pairs of human subjects handed over a cellular phone-shaped, instrumented object using a key pinch while 3D force and motion tracking data were recorded. It was observed that during handovers, humans apply a compressive force on the object and employ linear grip force to load force ratios while both agents are grasping the object (referred to as the "mutual grasp period"). Results also suggested that object velocity during the mutual grasp period is driven by the receiver, while the duration of the mutual grasp period is driven by the preference of the slowest agent involved in the handover. Ultimately, these findings will inspire the development of robotic handover controllers that advance seamless physical interactions between humans and robots.
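The grip force to load force ratio described above can be sketched by decomposing each 3D fingertip force sample into a component along the object's surface normal (grip) and a tangential component (load). The surface normal and the sample values below are hypothetical, not the study's recorded data:

```python
import numpy as np

def grip_load_ratio(forces, normal=np.array([0.0, 1.0, 0.0])):
    """Split 3D fingertip forces into grip (along the surface normal) and
    load (tangential, e.g. supporting the object's weight), then take
    their per-sample ratio."""
    normal = normal / np.linalg.norm(normal)
    grip = forces @ normal                       # component squeezing the object
    tangential = forces - np.outer(grip, normal)
    load = np.linalg.norm(tangential, axis=1)    # component resisting gravity
    return grip / np.maximum(load, 1e-9)         # guard against zero load

# Hypothetical force samples (N) during the mutual grasp period: [x, y, z]
samples = np.array([[0.5, 4.0, 0.0],
                    [1.0, 6.0, 0.0],
                    [1.5, 8.0, 0.0]])
ratios = grip_load_ratio(samples)
```

A roughly constant ratio across samples would reflect the linear grip-to-load coupling the study reports during the mutual grasp period.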
Date Created
2015-05

Algorithms for neural prosthetic applications

Description
In the last 15 years, there has been a significant increase in the number of motor neural prostheses used for restoring limb function lost to neurological disorders or accidents. The aim of this technology is to enable patients to control a motor prosthesis using their residual neural pathways (central or peripheral). Recent studies in non-human primates and humans have shown the possibility of controlling a prosthesis to accomplish varied tasks such as self-feeding, typing, reaching, grasping, and performing fine dexterous movements. A neural decoding system comprises three main components: (i) sensors to record neural signals, (ii) an algorithm to map neural recordings to upper limb kinematics, and (iii) a prosthetic arm actuated by control signals generated by the algorithm. Machine learning algorithms that map input neural activity to output kinematics (such as finger trajectory) form the core of the neural decoding system. The choice of algorithm is thus mainly determined by the neural signal of interest and the output parameter being decoded. The stages of a neural decoding pipeline are neural data acquisition, feature extraction, feature selection, and the machine learning algorithm itself. There have been significant advances in the field of neural prosthetic applications, but challenges remain in translating a neural prosthesis from a laboratory setting to a clinical environment. To achieve a fully functional prosthetic device with maximum user compliance and acceptance, these factors need to be addressed and taken into consideration. Three challenges in developing robust neural decoding systems were addressed: exploring neural variability in the peripheral nervous system for dexterous finger movements, feature selection methods based on clinically relevant metrics, and a novel method for decoding dexterous finger movements based on ensemble methods.
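The pipeline stages named above (feature extraction, feature selection, and an ensemble classifier) can be sketched on synthetic spike-count data. Every parameter, threshold, and design choice below is an illustrative assumption, not the method developed in this work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic neural data: spike counts per trial, two finger-movement classes
n_trials, n_channels = 100, 20
labels = rng.integers(0, 2, n_trials)
rates = rng.poisson(5, (n_trials, n_channels)).astype(float)
rates[:, :4] += 6 * labels[:, None]  # make the first 4 channels informative

# Feature selection: keep the channels whose class means differ most
mean_diff = np.abs(rates[labels == 1].mean(0) - rates[labels == 0].mean(0))
selected = np.argsort(mean_diff)[-4:]

def nearest_centroid_predict(train_x, train_y, test_x):
    """Predict class by distance to each class's mean feature vector."""
    c0 = train_x[train_y == 0].mean(0)
    c1 = train_x[train_y == 1].mean(0)
    return (np.linalg.norm(test_x - c1, axis=1)
            < np.linalg.norm(test_x - c0, axis=1)).astype(int)

# Ensemble: majority vote over decoders trained on bootstrap resamples
votes = []
for _ in range(15):
    idx = rng.integers(0, n_trials, n_trials)  # bootstrap resample of trials
    votes.append(nearest_centroid_predict(
        rates[idx][:, selected], labels[idx], rates[:, selected]))
pred = (np.mean(votes, axis=0) > 0.5).astype(int)
accuracy = (pred == labels).mean()
```

In practice the base decoder and resampling scheme would be replaced by the ensemble methods the thesis develops; the sketch only shows how the stages compose.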
Date Created
2017

Uniform and Non-Uniform Perturbations in Brain-Machine Interface Task Elicit Similar Neural Strategies

Description

The neural mechanisms that take place during learning and adaptation can be directly probed with brain-machine interfaces (BMIs). We developed a BMI controlled paradigm that enabled us to enforce learning by introducing perturbations which changed the relationship between neural activity and the BMI's output. We introduced a uniform perturbation to the system, through a visuomotor rotation (VMR), and a non-uniform perturbation, through a decorrelation task. The controller in the VMR condition was essentially unchanged, but produced an output rotated 30° from the neurally specified output. The controller in the decorrelation trials decoupled the activity of neurons that were highly correlated in the BMI task by selectively forcing the preferred directions of these cell pairs to be orthogonal. We report that movement errors were larger in the decorrelation task, and subjects needed more trials to restore performance to baseline. During learning, we measured decreasing trends in preferred direction changes and cross-correlation coefficients regardless of task type. Conversely, final adaptations in neural tuning depended on the type of controller used (VMR or decorrelation). These results hint at a similar process the neural population might engage in while adapting to new tasks, and at how, through a global process, the neural system can arrive at individual solutions.
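The VMR controller's behavior, producing an output rotated 30° from the neurally specified one, amounts to multiplying the decoded 2D velocity by a fixed rotation matrix; a minimal sketch:

```python
import numpy as np

def vmr(velocity, angle_deg=30.0):
    """Apply a visuomotor rotation: pass the neurally decoded 2D velocity
    through unchanged except for a fixed counterclockwise rotation."""
    a = np.radians(angle_deg)
    r = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return r @ velocity

# A decoded rightward movement ends up rotated 30 degrees off-axis
rotated = vmr(np.array([1.0, 0.0]))
```

Because the rotation is the same everywhere in the workspace, the perturbation is uniform, in contrast to the decorrelation task, which alters individual cell pairs.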

Date Created
2016-08-23

Discriminability of Single and Multichannel Intracortical Microstimulation Within Somatosensory Cortex

Description

The addition of tactile and proprioceptive feedback to neuroprosthetic limbs is expected to significantly improve the control of these devices. Intracortical microstimulation (ICMS) of somatosensory cortex is a promising method of delivering this sensory feedback. To date, the main focus of somatosensory ICMS studies has been to deliver discriminable signals, corresponding to varying intensity, to a single location in cortex. However, multiple independent and simultaneous streams of sensory information will need to be encoded by ICMS to provide functionally relevant feedback for a neuroprosthetic limb (e.g., encoding contact events and pressure on multiple digits). In this study, we evaluated the ability of an awake, behaving non-human primate (Macaca mulatta) to discriminate ICMS stimuli delivered on multiple electrodes spaced within somatosensory cortex. We delivered serial stimulation on single electrodes to evaluate the discriminability of sensations corresponding to ICMS of distinct cortical locations. Additionally, we delivered trains of multichannel stimulation, derived from a tactile sensor, synchronously across multiple electrodes. Our results indicate that discrimination of multiple ICMS stimuli is a challenging task, but that discriminable sensory percepts can be elicited by both single and multichannel ICMS on electrodes spaced within somatosensory cortex.
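Multichannel stimulation "derived from a tactile sensor" implies a per-electrode mapping from sensor pressure to stimulation parameters. A simple linear amplitude mapping is sketched below; the pressure thresholds and current ranges are illustrative assumptions, not the study's actual encoding:

```python
import numpy as np

def pressure_to_icms(pressure, p_min=0.1, p_max=2.0,
                     amp_min=20.0, amp_max=80.0):
    """Linearly map per-digit sensor pressure (N) to per-electrode ICMS
    current amplitude (uA), clipped to a safe range; below-threshold
    pressure produces no stimulation. All ranges are hypothetical."""
    amp = amp_min + (pressure - p_min) / (p_max - p_min) * (amp_max - amp_min)
    amp = np.clip(amp, amp_min, amp_max)
    amp[pressure < p_min] = 0.0  # no contact -> no stimulation
    return amp

# Hypothetical simultaneous readings from sensors on three digits
amps = pressure_to_icms(np.array([0.0, 0.5, 2.5]))
```

Delivering such amplitudes synchronously on electrodes at distinct cortical locations is what makes the discrimination task studied here multichannel.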

Date Created
2016-12-02

The Proprioceptive Map of the Arm Is Systematic and Stable, But Idiosyncratic

Description

Visual and somatosensory signals participate together in providing an estimate of the hand's spatial location. While the ability of subjects to identify the spatial location of their hand based on visual and proprioceptive signals has previously been characterized, relatively few studies have examined in detail the spatial structure of the proprioceptive map of the arm. Here, we reconstructed and analyzed the spatial structure of the estimation errors that resulted when subjects reported the location of their unseen hand across a 2D horizontal workspace. Hand position estimation was mapped under four conditions: with and without tactile feedback, and with the right and left hands. In the task, we moved each subject's hand to one of 100 targets in the workspace while their eyes were closed. Then, we either a) applied tactile stimulation to the fingertip by allowing the index finger to touch the target or b) as a control, hovered the fingertip 2 cm above the target. After returning the hand to a neutral position, subjects opened their eyes to verbally report where their fingertip had been. We measured and analyzed both the direction and magnitude of the resulting estimation errors. Tactile feedback reduced the magnitude of these estimation errors, but did not change their overall structure. In addition, the spatial structure of these errors was idiosyncratic: each subject had a unique pattern of errors that was stable between hands and over time. Finally, we found that at the population level the magnitude of the estimation errors had a characteristic distribution over the workspace: errors were smallest closer to the body. The stability of estimation errors across conditions and time suggests the brain constructs a proprioceptive map that is reliable, even if it is not necessarily accurate. The idiosyncrasy across subjects emphasizes that each individual constructs a map that is unique to their own experiences.
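The direction and magnitude of the estimation errors can be computed from actual and reported fingertip positions as follows; the workspace coordinates below are hypothetical:

```python
import numpy as np

def estimation_errors(actual, reported):
    """Error vectors from actual fingertip targets to reported locations:
    magnitude (cm) and direction (radians, counterclockwise from +x)."""
    err = reported - actual
    magnitude = np.linalg.norm(err, axis=1)
    direction = np.arctan2(err[:, 1], err[:, 0])
    return magnitude, direction

# Hypothetical 2D horizontal-workspace positions (cm)
actual = np.array([[10.0, 20.0], [30.0, 20.0]])
reported = np.array([[13.0, 24.0], [30.0, 15.0]])
mag, ang = estimation_errors(actual, reported)
```

Aggregating these per-target vectors over the 100 targets is what yields the spatial error map whose stability and idiosyncrasy the study analyzes.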

Date Created
2011-11-16

Effects of Fusion Between Tactile and Proprioceptive Inputs on Tactile Perception

Description

Tactile perception is typically considered the result of cortical interpretation of afferent signals from a network of mechanical sensors underneath the skin. Yet, tactile illusion studies suggest that tactile perception can be elicited without afferent signals from mechanoreceptors. Therefore, the extent to which tactile perception arises from isomorphic mapping of tactile afferents onto the somatosensory cortex remains controversial. We tested whether isomorphic mapping of tactile afferent fibers onto the cortex leads directly to tactile perception, and whether it is independent of proprioceptive input, by evaluating the impact of different hand postures on the perception of a tactile illusion across fingertips. Using the Cutaneous Rabbit Effect, a well-studied illusion evoking the perception that a stimulus occurs at a location where none has been delivered, we found that hand posture has a significant effect on the perception of the illusion across the fingertips. This finding emphasizes that tactile perception arises from the integration of perceived mechanical and proprioceptive input and not purely from tactile interaction with the external environment.

Date Created
2011-03-25

Engineering approaches for improving cortical interfacing and algorithms for the evaluation of treatment resistant epilepsy

Description
Epilepsy is a group of disorders that cause seizures in approximately 2.2 million people in the United States. Over 30% of these patients have epilepsies that do not respond to treatment with anti-epileptic drugs. For this population, focal resection surgery could offer long-term seizure freedom. Surgery candidates undergo a myriad of tests and monitoring to determine where and when seizures occur. The “gold standard” method for focus identification involves the placement of electrocorticography (ECoG) grids in the sub-dural space, followed by continual monitoring and visual inspection of the patient’s cortical activity. This process, however, is highly subjective and uses dated technology. Multiple studies were performed to investigate how the evaluation process could benefit from an algorithmic adjunct using current ECoG technology, and how the use of new microECoG technology could further improve the process.

Computational algorithms can quickly and objectively find signal characteristics that may not be detectable with visual inspection, but many assume the data are stationary and/or linear, which biological data are not. An empirical mode decomposition (EMD) based algorithm was developed to detect potential seizures and tested on data collected from eight patients undergoing monitoring for focal resection surgery. EMD does not require linearity or stationarity and is data-driven. The results suggest that a data-driven algorithm suited to biological signals could serve as a useful tool to objectively identify changes in cortical activity associated with seizures.
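One way the energy of an EMD-derived intrinsic mode function (IMF) could be thresholded to flag candidate seizure windows is sketched below. The IMF is assumed to be precomputed (e.g., by a library such as PyEMD), and the window length and threshold are illustrative choices, not the algorithm developed in this work:

```python
import numpy as np

def flag_windows(imf, fs, win_s=1.0, k=3.0):
    """Flag windows whose short-time energy exceeds the median baseline
    energy by more than k median absolute deviations. The IMF is assumed
    precomputed by an EMD library; win_s and k are illustrative."""
    win = int(win_s * fs)
    n_win = len(imf) // win
    energy = np.array([np.sum(imf[i * win:(i + 1) * win] ** 2)
                       for i in range(n_win)])
    med = np.median(energy)
    mad = np.median(np.abs(energy - med)) + 1e-12  # avoid a zero threshold
    return np.where(energy > med + k * mad)[0]

# Synthetic IMF: low-amplitude 8 Hz background with one high-energy burst
rng = np.random.default_rng(1)
fs = 250
t = np.arange(10 * fs) / fs
imf = 0.1 * np.sin(2 * np.pi * 8 * t) + 0.01 * rng.standard_normal(t.size)
imf[5 * fs:6 * fs] *= 20  # burst occupying the sixth one-second window
flagged = flag_windows(imf, fs)
```

Because EMD is data-driven, the baseline statistics come from the recording itself rather than from an assumed stationary model, which is the property the paragraph above highlights.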

Next, the use of microECoG technology was investigated. Though both ECoG and microECoG grids are composed of electrodes resting on the surface of the cortex, changing the diameter of the electrodes creates non-trivial changes in the physics of the electrode-tissue interface that need to be accounted for. Experimenting with different recording configurations showed that proper grounding, referencing, and amplification are critical to obtain high quality neural signals from microECoG grids.

Finally, the relationship between data collected from the cortical surface with micro and macro electrodes was studied. Simultaneous recordings of the two electrode types showed differences in power spectra that suggest the inclusion of activity, possibly from deep structures, by macroelectrodes that is not accessible by microelectrodes.
Date Created
2015