Full metadata
Title
Haptic perception, decision-making, and learning for manipulation with artificial hands
Description
Robotic systems remain outmatched by the human hand's ability to perceive and manipulate the world. Human hands physically interact with the environment to perceive, learn, and act, and the limited ability of robotic systems to do the same diminishes their usefulness. Advancing robot end effectors, specifically artificial hands, requires rich multimodal tactile sensing. In this work, a multi-articulating, anthropomorphic robot testbed was developed for investigating tactile sensory stimuli during finger-object interactions. The artificial finger is controlled by a tendon-driven remote actuation system that allows for modular control of any tendon-driven end effector and provides both speed and strength. The artificial proprioception system enables direct measurement of joint angles and tendon tensions, while temperature, vibration, and skin deformation are measured by a multimodal tactile sensor.

Next, attention was focused on real-time artificial perception for decision-making. A robotic system must perceive its environment in order to make decisions. Specific actions, such as "exploratory procedures," can be employed to classify and characterize object features. Prior work on offline perception was extended to develop an anytime predictive model that returns the probability of having touched a specific feature of an object based on minimally processed sensor data. Anytime classification of features facilitates real-time action-perception loops.

Finally, by combining real-time action-perception with reinforcement learning, a policy was learned to complete a functional contour-following task: closing a deformable ziplock bag. The approach relies only on proprioceptive and localized tactile data.
A Contextual Multi-Armed Bandit (C-MAB) reinforcement learning algorithm was implemented to maximize cumulative reward within a finite time period by balancing exploration and exploitation of the action space. The performance of the C-MAB learner was compared to that of a benchmark Q-learner that eventually converges to the optimal policy. To assess robustness and generalizability, the learned policy was tested on variations of the original contour-following task. The work presented contributes to the full range of tools necessary to advance the abilities of artificial hands with respect to dexterity, perception, decision-making, and learning.
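The exploration-versus-exploitation trade-off at the core of the C-MAB approach can be illustrated with a minimal epsilon-greedy contextual bandit sketch. This is not the dissertation's implementation; the class, context labels, and action names below are hypothetical, and the reward model is a simple running mean per (context, action) pair.

```python
import random
from collections import defaultdict

class ContextualBandit:
    """Illustrative epsilon-greedy contextual bandit (hypothetical sketch,
    not the dissertation's algorithm). Each (context, action) pair keeps a
    running-mean reward estimate; with probability epsilon the learner
    explores a random action, otherwise it exploits the best estimate."""

    def __init__(self, actions, epsilon=0.1):
        self.actions = list(actions)
        self.epsilon = epsilon
        self.counts = defaultdict(int)    # (context, action) -> pull count
        self.values = defaultdict(float)  # (context, action) -> mean reward

    def select(self, context):
        if random.random() < self.epsilon:
            return random.choice(self.actions)  # explore
        # exploit: action with highest estimated reward in this context
        return max(self.actions, key=lambda a: self.values[(context, a)])

    def update(self, context, action, reward):
        key = (context, action)
        self.counts[key] += 1
        # incremental running-mean update
        self.values[key] += (reward - self.values[key]) / self.counts[key]
```

In a contour-following setting, the context might encode a discretized tactile state and the actions candidate fingertip motions; cumulative reward is maximized by shrinking epsilon (or using an upper-confidence rule) as estimates sharpen.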
Date Created
2016
Contributors
- Hellman, Randall Blake (Author)
- Santos, Veronica J (Thesis advisor)
- Artemiadis, Panagiotis K (Committee member)
- Berman, Spring (Committee member)
- Helms Tillery, Stephen I (Committee member)
- Fainekos, Georgios (Committee member)
- Arizona State University (Publisher)
Extent
xiv, 166 pages : illustrations (some color)
Language
eng
Copyright Statement
In Copyright
Peer-reviewed
No
Open Access
No
Handle
https://hdl.handle.net/2286/R.I.40238
Statement of Responsibility
by Randall Blake Hellman
Description Source
Viewed on November 7, 2016
Level of coding
full
Note
- thesis: Partial requirement for: Ph.D., Arizona State University, 2016
- bibliography: Includes bibliographical references (pages 154-166)
- Field of study: Mechanical engineering
System Created
- 2016-10-12 02:17:26
System Modified
- 2021-08-30 01:21:40