Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/98410

Title

Robot learning from demonstration of force-based manipulation tasks

Author: Rozo, Leonel
Directors: Torras, Carme; Jiménez Schlegl, Pablo
Publication date: 2013
Publisher: Universidad Politécnica de Cataluña
Abstract: One of the main challenges in Robotics is to develop robots that can interact with humans in a natural way, sharing the same dynamic and unstructured environments. Such an interaction may be aimed at assisting, helping or collaborating with a human user. To achieve this, the robot must be endowed with a cognitive system that allows it not only to learn new skills from its human partner, but also to refine or improve those already learned. In this context, learning from demonstration appears as a natural and user-friendly way to transfer knowledge from humans to robots. This dissertation addresses this topic and its application to an unexplored field, namely the learning of force-based manipulation tasks. In this kind of scenario, force signals can convey data about the stiffness of a given object, the inertial components acting on a tool, a desired force profile to be reached, etc. Therefore, if the user wants the robot to learn a manipulation skill successfully, it is essential that its cognitive system be able to deal with force perceptions.

The first issue this thesis tackles is extracting the input information that is relevant for learning the task at hand, also known as the "what to imitate?" problem. The proposed solution takes into consideration that the robot actions are a function of sensory signals; in other words, the importance of each perception is assessed through its correlation with the robot movements. A Mutual Information analysis is used to select the most relevant inputs according to their influence on the output space. In this way, the robot can gather all the information coming from its sensory system, and the perception selection module proposed here automatically chooses the data the robot needs to learn a given task.
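As a rough illustration of this perception-selection step (not the thesis implementation; the data shapes, channel layout and `keep` parameter are assumptions made for the example), one could rank sensory channels by their mutual information with the robot's actions and retain the highest-scoring ones:

```python
# Minimal sketch: rank sensory channels by their mutual information with the
# robot's output commands and keep the most informative ones. Channel layout,
# threshold and data shapes are illustrative assumptions, not the thesis's.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def select_relevant_inputs(X, Y, keep=4):
    """X: (T, D_in) sensory signals; Y: (T, D_out) robot actions.
    Returns indices of the `keep` input channels with highest total MI."""
    mi_per_input = np.zeros(X.shape[1])
    for d in range(Y.shape[1]):
        # MI of every input channel with output dimension d, accumulated.
        mi_per_input += mutual_info_regression(X, Y[:, d])
    return np.argsort(mi_per_input)[::-1][:keep]

# Synthetic example: 7 sensory channels, of which only channels 0 and 3
# actually drive the (2-dimensional) robot action.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 7))
Y = np.stack([X[:, 0] + 0.1 * rng.normal(size=500),
              X[:, 3] ** 2 + 0.1 * rng.normal(size=500)], axis=1)
print(select_relevant_inputs(X, Y, keep=2))   # likely picks channels 0 and 3
```

Here the per-output scores are simply summed; other aggregation rules over the output dimensions are equally plausible.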
Having selected the relevant input information for the task, it is necessary to represent the human demonstrations in a compact way, encoding the relevant characteristics of the data, for instance sequential information, uncertainty and constraints. This representation issue is the next problem addressed in the thesis. A probabilistic learning framework based on hidden Markov models and Gaussian mixture regression is proposed for learning force-based manipulation skills. The outstanding features of this framework are: (i) it is able to deal with the noise and uncertainty of force signals thanks to its probabilistic formulation, (ii) it exploits the sequential information embedded in the model to manage perceptual aliasing and time discrepancies, and (iii) it takes advantage of task variables to encode those force-based skills where the robot actions are modulated by an external parameter. The resulting learning structure is therefore able to robustly encode and reproduce different manipulation tasks.

The thesis then goes a step further by proposing a novel framework for learning impedance-based behaviors from demonstrations. The key aspects here are that this new structure merges vision and force information to encode the data compactly, and that it allows the robot to exhibit different behaviors by shaping its compliance level over the course of the task. This is achieved with a parametric probabilistic model whose Gaussian components form the basis of a statistical dynamical system that governs the robot motion. From the force perceptions, the stiffnesses of the springs composing this system are estimated, allowing the robot to shape its compliance. This approach makes it possible to extend the learning paradigm beyond the common trajectory-following setting. The proposed frameworks are tested in three scenarios, namely (a) the ball-in-box task, (b) drink pouring, and (c) a collaborative assembly, where the experimental results show the importance of using force perceptions as well as the usefulness and strengths of the methods.
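To make the reproduction step concrete, the following is a minimal, generic Gaussian mixture regression routine: a mixture fitted on joint input/output demonstration data (e.g. time or sensed force as input, robot pose as output) is conditioned on a new input to regenerate the learned motion. This is a textbook GMR formulation, not the HMM-based variant developed in the thesis, and all variable names are illustrative:

```python
# Generic Gaussian mixture regression (GMR) sketch: condition a GMM fitted on
# joint [input, output] data on a query input x and return E[output | x].
import numpy as np

def gmr(x, priors, means, covs, in_idx, out_idx):
    """priors: (K,), means: list of (D,), covs: list of (D, D);
    in_idx / out_idx: index lists splitting each Gaussian into input/output."""
    K = len(priors)
    h = np.zeros(K)
    cond_means = np.zeros((K, len(out_idx)))
    for k in range(K):
        mu_i, mu_o = means[k][in_idx], means[k][out_idx]
        S_ii = covs[k][np.ix_(in_idx, in_idx)]
        S_oi = covs[k][np.ix_(out_idx, in_idx)]
        diff = x - mu_i
        # Responsibility of component k for the query input (Gaussian density).
        h[k] = priors[k] * np.exp(-0.5 * diff @ np.linalg.solve(S_ii, diff)) \
               / np.sqrt(np.linalg.det(2 * np.pi * S_ii))
        # Conditional mean of the output given x under component k.
        cond_means[k] = mu_o + S_oi @ np.linalg.solve(S_ii, diff)
    h /= h.sum()
    return h @ cond_means       # responsibility-weighted conditional mean
```

In use, `gmr` would be called at every control step with the current value of the input variable (time, force reading, or task parameter) to obtain the corresponding commanded output.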
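Similarly, the idea of shaping the robot's compliance can be pictured with a simple variable-stiffness spring-damper attractor. The gains and the stiffness schedule below are illustrative assumptions, not the values estimated from force data in the thesis:

```python
# Hedged sketch of a variable-impedance controller: a unit-mass point tracks
# an attractor with a stiffness gain that is reshaped during the task.
import numpy as np

def step(x, dx, target, K, dt=0.01):
    """One semi-implicit Euler step of ddx = K*(target - x) - D*dx."""
    D = 2.0 * np.sqrt(K)                 # critical damping for stiffness K
    ddx = K * (target - x) - D * dx
    dx = dx + ddx * dt
    x = x + dx * dt
    return x, dx

x, dx, target = 0.0, 0.0, 1.0
for t in range(300):
    # Example schedule: stiff during free motion, compliant near the goal,
    # mimicking how sensed forces could modulate the compliance level.
    K = 400.0 if abs(target - x) > 0.05 else 50.0
    x, dx = step(x, dx, target, K)
print(round(x, 3))   # converges towards the target
```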
Description: Doctoral thesis presented by Leonel Dario Rozo Castañeda within the doctoral programme "Automatic Control, Robotics and Computer Vision" of the UPC, carried out at the Institut de Robòtica i Informàtica Industrial, CSIC-UPC.
URI: http://hdl.handle.net/10261/98410
Appears in collections: (IRII) Tesis




Files in this item:
File: Robot learning from demonstration.pdf (10,96 MB, Adobe PDF)



NOTE: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.