Vision-based control of multiple robots

  • Author: Miguel Aranda Calleja
  • Thesis supervisor: Gonzalo López-Nicolás
  • Defended: Universidad de Zaragoza (Spain), 2015
  • Language: Spanish
  • Thesis committee: Luis Enrique Moreno Lorente (chair), Carlos Sagüés (secretary), Youcef Mezouar (member)
  • Full text not available
  • Abstract
    • Endowing mobile agents with the ability to navigate an environment autonomously is a fundamental research problem in robotics. In particular, systems consisting of multiple autonomous robots moving in a coordinated fashion have recently received tremendous attention. Compared with single-robot setups, multirobot systems complete tasks more efficiently and robustly, and enable behaviors of higher complexity and sophistication. These properties make them attractive in numerous applications across diverse domains, including manufacturing, transportation, farming, environmental monitoring, and search and rescue missions. Technological advances in computation, sensing, actuation and communications are continuously enabling new real-world implementations of multirobot control systems. Relevant current challenges in this field concern the development of increasingly reliable, flexible and scalable systems, while taking into account critical aspects such as performance efficiency and cost per agent. Incorporating the realistic capabilities and limitations of the robots into the design of group coordination algorithms is a particularly significant issue.

      Autonomous robots rely on sensors to obtain the primary information they need to make decisions. Vision sensors provide abundant information while being widely available, convenient to use and relatively inexpensive, which has made them a common choice in many robotic tasks. When dealing with systems that comprise multiple robots, the simplicity and cost advantages associated with the use of cameras become particularly relevant. Still, mobile robot control using vision presents challenges inherent to the nature of this sensing modality, and faces specific problems when multirobot scenarios are considered.

      It is our goal in this work to address a number of these issues, presenting solutions that advance the state of the art in the field of vision-based control of multiple robots.

      We propose novel methods for control and navigation of mobile robots using 1D multiple-view models computed from angular visual information obtained with omnidirectional cameras. The relevance of the approaches we present lies in the fact that they overcome field-of-view and robustness limitations, while at the same time providing advantages in terms of accuracy, simplicity and applicability on real platforms. In addition, we address coordinated-motion tasks for multiple robots, exploring different system architectures.
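
      For concreteness, the following is a standard way (stated here as background, not as a detail taken from the thesis text) in which purely angular measurements fit a 1D multiple-view model: the bearing $\alpha$ of a landmark, as measured by an omnidirectional camera, can be encoded as a homogeneous point of a 1D (linear) camera,

        \[
        \mathbf{u} = \begin{pmatrix} \sin\alpha \\ \cos\alpha \end{pmatrix} \sim \begin{pmatrix} \tan\alpha \\ 1 \end{pmatrix},
        \]

      so that 1D multiple-view relations, such as 1D homographies or the 1D trifocal tensor, can be applied directly to angular data.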

      In particular, we propose a new image-based control setup where multiple aerial cameras are used to drive a team of ground robots to a desired formation, with interesting properties regarding simplicity, scalability and flexibility. Furthermore, we present decentralized formation stabilization methods whose novelty with respect to the state of the art is that they guarantee global stability while relying only on information expressed in the robots' local reference frames, thereby being amenable to vision-based implementations.

      More precisely, this thesis contains the following contributions:

      - A method for visual robot homing based on a memory of omnidirectional images.

      We propose a stable control law for this task that uses purely angular visual information processed through the 1D trifocal tensor. Our method is accurate, robust, and permits flexible and long-range navigation in a planar workspace.
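
      As background on the geometric model involved (standard 1D three-view geometry, not the specific details of the thesis's control law): corresponding points $\mathbf{u}$, $\mathbf{u}'$, $\mathbf{u}''$ in three 1D views, written as homogeneous 2-vectors, satisfy the trilinear constraint

        \[
        \sum_{i=1}^{2}\sum_{j=1}^{2}\sum_{k=1}^{2} T_{ijk}\, u_i\, u'_j\, u''_k = 0,
        \]

      where the 1D trifocal tensor $T$ has $2 \times 2 \times 2 = 8$ entries defined up to scale, so it can be estimated linearly from seven or more point correspondences across the three views.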

      - A novel vision-based pose stabilization methodology for nonholonomic ground robots based on sinusoidally varying control inputs, which are specifically designed to comply with the kinematic constraints of the mobile platform. The pose information is obtained from omnidirectional vision through the 1D trifocal tensor, and the approach gives rise to smooth and efficient motions.
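
      As a minimal sketch of the kind of kinematic model and time-varying inputs involved (a generic unicycle driven by sinusoidally varying inputs with illustrative values; the actual control law, gains and frequencies of the thesis are not reproduced here):

        import numpy as np

        def simulate_unicycle(v_fn, w_fn, state0, T=10.0, dt=0.01):
            # Integrate the nonholonomic unicycle kinematics
            #   x' = v cos(theta), y' = v sin(theta), theta' = w
            # under time-varying inputs v_fn(t) (linear) and w_fn(t) (angular).
            x, y, th = state0
            traj = [(x, y, th)]
            for t in np.arange(0.0, T, dt):
                v, w = v_fn(t), w_fn(t)
                x += v * np.cos(th) * dt
                y += v * np.sin(th) * dt
                th += w * dt
                traj.append((x, y, th))
            return np.array(traj)

        # Example: sinusoidally varying inputs (purely illustrative values).
        traj = simulate_unicycle(v_fn=lambda t: 0.5 * np.sin(t),
                                 w_fn=lambda t: 0.2 * np.cos(2.0 * t),
                                 state0=(0.0, 0.0, 0.0))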

      - An algorithm to recover a generic motion between two 1D views that, unlike previous approaches, does not require a third view. The method employs two 1D homographies; its relevance in the context of the thesis is that we use it to enable a vision-based control strategy that stabilizes a team of mobile robots to a desired formation.
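
      For illustration, a 1D homography is a 2x2 matrix, defined up to scale, that maps homogeneous points between two 1D views (u' ~ H u). A minimal linear estimator from point correspondences could look as follows; this is a generic DLT-style sketch, not the thesis's two-homography motion-recovery algorithm:

        import numpy as np

        def estimate_1d_homography(u, u_prime):
            # u, u_prime: (n, 2) arrays of homogeneous coordinates of
            # corresponding points in two 1D views, with n >= 3.
            # Each correspondence gives one linear constraint on the 4 entries
            # of H:  u2' * (h11 u1 + h12 u2) - u1' * (h21 u1 + h22 u2) = 0.
            A = np.zeros((len(u), 4))
            for k, ((u1, u2), (v1, v2)) in enumerate(zip(u, u_prime)):
                A[k] = [v2 * u1, v2 * u2, -v1 * u1, -v1 * u2]
            _, _, Vt = np.linalg.svd(A)
            return Vt[-1].reshape(2, 2)  # entries defined up to scale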

      - A novel multirobot setup where multiple camera-carrying unmanned aerial vehicles are used to observe and control a formation of ground mobile robots. The control is image-based and partially distributed, which avoids field-of-view, scalability and robustness-to-failure issues, while greatly reducing the complexity and power consumption requirements for the ground robots.
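
      As a rough sketch of the image-based control idea only (a generic proportional controller in image space with hypothetical names; it is not the partially distributed scheme developed in the thesis): each aerial camera compares the observed image projections of the ground robots it sees with a desired image pattern and commands motions proportional to the image-space error.

        import numpy as np

        def image_based_commands(observed_px, desired_px, gain=0.5):
            # observed_px, desired_px: dicts mapping robot id -> 2D image point
            # (pixels) as seen by one aerial camera. Returns, for each robot
            # visible to this camera, a command proportional to the error
            # between its desired and observed image positions.
            commands = {}
            for rid, p in observed_px.items():
                if rid in desired_px:
                    commands[rid] = gain * (np.asarray(desired_px[rid], float)
                                            - np.asarray(p, float))
            return commands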

      - Three coordinate-free methods for decentralized mobile robot formation stabilization.

      Unlike existing approaches, these controllers are globally convergent while requiring only relative position measurements expressed in local coordinates, which paves the way for vision-based implementations. Specifically, we propose a distributed networked control strategy based on global information affected by time delays, a purely distributed approach that relies on partial information, and a 3D target enclosing method for aerial robots.
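
      To make the measurement requirement concrete, the following is a generic distance-based gradient controller that uses only relative positions expressed in each robot's own local frame; it is given purely as an illustration of coordinate-free feedback and is not one of the three controllers proposed in the thesis (which, unlike this simple law, come with global convergence guarantees):

        import numpy as np

        def formation_control_step(rel_positions, desired_dist, gain=1.0):
            # rel_positions: dict mapping neighbor id -> relative position of
            # that neighbor expressed in this robot's own frame (e.g. from
            # onboard vision). desired_dist: dict mapping neighbor id ->
            # desired inter-robot distance. Returns a velocity command in the
            # same local frame: the negative gradient of
            #   (1/4) * sum_j (||p_j||^2 - d_j^2)^2.
            u = np.zeros(2)
            for j, p in rel_positions.items():
                p = np.asarray(p, dtype=float)
                u += gain * (p @ p - desired_dist[j] ** 2) * p
            return u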

      In the thesis, we describe in detail the proposed control approaches and formally study their properties. In addition, the performance of the different methodologies is evaluated both in simulation environments and through experiments with real robotic platforms and vision sensors.

