Dialnet
Global localization based on evolutionary optimization algorithms for indoor and underground environments

  • Authors: Juan Carballeira López
  • Thesis supervisor: Luis Enrique Moreno Lorente (thesis dir.)
  • Defended: at the Universidad Carlos III de Madrid (Spain) in 2022
  • Language: English
  • Thesis examination committee: Fabio Bonsignorio (chair), María Dolores Blanco Rojas (secretary), Alberto Brunete González (member)
  • Doctoral programme: Doctoral Programme in Electrical, Electronic and Automation Engineering, Universidad Carlos III de Madrid
  • Abstract
    • Spanish

      A fully autonomous robot is defined by its capability to perceive its environment, understand it, and move within it to perform the tasks it is assigned. These qualities fall under the concept of navigation, but among them the most basic one, on which the rest largely depend, is localization: the system's capacity to know its position with respect to its surroundings. The localization problem can thus be defined as the search for the position coordinates and orientation angles of a mobile robot within a known environment. This thesis addresses the particular case of global localization, in which no initial information is available and the system depends solely on its sensors. The goal of this work is to develop a set of tools that allow the system to find its location with respect to the two map types most commonly used to represent the environment: occupancy maps and point clouds. The former subdivide the space into equally sized cells whose value is defined in binary terms as free or occupied space. Point clouds define obstacles as a sparse set of points in space, commonly measured with a laser sensor. This work presents several algorithms that search for that position using only the measurements of this laser sensor, in contrast with the more usual methods that combine external information with the robot's own motion information, odometry. The system is thus able to find its position in indoor environments without depending on external positioning and without being affected by the drift that motion sensors typically induce.

      The solution is approached by implementing several types of stochastic optimization algorithms, or meta-heuristics, specifically the so-called bio-inspired algorithms, commonly known as Evolutionary Algorithms. These algorithms, inspired by various natural phenomena, are based on the evolution of a set of particles, or population, towards a solution through the optimization of a cost function that defines the problem. The algorithms implemented in this work are Differential Evolution, Particle Swarm Optimization and Invasive Weed Optimization, which mimic, respectively, evolution by mutation, the movement of swarms or flocks of animals, and the colonizing behaviour of invasive plant species. The different implementations address the need to parameterize these algorithms for a very large search space, a complete map, which requires a highly exploratory behaviour, as well as the convergence conditions that terminate the search, since the process is a recursive estimation and the solution is unknown. These algorithms search for the robot's optimal location by comparing the laser measurements taken at the real position with those expected at the position of each particle, given the known map. The cost function evaluates the similarity between real and estimated measurements and is therefore the function that defines the problem. The functions typically used in both mapping and localization with laser range sensors are the mean squared error or the absolute error between estimated and real distances. This work presents a different perspective that takes advantage of statistical distances, or divergences, used to establish the similarity between probability distributions.

      By modelling the sensor as a probability distribution around the laser measurement, the asymmetry of these divergences can be exploited to favour or penalize different situations. In this way, how the measurements differ is evaluated, not only by how much. The results obtained in different maps, both simulated and real, show that the localization problem is successfully solved by these methods, both in position and in orientation estimation error. The use of divergences and their implementation in a weighted cost function provides the localization filter with great robustness and accuracy, and a strong response to different sources and levels of noise, whether from the sensor measurement itself, from the environment, or from obstacles not modelled in the map of the environment.

    • English

      A fully autonomous robot is defined by its capability to sense, understand and move within the environment to perform a specific task. These qualities are included within the concept of navigation; however, the most basic among them, on which the rest largely depend, is localization, the capacity of the system to know its position regarding its surroundings. The localization problem can therefore be defined as the search for the robot's position coordinates and rotation angles within a known environment. In this thesis, the particular case of Global Localization (GL) is addressed. GL can be defined as the search for a robot's pose (position and orientation) in a two-dimensional (2D) or three-dimensional (3D) environment when the initial location is unknown. This work aims to develop several tools that allow the system to localize itself in the two most common geometric map representations: occupancy maps and Point Clouds. The former divide the space into equally sized cells coded with a binary value distinguishing between free and occupied space. Point Clouds define obstacles and environment features as a sparse set of points in space, commonly measured through a laser sensor.
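
      The two map representations described above can be illustrated with a minimal sketch; the grid size, resolution, and point values below are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

# Occupancy grid: equally-sized cells with a binary value,
# 0 = free space, 1 = occupied space.
occupancy = np.zeros((8, 8), dtype=np.uint8)
occupancy[0, :] = 1   # a wall along one edge (hypothetical)
occupancy[:, 0] = 1

# Point cloud: a sparse set of 3D points, e.g. laser-beam hits.
point_cloud = np.array([
    [1.0, 0.5, 0.0],
    [1.2, 0.6, 0.0],
    [1.4, 0.7, 0.1],
])

def is_occupied(grid, x, y, resolution=0.1):
    """Look up the binary cell containing world point (x, y)."""
    i, j = int(y / resolution), int(x / resolution)
    return bool(grid[i, j])
```

      The grid answers "is this location free?" in constant time, while the point cloud keeps only the measured obstacle surface, which is why the thesis treats the two representations with different search strategies.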

      In this thesis, a set of optimization techniques based on Evolutionary Algorithms (EAs) is developed as a continuation of the work carried out in this field by the Robotics Lab research group of the Systems Engineering and Automation Department of the Carlos III University of Madrid (UC3M). These algorithms, which try to emulate nature's selection for survival, are presented and implemented as a feasible solution to this problem for different cases in 2D and 3D environments, both simulated and real. These maps represent the worst-case scenario of indoor or underground situations where no external information is available; the robot must therefore rely solely on its sensory system. Various algorithms are presented that search for the robot's position through laser measurements only, in contrast with the more usual methods that combine external information with the robot's own motion information (odometry). The system is thus capable of finding its own position in large-volume environments, with no need for external positioning and without the influence of the uncertainty that motion sensors typically induce, a remarkable quality in indoor and underground spaces. Our solution is addressed by implementing various stochastic optimization algorithms, or Meta-heuristics, specifically the bio-inspired ones commonly known as Evolutionary Algorithms. Inspired by natural phenomena, these algorithms are based on the evolution of a set of particles, or population members, towards a solution through the optimization of a cost or fitness function that defines the problem. Different alternatives for these cost, fitness or selection functions are presented as a more flexible approach than the commonly used Euclidean distance when comparing possible locations of the mobile robot based on range-sensor information.

      The implemented algorithms are Differential Evolution (DE), Particle Swarm Optimization (PSO), and Invasive Weed Optimization (IWO), which mimic, respectively, evolution through mutation, the movement of swarms or flocks of animals during migration or foraging, and the colonizing behavior of invasive plant species. These algorithms were selected among the population-based algorithms for their simplicity and the small number of specific parameters that must be tuned. The different implementations address the need to parameterize these algorithms for a wide search space, such as a complete three-dimensional map, enhancing exploratory behavior, and to define the convergence conditions that terminate the search once the population concentrates on the optimum solution; since the process is a recursive optimum-estimation search, the solution is unknown. These implementations address the optimum localization search by comparing the laser measurements taken at the real position with those obtained at each candidate particle in the known map. The cost function evaluates this similarity between real and estimated measurements and is, therefore, the function that defines the problem to optimize.
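
      As a minimal sketch of how such a population-based search proceeds, the following implements the standard DE mutation/crossover/selection loop; the toy quadratic cost stands in for the thesis's scan-comparison function, and the pose, bounds, and parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the scan-comparison cost: in the thesis, cost(pose)
# compares the real laser scan with one simulated at `pose` on the map.
TRUE_POSE = np.array([2.0, -1.0, 0.5])  # hypothetical (x, y, theta)

def cost(pose):
    return float(np.sum((pose - TRUE_POSE) ** 2))

def differential_evolution(cost, bounds, pop_size=30, F=0.7, CR=0.9, iters=200):
    dim = len(bounds)
    lo, hi = np.array(bounds, float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([cost(p) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True              # force one crossed gene
            trial = np.where(mask, mutant, pop[i])      # crossover
            f = cost(trial)
            if f < fit[i]:                              # greedy selection
                pop[i], fit[i] = trial, f
    return pop[np.argmin(fit)]

best = differential_evolution(cost, bounds=[(-5, 5), (-5, 5), (-np.pi, np.pi)])
```

      In the thesis the evaluated function is the comparison between real and simulated laser scans rather than this toy distance, but the evolutionary machinery is the same.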

      Our approach is based on the assumption of previous knowledge of the geometric map that defines the environment where the autonomous mobile robot performs its tasks. Depending on this map, localization becomes either a two-dimensional problem, where the pose of the robot is typically defined by position (x, y) and rotation θ about the z axis, or a simplified 3D problem, where rotation is assumed to take place only within the horizontal plane and the robot therefore works with four degrees of freedom (DOF): (x, y, z, θ). In this work, we address both 2D and 3D situations, but in the latter we consider that the robot can vary any of the 6 DOF that fully define a pose in a three-dimensional environment. Its pose is then defined by (x, y, z, α, β, γ), encompassing position and orientation (roll, pitch, yaw) in 3D space. More important than the distinction between 2D and 3D, however, is the difference between sparse and continuous maps or, as it can also be stated, between Point Cloud-based and grid-based maps. The former is formed by a set of spatial points representing laser-beam measurements; such a map is built through SLAM techniques, concatenating local point clouds in a spatially consistent manner to form a global map. Hence, the localization problem can be addressed as a point-to-point global/local Point Cloud scan-matching search, where the local cloud is the current scan obtained by the robot at its actual position. Each candidate member of the population represents a transformation of the local scan, a translation and rotation to a candidate location. For each point in this transformed scan, a Nearest Neighbour search establishes the closest point in the global map for comparison. In the second model, the 2D or 3D space is represented as an occupancy grid map: the whole environment is represented as occupied or free space, in contrast with the sparse Point Cloud representation. On occupancy maps, laser measurements can be simulated over the model, and the comparison between real and estimated laser scans can be performed in a sorted manner; since there is a spatial relationship between beams, for every measurement in the real scan a homologous point can be found in the estimate regarding vertical and horizontal orientation.
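
      The scan-matching evaluation of a candidate pose can be sketched as follows in 2D; the map, scan, and brute-force Nearest Neighbour search are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

def transform(scan, pose):
    """Apply a candidate 2D pose (x, y, theta) to a local scan (N x 2)."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return scan @ R.T + np.array([x, y])

def scan_match_cost(local_scan, global_map, pose):
    """Mean Nearest Neighbour distance between the transformed local
    scan and the global point-cloud map (brute-force NN for clarity)."""
    moved = transform(local_scan, pose)
    d = np.linalg.norm(moved[:, None, :] - global_map[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

# Hypothetical example: the local scan is the global map shifted by (-1, -2),
# so the candidate pose (1, 2, 0) should align them perfectly.
global_map = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
local_scan = global_map - np.array([1.0, 2.0])
```

      Each population member supplies one such `pose`; a real implementation would replace the brute-force search with a k-d tree for efficiency.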

      The difference between real and estimated laser scans must be evaluated to select the most valuable members of the population in each iteration of the algorithm. The common approach in localization or mapping with laser sensors is to use the mean squared error or the absolute error between laser measurements as the optimization function. In this work, a different perspective is introduced by taking advantage of statistical distances, or divergences, which describe the similarity between probability distributions. The algorithm can benefit from the asymmetries of these divergences to favor or penalize different situations. Hence, how the laser scans differ, and not only by how much, can be evaluated. Each laser measurement is modeled as a probability distribution around the measured distance, considering a low probability at shorter distances, a Gaussian distribution over the measurement, and an uncertainty zone behind the occlusion. This probabilistic model is applied to both real and estimated locations and is introduced into the algorithm's fitness function as a weighted comparison that accounts for feasible or non-feasible situations regarding the possible presence of obstacles unregistered on the map but detected by the robot at its actual location. Statistical distances, or divergences, thus measure the similarity between laser measurements, penalizing or favoring different situations for a more robust localization outcome. Probability-based cost functions introduce an asymmetry into the evaluation, favoring a biased comparison, in contrast with the symmetric L1 and L2 norms. This implementation has proven able to manage both modeled and unmodeled perturbations in the perception phase at the actual position.
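
      A minimal sketch of the beam model described above: low probability before the hit, a Gaussian peak around the measurement, and a flat uncertainty zone behind the occlusion. All numeric parameters here are illustrative assumptions:

```python
import numpy as np

def beam_distribution(measured, r_max=10.0, n_bins=200, sigma=0.1,
                      p_short=0.02, p_behind=0.3):
    """Discretized probability along one laser beam, as a function of
    range r: unlikely short readings, a Gaussian peak at the measured
    distance, and residual probability behind the occlusion."""
    r = np.linspace(0.0, r_max, n_bins)
    p = np.full(n_bins, p_short)                        # short readings: unlikely
    p += np.exp(-0.5 * ((r - measured) / sigma) ** 2)   # peak at the measurement
    p[r > measured] += p_behind                         # unknown space behind the hit
    return r, p / p.sum()                               # normalized distribution
```

      Building one such distribution for the real measurement and another for the simulated one turns each beam comparison into a comparison of probability distributions, which is what allows asymmetric divergences to be applied.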

      Four different statistical distances have been implemented, with the Kullback-Leibler (KL) divergence as a starting point. This divergence is a non-symmetric statistical distance that measures how a probability distribution P differs from Q, or the surprise incurred when a situation with actual distribution P is modeled by Q. Researchers use power functions to generalize the KL divergence and obtain different classes of divergences; they report that power functions increase robustness with respect to outliers, making performance better or more flexible. Through this approach, it is possible to define three families of divergences (Alpha, Beta, and Gamma) that can be viewed as generalizations of the KL divergence. All classes are linked, and transformations between them are possible. These families derive from the well-known Csiszar–Morimoto f-divergence and the Bregman divergence. From the Alpha and Beta families, three further divergences are implemented and tested in this thesis: Density Power, Itakura-Saito, and Jensen-Shannon.
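
      For illustration, the asymmetric KL divergence and its symmetrized Jensen-Shannon variant can be sketched on discrete distributions; the example distributions below are arbitrary:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(P || Q): how P differs from Q, or
    the 'surprise' of modelling data from P with Q. Non-symmetric."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def jensen_shannon(p, q):
    """Symmetrized, smoothed variant built from two KL terms against
    the mixture M = (P + Q) / 2."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.8, 0.1, 0.1]
q = [0.4, 0.4, 0.2]
```

      The asymmetry, D(P || Q) differing from D(Q || P), is exactly the property the weighted fitness function exploits to treat "obstacle seen but not mapped" differently from "mapped obstacle not seen".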

      The probabilistic fitness-function approach is combined with different types of Evolutionary Algorithms. For 2D and 3D occupancy grid maps, the stochastic engine is an implementation of the DE method. On sparse Point Cloud 3D maps, three bio-inspired or Evolutionary Meta-heuristics are implemented: DE, PSO, and IWO. The different Meta-heuristics have been tested in various situations. The accuracy of the algorithms is sufficient to consider the Global Localization problem solved in both types of environments and in both 2D and 3D spaces. The stochastic nature of the search can deal with the non-linearity and arbitrariness of the dynamics and with the introduction of non-Gaussian perturbations, without the constraints of posterior density approximations. Population-based Meta-heuristics have proven to cope with the expected number of local minima caused by symmetries and non-singular positions in indoor maps.

      For occupancy grid maps, a 6-DOF DE-based GL filter has been developed. This implementation is an easy-to-tune tool for localization tasks; its mutation and crossover characteristics provide an inherently exploratory nature very suitable for large search domains. Thresholding and discarding mechanisms are implemented to control premature convergence to local minima and computational cost. The DE-GL filter was tested over simulated and real 3D occupancy grid maps, with an average error of around 5 cm obtained in a search space encompassing thousands of cubic meters. The orientation error is insignificant, and the successful localization rate is over 75% in most cases. Several experiments have been conducted to test the DE-GL filter against perception perturbations, including Gaussian noise, uniform noise, and unmodeled obstacles. A comparison between divergence-based functions and the L2 norm shows outstanding results in favor of the probabilistic approach. The robustness of the weighted divergence-based fitness function can deal with up to 60% occluded laser measurements. Regarding occupancy maps, the clear advantage of statistical distances over quadratic functions reinforces the idea that a probabilistic approach offers a significant benefit over symmetric functions, regardless of the divergence chosen. This approach can deal with higher levels and different sources of unexpected information in the laser scan. The results obtained in different maps, simulated and real, prove that the Global Localization problem is successfully solved through these methods, both in position and in orientation. The implementation of divergence-based weighted cost functions provides great robustness and accuracy to the localization filter and an excellent response to different sources and levels of noise, whether from the sensor measurements, the environment, or the presence of obstacles not registered on the map.

      The bio-inspired optimization solutions have been extended to Point Cloud maps, the current state-of-the-art 3D space-representation technique. Sparse laser information presents a more challenging problem, as more information is handled at an increased computational cost. In addition, no spatial relation can be ensured between the points that represent the global map and those of a local scan. This sparsity is therefore addressed by solving the optimization as a Scan Matching procedure. Two further Meta-heuristics are presented, based on the PSO and IWO algorithms, which offer different qualities but the same simplicity as DE. These are population-based algorithms that introduce a more locally focused search of the domain; higher accuracy is expected when the algorithm converges. The implementation, the considered parameters, and their particularization for the GL task are presented and tested. As a general conclusion, the GL problem has been solved for sparse metric maps with all Meta-heuristic implementations. Differences can be observed that lead us to select PSO-GL for its trade-off between precision and computational time. The local-search nature of IWO provides high accuracy; however, the exponential increase of the population in each iteration leads to a high computational cost, the main drawback of this implementation. The exploratory nature of DE leads to satisfactory results, but even when reduced close to convergence it still yields a lower success rate. PSO shows a more stable performance in terms of accuracy and localization rate: the parameter adjustment that balances local and global search is more intuitive, and accurate localization is obtained in fewer iterations than with IWO.
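
      A minimal PSO sketch, assuming a toy quadratic cost in place of scan matching: each particle is pulled towards its personal best and the swarm's global best. The pose, bounds, and coefficient values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(cost, bounds, pop_size=25, w=0.7, c1=1.5, c2=1.5, iters=200):
    """Minimal Particle Swarm Optimization: velocity update blends
    inertia, attraction to the personal best, and attraction to the
    global best."""
    dim = len(bounds)
    lo, hi = np.array(bounds, float).T
    x = rng.uniform(lo, hi, (pop_size, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, pop_size, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g

# Toy cost standing in for scan matching: distance to a hypothetical true pose.
true_pose = np.array([1.0, -2.0, 0.3])
best = pso(lambda p: float(np.sum((p - true_pose) ** 2)),
           bounds=[(-5, 5), (-5, 5), (-np.pi, np.pi)])
```

      The `w`, `c1`, `c2` coefficients are the trade-off knobs between local and global search mentioned above; larger inertia `w` keeps the swarm exploratory, while larger `c1`/`c2` pull it towards refinement.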

      Regarding the implementation of probability-based fitness functions, the results show that when this problem is approached as a Scan Matching solution, probability profiles do not yield as drastic an improvement as in occupancy maps. The difficulty of establishing a spatial relation between each point of the local laser scan and a point in the global map prevents the weighted divergence-based fitness function from exploiting its capabilities. Based on the experiments in Point Cloud environments, the probabilistic approach still shows some advantage over the L2 norm in positioning accuracy in the presence of unmodeled obstacles. However, as a general conclusion, the quadratic function is more suitable for this type of representation when no spatial relation is achieved, showing a more robust performance with a higher success rate.

      The main drawbacks of these methods are the computational cost and the initialization parameters. The long runtimes are caused by the large size of the search space, which requires a significant population and thus a high number of evaluations to converge to an optimum solution. As shown in the experimental tests, subsequent tracking performance alleviates this situation: the search space and the required population can be significantly reduced, and accurate localization can be obtained in reasonable times. However, stochastic algorithms remain challenging to apply online. Population initialization and adjustment are important factors in the computational efficiency of these methods; the empirical tests performed prove that an increase in population does not necessarily improve the performance of the different solutions.

