Model-based reinforcement learning for approximate optimal regulation

  • Affiliation: University of Florida, United States
  • Published in: Automatica: A Journal of IFAC, the International Federation of Automatic Control, ISSN 0005-1098, Vol. 64, 2016, pp. 94-104
  • Language: English
  • Full text not available
  • Abstract
    • Reinforcement learning (RL)-based online approximate optimal control methods applied to deterministic systems typically require a restrictive persistence of excitation (PE) condition for convergence. This paper develops a concurrent learning (CL)-based implementation of model-based RL to solve approximate optimal regulation problems online under a PE-like rank condition. The development is based on the observation that, given a model of the system, RL can be implemented by evaluating the Bellman error at any number of desired points in the state space. In this work, a parametric system model is considered, and a CL-based parameter identifier is developed to compensate for uncertainty in the parameters. Uniformly ultimately bounded regulation of the system states to a neighborhood of the origin and convergence of the developed policy to a neighborhood of the optimal policy are established using a Lyapunov-based analysis. Simulation results demonstrate the performance of the developed controller.
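
The observation at the heart of the abstract lends itself to a short prototype. The sketch below is a minimal illustration, not the authors' implementation: it assumes a known model of a toy two-state linear system, a quadratic basis for the value function V(x) = W^T phi(x), and plain gradient descent on the squared Bellman error in place of the paper's CL-based least-squares and Lyapunov-derived update laws. Every name and gain in it (f, g, phi, Q, R, the grid of points, the learning rate) is an assumption chosen for illustration.

    # A minimal sketch of Bellman-error extrapolation, assuming a known model
    # x_dot = f(x) + g(x) u and a value function approximation V(x) = W^T phi(x).
    # With the model in hand, the Bellman error can be evaluated at arbitrary
    # user-chosen points, so no exploratory (PE) trajectory is required.
    # All dynamics, bases, and gains below are illustrative assumptions.
    import numpy as np

    def f(x):  # assumed drift dynamics of a toy two-state linear system
        return np.array([-x[0] + x[1], -0.5 * x[0] - 0.5 * x[1]])

    def g(x):  # assumed control-effectiveness vector
        return np.array([0.0, 1.0])

    def phi(x):  # quadratic basis so that V(x) = W^T phi(x)
        return np.array([x[0] ** 2, x[0] * x[1], x[1] ** 2])

    def grad_phi(x):  # Jacobian of phi with respect to x, shape (3, 2)
        return np.array([[2 * x[0], 0.0],
                         [x[1], x[0]],
                         [0.0, 2 * x[1]]])

    Q = np.eye(2)             # state cost weight
    R = np.array([[1.0]])     # control cost weight (scalar input)
    R_inv = np.linalg.inv(R)

    def policy(x, W):
        # u = -(1/2) R^{-1} g(x)^T grad V(x), the usual HJB-minimizing control
        grad_V = grad_phi(x).T @ W
        return -0.5 * R_inv @ np.array([g(x) @ grad_V])

    def bellman_error(x, W):
        # delta = grad V(x) . x_dot + x^T Q x + u^T R u; zero for the true V
        u = policy(x, W)
        x_dot = f(x) + g(x) * u[0]
        return (grad_phi(x).T @ W) @ x_dot + x @ Q @ x + u @ R @ u

    # The "desired points" of the abstract: a fixed grid of extrapolation points.
    points = [np.array([a, b]) for a in np.linspace(-1.0, 1.0, 5)
                               for b in np.linspace(-1.0, 1.0, 5)]

    # Plain gradient descent on the mean squared Bellman error (a simple
    # stand-in for the paper's CL-based, Lyapunov-derived update laws).
    W, lr, eps = np.zeros(3), 1e-2, 1e-6
    for _ in range(2000):
        grad = np.zeros(3)
        for x in points:
            delta = bellman_error(x, W)
            for i in range(3):  # finite-difference gradient, kept simple
                Wp = W.copy()
                Wp[i] += eps
                grad[i] += 2.0 * delta * (bellman_error(x, Wp) - delta) / eps
        W -= lr * grad / len(points)

    print("learned value-function weights:", W)

Because the model is available, the error can be sampled on any fixed grid rather than along an excited trajectory; loosely, the grid only needs to be rich enough that the sampled regressors span the weight space, which is the role of the PE-like rank condition the abstract mentions.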

