Abstract of Data-based reinforcement learning approximate optimal control for an uncertain nonlinear system with control effectiveness faults

Patryk Deptula, Zachary I. Bell, Emily A. Doucette, J. Willard Curtis, W.E. Dixon

  • An infinite-horizon approximate optimal control problem is developed for a system with unknown drift parameters and control effectiveness faults. A data-based filtered parameter estimator with a novel dynamic gain structure is developed to simultaneously estimate the unknown drift dynamics and the control effectiveness fault. A local state-following approximate dynamic programming method is used to approximate the unknown optimal value function for the uncertain system. Under a relaxed persistence of excitation condition, a Lyapunov-based stability analysis shows exponential convergence of the parameter estimates to a residual error and uniformly ultimately bounded convergence of the closed-loop system. Simulation results are presented that demonstrate the effectiveness of the developed method. (An illustrative sketch of a data-based filtered estimator follows below.)

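To make the estimation idea concrete, below is a minimal, hedged sketch of a generic filtered-regression parameter estimator for a system with unknown drift parameters and an unknown control-effectiveness gain. It is not the paper's estimator (in particular, it uses a fixed adaptation gain rather than the novel dynamic gain structure, and the plant, regressor, filter constant `tau`, and gain `Gamma` are invented for illustration); it only shows how low-pass filtering both sides of the dynamics lets one estimate drift and effectiveness parameters from measurable data without state-derivative measurements.

import numpy as np

# Illustrative only: assumed scalar plant dx/dt = theta1*x + theta2*sin(x) + lam*u,
# with unknown drift parameters theta1, theta2 and control effectiveness lam.
# Writing dx/dt = W(x,u) @ p with p = [theta1, theta2, lam] and filtering both
# sides with 1/(tau*s + 1) gives the measurable identity
#   (x - x_f)/tau ~= W_f @ p   (after filter transients decay),
# which drives a gradient update of the estimate p_hat.

p_true = np.array([-1.0, 0.5, 0.7])          # [theta1, theta2, lam] (unknown to estimator)

def regressor(x, u):
    """Regressor W(x, u) such that dx/dt = W @ p_true."""
    return np.array([x, np.sin(x), u])

dt, tau, T = 1e-3, 0.1, 20.0                 # Euler step, filter constant, horizon
Gamma = 50.0 * np.eye(3)                     # adaptation gain (hand-tuned for this sketch)

x = 1.0                                      # plant state
x_f = 0.0                                    # filtered state
W_f = np.zeros(3)                            # filtered regressor
p_hat = np.zeros(3)                          # parameter estimate

for k in range(int(T / dt)):
    t = k * dt
    u = np.sin(2.0 * t) + 0.5 * np.cos(5.0 * t)   # sufficiently exciting input

    W = regressor(x, u)
    x_dot = W @ p_true                       # true plant dynamics (simulation only)

    # Measurable filtered prediction error drives the estimate.
    y_f = (x - x_f) / tau                    # equals W_f @ p_true up to a decaying transient
    p_hat = p_hat + dt * (Gamma @ (W_f * (y_f - W_f @ p_hat)))

    # Euler integration of plant and of the state/regressor filters.
    x_f_dot = y_f
    W_f_dot = (W - W_f) / tau
    x += dt * x_dot
    x_f += dt * x_f_dot
    W_f += dt * W_f_dot

print("true parameters     :", p_true)
print("estimated parameters:", np.round(p_hat, 3))

The design point illustrated is that filtering converts a relation involving the unmeasurable derivative dx/dt into one between measurable filtered signals, so recorded data can be used for parameter estimation; the paper's contribution additionally replaces the fixed gain with a dynamic gain structure and only requires a relaxed persistence of excitation condition.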
