Optimal distributed stochastic mirror descent for strongly convex optimization

  • Authors: Deming Yuan, Yiguang Hong, Daniel W.C. Ho, Guoping Jiang
  • Published in: Automatica: A journal of IFAC, the International Federation of Automatic Control, ISSN 0005-1098, Vol. 90, 2018, pp. 196-203
  • Language: English
  • Full text not available
  • Abstract
    • In this paper we consider convergence rate problems for stochastic strongly convex optimization, in the non-Euclidean sense, over a constraint set and a time-varying multi-agent network. We propose two efficient non-Euclidean stochastic subgradient descent algorithms that use the Bregman divergence as the distance-measuring function, rather than the Euclidean distance employed by standard distributed stochastic projected subgradient algorithms. For distributed optimization of non-smooth, strongly convex functions for which only stochastic subgradients are available, the first algorithm recovers the best previously known rate of O(ln(T)/T) (where T is the total number of iterations). The second algorithm is an epoch variant of the first that attains the optimal convergence rate of O(1/T), matching that of the best previously known centralized stochastic subgradient algorithm. Finally, we report simulation results that illustrate the proposed algorithms. (A minimal sketch of the underlying mirror descent update follows below.)
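
To make the non-Euclidean update concrete, here is a minimal single-agent sketch of the core stochastic mirror descent step that such algorithms build on; it is not the authors' distributed algorithm. It assumes the negative-entropy distance-generating function on the probability simplex (whose Bregman divergence is the KL divergence) and the step size schedule eta_t = 1/(mu*t) standard for mu-strongly convex objectives; the oracle noisy_grad and the target vector are hypothetical toy choices.

import numpy as np

def md_step(x, g, eta):
    # One mirror descent step on the probability simplex with the
    # negative-entropy mirror map: the Bregman divergence is the KL
    # divergence and the update is the exponentiated-gradient rule.
    w = x * np.exp(-eta * g)
    return w / w.sum()              # Bregman projection back onto the simplex

def stochastic_mirror_descent(grad_oracle, dim, T, mu=1.0, seed=0):
    # Run T stochastic mirror descent steps with eta_t = 1/(mu*t), the
    # schedule used for mu-strongly convex objectives, and return the
    # running average of the iterates.
    rng = np.random.default_rng(seed)
    x = np.full(dim, 1.0 / dim)     # start at the simplex center
    x_avg = np.zeros(dim)
    for t in range(1, T + 1):
        g = grad_oracle(x, rng)     # noisy subgradient at the current iterate
        x = md_step(x, g, eta=1.0 / (mu * t))
        x_avg += (x - x_avg) / t    # incremental running average
    return x_avg

# Hypothetical toy objective: E[f(x)] = 0.5 * ||x - target||^2 over the
# simplex (1-strongly convex), with additive Gaussian gradient noise.
target = np.array([0.7, 0.2, 0.1])
noisy_grad = lambda x, rng: (x - target) + 0.05 * rng.standard_normal(x.size)
print(stochastic_mirror_descent(noisy_grad, dim=3, T=5000))  # approaches target

The paper's second algorithm is an epoch scheme; in epoch methods of this kind, a base run like the one above is typically restarted from its averaged iterate with a rescaled step size in each epoch, which is how the ln(T) factor in O(ln(T)/T) is removed to reach the optimal O(1/T).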

