A CONTINUOUS-TIME APPROACH TO ONLINE OPTIMIZATION

Journal of Dynamics and Games (2017)

Abstract
We consider a family of mirror descent strategies for online optimization in continuous time and we show that they lead to no regret. From a more traditional, discrete-time viewpoint, this continuous-time approach allows us to derive the no-regret properties of a large class of discrete-time algorithms, including as special cases the exponential weights algorithm, online mirror descent, smooth fictitious play and vanishingly smooth fictitious play. In so doing, we obtain a unified view of many classical regret bounds, and we show that they can be decomposed into a term stemming from continuous-time considerations and a term which measures the disparity between discrete and continuous time. This generalizes the continuous-time analysis of the exponential weights algorithm from [29]. As a result, we obtain a general class of infinite-horizon learning strategies that guarantee an O(n^{-1/2}) regret bound without having to resort to a doubling trick.
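To make the abstract's claims concrete, here is a minimal sketch of the exponential weights algorithm, one of the special cases the paper covers. The loss sequence, horizon, and learning rate below are hypothetical illustrations, not taken from the paper; the step size eta = sqrt(log(d)/n) is the standard tuning that yields the O(n^{-1/2}) average-regret rate mentioned in the abstract.

```python
import numpy as np

def exponential_weights(losses, eta):
    """Run the exponential weights algorithm (mirror descent with
    an entropic regularizer) on a fixed sequence of loss vectors.

    losses: (n, d) array of per-action losses in [0, 1].
    eta: learning rate; eta ~ sqrt(log(d) / n) gives O(n^{-1/2})
         average regret.
    Returns the cumulative regret against the best fixed action.
    """
    n, d = losses.shape
    cum_loss = np.zeros(d)          # running loss of each action
    learner_loss = 0.0              # learner's expected loss
    for t in range(n):
        weights = np.exp(-eta * cum_loss)
        p = weights / weights.sum() # play the logit/Gibbs distribution
        learner_loss += p @ losses[t]
        cum_loss += losses[t]
    return learner_loss - cum_loss.min()

# Hypothetical example: i.i.d. uniform losses over 3 actions.
rng = np.random.default_rng(0)
n, d = 1000, 3
losses = rng.uniform(size=(n, d))
eta = np.sqrt(np.log(d) / n)
regret = exponential_weights(losses, eta)
```

In this sketch the regret stays well below the classical O(sqrt(n log d)) worst-case bound, consistent with the n^{-1/2} average-regret rate discussed in the abstract.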
Key words
Online optimization, regret minimization, mirror descent, gradient descent, continuous time, convex optimization