Stochastic Convex Optimization Methods In Machine Learning / Mark Schmidt.
Publication details: Rio de Janeiro: IMPA, 2016.
Description: Mini Course - 8 classes
Other title: Minicurso: Stochastic Convex Optimization Methods In Machine Learning
Mini Course 3
We first review classic algorithms and complexity results for stochastic methods for convex optimization, and then turn our attention to the wide variety of exponentially convergent algorithms that have been developed in the last four years. Topics will include finite-time convergence rates of classic stochastic gradient methods, stochastic average/variance-reduced gradient methods, primal-dual methods, proximal operators, acceleration, alternating minimization, non-uniform sampling, and a discussion of parallelization and non-convex problems. Applications in the field of machine learning will be emphasized, but the principles we cover in this course are applicable to many fields.
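To give a concrete flavor of one topic named in the abstract, the following is a minimal illustrative sketch (not taken from the course materials) of an SVRG-style variance-reduced stochastic gradient method applied to a least-squares problem; the function name, step size, and problem setup are assumptions made for illustration only.

import numpy as np

# Hypothetical sketch: SVRG-style variance-reduced stochastic gradient steps
# for least squares, min_w (1/2n) ||Xw - y||^2.
def svrg_least_squares(X, y, step=0.01, n_epochs=30, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_epochs):
        # Snapshot point and its full gradient, recomputed once per epoch.
        w_tilde = w.copy()
        full_grad = X.T @ (X @ w_tilde - y) / n
        for _ in range(n):
            i = rng.integers(n)                        # uniform sampling
            gi_w = X[i] * (X[i] @ w - y[i])            # gradient of term i at w
            gi_tilde = X[i] * (X[i] @ w_tilde - y[i])  # same term at the snapshot
            # Variance-reduced update: unbiased estimate whose variance
            # shrinks as w approaches the snapshot / solution.
            w -= step * (gi_w - gi_tilde + full_grad)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 5))
    w_true = rng.standard_normal(5)
    y = X @ w_true
    w_hat = svrg_least_squares(X, y)
    print(np.linalg.norm(w_hat - w_true))  # should be close to zero

Unlike classic stochastic gradient descent, which requires a decaying step size and converges sublinearly, this kind of variance-reduced update admits a constant step size and converges linearly (exponentially fast) on strongly convex finite-sum problems, which is the class of methods the abstract refers to.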