
    Francis Bach


    "Large-scale convex optimization for machine learning"

    Abstract

    Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data, where there are many observations ("large n") and each of these is large ("large p"). In this setting, online algorithms, which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. In this talk, I will present several recent results, showing that in the ideal infinite-data setting, online learning algorithms based on stochastic approximation should be preferred, but that in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors, such as a linear convergence rate with an iteration cost similar to stochastic gradient descent. (Joint work with Nicolas Le Roux, Eric Moulines, and Mark Schmidt.)
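
    The abstract does not spell out which combination of batch and online ideas is meant. One well-known way to obtain SGD-like per-iteration cost while using information from the whole dataset is to keep a memory of the most recent gradient of every observation and step along their average (a gradient-averaging scheme in the style of SAG). The sketch below, in Python with NumPy, contrasts plain stochastic gradient descent with such an averaged-gradient update on a synthetic least-squares problem; the data sizes, step sizes, and variable names are illustrative assumptions, not details taken from the talk.

    import numpy as np

    # Illustrative sketch only: plain SGD vs. a SAG-style averaged-gradient
    # update on a synthetic least-squares problem.
    rng = np.random.default_rng(0)
    n, p = 1000, 20                                  # n observations, dimension p
    X = rng.standard_normal((n, p))
    w_true = rng.standard_normal(p)
    y = X @ w_true + 0.1 * rng.standard_normal(n)

    L_max = np.max(np.sum(X ** 2, axis=1))           # per-sample gradient Lipschitz bound

    def grad_i(w, i):
        """Gradient of the i-th loss term 0.5 * (x_i^T w - y_i)^2."""
        return (X[i] @ w - y[i]) * X[i]

    # Online / stochastic gradient descent: one observation per iteration,
    # decreasing step size.
    w_sgd = np.zeros(p)
    for t in range(20 * n):
        i = rng.integers(n)
        w_sgd -= grad_i(w_sgd, i) / (L_max + t)

    # SAG-style update: remember the last gradient seen for each observation
    # and step along their average.  The per-iteration cost stays comparable
    # to SGD (one fresh gradient per step), yet the search direction uses
    # "batch" information accumulated over past iterations.
    w_sag = np.zeros(p)
    grad_memory = np.zeros((n, p))
    grad_sum = np.zeros(p)
    for t in range(20 * n):
        i = rng.integers(n)
        g_new = grad_i(w_sag, i)
        grad_sum += g_new - grad_memory[i]
        grad_memory[i] = g_new
        w_sag -= (grad_sum / n) / (16.0 * L_max)     # conservative constant step

    print("SGD distance to w_true:      ", np.linalg.norm(w_sgd - w_true))
    print("SAG-style distance to w_true:", np.linalg.norm(w_sag - w_true))

    Whether this particular scheme is the one analyzed in the talk is not stated in the abstract; it is included only to make the "batch plus online" idea concrete.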



