Lower Bounds for Parallel and Randomized Convex Optimization
Recently there has been a surge of parallelization techniques for speeding up optimization algorithms, particularly in applications to statistical learning and structured linear programs. Motivated by these developments, we seek theoretical explanations for the provable improvements (or lack thereof) in performance obtained by parallelizing optimization algorithms. In 1994, Nemirovski proved that for low-dimensional optimization problems only a very limited improvement can be obtained by parallelization, and he further conjectured that no acceleration should be achievable by these means. In this talk, I will present new results showing that in high-dimensional settings no acceleration can be obtained by parallelization, providing strong evidence for Nemirovski's conjecture.
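For context, results of this kind are usually stated in a parallel local-oracle model. The following LaTeX sketch of such a model and of the round-complexity question is an illustrative assumption on my part, not a statement taken from the abstract or the paper; the specific oracle, constraint set, and accuracy notation are placeholders.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Illustrative sketch (assumption): a K-parallel subgradient-oracle model
% in which lower bounds of this type are commonly phrased.
In each round $t = 1, \dots, T$, the algorithm submits a batch of $K$ query
points $x_t^{(1)}, \dots, x_t^{(K)}$ and receives, simultaneously, the values
and subgradients $\bigl(f(x_t^{(i)}),\, g_t^{(i)} \in \partial f(x_t^{(i)})\bigr)$
for all $i$. Sequential methods correspond to $K = 1$. The question is whether
allowing $K \gg 1$ (say, $K = \mathrm{poly}(d)$) can reduce the number of rounds
$T$ needed to find $\hat{x}$ with
\[
  f(\hat{x}) - \min_{x \in \mathcal{X}} f(x) \le \varepsilon
\]
below the round complexity of the best sequential ($K = 1$) method.
\end{document}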
This is joint work with Jelena Diakonikolas (UC Berkeley) and appeared at COLT 2019.