Abstract

We describe an asynchronous parallel stochastic proximal coordinate descent algorithm for minimizing a composite objective function, which consists of a smooth convex function added to a separable convex function. In contrast to previous analyses, our model of asynchronous computation accounts for the fact that components of the unknown vector may be written by some cores simultaneously with being read by others. Despite the complications arising from this possibility, the method achieves a linear convergence rate on functions that satisfy an optimal strong convexity property and a sublinear rate (1/k) on general convex functions. Near-linear speedup on a multicore system can be expected if the number of processors is O(n^{1/4}). We describe results from implementation on 10 cores of a multicore processor.
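To make the setting concrete, the following is a minimal Python sketch (not the paper's implementation) of asynchronous proximal coordinate descent applied to a hypothetical lasso problem, min_x 0.5*||Ax − b||^2 + λ||x||_1, whose l1 term is a separable convex function with a closed-form proximal operator. The names `async_prox_cd` and `soft_threshold`, the problem data, and the step size are illustrative assumptions; note also that CPython's global interpreter lock means these threads only emulate, rather than fully realize, multicore parallelism, while still exhibiting the lock-free, possibly inconsistent reads and writes that the analysis models.

```python
import threading
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*|.|: the coordinate-wise prox of the l1 penalty."""
    return np.sign(v) * max(abs(v) - t, 0.0)

def async_prox_cd(A, b, lam, gamma, n_threads=4, iters_per_thread=5000):
    """Illustrative sketch: asynchronous proximal coordinate descent for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Threads read and write the shared iterate x without any locking, so a
    thread may read coordinates while another thread is writing them --
    the inconsistent-read scenario the paper's analysis accounts for."""
    n = A.shape[1]
    x = np.zeros(n)  # shared iterate, updated without locks
    seeds = np.random.SeedSequence(0).spawn(n_threads)

    def worker(seed):
        rng = np.random.default_rng(seed)
        for _ in range(iters_per_thread):
            i = rng.integers(n)
            # Gradient coordinate computed from a possibly stale snapshot of x.
            grad_i = A[:, i] @ (A @ x - b)
            # Proximal (soft-threshold) step on coordinate i.
            x[i] = soft_threshold(x[i] - gamma * grad_i, gamma * lam)

    threads = [threading.Thread(target=worker, args=(s,)) for s in seeds]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    A = rng.standard_normal((200, 50))
    x_true = np.zeros(50)
    x_true[:5] = rng.standard_normal(5)
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    # Conservative step size 1/L, with L the spectral norm squared of A.
    x_hat = async_prox_cd(A, b, lam=0.1, gamma=1.0 / np.linalg.norm(A, 2) ** 2)
    obj = 0.5 * np.linalg.norm(A @ x_hat - b) ** 2 + 0.1 * np.abs(x_hat).sum()
    print("objective:", obj)
```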

  • Publication date: 2015