Abstract: Subgradient methods provide an effective means of solving large-scale convex optimization problems within the Lagrangian duality framework. For networking applications, these methods have recently been used with great success in developing decentralized cross-layer resource allocation mechanisms. Despite the widespread use of subgradient methods to solve (nondifferentiable) dual problems, the existing literature offers limited results on the recovery of approximate primal solutions and on convergence rate analysis in both the primal and the dual spaces.
In this talk, we first present dual subgradient methods that use averaging schemes to generate approximate primal optimal solutions for general convex constrained optimization problems. We provide estimates on the convergence rate of the primal sequences in terms of the amount of feasibility violation and primal function values. The estimates are given per iteration, thus providing practical stopping criteria.
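To make the averaging idea concrete, here is a minimal sketch on an assumed toy problem (minimize x1² + x2² subject to x1 + x2 ≥ 1, whose optimum is x* = (0.5, 0.5)); the problem, step size, and iteration count are illustrative choices, not taken from the talk. The dual variable is updated by a projected subgradient step, and the approximate primal solution is the running (ergodic) average of the Lagrangian minimizers:

```python
import numpy as np

# Illustrative toy problem (an assumption for this sketch):
#   minimize  x1^2 + x2^2   subject to  g(x) = 1 - x1 - x2 <= 0
# Optimal solution: x* = (0.5, 0.5), f* = 0.5, dual optimum mu* = 1.

def lagrangian_minimizer(mu):
    # argmin_x  x1^2 + x2^2 + mu * (1 - x1 - x2)  =>  each coordinate mu/2
    return np.array([mu / 2.0, mu / 2.0])

def constraint(x):
    # g(x) <= 0 means feasible; g(x(mu)) is a subgradient of the dual function
    return 1.0 - x[0] - x[1]

mu = 0.0             # dual variable, kept nonnegative by projection
alpha = 0.1          # constant step size (illustrative)
x_avg = np.zeros(2)  # running average of primal iterates
K = 1000
for k in range(1, K + 1):
    x_k = lagrangian_minimizer(mu)               # primal minimizer of the Lagrangian
    mu = max(0.0, mu + alpha * constraint(x_k))  # projected dual subgradient step
    x_avg += (x_k - x_avg) / k                   # ergodic average of primal iterates

print(x_avg)                        # close to [0.5, 0.5]
print(max(0.0, constraint(x_avg)))  # feasibility violation, small
```

The individual primal iterates x_k need not approach feasibility, but the averaged sequence does; the per-iteration estimates mentioned above bound exactly the two quantities printed here, the primal function value gap and the amount of constraint violation.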
We then consider problems where the subgradient of the dual function cannot be computed efficiently, thus impeding the use of dual subgradient methods. For these problems, we introduce new primal-dual subgradient methods aimed at computing the saddle points of the Lagrangian function, and we provide convergence rate estimates for the constructed primal solutions.
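A sketch of the primal-dual idea on the same assumed toy problem (again an illustration, not the talk's specific scheme): instead of minimizing the Lagrangian exactly in x at each step, both variables are updated by subgradient steps, descent in the primal variable and projected ascent in the dual, with the primal sequence averaged as before:

```python
import numpy as np

# Same illustrative toy problem as assumed above:
#   minimize  x1^2 + x2^2   subject to  1 - x1 - x2 <= 0
# Saddle point of the Lagrangian: x* = (0.5, 0.5), mu* = 1.

def grad_x(x, mu):
    # gradient of L(x, mu) = ||x||^2 + mu * (1 - x1 - x2) with respect to x
    return 2.0 * x - mu * np.ones(2)

def constraint(x):
    # subgradient of L(x, mu) with respect to mu
    return 1.0 - x[0] - x[1]

x = np.zeros(2)
mu = 0.0
alpha = 0.05         # constant step size (illustrative)
x_avg = np.zeros(2)  # averaged primal sequence
K = 2000
for k in range(1, K + 1):
    gx = grad_x(x, mu)
    gmu = constraint(x)
    x = x - alpha * gx               # primal subgradient (descent) step
    mu = max(0.0, mu + alpha * gmu)  # projected dual subgradient (ascent) step
    x_avg += (x - x_avg) / k         # running average of primal iterates

print(x_avg)  # close to [0.5, 0.5]
```

Note that no inner minimization is solved: each iteration costs only one (sub)gradient evaluation in each variable, which is what makes this approach attractive precisely when the dual subgradient, i.e., the constraint value at an exact Lagrangian minimizer, is expensive to compute.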