Deflected Conditional Approximate Subgradient Methods

Subgradient methods for constrained nondifferentiable problems benefit from projection of the search direction onto the (tangent cone of the) feasible set prior to computation of the stepsize, that is, from the use of conditional subgradient techniques. In general, subgradient methods also benefit substantially from deflection, i.e., from defining the search direction as a (convex) combination of the previous direction and the current subgradient. However, combining the two techniques is not straightforward, especially when only an inexact oracle is available, which can compute only approximate function values and subgradients. We present a convergence analysis of several different variants, both conceptual and implementable, of approximate conditional deflected subgradient methods. Our analysis extends the available results in the literature by covering the main stepsize rules proposed so far while allowing deflection in a more flexible way; to accommodate (diminishing/square-summable) rules where the stepsize is tightly controlled a priori, we propose a new class of deflection-restricted approaches in which it is the deflection parameter, rather than the stepsize, that is dynamically adjusted using the "target value" of the optimization sequence. For both Polyak-type and diminishing/square-summable stepsizes, we propose a "correction" of the standard formula showing that, in the inexact case, knowledge about the error incurred by the oracle (which is available in several practical applications) can be exploited to strengthen the convergence properties of the method.
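To make the interplay between deflection and conditional projection concrete, the following is a minimal Python sketch of one deflect-then-project variant with an exact oracle and a Polyak-type stepsize driven by a target value; the paper analyzes several orderings of these operations, inexact oracles, and deflection-restricted variants, none of which are captured here. The test function f(x) = ||x - c||_1 over the nonnegative orthant, the fixed deflection parameter alpha, and all helper names are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def oracle(x, c):
    # Exact oracle for f(x) = ||x - c||_1: value and one subgradient.
    # (The paper allows this oracle to be approximate.)
    r = x - c
    return np.abs(r).sum(), np.sign(r)

def project_tangent(d, x, tol=1e-12):
    # Conditional step for X = {x >= 0}: the iterate moves along -d,
    # so at active coordinates (x_i = 0) the components of d must be
    # nonpositive to keep -d in the tangent cone of X at x.
    d = d.copy()
    active = x <= tol
    d[active] = np.minimum(d[active], 0.0)
    return d

def deflected_conditional_subgradient(c, f_target, alpha=0.7,
                                      beta=1.0, iters=1000):
    x = np.zeros_like(c)                 # feasible starting point
    d_prev = np.zeros_like(x)
    f_best, x_best = np.inf, x.copy()
    for _ in range(iters):
        fx, g = oracle(x, c)
        if fx < f_best:
            f_best, x_best = fx, x.copy()
        if fx - f_target <= 1e-12:       # target reached
            break
        d = alpha * g + (1.0 - alpha) * d_prev  # deflection (convex comb.)
        d = project_tangent(d, x)               # conditional projection
        d_prev = d
        sq = float(d @ d)
        if sq <= 1e-16:                  # projected direction vanished
            break
        nu = beta * (fx - f_target) / sq # Polyak-type stepsize, target value
        x = np.maximum(x - nu * d, 0.0)  # step, then projection onto X
    return x_best, f_best

rng = np.random.default_rng(0)
c = rng.normal(size=20)
f_star = -np.minimum(c, 0.0).sum()       # optimum at x* = max(c, 0)
x_best, f_best = deflected_conditional_subgradient(c, f_star)
print(f"gap to optimum: {f_best - f_star:.2e}")
```

Here the target value is set to the known optimal value, so the Polyak-type stepsize is well defined; in practice the target must be estimated, which is one motivation for the dynamically adjusted rules studied in the paper.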