Wolfe conditions

In the unconstrained minimization problem, the Wolfe conditions are a set of inequalities for performing inexact line search, especially in quasi-Newton methods, first published by Philip Wolfe in 1969.[1][2]

In these methods the idea is to find

$$\min_{\mathbf{x}} f(\mathbf{x})$$

for some smooth $f\colon \mathbb{R}^{n}\to \mathbb{R}$. Each step often involves approximately solving the subproblem

$$\min_{\alpha} f(\mathbf{x}_{k}+\alpha \mathbf{p}_{k}),$$

where $\mathbf{x}_{k}$ is the current best guess, $\mathbf{p}_{k}\in \mathbb{R}^{n}$ is a search direction, and $\alpha \in \mathbb{R}$ is the step length.

The inexact line searches provide an efficient way of computing an acceptable step length $\alpha$ that reduces the objective function 'sufficiently', rather than minimizing the objective function over $\alpha\in\mathbb{R}^{+}$ exactly. A line search algorithm can use the Wolfe conditions as a requirement for any guessed $\alpha$, before finding a new search direction $\mathbf{p}_{k}$.
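As a purely illustrative sketch (not from the source), the following Python code shows where such an inexact line search sits inside a generic descent iteration; the helper names `choose_direction` and `line_search` are hypothetical placeholders for, e.g., a quasi-Newton direction and a Wolfe-condition line search.

```python
import numpy as np

def descent_loop(f, grad, x0, choose_direction, line_search, tol=1e-8, max_iter=1000):
    """Generic iteration x_{k+1} = x_k + alpha_k p_k with an inexact line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:          # stop once the gradient is (nearly) zero
            break
        p = choose_direction(x, g)           # descent direction p_k, e.g. -g
        alpha = line_search(f, grad, x, p)   # any alpha accepted by the Wolfe conditions
        x = x + alpha * p
    return x
```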

Armijo rule and curvature

A step length $\alpha_{k}$ is said to satisfy the Wolfe conditions, restricted to the direction $\mathbf{p}_{k}$, if the following two inequalities hold:

  1. $f(\mathbf{x}_{k}+\alpha_{k}\mathbf{p}_{k})\leq f(\mathbf{x}_{k})+c_{1}\alpha_{k}\,\mathbf{p}_{k}^{\mathrm{T}}\nabla f(\mathbf{x}_{k}),$
  2. $-\mathbf{p}_{k}^{\mathrm{T}}\nabla f(\mathbf{x}_{k}+\alpha_{k}\mathbf{p}_{k})\leq -c_{2}\,\mathbf{p}_{k}^{\mathrm{T}}\nabla f(\mathbf{x}_{k}),$

with $0<c_{1}<c_{2}<1$. (In examining condition (ii), recall that to ensure that $\mathbf{p}_{k}$ is a descent direction, we have $\mathbf{p}_{k}^{\mathrm{T}}\nabla f(\mathbf{x}_{k})<0$, as in the case of gradient descent, where $\mathbf{p}_{k}=-\nabla f(\mathbf{x}_{k})$, or Newton–Raphson, where $\mathbf{p}_{k}=-\mathbf{H}^{-1}\nabla f(\mathbf{x}_{k})$ with $\mathbf{H}$ positive definite.)

$c_{1}$ is usually chosen to be quite small while $c_{2}$ is much larger; Nocedal and Wright give example values of $c_{1}=10^{-4}$ and $c_{2}=0.9$ for Newton or quasi-Newton methods and $c_{2}=0.1$ for the nonlinear conjugate gradient method.[3] Inequality i) is known as the Armijo rule[4] and ii) as the curvature condition; i) ensures that the step length $\alpha_{k}$ decreases $f$ 'sufficiently', and ii) ensures that the slope has been reduced sufficiently. Conditions i) and ii) can be interpreted as respectively providing an upper and lower bound on the admissible step length values.
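As an illustration, a minimal Python check of both inequalities might look like the sketch below; the function name, signature, and the small quadratic example are not from the source and are only assumptions, with the default constants taken from the Nocedal–Wright values quoted above.

```python
import numpy as np

def wolfe_conditions_hold(f, grad, x, p, alpha, c1=1e-4, c2=0.9):
    """Check the (weak) Wolfe conditions for a trial step length alpha.

    i)  Armijo / sufficient decrease: f(x + alpha*p) <= f(x) + c1*alpha*p^T grad f(x)
    ii) curvature:                    p^T grad f(x + alpha*p) >= c2 * p^T grad f(x)
    Assumes 0 < c1 < c2 < 1 and that p is a descent direction (p^T grad f(x) < 0).
    """
    slope0 = np.dot(p, grad(x))        # p_k^T grad f(x_k), negative for a descent direction
    x_new = x + alpha * p
    armijo = f(x_new) <= f(x) + c1 * alpha * slope0
    curvature = np.dot(p, grad(x_new)) >= c2 * slope0
    return armijo and curvature

# Hypothetical example: f(x) = ||x||^2 at x = (1, 1) with the steepest-descent direction.
f = lambda x: float(np.dot(x, x))
grad = lambda x: 2.0 * x
x = np.array([1.0, 1.0])
p = -grad(x)
print(wolfe_conditions_hold(f, grad, x, p, alpha=0.5))   # True: the exact minimizer along p
print(wolfe_conditions_hold(f, grad, x, p, alpha=1e-9))  # False: step too short, curvature test fails
```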

Strong Wolfe condition on curvature

Denote a univariate function $\varphi$ restricted to the direction $\mathbf{p}_{k}$ as $\varphi(\alpha)=f(\mathbf{x}_{k}+\alpha\mathbf{p}_{k})$. The Wolfe conditions can result in a value for the step length that is not close to a minimizer of $\varphi$. If we modify the curvature condition to the following,

  3. $\bigl|\mathbf{p}_{k}^{\mathrm{T}}\nabla f(\mathbf{x}_{k}+\alpha_{k}\mathbf{p}_{k})\bigr|\leq c_{2}\bigl|\mathbf{p}_{k}^{\mathrm{T}}\nabla f(\mathbf{x}_{k})\bigr|,$

then i) and iii) together form the so-called strong Wolfe conditions, and force $\alpha_{k}$ to lie close to a critical point of $\varphi$.
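A corresponding check for the strong Wolfe conditions only replaces the one-sided curvature test with its absolute-value form; as before, this is a hedged sketch and the helper name is an assumption.

```python
import numpy as np

def strong_wolfe_conditions_hold(f, grad, x, p, alpha, c1=1e-4, c2=0.9):
    """Check conditions i) and iii): Armijo plus |phi'(alpha_k)| <= c2 * |phi'(0)|."""
    slope0 = np.dot(p, grad(x))          # phi'(0) = p_k^T grad f(x_k)
    x_new = x + alpha * p
    armijo = f(x_new) <= f(x) + c1 * alpha * slope0
    strong_curvature = abs(np.dot(p, grad(x_new))) <= c2 * abs(slope0)
    return armijo and strong_curvature
```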

Rationale

The principal reason for imposing the Wolfe conditions in an optimization algorithm where $\mathbf{x}_{k+1}=\mathbf{x}_{k}+\alpha\mathbf{p}_{k}$ is to ensure convergence of the gradient to zero. In particular, if the cosine of the angle between $\mathbf{p}_{k}$ and the gradient,

$$\cos\theta_{k}=\frac{\nabla f(\mathbf{x}_{k})^{\mathrm{T}}\mathbf{p}_{k}}{\|\nabla f(\mathbf{x}_{k})\|\,\|\mathbf{p}_{k}\|}$$

is bounded away from zero and the i) and ii) conditions hold, then $\nabla f(\mathbf{x}_{k})\rightarrow 0$.
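Assuming in addition that $f$ is bounded below and that $\nabla f$ is Lipschitz continuous, this convergence result follows from Zoutendijk's condition (see Nocedal and Wright[3]), which states that under conditions i) and ii)

$$\sum_{k\geq 0}\cos^{2}\theta_{k}\,\|\nabla f(\mathbf{x}_{k})\|^{2}<\infty,$$

so that if $|\cos\theta_{k}|\geq\delta>0$ for all $k$, the terms of this series can only be summable if $\|\nabla f(\mathbf{x}_{k})\|\to 0$.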

An additional motivation, in the case of a quasi-Newton method, is that if $\mathbf{p}_{k}=-B_{k}^{-1}\nabla f(\mathbf{x}_{k})$, where the matrix $B_{k}$ is updated by the BFGS or DFP formula, then if $B_{k}$ is positive definite, condition ii) implies that $B_{k+1}$ is also positive definite.
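To see this, write $\mathbf{s}_{k}=\alpha_{k}\mathbf{p}_{k}$ and $\mathbf{y}_{k}=\nabla f(\mathbf{x}_{k+1})-\nabla f(\mathbf{x}_{k})$ for the quantities entering the BFGS and DFP updates. Condition ii) gives

$$\mathbf{s}_{k}^{\mathrm{T}}\mathbf{y}_{k}=\alpha_{k}\,\mathbf{p}_{k}^{\mathrm{T}}\bigl(\nabla f(\mathbf{x}_{k+1})-\nabla f(\mathbf{x}_{k})\bigr)\geq\alpha_{k}\,(c_{2}-1)\,\mathbf{p}_{k}^{\mathrm{T}}\nabla f(\mathbf{x}_{k})>0,$$

since $\alpha_{k}>0$, $c_{2}<1$ and $\mathbf{p}_{k}^{\mathrm{T}}\nabla f(\mathbf{x}_{k})<0$; the inequality $\mathbf{s}_{k}^{\mathrm{T}}\mathbf{y}_{k}>0$ is exactly the curvature requirement under which the BFGS and DFP formulas preserve positive definiteness.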

Comments

Wolfe's conditions are more complicated than Armijo's condition, and a gradient descent algorithm based on Armijo's condition has a better theoretical guarantee than one based on Wolfe conditions (see the sections on "Upper bound for learning rates" and "Theoretical guarantee" in the Backtracking line search article).

See also

  • Backtracking line search

References

  1. ^ Wolfe, P. (1969). "Convergence Conditions for Ascent Methods". SIAM Review. 11 (2): 226–235. doi:10.1137/1011036. JSTOR 2028111.
  2. ^ Wolfe, P. (1971). "Convergence Conditions for Ascent Methods. II: Some Corrections". SIAM Review. 13 (2): 185–188. doi:10.1137/1013035. JSTOR 2028821.
  3. ^ Nocedal, Jorge; Wright, Stephen (1999). Numerical Optimization. p. 38.
  4. ^ Armijo, Larry (1966). "Minimization of functions having Lipschitz continuous first partial derivatives". Pacific J. Math. 16 (1): 1–3. doi:10.2140/pjm.1966.16.1.

Further reading

  • "Line Search Methods". Numerical Optimization. Springer Series in Operations Research and Financial Engineering. 2006. pp. 30–32. doi:10.1007/978-0-387-40065-5_3. ISBN 978-0-387-30303-1.
  • "Quasi-Newton Methods". Numerical Optimization. Springer Series in Operations Research and Financial Engineering. 2006. pp. 135–163. doi:10.1007/978-0-387-40065-5_6. ISBN 978-0-387-30303-1.