- categories: Data Science, Real Analysis, Method, Optimization, Algorithm
Definition:
The BFGS (Broyden–Fletcher–Goldfarb–Shanno) algorithm is a quasi-Newton optimization method used to minimize a differentiable scalar-valued function $f: \mathbb{R}^n \to \mathbb{R}$. It approximates the inverse of the Hessian matrix iteratively, avoiding the computational cost of explicitly calculating or inverting the Hessian.
BFGS updates an approximation of the inverse Hessian matrix using gradient information from successive iterations. It is widely used for unconstrained optimization problems and is robust and efficient in many applications.
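The update is built around the secant (quasi-Newton) condition, which forces the new approximation to act like the inverse Hessian along the most recent step (notation matches the Key Formulas section below):

$$
H_{k+1} y_k = s_k, \qquad s_k = x_{k+1} - x_k, \quad y_k = \nabla f(x_{k+1}) - \nabla f(x_k)
$$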
Key Idea
- Avoid Explicit Hessian Computation:
  Instead of directly computing the Hessian matrix $\nabla^2 f(x_k)$, BFGS builds an approximation of its inverse, $H_k \approx [\nabla^2 f(x_k)]^{-1}$, iteratively.
- Quasi-Newton Update:
  The inverse Hessian approximation is updated using gradient differences and step vectors between successive iterations.
- Line Search:
  BFGS typically incorporates a line search to determine a suitable step size for each iteration; the usual acceptance conditions are given below.
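In practice the step size $\alpha_k$ is usually required to satisfy the Wolfe conditions, with constants $0 < c_1 < c_2 < 1$ (the common defaults $c_1 = 10^{-4}$ and $c_2 = 0.9$ are illustrative choices):

$$
f(x_k + \alpha_k p_k) \le f(x_k) + c_1 \alpha_k \nabla f(x_k)^T p_k,
\qquad
\nabla f(x_k + \alpha_k p_k)^T p_k \ge c_2 \nabla f(x_k)^T p_k
$$

The second (curvature) condition guarantees $y_k^T s_k > 0$, which keeps the updated approximation $H_{k+1}$ positive definite.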
Algorithm
- Initialization:
  - Start with an initial guess $x_0$.
  - Initialize the inverse Hessian approximation as $H_0 = I$ (identity matrix).
  - Compute the initial gradient $\nabla f(x_0)$.
- Iterative Updates:
  For $k = 0, 1, 2, \dots$:
  - Compute the search direction: $p_k = -H_k \nabla f(x_k)$.
  - Perform a line search to find a step size $\alpha_k$ that sufficiently reduces $f$, and set $x_{k+1} = x_k + \alpha_k p_k$.
  - Compute the gradient difference and step vector: $y_k = \nabla f(x_{k+1}) - \nabla f(x_k)$, $s_k = x_{k+1} - x_k$.
  - Update the inverse Hessian approximation $H_{k+1}$:
    $H_{k+1} = (I - \rho_k s_k y_k^T) H_k (I - \rho_k y_k s_k^T) + \rho_k s_k s_k^T$, where $\rho_k = 1 / (y_k^T s_k)$.
- Convergence:
  Stop when the gradient norm $\|\nabla f(x_k)\|$ falls below a predefined threshold.
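A minimal NumPy sketch of the loop above, assuming a user-supplied objective `f` and gradient `grad_f` (the function name `bfgs`, the test objective, and the simple backtracking Armijo search used in place of a full Wolfe line search are all illustrative simplifications):

```python
import numpy as np

def bfgs(f, grad_f, x0, tol=1e-6, max_iter=100):
    """Minimal BFGS sketch: minimize f starting from x0."""
    n = x0.size
    x = x0.astype(float)
    H = np.eye(n)                  # inverse Hessian approximation H_0 = I
    g = grad_f(x)                  # initial gradient
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:            # convergence test on the gradient norm
            break
        p = -H @ g                             # search direction p_k = -H_k grad f(x_k)
        # Backtracking (Armijo) line search for a step size alpha_k
        alpha, c1 = 1.0, 1e-4
        while f(x + alpha * p) > f(x) + c1 * alpha * (g @ p):
            alpha *= 0.5
        x_new = x + alpha * p
        g_new = grad_f(x_new)
        s = x_new - x                          # step vector s_k
        y = g_new - g                          # gradient difference y_k
        if y @ s > 1e-10:                      # skip update if curvature information is unusable
            rho = 1.0 / (y @ s)
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Usage on an arbitrary test function: f(x) = (x1 - 1)^2 + 2*(x2 + 0.5)^2
f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 0.5) ** 2
grad_f = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] + 0.5)])
print(bfgs(f, grad_f, np.array([0.0, 0.0])))   # approximately [1.0, -0.5]
```

The guard `y @ s > 1e-10` simply skips the update when the step provides no usable curvature; a production implementation would instead rely on a Wolfe line search to guarantee this condition.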
Key Formulas
- Search Direction:
  $p_k = -H_k \nabla f(x_k)$
- Hessian Inverse Update Rule:
  $H_{k+1} = (I - \rho_k s_k y_k^T) H_k (I - \rho_k y_k s_k^T) + \rho_k s_k s_k^T$, with $\rho_k = 1 / (y_k^T s_k)$,
  where:
  - $s_k = x_{k+1} - x_k$: Change in position.
  - $y_k = \nabla f(x_{k+1}) - \nabla f(x_k)$: Change in gradient.
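A quick numerical check of the update rule (the vectors are arbitrary illustrative values; it verifies the secant property $H_{k+1} y_k = s_k$ that the formula is constructed to satisfy):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
H = np.eye(n)                          # current inverse Hessian approximation H_k
s = rng.standard_normal(n)             # step vector s_k
y = s + 0.1 * rng.standard_normal(n)   # gradient difference y_k (chosen so y^T s > 0)

rho = 1.0 / (y @ s)
I = np.eye(n)
H_new = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)

# The updated matrix maps y_k back to s_k (secant equation).
print(np.allclose(H_new @ y, s))       # True
```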
Advantages of BFGS
- Efficiency:
  - Avoids the cost of explicitly computing and inverting the Hessian.
  - Updates the inverse Hessian approximation in $O(n^2)$ operations per iteration.
- Robustness:
  - Converges quickly for well-behaved functions.
  - Suitable for both small and medium-sized problems.
- Superlinear Convergence:
  - Converges faster than gradient descent near the minimum due to its better curvature approximation.
Limitations of BFGS
- Memory Requirements:
  - Storing and updating the inverse Hessian approximation requires $O(n^2)$ memory, which can be prohibitive for very high-dimensional problems.
- Line Search Dependency:
  - Relies on an effective line search to ensure a sufficient decrease in the objective function.
- Ill-Conditioned Problems:
  - May struggle when the Hessian is ill-conditioned, though modifications such as damped BFGS help mitigate this.
Variants
- Limited-Memory BFGS (L-BFGS):
  - Stores only the last $m$ pairs $(s_k, y_k)$ instead of the full matrix, reducing memory requirements to $O(mn)$.
  - Suitable for high-dimensional problems (e.g., training machine learning models); see the SciPy sketch after this list.
- Damped BFGS:
  - Modifies the update rule to ensure positive definiteness of $H_{k+1}$, improving stability.
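A hedged usage sketch of the limited-memory variant with SciPy (the separable test objective, its dimension, and the starting point are arbitrary illustrations):

```python
import numpy as np
from scipy.optimize import minimize

# Arbitrary high-dimensional test objective: f(x) = sum((x - 1)^2)
n = 10_000
f = lambda x: np.sum((x - 1.0) ** 2)
grad_f = lambda x: 2.0 * (x - 1.0)

x0 = np.zeros(n)
# L-BFGS-B stores only a few (s_k, y_k) pairs, so memory stays O(m * n).
res = minimize(f, x0, jac=grad_f, method="L-BFGS-B")
print(res.success, res.x[:3])   # converges toward [1. 1. 1.]
```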
Example
Problem: Minimize a differentiable objective $f(x)$ from a starting point $x_0$.
- Gradient: $\nabla f(x)$.
- Initialization:
  - Start with the initial guess $x_0$.
  - Set $H_0 = I$.
- First Iteration:
  - Compute the search direction: $p_0 = -H_0 \nabla f(x_0) = -\nabla f(x_0)$.
  - Perform a line search to find $\alpha_0$.
  - Update the position: $x_1 = x_0 + \alpha_0 p_0$.
  - Update $H_0$ to $H_1$ using $s_0 = x_1 - x_0$ and $y_0 = \nabla f(x_1) - \nabla f(x_0)$.
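As a concrete counterpart to the symbolic steps above, this sketch runs SciPy's BFGS implementation on an arbitrarily chosen quadratic; the objective and starting point are illustrative assumptions, not tied to any particular problem:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative objective: f(x) = (x1 - 3)^2 + 4*(x2 + 1)^2, minimized at (3, -1).
f = lambda x: (x[0] - 3.0) ** 2 + 4.0 * (x[1] + 1.0) ** 2
grad_f = lambda x: np.array([2.0 * (x[0] - 3.0), 8.0 * (x[1] + 1.0)])

res = minimize(f, x0=np.array([0.0, 0.0]), jac=grad_f, method="BFGS")
print(res.x)         # approximately [ 3. -1.]
print(res.hess_inv)  # final inverse Hessian approximation H_k
```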
Comparison with Other Methods
| Method | Hessian Use | Memory Cost | Convergence Speed | Use Case |
| --- | --- | --- | --- | --- |
| Gradient Descent | None | Low | Linear | Simple/large problems |
| Newton's Method | Explicit Hessian | High | Quadratic (near the optimum) | Small problems |
| BFGS | Approx. inverse Hessian | Medium | Superlinear | Medium-sized problems |
| L-BFGS | Approx. inverse Hessian | Low | Superlinear | High-dimensional problems |
Conclusion
The BFGS algorithm is a powerful and versatile tool for unconstrained optimization. It balances the accuracy of second-order methods with the efficiency of avoiding explicit Hessian computation, making it a cornerstone of optimization in machine learning and scientific computing.