A projected Lagrangian algorithm for nonlinear minimax optimization

Author: Walter Murray
Publisher:
ISBN:
Category :
Languages : en
Pages : 82

Book Description
The minimax problem is an unconstrained optimization problem whose objective function is not differentiable everywhere, and hence cannot be solved efficiently by standard techniques for unconstrained optimization. It is well known that the problem can be transformed into a nonlinearly constrained optimization problem with one extra variable, in which the objective and constraint functions are continuously differentiable. This equivalent problem has special properties which are ignored if it is solved by a general-purpose constrained optimization method. The algorithm we present exploits the special structure of the equivalent problem. A direction of search is obtained at each iteration of the algorithm by solving an equality-constrained quadratic programming problem, related to one a projected Lagrangian method might use to solve the equivalent constrained optimization problem. Special Lagrange multiplier estimates are used to form an approximation to the Hessian of the Lagrangian function, which appears in the quadratic program. Analytical Hessians, finite differencing, or quasi-Newton updating may be used to approximate this matrix. The resulting direction of search is guaranteed to be a descent direction for the minimax objective function. Under mild conditions the algorithm is locally quadratically convergent if analytical Hessians are used. (Author).
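
To make the transformation mentioned in the abstract concrete, here is a sketch in our own notation (not necessarily the report's): minimizing the pointwise maximum of m smooth functions f_1, ..., f_m is equivalent to a smooth constrained problem with one extra variable t,

\[
\min_{x \in \mathbb{R}^n} \; \max_{1 \le i \le m} f_i(x)
\quad \Longleftrightarrow \quad
\min_{x \in \mathbb{R}^n,\; t \in \mathbb{R}} \; t
\quad \text{subject to} \quad f_i(x) \le t, \quad i = 1, \dots, m,
\]

whose objective and constraints are continuously differentiable whenever the f_i are. A projected Lagrangian step for such a problem can be illustrated (again, as a generic form assumed here, not the report's exact subproblem) by an equality-constrained QP over the constraints currently treated as active,

\[
\min_{p} \; g^T p + \tfrac{1}{2}\, p^T W p
\quad \text{subject to} \quad A p = -c,
\]

where g is the gradient of the objective, W approximates the Hessian of the Lagrangian, and A and c are the Jacobian and values of the active constraints.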

Projected Lagrangian Algorithms for Nonlinear Minimax and L1 Optimization

Author: Michael Lockhart Overton
Publisher:
ISBN:
Category : Algorithms
Languages : en
Pages : 360

Book Description


A Projected Lagrangian Algorithm for Nonlinear ℓ1 Optimization

Author: Stanford University. Systems Optimization Laboratory
Publisher:
ISBN:
Category :
Languages : en
Pages : 50

Book Description


Projected Lagrangian Algorithms for Nonlinear Minimax and $\ell_1$ Optimization

Author: M. L. Overton
Publisher:
ISBN:
Category :
Languages : en
Pages :

Book Description


A Projected Lagrangian Algorithm for Nonlinear ℓ1 Optimization

Author: Walter Murray
Publisher:
ISBN:
Category :
Languages : en
Pages : 40

Book Description
The nonlinear ℓ1 problem is an unconstrained optimization problem whose objective function is not differentiable everywhere, and hence cannot be solved efficiently using standard techniques for unconstrained optimization. The problem can be transformed into a nonlinearly constrained optimization problem, but one that involves many extra variables. We show how to construct a method, based on projected Lagrangian methods for constrained optimization, which requires successively solving quadratic programs in the same number of variables as the original problem. Special Lagrange multiplier estimates are used to form an approximation to the Hessian of the Lagrangian function, which appears in the quadratic program. A special line search algorithm is used to obtain a reduction in the ℓ1 objective function at each iteration. Under mild conditions the method is locally quadratically convergent if analytical Hessians are used. (Author).
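
For comparison, a standard smooth reformulation of the ℓ1 problem (our notation; the report may use a different but equivalent form) splits each residual into nonnegative parts at the cost of 2m extra variables:

\[
\min_{x \in \mathbb{R}^n} \; \sum_{i=1}^{m} |f_i(x)|
\quad \Longleftrightarrow \quad
\min_{x,\, u,\, v} \; \sum_{i=1}^{m} (u_i + v_i)
\quad \text{subject to} \quad f_i(x) = u_i - v_i, \quad u_i \ge 0, \; v_i \ge 0,
\]

which is why the equivalent constrained problem involves many extra variables; the method described above avoids carrying them into its quadratic programming subproblems.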

Projected Lagrangian Algorithms for Nonlinear Minimax and L1 Optimization

Author: Stanford University. Computer Science Department
Publisher:
ISBN:
Category : Algorithms
Languages : en
Pages : 164

Book Description


A projected Lagrangian Algorithm and its implementation for sparse nonlinear constraints

Author: Bruce A. Murtagh
Publisher:
ISBN:
Category : Nonlinear programming
Languages : en
Pages : 55

Book Description


Research in Progress

Author:
Publisher:
ISBN:
Category : Military research
Languages : en
Pages : 160

Book Description


Minimax and Applications

Author: Ding-Zhu Du
Publisher: Springer Science & Business Media
ISBN: 1461335574
Category : Computers
Languages : en
Pages : 300

Book Description
Techniques and principles of minimax theory play a key role in many areas of research, including game theory, optimization, and computational complexity. In general, a minimax problem can be formulated as

\[
\min_{x \in X} \max_{y \in Y} f(x, y) \tag{1}
\]

where f(x, y) is a function defined on the product of the spaces X and Y. There are two basic issues regarding minimax problems: the first issue concerns the establishment of sufficient and necessary conditions for the equality

\[
\min_{x \in X} \max_{y \in Y} f(x, y) = \max_{y \in Y} \min_{x \in X} f(x, y). \tag{2}
\]

The classical minimax theorem of von Neumann is a result of this type. Duality theory in linear and convex quadratic programming interprets minimax theory in a different way. The second issue concerns the establishment of sufficient and necessary conditions for values of the variables x and y that achieve the global minimax function value

\[
f(x^*, y^*) = \min_{x \in X} \max_{y \in Y} f(x, y). \tag{3}
\]

There are two developments in minimax theory that we would like to mention.
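
A small worked instance of (2), offered here as our own illustration (not taken from the book): take X = Y = [-1, 1] and f(x, y) = x^2 - y^2. Then

\[
\min_{x \in X} \max_{y \in Y} (x^2 - y^2) = \min_{x \in X} x^2 = 0,
\qquad
\max_{y \in Y} \min_{x \in X} (x^2 - y^2) = \max_{y \in Y} (-y^2) = 0,
\]

so the equality (2) holds, and the value in (3) is attained at the saddle point (x^*, y^*) = (0, 0).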

Projected Lagrangian methods based on the trajectories of penalty and barrier functions

Author: Stanford University. Systems Optimization Laboratory
Publisher:
ISBN:
Category :
Languages : en
Pages : 82

Book Description
This report contains a complete derivation and description of two algorithms for nonlinearly constrained optimization which are based on properties of the solution trajectory of the quadratic penalty function and the logarithmic barrier function. The methods utilize the penalty and barrier functions only as merit functions, and do not generate iterates by solving a sequence of ill-conditioned problems. The search direction is the solution of a simple, well-posed quadratic program (QP), where the quadratic objective function is an approximation to the Lagrangian function; the steplength is based on a sufficient decrease in a penalty or barrier function, to ensure progress toward the solution. The penalty trajectory algorithm was first proposed by Murray in 1969; the barrier trajectory algorithm, which retains feasibility throughout, was given by Wright in 1976. Here we give a unified presentation of both algorithms, and indicate their relationship to other QP-based methods. Full details of implementation are included, as well as numerical results that display the success of the methods on non-trivial problems. (Author).
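
As a sketch of the ingredients described above, in our notation (the report's exact definitions may differ): for a problem of minimizing f(x) subject to c(x) = 0 (penalty case) or c(x) ≥ 0 (barrier case), the quadratic penalty and logarithmic barrier merit functions are

\[
P(x; \rho) = f(x) + \tfrac{\rho}{2}\, \lVert c(x) \rVert_2^2,
\qquad
B(x; \mu) = f(x) - \mu \sum_{i} \ln c_i(x),
\]

and the search direction p at each iterate x solves a quadratic program whose objective approximates the Lagrangian,

\[
\min_{p} \; \nabla f(x)^T p + \tfrac{1}{2}\, p^T W p
\quad \text{subject to} \quad J(x)\, p = -c(x),
\]

with W an approximation to the Hessian of the Lagrangian and J the constraint Jacobian. The steplength along p is then chosen to give a sufficient decrease in P or B, which serve only as merit functions.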