Computation of Lagrange-Multiplier Estimates for Constrained Minimization
Author: P. E. Gill
Publisher:
ISBN:
Category :
Languages : en
Pages : 49
Book Description
The Computation of Lagrange Multiplier Estimates for Constrained Minimization
Author: National Physical Laboratory (Great Britain). Division of Numerical Analysis and Computing
Publisher:
ISBN:
Category :
Languages : en
Pages : 49
Book Description
Computation of the Search Direction in Constrained Optimization Algorithms
Author: Stanford University. Department of Operations Research. Systems Optimization Laboratory
Publisher:
ISBN:
Category :
Languages : en
Pages : 36
Book Description
A Projected Lagrangian Algorithm for Nonlinear L1 Optimization
Author: Stanford University. Systems Optimization Laboratory
Publisher:
ISBN:
Category :
Languages : en
Pages : 50
Book Description
Practical Augmented Lagrangian Methods for Constrained Optimization
Author: Ernesto G. Birgin
Publisher: SIAM
ISBN: 1611973368
Category : Mathematics
Languages : en
Pages : 222
Book Description
This book focuses on Augmented Lagrangian techniques for solving practical constrained optimization problems. The authors: rigorously delineate mathematical convergence theory based on sequential optimality conditions and novel constraint qualifications; orient the book to practitioners by giving priority to results that provide insight on the practical behavior of algorithms and by providing geometrical and algorithmic interpretations of every mathematical result; and fully describe a freely available computational package for constrained optimization and illustrate its usefulness with applications.
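The augmented Lagrangian technique the book is built around can be illustrated with a minimal sketch. This is not the authors' package or algorithm; the toy problem, penalty parameter, step size, and plain gradient-descent inner solver are all illustrative assumptions.

```python
# Minimal augmented Lagrangian sketch (illustrative, not the book's software):
# minimize f(x) = x0^2 + x1^2  subject to  h(x) = x0 + x1 - 1 = 0.
# Exact solution: x* = (0.5, 0.5), multiplier lambda* = -1.
import numpy as np

def f_grad(x):
    return 2.0 * x

def h(x):
    return x[0] + x[1] - 1.0

def augmented_lagrangian(x0, rho=10.0, outer_iters=20, inner_iters=200, lr=0.01):
    x, lam = np.asarray(x0, dtype=float), 0.0
    for _ in range(outer_iters):
        # Inner loop: approximately minimize
        # L_rho(x, lam) = f(x) + lam*h(x) + (rho/2)*h(x)^2 by gradient descent.
        for _ in range(inner_iters):
            grad = f_grad(x) + (lam + rho * h(x)) * np.ones(2)
            x -= lr * grad
        # First-order multiplier update.
        lam += rho * h(x)
    return x, lam

x, lam = augmented_lagrangian([0.0, 0.0])
```

On this toy problem the multiplier update converges geometrically to lambda = -1 while the iterates approach the constrained minimizer (0.5, 0.5), without ever driving the penalty parameter to infinity.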
Lagrange Multipliers Estimates for Constrained Minimization
Author: L. F. Escudero
Publisher:
ISBN:
Category :
Languages : en
Pages : 39
Book Description
Algorithms for Constrained Minimization of Smooth Nonlinear Functions
Author: Albert G. Buckley
Publisher:
ISBN:
Category : Mathematical optimization
Languages : en
Pages : 204
Book Description
Algorithms for Nonlinearly Constrained Optimization
Author: Stanford University. Systems Optimization Laboratory
Publisher:
ISBN:
Category :
Languages : en
Pages : 48
Book Description
Projected Lagrangian methods based on the trajectories of penalty and barrier functions
Author: Stanford University. Systems Optimization Laboratory
Publisher:
ISBN:
Category :
Languages : en
Pages : 82
Book Description
This report contains a complete derivation and description of two algorithms for nonlinearly constrained optimization which are based on properties of the solution trajectory of the quadratic penalty function and the logarithmic barrier function. The methods utilize the penalty and barrier functions only as merit functions, and do not generate iterates by solving a sequence of ill-conditioned problems. The search direction is the solution of a simple, well-posed quadratic program (QP), where the quadratic objective function is an approximation to the Lagrangian function; the steplength is based on a sufficient decrease in a penalty or barrier function, to ensure progress toward the solution. The penalty trajectory algorithm was first proposed by Murray in 1969; the barrier trajectory algorithm, which retains feasibility throughout, was given by Wright in 1976. Here we give a unified presentation of both algorithms, and indicate their relationship to other QP-based methods. Full details of implementation are included, as well as numerical results that display the success of the methods on non-trivial problems. (Author).
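The search-direction and steplength scheme the abstract describes — a well-posed equality-constrained QP for the direction, and a steplength chosen for sufficient decrease in a penalty merit function — can be sketched as follows. The toy problem, penalty parameter, and backtracking constants are illustrative assumptions, not the report's implementation.

```python
# Sketch of a QP-based step with a quadratic penalty merit function
# (assumed toy problem): minimize f(x) = x0^2 + x1^2  s.t.  x0 + x1 = 1.
import numpy as np

def f(x): return float(x @ x)
def grad_f(x): return 2.0 * x
H = 2.0 * np.eye(2)          # Hessian of the Lagrangian (constant here)
def h(x): return np.array([x[0] + x[1] - 1.0])
A = np.array([[1.0, 1.0]])   # constraint Jacobian (constant here)

def merit(x, rho):
    # Quadratic penalty function, used only as a merit function.
    return f(x) + 0.5 * rho * float(h(x) @ h(x))

def qp_step(x, rho=10.0):
    # Search direction from the QP's KKT system: [H A^T; A 0][p; mu] = [-g; -h].
    kkt = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    rhs = np.concatenate([-grad_f(x), -h(x)])
    p = np.linalg.solve(kkt, rhs)[:2]
    # Backtracking until the merit function decreases sufficiently.
    alpha, phi0 = 1.0, merit(x, rho)
    while merit(x + alpha * p, rho) > phi0 - 1e-4 * alpha * float(p @ p):
        alpha *= 0.5
    return x + alpha * p

x = np.array([2.0, -1.0])
for _ in range(10):
    x = qp_step(x)
```

Because the toy problem is itself a QP solved with the exact Lagrangian Hessian, the full step is accepted and the iterates reach the solution (0.5, 0.5) immediately; on genuinely nonlinear problems the merit function is what safeguards progress.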
A projected Lagrangian algorithm for nonlinear minimax optimization
Author: Walter Murray
Publisher:
ISBN:
Category :
Languages : en
Pages : 82
Book Description
The minimax problem is an unconstrained optimization problem whose objective function is not differentiable everywhere, and hence cannot be solved efficiently by standard techniques for unconstrained optimization. It is well known that the problem can be transformed into a nonlinearly constrained optimization problem with one extra variable, where the objective and constraint functions are continuously differentiable. This equivalent problem has special properties which are ignored if it is solved by a general-purpose constrained optimization method. The algorithm we present exploits the special structure of the equivalent problem. A direction of search is obtained at each iteration of the algorithm by solving an equality-constrained quadratic programming problem, related to one a projected Lagrangian method might use to solve the equivalent constrained optimization problem. Special Lagrange multiplier estimates are used to form an approximation to the Hessian of the Lagrangian function, which appears in the quadratic program. Analytical Hessians, finite differencing, or quasi-Newton updating may be used in the approximation of this matrix. The resulting direction of search is guaranteed to be a descent direction for the minimax objective function. Under mild conditions the algorithm is locally quadratically convergent if analytical Hessians are used. (Author).
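The transformation the abstract refers to can be shown on a tiny example (the two functions below are illustrative assumptions; a grid search stands in for an actual solver): minimizing max_i f_i(x) is equivalent to minimizing t subject to f_i(x) <= t, and the latter has smooth objective and constraints even though max_i f_i does not.

```python
# Sketch of the minimax-to-constrained reformulation on an assumed toy example.
import numpy as np

# Two smooth functions whose pointwise max is nonsmooth at the crossover.
f1 = lambda x: (x - 1.0) ** 2
f2 = lambda x: (x + 1.0) ** 2

# Original nonsmooth problem: minimize max(f1, f2), here by brute force.
xs = np.linspace(-2.0, 2.0, 4001)
vals = np.maximum(f1(xs), f2(xs))
x_star, v_star = xs[np.argmin(vals)], float(np.min(vals))

# Equivalent smooth problem: min t  s.t.  f1(x) <= t, f2(x) <= t.
# Here both constraints are active at the optimum, so f1(x) = f2(x) = t:
# (x - 1)^2 = (x + 1)^2  =>  x = 0, t = 1, matching the brute-force answer.
```

The extra variable t plays the role of the worst-case value; at the solution the active constraints identify which f_i attain the max, which is exactly the structure a special-purpose method can exploit.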