Projected-Search Methods for Constrained Optimization

Author: Minxin Zhang
Publisher:
ISBN:
Category :
Languages : en
Pages : 0

Book Description
Projected-search methods for bound-constrained optimization are based on searching along a continuous path obtained by projecting a search direction onto the feasible region. These methods can change the direction of the search path multiple times along the boundary of the feasible region at the cost of computing a single direction. However, because the objective function is only piecewise differentiable along the path, conventional projected-search methods are limited to a simple backtracking procedure that finds a step satisfying an "Armijo-like" sufficient decrease condition. To extend the benefits of a Wolfe line search for unconstrained optimization to projected-search methods, a new quasi-Wolfe step is introduced. Two general classes of projected-search methods that use the new quasi-Wolfe search are then formulated and analyzed. These methods may be broadly categorized as either active-set methods or interior methods. Additionally, a new quasi-Newton projected-search method, UBOPT, is proposed for unconstrained and bound-constrained optimization. The method computes quasi-Newton directions in a sequence of subspaces and employs the framework of the class of projected-search active-set methods. Furthermore, a new interior method is proposed for general nonlinearly constrained optimization, combining a shifted primal-dual interior method with a projected-search method for bound-constrained optimization. The method computes an approximate Newton direction for a primal-dual penalty-barrier function that incorporates shifts on both the primal and dual variables. The shifts allow the method to be safely "warm started" from a good approximate solution and eliminate the ill-conditioning of the associated linear equations that may occur when the variables are close to zero.
The approximate Newton direction is used in conjunction with a new projected-search algorithm that employs a flexible non-monotone quasi-Armijo line search for the minimization of each penalty-barrier function. Numerical results demonstrate that the new method requires significantly fewer iterations than a conventional interior method, thereby reducing the number of times that the search direction must be computed.
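The search-path idea described above can be sketched in a few lines. This is a minimal illustration, not Zhang's UBOPT or the quasi-Wolfe search itself: it projects a trial point onto the box of bounds and backtracks until an Armijo-like sufficient-decrease condition holds. All function and parameter names here are illustrative.

```python
def project(x, l, u):
    """Componentwise projection onto the box l <= x <= u."""
    return [min(max(xi, li), ui) for xi, li, ui in zip(x, l, u)]

def projected_armijo_step(f, grad, x, p, l, u, c1=1e-4, shrink=0.5, max_iter=50):
    """Backtrack along the projected path x(alpha) = P(x + alpha*p)
    until an Armijo-like sufficient-decrease condition holds.
    The path can bend where components of x hit their bounds,
    which is why f is only piecewise differentiable along it."""
    f0 = f(x)
    g0 = grad(x)
    # Directional derivative of the unprojected path at alpha = 0.
    slope = sum(gi * pi for gi, pi in zip(g0, p))
    alpha = 1.0
    for _ in range(max_iter):
        x_new = project([xi + alpha * pi for xi, pi in zip(x, p)], l, u)
        if f(x_new) <= f0 + c1 * alpha * slope:
            return x_new, alpha
        alpha *= shrink
    return x, 0.0
```

For example, minimizing f(x) = (x₁-2)² + (x₂-2)² over the unit box from x = (0.5, 0.5) along the steepest-descent direction, the projected path bends at the boundary and the accepted point lands on the bound (1, 1).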

Projected-search Methods for Box-constrained Optimization

Author: Michael William Ferry
Publisher:
ISBN: 9781124628189
Category :
Languages : en
Pages : 134

Book Description
Many algorithms used in unconstrained minimization are line-search methods. Given an initial point x and a function f : ℝⁿ → ℝ to be minimized, a line-search method repeatedly solves two subproblems: the first computes a search direction p; the second performs a line search on the function φ(α) = f(x + αp). Then αp is added to x and the process is repeated until a solution is located. Quasi-Newton methods are often used to compute the search direction. A quasi-Newton method creates a quadratic model of f at x and defines the search direction p such that x + p is the minimizer of the model. After each iteration the model is updated to more closely resemble f near x. Line searches seek to satisfy conditions that ensure the convergence of the sequence of iterates. A step that decreases f "sufficiently" is called an Armijo step. A Wolfe step satisfies stronger conditions that impose additional bounds on the slope φ′(α). Quasi-Newton methods perform significantly better when using Wolfe steps. Recently, Gill and Leonard proposed the reduced-Hessian (RH) method, a new quasi-Newton method for unconstrained optimization. This method exploits key structure in the quadratic model so that the dimension of the search space is reduced. Placing box constraints on x leads to more complex problems. One approach to solving such problems is the projected-search method, which performs an unconstrained minimization on a changing subset of the variables and, during the line search, projects points that violate the constraints back into the feasible region. To date, projected line-search methods have been restricted to an Armijo-like line search. By modifying the line-search conditions, we create a new projected line search that uses a Wolfe-like step. This line search retains many of the benefits of a Wolfe line search in the unconstrained case.
Projected-search methods and RH methods share a similar structure in solving for the search direction. We exploit this similarity and merge the two ideas to create a class of RH methods for box-constrained optimization. When combined with the new line search, this new family of algorithms solves problems in less than 74% of the time taken by the leading comparable alternative on a collection of standard test problems.
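The distinction between Armijo and Wolfe steps drawn above can be made concrete. The following sketch (with the usual constants c1 and c2; names are illustrative, and this is not Ferry's Wolfe-like projected search) tests whether a step α satisfies the two Wolfe conditions on φ(α) = f(x + αp): sufficient decrease plus a curvature condition on the slope.

```python
def satisfies_wolfe(phi, dphi, alpha, c1=1e-4, c2=0.9):
    """Check the (weak) Wolfe conditions for a step alpha, given
    phi(a) = f(x + a*p) and its derivative dphi(a).
    - Armijo: phi(alpha) lies below a line of small negative slope.
    - Curvature: the slope at alpha is no longer steeply negative."""
    armijo = phi(alpha) <= phi(0.0) + c1 * alpha * dphi(0.0)
    curvature = dphi(alpha) >= c2 * dphi(0.0)
    return armijo and curvature
```

On the quadratic φ(α) = (α - 1)², the full step α = 1 satisfies both conditions, whereas a tiny step such as α = 0.01 passes the Armijo test but fails the curvature test, which is exactly the kind of short, unproductive step the Wolfe conditions rule out.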

Introduction to Optimization Methods

Author: P. Adby
Publisher: Springer Science & Business Media
ISBN: 940095705X
Category : Science
Languages : en
Pages : 214

Book Description
During the last decade the techniques of non-linear optimization have emerged as an important subject for study and research. The increasingly widespread application of optimization has been stimulated by the availability of digital computers, and the necessity of using them in the investigation of large systems. This book is an introduction to non-linear methods of optimization and is suitable for undergraduate and postgraduate courses in mathematics, the physical and social sciences, and engineering. The first half of the book covers the basic optimization techniques, including linear search methods, steepest descent, least squares, and the Newton-Raphson method. These are described in detail, with worked numerical examples, since they form the basis from which advanced methods are derived. Since 1965 advanced methods of unconstrained and constrained optimization have been developed to utilise the computational power of the digital computer. The second half of the book fully describes important algorithms in current use, such as variable metric methods for unconstrained problems and penalty function methods for constrained problems. Recent work, much of which has not yet been widely applied, is reviewed and compared with currently popular techniques under a few generic main headings. Chapter 1 describes the optimization problem in mathematical form and defines the terminology used in the remainder of the book. Chapter 2 is concerned with single-variable optimization. The main algorithms of both search and approximation methods are developed in detail since they are an essential part of many multi-variable methods.
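A representative single-variable search method of the kind covered in such introductory treatments is golden-section search, which repeatedly narrows a bracket around the minimizer of a unimodal function. This sketch is illustrative and not taken from the book:

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Locate the minimizer of a unimodal f on [a, b] by shrinking
    the bracket by the golden ratio at each step, reusing one of
    the two interior evaluation points each iteration."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0  # 1/golden ratio, about 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2.0
```

Unlike Newton-Raphson, this needs no derivatives, only function values, which is why such search methods pair naturally with the approximation methods developed alongside them.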

Computation of the Search Direction in Constrained Optimization Algorithms

Author: Stanford University. Department of Operations Research. Systems Optimization Laboratory
Publisher:
ISBN:
Category :
Languages : en
Pages : 36

Book Description


Practical Methods of Optimization

Author: R. Fletcher
Publisher: John Wiley & Sons
ISBN: 111872318X
Category : Mathematics
Languages : en
Pages : 470

Book Description
Fully describes the optimization methods that are currently most valuable in solving real-life problems. Since optimization has applications in almost every branch of science and technology, the text emphasizes their practical aspects in conjunction with the heuristics useful in making them perform more reliably and efficiently. To this end, it presents comparative numerical studies to give readers a feel for possible applications and to illustrate the problems in assessing evidence. It also provides the theoretical background that gives insight into how the methods are derived. This edition offers revised coverage of basic theory and standard techniques, with updated discussions of line search methods, Newton and quasi-Newton methods, and conjugate direction methods, as well as a comprehensive treatment of restricted-step or trust-region methods not commonly found in the literature. Also included are recent developments in hybrid methods for nonlinear least squares; an extended discussion of linear programming, with new methods for stable updating of LU factors; and a completely new section on network programming. Chapters include computer subroutines, worked examples, and study questions.

Two-metric Projection Methods for Constrained Optimization

Author: Eli Gafni
Publisher:
ISBN:
Category : Mathematical optimization
Languages : en
Pages : 59

Book Description


Algorithms for Nonlinearly Constrained Optimization

Author: Stanford University. Systems Optimization Laboratory
Publisher:
ISBN:
Category :
Languages : en
Pages : 48

Book Description


A projected Lagrangian algorithm for nonlinear minimax optimization

Author: Walter Murray
Publisher:
ISBN:
Category :
Languages : en
Pages : 82

Book Description
The minimax problem is an unconstrained optimization problem whose objective function is not differentiable everywhere, and hence cannot be solved efficiently by standard techniques for unconstrained optimization. It is well known that the problem can be transformed into a nonlinearly constrained optimization problem with one extra variable, in which the objective and constraint functions are continuously differentiable. This equivalent problem has special properties that are ignored if it is solved by a general-purpose constrained optimization method. The algorithm we present exploits the special structure of the equivalent problem. A direction of search is obtained at each iteration of the algorithm by solving an equality-constrained quadratic programming problem, related to one a projected Lagrangian method might use to solve the equivalent constrained optimization problem. Special Lagrange multiplier estimates are used to form an approximation to the Hessian of the Lagrangian function, which appears in the quadratic program. Analytical Hessians, finite differencing, or quasi-Newton updating may be used in the approximation of this matrix. The resulting direction of search is guaranteed to be a descent direction for the minimax objective function. Under mild conditions the algorithm is locally quadratically convergent if analytical Hessians are used.
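The transformation referred to above is the standard epigraph reformulation: minimizing max_i f_i(x) is equivalent to minimizing the extra variable t subject to f_i(x) ≤ t for all i, since for fixed x the smallest feasible t is exactly max_i f_i(x). A tiny numerical check on an illustrative one-dimensional example (the functions and brute-force grid here are assumptions, not Murray's algorithm):

```python
def minimax_objective(fs, x):
    """The nonsmooth minimax objective: max over the component functions."""
    return max(f(x) for f in fs)

# Two smooth functions whose pointwise max has a kink at x = 0.
fs = [lambda x: (x - 1.0) ** 2, lambda x: (x + 1.0) ** 2]

# Brute-force minimization of max_i f_i(x) over a fine grid; a smooth
# method applied directly to this objective would stall at the kink,
# which is why the smooth constrained reformulation is preferred.
grid = [i / 1000.0 for i in range(-3000, 3001)]
x_best = min(grid, key=lambda x: minimax_objective(fs, x))
```

Here the minimizer is x = 0, where the two parabolas cross with value 1: a point of nondifferentiability of the max, which the equivalent constrained problem handles with smooth functions only.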

Practical Augmented Lagrangian Methods for Constrained Optimization

Author: Ernesto G. Birgin
Publisher: SIAM
ISBN: 161197335X
Category : Mathematics
Languages : en
Pages : 222

Book Description
This book focuses on Augmented Lagrangian techniques for solving practical constrained optimization problems. The authors rigorously delineate mathematical convergence theory based on sequential optimality conditions and novel constraint qualifications. They also orient the book to practitioners by giving priority to results that provide insight on the practical behavior of algorithms and by providing geometrical and algorithmic interpretations of every mathematical result, and they fully describe a freely available computational package for constrained optimization and illustrate its usefulness with applications.

Nonlinear Systems and Optimization for the Chemical Engineer

Author: Guido Buzzi-Ferraris
Publisher: John Wiley & Sons
ISBN: 3527667164
Category : Technology & Engineering
Languages : en
Pages : 531

Book Description
This third book in a suite of four practical guides is an engineer's companion to using numerical methods for the solution of complex mathematical problems. The required software is provided by way of the freeware mathematical library BzzMath, which is developed and maintained by the authors. The present volume focuses on optimization and the solution of nonlinear systems. The book describes numerical methods, innovative techniques, and strategies that are all implemented in a well-established freeware library. Each of these handy guides enables readers to use and implement standard numerical tools for their work, explaining the theory behind the various functions and problem solvers, and showcasing applications in diverse scientific and engineering fields. Numerous examples, sample codes, programs, and applications are proposed and discussed. The book teaches engineers and scientists how to use the latest and most powerful numerical methods for their daily work.