Predictive Control for Linear and Hybrid Systems
Author: Francesco Borrelli
Publisher: Cambridge University Press
ISBN: 1108158293
Category : Mathematics
Languages : en
Pages : 447
Book Description
Model Predictive Control (MPC), the dominant advanced control approach in industry over the past twenty-five years, is presented comprehensively in this unique book. With a simple, unified approach, and with attention to real-time implementation, it covers predictive control theory including the stability, feasibility, and robustness of MPC controllers. The theory of explicit MPC, where the nonlinear optimal feedback controller can be calculated efficiently, is presented in the context of linear systems with linear constraints, switched linear systems, and, more generally, linear hybrid systems. Drawing upon years of practical experience and using numerous examples and illustrative applications, the authors discuss the techniques required to design predictive control laws, including algorithms for polyhedral manipulation, mathematical and multiparametric programming, and how to validate the theoretical properties and implement predictive control policies. The most important algorithms feature in an accompanying free online MATLAB toolbox, which allows easy access to sample solutions. Predictive Control for Linear and Hybrid Systems is an ideal reference for graduate and postgraduate students and for advanced control practitioners interested in theory and/or implementation aspects of predictive control.
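For readers new to the topic, a generic finite-horizon linear MPC problem (the standard textbook formulation, not a formula quoted from this book) solved at each sampling instant takes the form

\[
\begin{aligned}
\min_{u_0,\dots,u_{N-1}} \quad & x_N^\top P x_N + \sum_{k=0}^{N-1}\left( x_k^\top Q x_k + u_k^\top R u_k \right) \\
\text{subject to} \quad & x_{k+1} = A x_k + B u_k, \quad k = 0,\dots,N-1, \\
& x_k \in \mathcal{X}, \quad u_k \in \mathcal{U}, \quad x_N \in \mathcal{X}_f, \\
& x_0 = x(t),
\end{aligned}
\]

where only the first optimal input u_0^* is applied and the problem is re-solved at the next sampling instant (the receding-horizon principle). Explicit MPC, as described above, precomputes the resulting piecewise-affine map x(t) -> u_0^*(x(t)) offline via multiparametric programming.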
Numerical Optimization
Author: Jorge Nocedal
Publisher: Springer Science & Business Media
ISBN: 0387400656
Category : Mathematics
Languages : en
Pages : 686
Book Description
Optimization is an important tool in decision science and in the analysis of physical systems arising in engineering. One can trace its roots to the Calculus of Variations and the work of Euler and Lagrange. This natural and reasonable approach to mathematical programming covers numerical methods for finite-dimensional optimization problems. It begins with very simple ideas and progresses through more complicated concepts, concentrating on methods for both unconstrained and constrained optimization.
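As a minimal illustration of the kind of unconstrained method treated in the book (a generic sketch, not code from the authors), Newton's method repeatedly solves a linear system with the Hessian to choose the next step:

```python
import numpy as np

def newton(grad, hess, x0, tol=1e-8, max_iter=50):
    """Basic Newton iteration for unconstrained minimization (no line search)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)  # Newton step: solve H d = g, then x <- x - d
    return x

# Example: minimize f(x) = (x1 - 1)^2 + 10 * (x2 + 2)^2
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
hess = lambda x: np.diag([2.0, 20.0])
print(newton(grad, hess, x0=[0.0, 0.0]))  # converges to approximately [1, -2]
```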
Mathematical Reviews
Author:
Publisher:
ISBN:
Category : Mathematics
Languages : en
Pages : 724
Book Description
Predictive Control for Linear and Hybrid Systems
Author: Francesco Borrelli
Publisher: Cambridge University Press
ISBN: 1107016886
Category : Mathematics
Languages : en
Pages : 447
Book Description
With a simple approach that includes real-time applications and algorithms, this book covers the theory of model predictive control (MPC).
Interior Point Algorithms
Author: Yinyu Ye
Publisher: John Wiley & Sons
ISBN: 1118030958
Category : Mathematics
Languages : en
Pages : 440
Book Description
The first comprehensive review of the theory and practice of one of today's most powerful optimization techniques. The explosive growth of research into and development of interior point algorithms over the past two decades has significantly improved the complexity of linear programming and yielded some of today's most sophisticated computing techniques. This book offers a comprehensive and thorough treatment of the theory, analysis, and implementation of this powerful computational tool. Interior Point Algorithms provides detailed coverage of all basic and advanced aspects of the subject. Beginning with an overview of fundamental mathematical procedures, Professor Yinyu Ye moves swiftly on to in-depth explorations of numerous computational problems and the algorithms that have been developed to solve them. An indispensable text/reference for students and researchers in applied mathematics, computer science, operations research, management science, and engineering, Interior Point Algorithms:
* Derives various complexity results for linear and convex programming
* Emphasizes interior point geometry and potential theory
* Covers state-of-the-art results for extension, implementation, and other cutting-edge computational techniques
* Explores the hottest new research topics, including nonlinear programming and nonconvex optimization.
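As background (a standard formulation, not quoted from the book), interior point methods for the linear program min { c^T x : Ax = b, x >= 0 } replace the nonnegativity constraints by a logarithmic barrier and follow the resulting central path:

\[
\min_{x > 0} \; c^\top x - \mu \sum_{i=1}^{n} \ln x_i
\quad \text{subject to} \quad A x = b .
\]

The minimizers x(\mu) trace the central path, and driving the barrier parameter \mu \downarrow 0 steers the iterates toward an optimal solution of the linear program; the complexity results mentioned above bound how many Newton steps this takes.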
Self-Regularity
Author: Jiming Peng
Publisher: Princeton University Press
ISBN: 140082513X
Category : Mathematics
Languages : en
Pages : 201
Book Description
Research on interior-point methods (IPMs) has dominated the field of mathematical programming for the last two decades. Two contrasting approaches in the analysis and implementation of IPMs are the so-called small-update and large-update methods, although, until now, there has been a notorious gap between the theory and practical performance of these two strategies. This book comes close to bridging that gap, presenting a new framework for the theory of primal-dual IPMs based on the notion of the self-regularity of a function. The authors deal with linear optimization, nonlinear complementarity problems, semidefinite optimization, and second-order conic optimization problems. The framework also covers large classes of linear complementarity problems and convex optimization. The algorithm considered can be interpreted as a path-following method or a potential reduction method. Starting from a primal-dual strictly feasible point, the algorithm chooses a search direction defined by some Newton-type system derived from the self-regular proximity. The iterate is then updated, with the iterates staying in a certain neighborhood of the central path until an approximate solution to the problem is found. By extensively exploring some intriguing properties of self-regular functions, the authors establish that the complexity of large-update IPMs can come arbitrarily close to the best known iteration bounds of IPMs. Researchers and postgraduate students in all areas of linear and nonlinear optimization will find this book an important and invaluable aid to their work.
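To make the path-following idea concrete (a generic primal-dual setup for linear optimization, not the book's self-regular proximity machinery), the central path is the set of points (x, y, s) satisfying

\[
A x = b, \qquad A^\top y + s = c, \qquad x_i s_i = \mu \;\; (i = 1,\dots,n), \qquad x,\, s > 0 ,
\]

and the classical Newton-type search direction at an iterate (x, y, s) solves

\[
A \,\Delta x = 0, \qquad A^\top \Delta y + \Delta s = 0, \qquad S \,\Delta x + X \,\Delta s = \sigma \mu e - X S e ,
\]

with X = diag(x), S = diag(s), e the all-ones vector, \mu = x^\top s / n, and centering parameter \sigma \in [0, 1]. Small-update and large-update methods differ essentially in how aggressively \mu is reduced at each iteration, which is precisely the gap between theory and practice that the self-regularity framework addresses.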
Convex Optimization & Euclidean Distance Geometry
Author: Jon Dattorro
Publisher: Meboo Publishing USA
ISBN: 0976401304
Category : Mathematics
Languages : en
Pages : 776
Book Description
The study of Euclidean distance matrices (EDMs) fundamentally asks what can be known geometrically given only distance information between points in Euclidean space. Each point may represent simply location or, abstractly, any entity expressible as a vector in finite-dimensional Euclidean space. The answer to the question posed is that very much can be known about the points; the mathematics of this combined study of geometry and optimization is rich and deep. Throughout we cite beacons of historical accomplishment. The application of EDMs has already proven invaluable in discerning biological molecular conformation. The emerging practice of localization in wireless sensor networks, the global positioning system (GPS), and distance-based pattern recognition will certainly simplify and benefit from this theory.

We study the pervasive convex Euclidean bodies and their various representations. In particular, we make convex polyhedra, cones, and dual cones more visceral through illustration, and we study the geometric relation of polyhedral cones to nonorthogonal bases (biorthogonal expansion). We explain conversion between halfspace- and vertex-descriptions of convex cones, we provide formulae for determining dual cones, and we show how classic alternative systems of linear inequalities or linear matrix inequalities and optimality conditions can be explained by generalized inequalities in terms of convex cones and their duals. The conic analogue to linear independence, called conic independence, is introduced as a new tool in the study of classical cone theory: the logical next step in the progression linear, affine, conic. Any convex optimization problem has geometric interpretation. This is a powerful attraction: the ability to visualize the geometry of an optimization problem. We provide tools to make visualization easier. The concept of faces, extreme points, and extreme directions of convex Euclidean bodies is explained here, crucial to understanding convex optimization. The convex cone of positive semidefinite matrices, in particular, is studied in depth. We mathematically interpret, for example, its inverse image under affine transformation, and we explain how higher-rank subsets of its boundary united with its interior are convex.

The chapter on "Geometry of convex functions" observes analogies between convex sets and functions: the set of all vector-valued convex functions is a closed convex cone. Included among the examples in this chapter, we show how the real affine function relates to convex functions as the hyperplane relates to convex sets. Here, also, pertinent results for multidimensional convex functions are presented that are largely ignored in the literature, along with tricks and tips for determining their convexity and discerning their geometry, particularly with regard to matrix calculus, which remains largely unsystematized when compared with the traditional practice of ordinary calculus. Consequently, we collect some results of matrix differentiation in the appendices.

The Euclidean distance matrix (EDM) is studied, its properties and relationship to both positive semidefinite and Gram matrices. We relate the EDM to the four classical axioms of the Euclidean metric, thereby observing the existence of an infinity of axioms of the Euclidean metric beyond the triangle inequality. We proceed by deriving the fifth Euclidean axiom and then explain why furthering this endeavor is inefficient, because the ensuing criteria (while describing polyhedra) grow linearly in complexity and number. Some geometrical problems solvable via EDMs, EDM problems posed as convex optimization, and methods of solution are presented; e.g., we generate a recognizable isotonic map of the United States using only comparative distance information (no distance information, only distance inequalities). We offer a new proof of the classic Schoenberg criterion, which determines whether a candidate matrix is an EDM. Our proof relies on fundamental geometry, assuming any EDM must correspond to a list of points contained in some polyhedron (possibly at its vertices) and vice versa. It is not widely known that the Schoenberg criterion implies nonnegativity of the EDM entries; this is proved here. We characterize the eigenvalues of an EDM and then devise a polyhedral cone required for determining membership of a candidate matrix (in Cayley-Menger form) to the convex cone of Euclidean distance matrices (the EDM cone); i.e., a candidate is an EDM if and only if its eigenspectrum belongs to a spectral cone for EDM^N. We will see that spectral cones are not unique.

In the chapter "EDM cone", we explain the geometric relationship between the EDM cone, two positive semidefinite cones, and the elliptope. We illustrate geometric requirements, in particular, for projection of a candidate matrix on a positive semidefinite cone that establish its membership to the EDM cone. The faces of the EDM cone are described, but still open is the question whether all its faces are exposed as they are for the positive semidefinite cone. The classic Schoenberg criterion, relating EDM and positive semidefinite cones, is revealed to be a discretized membership relation (a generalized inequality, a new Farkas-like lemma) between the EDM cone and its ordinary dual. A matrix criterion for membership to the dual EDM cone is derived that is simpler than the Schoenberg criterion. We derive a new concise expression for the EDM cone and its dual involving two subspaces and a positive semidefinite cone.

"Semidefinite programming" is reviewed with particular attention to optimality conditions of prototypical primal and dual conic programs, their interplay, and the perturbation method of rank reduction of optimal solutions (extant but not well known). We show how to solve a ubiquitous platonic combinatorial optimization problem from linear algebra (the optimal Boolean solution x to Ax = b) via semidefinite program relaxation. A three-dimensional polyhedral analogue for the positive semidefinite cone of 3x3 symmetric matrices is introduced, a tool for visualizing in 6 dimensions. In "EDM proximity" we explore methods of solution to a few fundamental and prevalent Euclidean distance matrix proximity problems: the problem of finding that Euclidean distance matrix closest to a given matrix in the Euclidean sense. We pay particular attention to the problem when compounded with rank minimization. We offer a new geometrical proof of a famous result discovered by Eckart & Young in 1936 regarding Euclidean projection of a point on a subset of the positive semidefinite cone comprising all positive semidefinite matrices having rank not exceeding a prescribed limit rho. We explain how this problem is transformed to a convex optimization for any rank rho.
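For reference, the standard statement of the Schoenberg criterion discussed above (not the book's new proof of it) is: a symmetric matrix D with zero diagonal is a Euclidean distance matrix if and only if

\[
-\tfrac{1}{2}\, J D J \succeq 0 , \qquad J = I - \tfrac{1}{n}\, \mathbf{1}\mathbf{1}^\top ,
\]

in which case G = -\tfrac{1}{2} J D J is a Gram matrix of a point list realizing D, i.e. D_{ij} = G_{ii} + G_{jj} - 2 G_{ij}. This is the link between the EDM cone and the positive semidefinite cone that the text develops at length.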
Mathematics for Machine Learning
Author: Marc Peter Deisenroth
Publisher: Cambridge University Press
ISBN: 1108569323
Category : Computers
Languages : en
Pages : 392
Book Description
The fundamental mathematical tools needed to understand machine learning include linear algebra, analytic geometry, matrix decompositions, vector calculus, optimization, probability and statistics. These topics are traditionally taught in disparate courses, making it hard for data science or computer science students, or professionals, to efficiently learn the mathematics. This self-contained textbook bridges the gap between mathematical and machine learning texts, introducing the mathematical concepts with a minimum of prerequisites. It uses these concepts to derive four central machine learning methods: linear regression, principal component analysis, Gaussian mixture models and support vector machines. For students and others with a mathematical background, these derivations provide a starting point to machine learning texts. For those learning the mathematics for the first time, the methods help build intuition and practical experience with applying mathematical concepts. Every chapter includes worked examples and exercises to test understanding. Programming tutorials are offered on the book's web site.
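As a small taste of the first of those four methods (an illustrative snippet, not the book's own code or notation), linear regression reduces to a least-squares problem that a few lines of NumPy can solve:

```python
import numpy as np

# Synthetic data: y = 2*x1 - 3*x2 + 1 plus noise (coefficients chosen for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 2 * X[:, 0] - 3 * X[:, 1] + 1 + 0.1 * rng.normal(size=100)

# Append a bias column and solve min_w ||Xb @ w - y||^2
Xb = np.hstack([X, np.ones((100, 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
print(w)  # approximately [2, -3, 1]
```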
Algorithms for Optimization
Author: Mykel J. Kochenderfer
Publisher: MIT Press
ISBN: 0262039427
Category : Computers
Languages : en
Pages : 521
Book Description
A comprehensive introduction to optimization with a focus on practical algorithms for the design of engineering systems. This book offers a comprehensive introduction to optimization with a focus on practical algorithms. The book approaches optimization from an engineering perspective, where the objective is to design a system that optimizes a set of metrics subject to constraints. Readers will learn about computational approaches for a range of challenges, including searching high-dimensional spaces, handling problems where there are multiple competing objectives, and accommodating uncertainty in the metrics. Figures, examples, and exercises convey the intuition behind the mathematical approaches. The text provides concrete implementations in the Julia programming language. Topics covered include derivatives and their generalization to multiple dimensions; local descent and first- and second-order methods that inform local descent; stochastic methods, which introduce randomness into the optimization process; linear constrained optimization, when both the objective function and the constraints are linear; surrogate models, probabilistic surrogate models, and using probabilistic surrogate models to guide optimization; optimization under uncertainty; uncertainty propagation; expression optimization; and multidisciplinary design optimization. Appendixes offer an introduction to the Julia language, test functions for evaluating algorithm performance, and mathematical concepts used in the derivation and analysis of the optimization methods discussed in the text. The book can be used by advanced undergraduates and graduate students in mathematics, statistics, computer science, any engineering field (including electrical engineering and aerospace engineering), and operations research, and as a reference for professionals.
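The book's own implementations are in Julia; as a language-neutral sketch of the local-descent idea it describes (hypothetical code, not taken from the book), gradient descent with momentum looks like this in Python:

```python
import numpy as np

def descend(grad, x0, lr=0.1, momentum=0.9, steps=200):
    """Minimize a smooth function from its gradient using momentum-accelerated descent."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        v = momentum * v - lr * grad(x)  # accumulate a velocity term
        x = x + v                        # move along the accumulated direction
    return x

# Example: minimize the quadratic f(x) = x^T A x with A positive definite.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
grad_f = lambda x: 2 * A @ x
print(descend(grad_f, x0=[1.0, -2.0]))  # approaches the minimizer [0, 0]
```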
Large-Scale Nonlinear Optimization
Author: Gianni Pillo
Publisher: Springer Science & Business Media
ISBN: 0387300651
Category : Mathematics
Languages : en
Pages : 297
Book Description
This book reviews and discusses recent advances in the development of methods and algorithms for nonlinear optimization and its applications, focusing on the large-dimensional case, the current forefront of much research. Individual chapters, contributed by eminent authorities, provide an up-to-date overview of the field from different and complementary standpoints, including theoretical analysis, algorithmic development, implementation issues and applications.