Convex and Stochastic Optimization
Author: J. Frédéric Bonnans
Publisher: Springer
ISBN: 3030149773
Category : Mathematics
Languages : en
Pages : 320
Book Description
This textbook provides an introduction to convex duality for optimization problems in Banach spaces, integration theory, and their application to stochastic programming problems in a static or dynamic setting. It introduces and analyses the main algorithms for stochastic programs, while the theoretical aspects are carefully dealt with. The reader is shown how these tools can be applied to various fields, including approximation theory, semidefinite and second-order cone programming and linear decision rules. This textbook is recommended for students, engineers and researchers who are willing to take a rigorous approach to the mathematics involved in the application of duality theory to optimization with uncertainty.
First-order and Stochastic Optimization Methods for Machine Learning
Author: Guanghui Lan
Publisher: Springer Nature
ISBN: 3030395685
Category : Mathematics
Languages : en
Pages : 591
Book Description
This book covers not only foundational material but also the most recent progress made in machine learning algorithms over the past few years. Despite intensive research and development in this area, there has been no systematic treatment introducing the fundamental concepts and recent progress on machine learning algorithms, especially those based on stochastic optimization methods, randomized algorithms, nonconvex optimization, distributed and online learning, and projection-free methods. This book will benefit a broad audience in the machine learning, artificial intelligence, and mathematical programming communities by presenting these recent developments in a tutorial style, starting from the basic building blocks and proceeding to the most carefully designed and complicated algorithms for machine learning.
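To make the blurb's first topic concrete, here is a minimal Python/NumPy sketch of plain stochastic gradient descent with a diminishing step size on a synthetic least-squares problem; it is an illustration only, not code from the book, and all names and constants are invented.

# Illustrative only: plain SGD on a least-squares objective. Assumes NumPy.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 1000, 20
A = rng.normal(size=(n_samples, n_features))
x_true = rng.normal(size=n_features)
b = A @ x_true + 0.1 * rng.normal(size=n_samples)

x = np.zeros(n_features)
step0 = 0.1
for t in range(1, 5001):
    i = rng.integers(n_samples)            # draw one sample uniformly at random
    grad_i = (A[i] @ x - b[i]) * A[i]      # stochastic gradient of 0.5*(a_i^T x - b_i)^2
    x -= (step0 / np.sqrt(t)) * grad_i     # diminishing step size, O(1/sqrt(t))
print(np.linalg.norm(x - x_true))          # error shrinks as iterations accumulate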
Convex Optimization
Author: Sébastien Bubeck
Publisher: Foundations and Trends (R) in Machine Learning
ISBN: 9781601988607
Category : Convex domains
Languages : en
Pages : 142
Book Description
This monograph presents the main complexity theorems in convex optimization and their corresponding algorithms. It begins with the fundamental theory of black-box optimization and proceeds to guide the reader through recent advances in structural optimization and stochastic optimization. The presentation of black-box optimization, strongly influenced by the seminal book by Nesterov, includes the analysis of cutting plane methods as well as (accelerated) gradient descent schemes. Special attention is also given to non-Euclidean settings (relevant algorithms include Frank-Wolfe, mirror descent, and dual averaging) and to their relevance in machine learning. The text provides a gentle introduction to structural optimization with FISTA (to optimize a sum of a smooth and a simple non-smooth term), saddle-point mirror prox (Nemirovski's alternative to Nesterov's smoothing), and a concise description of interior-point methods. In stochastic optimization it discusses stochastic gradient descent, mini-batches, random coordinate descent, and sublinear algorithms. It also briefly touches upon convex relaxation of combinatorial problems and the use of randomness to round solutions, as well as random-walk-based methods.
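As a companion to the non-Euclidean methods named above, the following short Python/NumPy sketch runs entropic mirror descent over the probability simplex on a toy linear objective; the objective, step size, and names are assumptions made for the illustration, not material from the monograph.

# Illustrative sketch of entropic mirror descent on the simplex. Assumes NumPy.
import numpy as np

c = np.array([0.9, 0.3, 0.5, 0.7])   # minimize the linear objective <c, x> over the simplex
x = np.full(c.size, 1.0 / c.size)    # start at the uniform distribution
eta = 0.5                            # fixed step size, chosen arbitrarily for the sketch
for _ in range(200):
    g = c                            # gradient of <c, x>
    x = x * np.exp(-eta * g)         # multiplicative (entropic) mirror step
    x /= x.sum()                     # renormalize back onto the simplex
print(x)                             # mass concentrates on the coordinate with the smallest c

The multiplicative update is the mirror step induced by the entropy regularizer, which is what makes this family of methods well suited to simplex-constrained problems.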
Stochastic Optimization Methods
Author: Kurt Marti
Publisher: Springer
ISBN: 3662462141
Category : Business & Economics
Languages : en
Pages : 389
Book Description
This book examines optimization problems that in practice involve random model parameters. It details the computation of robust optimal solutions, i.e., optimal solutions that are insensitive to random parameter variations, for which appropriate deterministic substitute problems are needed. Based on the probability distribution of the random data and using decision-theoretical concepts, optimization problems under stochastic uncertainty are converted into appropriate deterministic substitute problems. Due to the probabilities and expectations involved, the book also shows how to apply approximate solution techniques. Several deterministic and stochastic approximation methods are provided: Taylor expansion methods, regression and response surface methods (RSM), probability inequalities, multiple linearization of survival/failure domains, discretization methods, convex approximation/deterministic descent directions/efficient points, stochastic approximation and gradient procedures, and differentiation formulas for probabilities and expectations. In the third edition, this book further develops stochastic optimization methods. In particular, it now shows how to apply stochastic optimization methods to the approximate solution of important concrete problems arising in engineering, economics and operations research.
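To illustrate the idea of a deterministic substitute problem in the simplest possible terms, here is a hedged Python sketch (assuming NumPy and SciPy) that replaces an expected cost by a Monte Carlo sample average and minimizes the resulting deterministic function; the quadratic toy cost is invented for the example and is not one of the book's models.

# Sample-average substitute for an expected cost E[f(x, xi)]. Assumes NumPy and SciPy.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
xi = rng.normal(loc=2.0, scale=0.5, size=10_000)   # samples of the random parameter

def expected_cost(x):
    # sample-average substitute for E[(x - xi)^2 + 0.1*x^2]
    return np.mean((x[0] - xi) ** 2) + 0.1 * x[0] ** 2

res = minimize(expected_cost, x0=np.array([0.0]))  # deterministic solver on the substitute
print(res.x)                                       # close to E[xi]/1.1 for this toy cost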
Convex Stochastic Optimization
Author: Teemu Pennanen
Publisher: Springer Nature
ISBN: 3031764323
Category :
Languages : en
Pages : 420
Book Description
Convex Optimization
Author: Stephen P. Boyd
Publisher: Cambridge University Press
ISBN: 9780521833783
Category : Business & Economics
Languages : en
Pages : 744
Book Description
Convex optimization problems arise frequently in many different fields. This book provides a comprehensive introduction to the subject, and shows in detail how such problems can be solved numerically with great efficiency. The book begins with the basic elements of convex sets and functions, and then describes various classes of convex optimization problems. Duality and approximation techniques are then covered, as are statistical estimation techniques. Various geometrical problems are then presented, and there is detailed discussion of unconstrained and constrained minimization problems, and interior-point methods. The focus of the book is on recognizing convex optimization problems and then finding the most appropriate technique for solving them. It contains many worked examples and homework exercises and will appeal to students, researchers and practitioners in fields such as engineering, computer science, mathematics, statistics, finance and economics.
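As a small, non-authoritative illustration of posing and solving a convex problem numerically, the sketch below uses the cvxpy modeling library (a separate tool, not part of the book) for a least-squares problem over the probability simplex; the data is synthetic.

# A convex problem posed declaratively and solved numerically. Assumes cvxpy and NumPy.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(30, 10)), rng.normal(size=30)

x = cp.Variable(10)
objective = cp.Minimize(cp.sum_squares(A @ x - b))   # convex quadratic objective
constraints = [x >= 0, cp.sum(x) == 1]               # probability-simplex constraint
problem = cp.Problem(objective, constraints)
problem.solve()                                      # a convex solver runs under the hood
print(problem.value, x.value)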
Optimization for Machine Learning
Author: Suvrit Sra
Publisher: MIT Press
ISBN: 026201646X
Category : Computers
Languages : en
Pages : 509
Book Description
An up-to-date account of the interplay between optimization and machine learning, accessible to students and researchers in both communities. The interplay between optimization and machine learning is one of the most important developments in modern computational science. Optimization formulations and methods are proving to be vital in designing algorithms to extract essential knowledge from huge volumes of data. Machine learning, however, is not simply a consumer of optimization technology but a rapidly evolving field that is itself generating new optimization ideas. This book captures the state of the art of the interaction between optimization and machine learning in a way that is accessible to researchers in both fields. Optimization approaches have enjoyed prominence in machine learning because of their wide applicability and attractive theoretical properties. The increasing complexity, size, and variety of today's machine learning models call for the reassessment of existing assumptions. This book starts the process of reassessment. It describes the resurgence in novel contexts of established frameworks such as first-order methods, stochastic approximations, convex relaxations, interior-point methods, and proximal methods. It also devotes attention to newer themes such as regularized optimization, robust optimization, gradient and subgradient methods, splitting techniques, and second-order methods. Many of these techniques draw inspiration from other fields, including operations research, theoretical computer science, and subfields of optimization. The book will enrich the ongoing cross-fertilization between the machine learning community and these other fields, and within the broader optimization community.
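One of the recurring tools mentioned above, proximal methods, can be previewed with the following hedged Python/NumPy sketch of an ISTA-style proximal gradient iteration for l1-regularized least squares; the data and constants are synthetic and the code is not drawn from the book.

# Proximal gradient (ISTA-style) for min 0.5*||Ax - b||^2 + lam*||x||_1. Assumes NumPy.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 50))
b = rng.normal(size=100)
lam = 0.5
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part's gradient

def soft_threshold(v, tau):
    # proximal operator of tau*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(50)
for _ in range(500):
    grad = A.T @ (A @ x - b)           # gradient of the smooth term
    x = soft_threshold(x - grad / L, lam / L)
print(np.count_nonzero(x))             # the l1 term yields a sparse solution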
Lectures on Stochastic Programming: Modeling and Theory, Third Edition
Author: Alexander Shapiro
Publisher: SIAM
ISBN: 1611976596
Category : Mathematics
Languages : en
Pages : 542
Book Description
An accessible and rigorous presentation of contemporary models and ideas of stochastic programming, this book focuses on optimization problems involving uncertain parameters for which stochastic models are available. Since these problems occur in vast, diverse areas of science and engineering, there is much interest in rigorous ways of formulating, analyzing, and solving them. This substantially revised edition presents a modern theory of stochastic programming, including expanded and detailed coverage of sample complexity, risk measures, and distributionally robust optimization. It adds two new chapters that provide readers with a solid understanding of emerging topics; updates Chapter 6 to now include a detailed discussion of the interchangeability principle for risk measures; and presents new material on formulation and numerical approaches to solving periodical multistage stochastic programs. Lectures on Stochastic Programming: Modeling and Theory, Third Edition is written for researchers and graduate students working on theory and applications of optimization, with the hope that it will encourage them to apply stochastic programming models and undertake further studies of this fascinating and rapidly developing area.
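For readers new to the subject, the sample average approximation idea developed in the lectures can be previewed with this toy Python/NumPy newsvendor sketch, in which the expected profit is replaced by an average over sampled demand scenarios; the prices and demand distribution are invented for the illustration.

# Scenario-based (sample average) approximation of a newsvendor problem. Assumes NumPy.
import numpy as np

rng = np.random.default_rng(0)
demand = rng.poisson(lam=50, size=20_000)       # sampled demand scenarios
cost, price = 1.0, 3.0                          # unit purchase cost and selling price

def neg_expected_profit(q):
    sales = np.minimum(q, demand)
    return -(price * sales - cost * q).mean()   # sample average of -profit

candidates = np.arange(0, 101)
q_star = candidates[np.argmin([neg_expected_profit(q) for q in candidates])]
print(q_star)   # near the (price - cost)/price = 2/3 quantile of demand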
Stochastic Optimization Models in Finance
Author: William T. Ziemba
Publisher: World Scientific
ISBN: 981256800X
Category : Business & Economics
Languages : en
Pages : 756
Book Description
A reprint of one of the classic volumes on portfolio theory and investment, this book has been used by leading professors at universities such as Stanford, Berkeley, and Carnegie-Mellon. It contains five parts, each with a review of the literature and about 150 pages of computational and review exercises and further in-depth, challenging problems. Frequently referenced and highly usable, the material remains as fresh and relevant for a portfolio theory course as ever.
Introductory Lectures on Convex Optimization
Author: Y. Nesterov
Publisher: Springer Science & Business Media
ISBN: 144198853X
Category : Mathematics
Languages : en
Pages : 253
Book Description
It was in the middle of the 1980s, when the seminal paper by Karmarkar opened a new epoch in nonlinear optimization. The importance of this paper, containing a new polynomial-time algorithm for linear optimization problems, was not only in its complexity bound. At that time, the most surprising feature of this algorithm was that the theoretical prediction of its high efficiency was supported by excellent computational results. This unusual fact dramatically changed the style and directions of the research in nonlinear optimization. Thereafter it became more and more common that the new methods were provided with a complexity analysis, which was considered a better justification of their efficiency than computational experiments. In a new rapidly developing field, which got the name "polynomial-time interior-point methods", such a justification was obligatory. After almost fifteen years of intensive research, the main results of this development started to appear in monographs [12, 14, 16, 17, 18, 19]. Approximately at that time the author was asked to prepare a new course on nonlinear optimization for graduate students. The idea was to create a course which would reflect the new developments in the field. Actually, this was a major challenge. At the time only the theory of interior-point methods for linear optimization was polished enough to be explained to students. The general theory of self-concordant functions had appeared in print only once in the form of research monograph [12].