The Projected Subgradient Algorithm in Convex Optimization
Author: Alexander J. Zaslavski
Publisher: Springer Nature
ISBN: 3030603008
Category : Mathematics
Languages : en
Pages : 148
Book Description
This focused monograph studies subgradient algorithms for constrained minimization problems in a Hilbert space. It will interest experts in applications of optimization to engineering and economics. The goal is to obtain a good approximate solution of the problem in the presence of computational errors. The discussion takes into account that each iteration of an algorithm consists of several steps and that the computational errors of different steps are, in general, different. The book is especially useful because it contains solutions to a number of difficult and interesting problems in numerical optimization. The subgradient projection algorithm is one of the most important tools in optimization theory and its applications. An optimization problem is described by an objective function and a set of feasible points. For this algorithm each iteration consists of two steps: the first requires calculating a subgradient of the objective function; the second requires calculating a projection onto the feasible set. The computational errors of these two steps differ. The book shows that the algorithm generates a good approximate solution if all computational errors are bounded from above by a small positive constant. Moreover, if the computational errors for the two steps are known, one can determine what approximate solution can be obtained and how many iterations are needed for it. In addition to their mathematical interest, the generalizations considered in the book have significant practical meaning.
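The two-step iteration described above is easy to simulate. Below is a minimal sketch in Python (my illustration, not code from the book): the names `subgrad`, `project`, `delta_sub`, and `delta_proj` are assumptions, and the example minimizes ||x - c||_1 over the Euclidean unit ball, where the exact projection is known in closed form.

```python
import numpy as np

def _bounded_noise(rng, dim, bound):
    """A random vector of norm at most `bound`, modeling a computational error."""
    v = rng.standard_normal(dim)
    return bound * rng.uniform() * v / np.linalg.norm(v)

def projected_subgradient(subgrad, project, x0, steps, delta_sub, delta_proj, seed=0):
    """Projected subgradient method with bounded errors in both steps.

    Step 1: evaluate a subgradient of the objective (error <= delta_sub).
    Step 2: project the updated point onto the feasible set (error <= delta_proj).
    Returns the list of iterates.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for eta in steps:
        g = subgrad(x) + _bounded_noise(rng, x.size, delta_sub)             # inexact subgradient
        x = project(x - eta * g) + _bounded_noise(rng, x.size, delta_proj)  # inexact projection
        iterates.append(x.copy())
    return iterates

# Example: minimize f(x) = ||x - c||_1 over the unit ball {x : ||x|| <= 1}.
c = np.array([2.0, -1.0, 0.5])
subgrad = lambda x: np.sign(x - c)                    # a subgradient of the l1 objective
project = lambda x: x / max(1.0, np.linalg.norm(x))   # exact projection onto the unit ball

steps = [1.0 / np.sqrt(t + 1) for t in range(200)]
xs = projected_subgradient(subgrad, project, np.zeros(3), steps, delta_sub=1e-3, delta_proj=1e-3)
best = min(xs, key=lambda x: np.abs(x - c).sum())
print("best objective value:", np.abs(best - c).sum())
```

With diminishing step sizes and error bounds of order 1e-3, the best iterate lands in a small neighborhood of the true minimizer, consistent with the book's theme that small bounded errors still yield a good approximate solution.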
Convex Optimization with Computational Errors
Author: Alexander J. Zaslavski
Publisher: Springer Nature
ISBN: 3030378225
Category : Mathematics
Languages : en
Pages : 364
Book Description
The book is devoted to the study of approximate solutions of optimization problems in the presence of computational errors. It contains a number of results on the convergence behavior of algorithms in a Hilbert space, which are known to be important tools for solving optimization problems. The research presented here continues and further develops the author's book Numerical Optimization with Computational Errors (Springer, 2016). Both books study algorithms while taking into account the computational errors that are always present in practice. The main goal is, for a known computational error, to determine what approximate solution can be obtained and how many iterates one needs for it. The main difference between this book and the 2016 book is that here the discussion takes into account that each iteration of an algorithm consists of several steps and that the computational errors of different steps are, in general, different. This fact, which was not taken into account in the previous book, is important in practice. For example, the subgradient projection algorithm consists of two steps: the first calculates a subgradient of the objective function, while the second calculates a projection onto the feasible set. Each of these two steps carries a computational error, and the two errors are different in general. It may happen that the feasible set is simple and the objective function is complicated; as a result, the error made when calculating the projection is essentially smaller than the error made when calculating the subgradient. Clearly, the opposite case is possible too. Another feature of this book is the study of a number of important algorithms that appeared recently in the literature and are not discussed in the previous book. The monograph contains 12 chapters. Chapter 1 is an introduction. In Chapter 2 we study the subgradient projection algorithm for minimization of convex nonsmooth functions; we generalize the results of [NOCE] and establish results that have no prototype in [NOCE]. In Chapter 3 we analyze the mirror descent algorithm for minimization of convex nonsmooth functions in the presence of computational errors. For this algorithm each iteration consists of two steps: the first calculates a subgradient of the objective function, while the second solves an auxiliary minimization problem on the set of feasible points; each step carries a computational error. Again we generalize the results of [NOCE] and establish results that have no prototype in [NOCE]. In Chapter 4 we analyze the projected gradient algorithm with a smooth objective function in the presence of computational errors. In Chapter 5 we consider an algorithm that extends the projected gradient algorithm used for solving linear inverse problems arising in signal/image processing. In Chapter 6 we study the continuous subgradient method and the continuous subgradient projection algorithm for minimization of convex nonsmooth functions and for computing the saddle points of convex-concave functions in the presence of computational errors; none of the results of this chapter has a prototype in [NOCE]. In Chapters 7-12 we analyze several algorithms, not considered in [NOCE], in the presence of computational errors.
Again, each step of an iteration carries a computational error, and we take into account that these errors are, in general, different. An optimization problem with a composite objective function is studied in Chapter 7. A two-player zero-sum game is considered in Chapter 8. A predicted-decrease-approximation-based method is used in Chapter 9 for constrained convex optimization. Chapter 10 is devoted to minimization of quasiconvex functions. Minimization of sharp weakly convex functions is discussed in Chapter 11. Chapter 12 is devoted to a generalized projected subgradient method for minimization of a convex function over a set that is not necessarily convex. The book is of interest to researchers and engineers working in optimization. It can also be useful in preparatory courses for graduate students. The main feature of the book that appeals specifically to this audience is its study of the influence of computational errors on several important optimization algorithms. The book is also of interest to experts in applications of optimization to engineering and economics.
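The mirror descent iteration described for Chapter 3 also has two error-prone steps. Here is a minimal sketch (mine, not the book's) of entropic mirror descent on the probability simplex; with the entropy mirror map the auxiliary minimization has the closed-form multiplicative-weights solution, and both the subgradient and the auxiliary step are perturbed by bounded errors. The names `delta_sub` and `delta_aux` are illustrative assumptions.

```python
import numpy as np

def entropic_mirror_descent(subgrad, x0, steps, delta_sub, delta_aux, seed=0):
    """Mirror descent on the simplex with bounded errors in both steps.

    Step 1: inexact subgradient of the objective.
    Step 2: inexact solve of the auxiliary problem
            min_y <g, y> + (1/eta) * KL(y || x) over the simplex,
            whose exact solution is y_i proportional to x_i * exp(-eta * g_i).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for eta in steps:
        g = subgrad(x) + rng.uniform(-delta_sub, delta_sub, x.size)         # error in step 1
        y = x * np.exp(-eta * g)                                            # exact auxiliary solution
        y = np.maximum(y + rng.uniform(-delta_aux, delta_aux, x.size), 1e-12)  # error in step 2
        x = y / y.sum()                                                     # renormalize onto the simplex
    return x

# Example: minimize the piecewise-linear f(x) = max_i (Ax)_i over the simplex.
A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 3.0], [2.0, 0.0, 1.0]])
def subgrad(x):
    i = np.argmax(A @ x)   # a subgradient of max_i (Ax)_i is the active row of A
    return A[i]

x = entropic_mirror_descent(subgrad, np.ones(3) / 3, [0.1] * 500, 1e-3, 1e-3)
print("approximate minimizer:", x, "value:", np.max(A @ x))
```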
Optimization in Banach Spaces
Author: Alexander J. Zaslavski
Publisher: Springer Nature
ISBN: 3031126440
Category : Mathematics
Languages : en
Pages : 132
Book Description
The book is devoted to the study of constrained minimization problems on closed convex sets in Banach spaces with a Fréchet differentiable objective function. Such problems are well studied in finite-dimensional spaces and in infinite-dimensional Hilbert spaces. When the space is a Hilbert space there are many algorithms for solving optimization problems, including the gradient projection algorithm, one of the most important tools in optimization theory, nonlinear analysis, and their applications. An optimization problem is described by an objective function and a set of feasible points. For the gradient projection algorithm each iteration consists of two steps: the first calculates a gradient of the objective function, while the second calculates a projection onto the feasible set. Each of these two steps carries a computational error. In our recent research we show that the gradient projection algorithm generates a good approximate solution if all computational errors are bounded from above by a small positive constant. It should be mentioned that the properties of a Hilbert space play an important role here; when we consider an optimization problem in a general Banach space the situation becomes more difficult and less understood. On the other hand, such problems arise in approximation theory. The book is of interest to mathematicians working in optimization and can also be useful in preparatory courses for graduate students. The main feature of the book that appeals specifically to this audience is its study of algorithms for convex and nonconvex minimization problems in a general Banach space; it is also of interest to experts in applications of optimization to approximation theory. The goal of the book is to obtain a good approximate solution of a constrained optimization problem in a general Banach space in the presence of computational errors. It is shown that the algorithm generates a good approximate solution if the sequence of computational errors is bounded from above by a small constant. The book consists of four chapters. In the first we discuss several algorithms studied in the book and prove a convergence result for an unconstrained problem, a prototype of our results for the constrained problem. In Chapter 2 we analyze convex optimization problems. Nonconvex optimization problems are studied in Chapter 3. In Chapter 4 we study continuous algorithms for minimization problems in the presence of computational errors.
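In coordinates, the gradient projection iteration described here is x_{k+1} = P_C(x_k - eta * grad f(x_k)), with each of the two evaluations carrying its own error. The snippet below is a hypothetical finite-dimensional illustration, not the book's Banach-space analysis: it runs the method on a smooth quadratic over a box, where the exact projection is a componentwise clip, and injects bounded errors into both steps.

```python
import numpy as np

def gradient_projection(grad, project, x0, eta, n_iters, delta_grad, delta_proj, seed=0):
    """Gradient projection with bounded errors in the gradient and projection steps."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        g = grad(x) + rng.uniform(-delta_grad, delta_grad, x.size)               # inexact gradient
        x = project(x - eta * g) + rng.uniform(-delta_proj, delta_proj, x.size)  # inexact projection
    return x

# Example: minimize f(x) = 0.5 * ||x - b||^2 over the box [0, 1]^3.
b = np.array([1.5, -0.3, 0.4])
grad = lambda x: x - b                      # gradient of the quadratic objective
project = lambda x: np.clip(x, 0.0, 1.0)    # exact projection onto the box

x = gradient_projection(grad, project, np.zeros(3), eta=0.5, n_iters=100,
                        delta_grad=1e-4, delta_proj=1e-4)
print("distance to the true solution:", np.linalg.norm(x - np.clip(b, 0.0, 1.0)))
```

Note that the inexact projection can leave the iterate slightly infeasible; that is part of the error model, in which iterates are only approximately feasible.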
Inherently Parallel Algorithms in Feasibility and Optimization and their Applications
Author: D. Butnariu
Publisher: Elsevier
ISBN: 0080508766
Category : Mathematics
Languages : en
Pages : 515
Book Description
The Haifa 2000 Workshop on "Inherently Parallel Algorithms for Feasibility and Optimization and their Applications" brought together top scientists in this area. The objective of the Workshop was to discuss, analyze, and compare the latest developments in this fast-growing field of applied mathematics and to identify research topics of special interest for industrial applications and for further theoretical study. Inherently parallel algorithms, that is, computational methods which are parallel by their mathematical nature, have been studied in various contexts for more than fifty years. However, it was only during the last decade that they mostly proved their practical usefulness, as new generations of computers made it possible to implement them for solving complex feasibility and optimization problems involving huge amounts of data via parallel processing. This led to an accumulation of computational experience and theoretical information and opened new and challenging questions concerning the behavior of inherently parallel algorithms for feasibility and optimization, their convergence in new environments and in circumstances in which they were not considered before, and their stability and reliability. Several research groups all over the world focused on these questions, and it was the general feeling among the scientists involved that the time had come to survey the latest progress and convey a perspective for further development and concerted scientific investigation. Thus, the editors of this volume, with the support of the Israeli Academy for Sciences and Humanities, took the initiative of organizing a Workshop intended to bring together the leading scientists in the field. The current volume is the Proceedings of the Workshop, representing the discussions, debates, and communications that took place. Having all this information collected in a single book provides mathematicians and engineers interested in the theoretical and practical aspects of inherently parallel algorithms for feasibility and optimization with a tool for determining when, where, and which algorithms in this class are fit for solving specific problems, how reliable they are, how they behave, and how efficient they were in previous applications. Such a tool will allow software creators to choose better ways of implementing these methods by learning from existing experience.
Advances in Convex Analysis and Global Optimization
Author: Nicolas Hadjisavvas
Publisher: Springer Science & Business Media
ISBN: 146130279X
Category : Mathematics
Languages : en
Pages : 601
Book Description
There has been much recent progress in global optimization algorithms for nonconvex continuous and discrete problems, from both a theoretical and a practical perspective. Convex analysis plays a fundamental role in the analysis and development of global optimization algorithms. This is due essentially to the fact that virtually all nonconvex optimization problems can be described using differences of convex functions and differences of convex sets. A conference on Convex Analysis and Global Optimization was held during June 5-9, 2000 at Pythagorion, Samos, Greece. The conference honored the memory of C. Caratheodory (1873-1950) and was endorsed by the Mathematical Programming Society (MPS) and by the Society for Industrial and Applied Mathematics (SIAM) Activity Group in Optimization. The conference was sponsored by the European Union (through the EPEAEK program), the Department of Mathematics of the Aegean University, the Center for Applied Optimization of the University of Florida, the General Secretariat of Research and Technology of Greece, the Ministry of Education of Greece, and several local Greek government agencies and companies. This volume contains a selective collection of refereed papers based on invited and contributed talks presented at the conference. The two themes of convexity and global optimization pervade this book. The conference provided a forum for researchers working on different aspects of convexity and global optimization to present their recent discoveries and to interact with people working on complementary aspects of mathematical programming.
Optimization on Solution Sets of Common Fixed Point Problems
Author: Alexander J. Zaslavski
Publisher: Springer Nature
ISBN: 3030788490
Category : Mathematics
Languages : en
Pages : 434
Book Description
This book is devoted to a detailed study of the subgradient projection method and its variants for convex optimization problems over the solution sets of common fixed point problems and convex feasibility problems. These optimization problems are investigated to determine the good solutions obtained by different versions of the subgradient projection algorithm in the presence of sufficiently small computational errors. Selected algorithms are highlighted, including the Cimmino-type subgradient, the iterative subgradient, and the dynamic string-averaging subgradient. All results presented are new. Optimization problems whose underlying constraints are the solution sets of other problems occur frequently in applied mathematics. The reader should not miss the section in Chapter 1 that considers examples arising in real-world applications. The problems discussed also have an important impact on optimization theory. The book will be useful for researchers interested in optimization theory and its applications.
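To make the Cimmino-type idea concrete, here is a minimal sketch (my interpretation, not the book's algorithm) for the underlying convex feasibility step: each constraint set {y : f_i(y) <= 0} is approached by a Polyak subgradient projection, the results are averaged in Cimmino fashion, and a bounded error is added to each sweep. The names `constraints` and `delta` are illustrative assumptions.

```python
import numpy as np

def subgradient_projection_step(x, f, g):
    """Polyak subgradient projection of x toward the level set {y : f(y) <= 0}."""
    fx = f(x)
    if fx <= 0.0:
        return x                        # already in the set: nothing to do
    gx = g(x)                           # a subgradient of f at x
    return x - (fx / (gx @ gx)) * gx    # step to the zero level of the linearization

def cimmino_feasibility(constraints, x0, n_iters, delta=0.0, seed=0):
    """Cimmino-type sweep: average the subgradient projections onto all sets,
    with a bounded computational error added after each sweep."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = np.mean([subgradient_projection_step(x, f, g) for f, g in constraints], axis=0)
        x = x + rng.uniform(-delta, delta, x.size)   # simulated computational error
    return x

# Example: find a point in the intersection of two half-spaces and a ball.
constraints = [
    (lambda x: x[0] + x[1] - 1.0, lambda x: np.array([1.0, 1.0])),   # x0 + x1 <= 1
    (lambda x: -x[0],             lambda x: np.array([-1.0, 0.0])),  # x0 >= 0
    (lambda x: x @ x - 4.0,       lambda x: 2.0 * x),                # ||x|| <= 2
]
x = cimmino_feasibility(constraints, np.array([3.0, 3.0]), n_iters=200, delta=1e-4)
print("near-feasible point:", x, "violations:", [f(x) for f, _ in constraints])
```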
Nonsmooth Optimization
Author: Claude Lemarechal
Publisher: Elsevier
ISBN: 1483188760
Category : Technology & Engineering
Languages : en
Pages : 195
Book Description
Nonsmooth Optimization contains the proceedings of a workshop on nonsmooth optimization (NSO) held from March 28 to April 8, 1977 in Austria under the auspices of the International Institute for Applied Systems Analysis. The papers explore the techniques and theory of NSO and cover topics ranging from systems of inequalities to smooth approximation of nonsmooth functions, as well as quadratic programming and line searches. Comprising nine chapters, the volume begins with a survey of Soviet research on subgradient optimization carried out since 1962, followed by a discussion of rates of convergence in subgradient optimization. The reader is then introduced to the method of subgradient optimization in an abstract setting and the minimal hypotheses required to ensure convergence; NSO and nonlinear programming; and bundle methods in NSO. A feasible descent algorithm for linearly constrained least squares problems is described. The book also considers sufficient minimization of piecewise-linear univariate functions before concluding with a description of the method of parametric decomposition in mathematical programming. This monograph will be of interest to mathematicians and mathematics students.
Numerical Optimization with Computational Errors
Author: Alexander J. Zaslavski
Publisher: Springer
ISBN: 3319309218
Category : Mathematics
Languages : en
Pages : 308
Book Description
This book studies approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space, with the algorithms examined taking computational errors into account. The author shows that these algorithms generate a good approximate solution if the computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining what approximate solution can be obtained. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. The monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods, and Newton's method.
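Among the methods listed, Weiszfeld's method for the Fermat-Weber (geometric median) problem has a particularly compact iteration: each new point is the inverse-distance-weighted average of the anchor points. A minimal sketch follows (my illustration, with a small perturbation standing in for computational error; the parameter `delta` is an assumption):

```python
import numpy as np

def weiszfeld(anchors, x0, n_iters=100, delta=0.0, eps=1e-12, seed=0):
    """Weiszfeld's method for min_x sum_i ||x - a_i|| (the geometric median).

    Each iterate is the inverse-distance-weighted average of the anchors;
    `delta` bounds a simulated computational error per iteration."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        d = np.linalg.norm(anchors - x, axis=1)
        if np.any(d < eps):
            # the classical update is undefined at an anchor; stop there
            return anchors[np.argmin(d)]
        w = 1.0 / d
        x = (w @ anchors) / w.sum() + rng.uniform(-delta, delta, x.size)
    return x

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
x = weiszfeld(anchors, anchors.mean(axis=0), n_iters=200, delta=1e-5)
print("approximate geometric median:", x)
```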
Fixed Point Theory and Graph Theory
Author: Monther Alfuraidan
Publisher: Academic Press
ISBN: 0128043652
Category : Mathematics
Languages : en
Pages : 444
Book Description
Fixed Point Theory and Graph Theory provides an intersection between the theory of fixed point theorems, which gives the conditions under which maps (single-valued or multivalued) have solutions, and graph theory, which uses mathematical structures to describe the relationship between ordered pairs of objects in terms of vertices and directed edges. This edited reference work is perhaps the first to provide a link between the two theories, describing not only their foundational aspects but also the most recent advances and the fascinating intersection of the domains. The authors provide solution methods for fixed points in different settings, with two chapters devoted to solution methods for critically important nonlinear problems in engineering, namely variational inequality, fixed point, split feasibility, and hierarchical variational inequality problems. The last two chapters are devoted to integrating fixed point theory in spaces with a graph and to the use of retractions in fixed point theory for ordered sets. - Introduces both metric fixed point theory and graph theory in terms of their disparate foundations and common application environments - Provides a unique integration of otherwise disparate domains that aids both students seeking to understand either area and researchers interested in establishing an integrated research approach - Emphasizes solution methods for fixed points in nonlinear problems such as variational inequality, split feasibility, and hierarchical variational inequality problems, which are particularly appropriate for engineering and core science applications
Set-Valued Mappings and Enlargements of Monotone Operators
Author: Regina S. Burachik
Publisher: Springer Science & Business Media
ISBN: 0387697578
Category : Mathematics
Languages : en
Pages : 305
Book Description
This is the first comprehensive book-length treatment of the emerging subdiscipline of set-valued mappings and enlargements of maximal monotone operators. It features several important new results and applications in the field. Throughout the text, examples help readers bridge from theory to application, and numerous exercises enable readers to apply and build their own skills and knowledge.