Optimization and Games for Controllable Markov Chains
Author: Julio B. Clempner
Publisher: Springer Nature
ISBN: 3031435753
Category : Technology & Engineering
Languages : en
Pages : 340
Book Description
This book considers a class of ergodic finite controllable Markov chains. The main idea behind the method described in this book is to recast the original discrete optimization problems (or game models) in the space of randomized formulations, where the variables stand for the distributions (mixed strategies or preferences) over the original discrete (pure) strategies. The following assumptions are made: a finite state space, a finite action space, continuity of the probabilities and rewards associated with the actions, and an accessibility requirement. These hypotheses guarantee the existence of an optimal policy. The best course of action is always stationary: it is either simple (i.e., nonrandomized stationary) or composed of two nonrandomized policies, which is equivalent to randomly selecting one of the two simple policies at each epoch by tossing a biased coin. As a bonus, the optimization procedure only has to solve the time-average dynamic programming equation repeatedly, making it computationally feasible to choose the optimal course of action under the global constraint. In the ergodic case, the state distributions generated by the corresponding transition equations converge exponentially fast to their stationary (final) values. This makes it possible to employ all widely used optimization methods (such as gradient-like procedures, the extra-proximal method, Lagrange multipliers, and Tikhonov regularization), including the related numerical techniques.
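The exponential convergence to stationary values mentioned above can be seen numerically. The following minimal sketch (the 3-state transition matrix is hypothetical, chosen only for illustration) iterates the transition equation and cross-checks the limit against the stationary distribution computed by an eigenvalue decomposition:

```python
import numpy as np

# Transition matrix of a small ergodic Markov chain (rows sum to 1).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])

# Iterate the transition equation d_{t+1} = d_t P from an arbitrary
# initial distribution; for an ergodic chain this converges geometrically
# (i.e., exponentially fast) to the stationary distribution.
d = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    d = d @ P

# Cross-check: the stationary distribution is the normalized left
# eigenvector of P for the Perron eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

assert np.allclose(d, pi, atol=1e-8)
```

After a few dozen iterations the iterate `d` is already indistinguishable from the stationary distribution, which is what makes the gradient-like and regularization methods cited in the blurb practical for these chains.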
The book tackles a range of problems and theoretical Markov models, including controllable and ergodic Markov chains, multi-objective Pareto-front solutions, partially observable Markov chains, continuous-time Markov chains, Nash and Stackelberg equilibria, Lyapunov-like functions in Markov chains, the best-reply strategy, Bayesian incentive-compatible mechanisms, Bayesian partially observable Markov games, bargaining solutions for the Nash and Kalai-Smorodinsky formulations, the multi-traffic signal-control synchronization problem, Rubinstein's non-cooperative bargaining solutions, and the transfer pricing problem as bargaining.
Controlled Markov Processes and Viscosity Solutions
Author: Wendell H. Fleming
Publisher: Springer Science & Business Media
ISBN: 0387310711
Category : Mathematics
Languages : en
Pages : 436
Book Description
This book is an introduction to optimal stochastic control for continuous-time Markov processes and the theory of viscosity solutions. It covers dynamic programming for deterministic optimal control problems, as well as the corresponding theory of viscosity solutions. New chapters in this second edition introduce the role of stochastic optimal control in portfolio optimization and in pricing derivatives in incomplete markets, as well as two-controller, zero-sum differential games.
Optimization, Control, and Applications of Stochastic Systems
Author: Daniel Hernández-Hernández
Publisher: Springer Science & Business Media
ISBN: 0817683372
Category : Science
Languages : en
Pages : 331
Book Description
This volume provides a general overview of discrete- and continuous-time Markov control processes and stochastic games, along with a look at the range of applications of stochastic control and some of its recent theoretical developments. These topics include various aspects of dynamic programming, approximation algorithms, and infinite-dimensional linear programming. In all, the work comprises 18 carefully selected papers written by experts in their respective fields. Optimization, Control, and Applications of Stochastic Systems will be a valuable resource for all practitioners, researchers, and professionals in applied mathematics and operations research who work in the areas of stochastic control, mathematical finance, queueing theory, and inventory systems. It may also serve as a supplemental text for graduate courses in optimal control and dynamic games.
Stochastic Teams, Games, and Control under Information Constraints
Author: Serdar Yüksel
Publisher: Springer Nature
ISBN: 3031540719
Category :
Languages : en
Pages : 935
Book Description
Advances in Dynamic Games and Applications
Author: Jerzy A. Filar
Publisher: Springer Science & Business Media
ISBN: 1461213363
Category : Mathematics
Languages : en
Pages : 459
Book Description
Modern game theory has evolved enormously since its inception in the 1920s in the works of Borel and von Neumann, and since the publication in the 1940s of the seminal treatise "Theory of Games and Economic Behavior" by von Neumann and Morgenstern. The branch of game theory known as dynamic games is, to a significant extent, descended from the pioneering work on differential games done by Isaacs in the 1950s and 1960s. Since those early decades, game theory has branched out in many directions, spanning such diverse disciplines as mathematics, economics, electrical and electronics engineering, operations research, computer science, theoretical ecology, environmental science, and even political science. The papers in this volume reflect both the maturity and the vitality of modern-day game theory in general, and of dynamic games in particular. The maturity can be seen from the sophistication of the theorems, proofs, methods, and numerical algorithms contained in these articles. The vitality is manifested by the range of new ideas and new applications, the number of young researchers among the authors, and the expanding worldwide coverage of research centers and institutes where the contributions originated.
Dynamic Games in Economics
Author: Josef Haunschmied
Publisher: Springer
ISBN: 3642542484
Category : Mathematics
Languages : en
Pages : 321
Book Description
Dynamic game theory serves the purpose of including strategic interaction in decision making and is therefore often applied to economic problems. This book presents the state-of-the-art and directions for future research in dynamic game theory related to economics. It was initiated by contributors to the 12th Viennese Workshop on Optimal Control, Dynamic Games and Nonlinear Dynamics and combines a selection of papers from the workshop with invited papers of high quality.
Continuous-Time Markov Decision Processes
Author: Alexey Piunovskiy
Publisher: Springer Nature
ISBN: 3030549879
Category : Mathematics
Languages : en
Pages : 605
Book Description
This book offers a systematic and rigorous treatment of continuous-time Markov decision processes, covering both theory and possible applications to queueing systems, epidemiology, finance, and other fields. Unlike most books on the subject, much attention is paid to problems with functional constraints and the realizability of strategies. Three major methods of investigation are presented, based on dynamic programming, linear programming, and reduction to discrete-time problems. Although the main focus is on models with total (discounted or undiscounted) cost criteria, models with average cost criteria and with impulsive controls are also discussed in depth. The book is self-contained. A separate chapter is devoted to Markov pure jump processes, and the appendices collect the requisite background on real analysis and applied probability. All the statements in the main text are proved in detail. Researchers and graduate students in applied probability, operations research, statistics, and engineering will find this monograph interesting, useful, and valuable.
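The reduction to discrete-time problems mentioned in the blurb is usually done by uniformization. The sketch below, for a hypothetical 2-state, 2-action continuous-time MDP (generators `Q`, cost rates `c`, and discount rate `alpha` are made up for illustration), builds the equivalent discrete-time problem and solves it by value iteration:

```python
import numpy as np

# Hypothetical CTMDP data: Q[a] is the generator under action a
# (rows sum to 0), c[a] the cost-rate vector, alpha the discount rate.
Q = {0: np.array([[-1.0, 1.0], [2.0, -2.0]]),
     1: np.array([[-3.0, 3.0], [0.5, -0.5]])}
c = {0: np.array([1.0, 4.0]), 1: np.array([2.0, 1.0])}
alpha = 0.1

# Uniformization constant: at least the largest total exit rate.
Lam = max(-Q[a][i, i] for a in Q for i in range(2))

# Equivalent discrete-time data: stochastic matrices P[a] = I + Q[a]/Lam
# and discount factor Lam / (Lam + alpha).
P = {a: np.eye(2) + Q[a] / Lam for a in Q}
beta = Lam / (Lam + alpha)

# Value iteration on the uniformized discrete-time problem:
# V(i) = min_a [ c(i,a)/(Lam+alpha) + beta * sum_j P[a](i,j) V(j) ].
V = np.zeros(2)
for _ in range(2000):
    V = np.min([c[a] / (Lam + alpha) + beta * P[a] @ V for a in Q], axis=0)
```

Since `beta < 1`, the iteration is a contraction and converges to the discounted value function of the original continuous-time problem.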
Constrained Markov Decision Processes
Author: Eitan Altman
Publisher: Routledge
ISBN: 1351458248
Category : Mathematics
Languages : en
Pages : 256
Book Description
This book provides a unified approach to the study of constrained Markov decision processes with a finite state space and unbounded costs. Unlike the single-objective case considered in many other books, the author considers a single controller with several objectives, such as minimizing delays and loss probabilities while maximizing throughput. It is desirable to design a controller that minimizes one cost objective, subject to inequality constraints on other cost objectives. This framework describes dynamic decision problems arising frequently in many engineering fields. A thorough overview of these applications is presented in the introduction. The book is then divided into three sections that build upon each other.
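A standard way to solve such constrained problems (and the one most associated with this line of work) is a linear program over occupation measures. The minimal sketch below uses a hypothetical discounted 2-state, 2-action model, made up purely for illustration, to minimize one cost subject to an inequality constraint on another:

```python
import numpy as np
from scipy.optimize import linprog

gamma = 0.9                              # discount factor
mu = np.array([0.5, 0.5])                # initial state distribution
# P[a][s, s']: transition probabilities under action a (hypothetical).
P = {0: np.array([[0.9, 0.1], [0.4, 0.6]]),
     1: np.array([[0.2, 0.8], [0.7, 0.3]])}
c = np.array([[1.0, 3.0], [2.0, 0.5]])   # main cost c[s, a]
d = np.array([[0.0, 1.0], [1.0, 0.0]])   # auxiliary cost d[s, a]
D = 2.0                                  # bound on the auxiliary cost

# Variables x[s, a] flattened as [x00, x01, x10, x11]. Balance constraints:
# sum_a x(s,a) - gamma * sum_{s',a} P[a](s',s) x(s',a) = mu(s).
A_eq = np.zeros((2, 4))
for s in range(2):
    for sp in range(2):
        for a in range(2):
            j = sp * 2 + a
            if sp == s:
                A_eq[s, j] += 1.0
            A_eq[s, j] -= gamma * P[a][sp, s]

res = linprog(c=c.flatten(),
              A_ub=[d.flatten()], b_ub=[D],   # constraint on auxiliary cost
              A_eq=A_eq, b_eq=mu,
              bounds=[(0, None)] * 4)

# The optimal (possibly randomized) policy is recovered from the
# occupation measure: pi(a|s) = x[s,a] / sum_a x[s,a].
x = res.x.reshape(2, 2)
pi = x / x.sum(axis=1, keepdims=True)
```

As the blurb notes for the multi-objective setting, the optimal policy of a constrained MDP may genuinely need randomization, which the occupation-measure formulation captures for free.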
Mathematical Theory of Adaptive Control
Author: Vladimir G. Sragovich
Publisher: World Scientific
ISBN: 9812701036
Category : Mathematics
Languages : en
Pages : 490
Book Description
The theory of adaptive control is concerned with the construction of strategies such that the controlled system behaves in a desirable way, without assuming complete knowledge of the system. The models considered in this comprehensive book are of Markovian type. Both the partial-observation and partial-information cases are analyzed. While the book focuses on discrete-time models, continuous-time ones are considered in the final chapter. The book provides a novel perspective by summarizing results on adaptive control obtained in the Soviet Union, which are not well known in the West. Comments on the interplay between the Russian and Western methods are also included.
Selected Topics on Continuous-time Controlled Markov Chains and Markov Games
Author: Tomas Prieto-Rumeau
Publisher: World Scientific
ISBN: 1848168497
Category : Mathematics
Languages : en
Pages : 292
Book Description
This book concerns continuous-time controlled Markov chains, also known as continuous-time Markov decision processes. They form a class of stochastic control problems in which a single decision-maker wishes to optimize a given objective function. The book is also concerned with Markov games, where two decision-makers (or players) try to optimize their own objective functions. Both decision-making processes appear in a large number of applications in economics, operations research, engineering, and computer science, among other areas. An extensive, self-contained, up-to-date analysis of basic optimality criteria (such as discounted and average reward) and advanced optimality criteria (e.g., bias, overtaking, sensitive discount, and Blackwell optimality) is presented. Particular emphasis is placed on the application of the results herein: algorithmic and computational issues are discussed, and applications to population models and epidemic processes are shown. This book is addressed to students and researchers in the fields of stochastic control and stochastic games. Moreover, it may also be of interest to undergraduate and beginning graduate students, because the reader is not assumed to have an advanced mathematical background: a working knowledge of calculus, linear algebra, probability, and continuous-time Markov chains should suffice to understand the contents of the book.