On Some Stochastic Optimal Control Problems in Actuarial Mathematics

Author: Dongchen Li
Publisher:
ISBN:
Category : Bankruptcy
Languages : en
Pages : 150


Book Description
The event of ruin (bankruptcy) has long been a core concept of risk management interest in the literature of actuarial science. There are two major research lines. The first focuses on distributional studies of crucial ruin-related variables such as the deficit at ruin or the time to ruin. The second focuses on dynamically controlling the probability that ruin occurs by imposing controls such as investment, reinsurance, or dividend payouts. The thesis follows the second research direction, but under a relaxed definition of ruin, because ruin is often too harsh a criterion to be implemented in practice. Relaxation of the concept of ruin through the consideration of "exotic ruin" features, including, for instance, ruin under discrete observations, the Parisian ruin setup, the two-sided exit framework, and the drawdown setup, has received considerable attention in recent years. While there is a rich literature on distributional studies of these new features in insurance surplus processes, comparatively fewer contributions have been made to dynamically controlling the corresponding risk. The thesis analytically studies stochastic control problems related to some "exotic ruin" features in the broad area of insurance and finance. In particular, in Chapter 3 we study an optimal investment problem of minimizing the probability that a significant drawdown occurs. In Chapter 4 we take this analysis one step further by proposing a general drawdown-based penalty structure, which includes, for example, the probability of drawdown considered in Chapter 3 as a special case; we then apply it in an optimal investment problem of maximizing a fund manager's expected cumulative income. Moreover, in Chapter 5 we study an optimal investment-reinsurance problem in a two-sided exit framework. All problems mentioned above are considered over a random time horizon.
Although the random time horizon is mainly determined by the nature of the problem, we point out that, under suitable assumptions, a random time horizon is analytically more tractable than its finite deterministic counterpart. For each problem considered in Chapters 3-5, we adopt the dynamic programming principle (DPP) to derive a partial differential equation (PDE), commonly referred to as a Hamilton-Jacobi-Bellman (HJB) equation in the literature, and subsequently show via a verification argument that the value function of each problem coincides with a strong solution of the associated HJB equation. The remaining task is then to solve the HJB equations explicitly. In Chapter 3 we develop a new decomposition method that splits a nonlinear second-order ordinary differential equation (ODE) into two solvable nonlinear first-order ODEs. In Chapters 4 and 5 we use the Legendre transform to build a one-to-one correspondence between each original problem and its dual problem, the latter being a linear free-boundary problem that can be solved in explicit form. It is worth mentioning that additional difficulties arise in the drawdown-related problems of Chapters 3 and 4 because the underlying problems involve the maximum process as an additional dimension; we overcome this difficulty with a dimension-reduction technique. Chapter 6 is devoted to an optimal investment-reinsurance problem of maximizing a mean-variance utility functional, a typical time-inconsistent problem in the sense that the DPP fails. The problem is formulated as a non-cooperative game, and a subgame perfect Nash equilibrium is then derived. The thesis ends with concluding remarks and future research directions in Chapter 7.
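The HJB machinery the abstract refers to can be sketched generically (an illustrative equation in standard notation, not the thesis's actual formulation): for a one-dimensional controlled diffusion $dX_t=\mu(X_t,u_t)\,dt+\sigma(X_t,u_t)\,dW_t$ and a stationary value function $V$ (no time dependence, consistent with the random-horizon setting), the DPP formally yields

```latex
\inf_{u\in U}\Big\{\, \mu(x,u)\,V'(x) \;+\; \tfrac{1}{2}\,\sigma^{2}(x,u)\,V''(x) \,\Big\} \;=\; 0,
```

with boundary conditions supplied by the exit, ruin, or drawdown criterion; a verification argument then confirms that a sufficiently smooth solution of this equation is indeed the value function.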

Deterministic and Stochastic Optimal Control and Inverse Problems

Author: Baasansuren Jadamba
Publisher: CRC Press
ISBN: 1000511723
Category : Computers
Languages : en
Pages : 394


Book Description
Inverse problems of identifying parameters and initial/boundary conditions in deterministic and stochastic partial differential equations constitute a vibrant and emerging research area that has found numerous applications. A related problem of paramount importance is the optimal control problem for stochastic differential equations. This edited volume comprises invited contributions from world-renowned researchers in the subject of control and inverse problems. There are several contributions on optimal control and inverse problems covering different aspects of the theory, numerical methods, and applications. Besides a unified presentation of the most recent and relevant developments, this volume also presents some survey articles to make the material self-contained. To maintain the highest level of scientific quality, all manuscripts have been thoroughly reviewed.

Stochastic Control in Insurance

Author: Hanspeter Schmidli
Publisher: Springer Science & Business Media
ISBN: 1848000030
Category : Business & Economics
Languages : en
Pages : 263


Book Description
Yet again, here is a Springer volume that offers readers something completely new. Until now, solved examples of the application of stochastic control to actuarial problems could only be found in journals. Not any more: this is the first book to systematically present these methods in one volume. The author starts with a short introduction to stochastic control techniques, then applies the principles to several problems. These examples show how verification theorems and existence theorems may be proved, and that the non-diffusion case is simpler than the diffusion case. Schmidli’s brilliant text also includes a number of appendices, a vital resource for those in both academic and professional settings.

Optimal Stochastic Control, Stochastic Target Problems, and Backward SDE

Author: Nizar Touzi
Publisher: Springer Science & Business Media
ISBN: 1461442850
Category : Mathematics
Languages : en
Pages : 219


Book Description
This book collects some recent developments in stochastic control theory with applications to financial mathematics. We first address standard stochastic control problems from the viewpoint of the recently developed weak dynamic programming principle. Special emphasis is put on regularity issues and, in particular, on the behavior of the value function near the boundary. We then provide a quick review of the main tools from viscosity solutions which allow one to overcome these regularity problems. We next address the class of stochastic target problems, which extends the standard stochastic control problems in a nontrivial way. Here the theory of viscosity solutions plays a crucial role in the derivation of the dynamic programming equation as the infinitesimal counterpart of the corresponding geometric dynamic programming equation. The various developments of this theory have been stimulated by applications in finance and by relevant connections with geometric flows: the second-order extension was motivated by illiquidity modeling, and the controlled-loss version was introduced following the problem of quantile hedging. The third part provides an overview of backward stochastic differential equations and their extensions to the quadratic case.

Stochastic Linear-Quadratic Optimal Control Theory: Differential Games and Mean-Field Problems

Author: Jingrui Sun
Publisher: Springer Nature
ISBN: 3030483061
Category : Mathematics
Languages : en
Pages : 138


Book Description
This book gathers the most essential results, including recent ones, on linear-quadratic optimal control problems, which represent an important aspect of stochastic control. It presents results for two-player differential games and mean-field optimal control problems in the context of finite and infinite horizon problems, and discusses a number of new and interesting issues. Further, the book identifies, for the first time, the interconnections between the existence of open-loop and closed-loop Nash equilibria, solvability of the optimality system, and solvability of the associated Riccati equation, and also explores the open-loop solvability of mean-field linear-quadratic optimal control problems. Although the content is largely self-contained, readers should have a basic grasp of linear algebra, functional analysis and stochastic ordinary differential equations. The book is mainly intended for senior undergraduate and graduate students majoring in applied mathematics who are interested in stochastic control theory. However, it will also appeal to researchers in other related areas, such as engineering, management, finance/economics and the social sciences.

Stochastic Linear-Quadratic Optimal Control Theory: Open-Loop and Closed-Loop Solutions

Author: Jingrui Sun
Publisher: Springer Nature
ISBN: 3030209229
Category : Mathematics
Languages : en
Pages : 129


Book Description
This book gathers the most essential results, including recent ones, on linear-quadratic optimal control problems, which represent an important aspect of stochastic control. It presents the results in the context of finite and infinite horizon problems, and discusses a number of new and interesting issues. Further, it precisely identifies, for the first time, the interconnections between three well-known, relevant issues – the existence of optimal controls, solvability of the optimality system, and solvability of the associated Riccati equation. Although the content is largely self-contained, readers should have a basic grasp of linear algebra, functional analysis and stochastic ordinary differential equations. The book is mainly intended for senior undergraduate and graduate students majoring in applied mathematics who are interested in stochastic control theory. However, it will also appeal to researchers in other related areas, such as engineering, management, finance/economics and the social sciences.
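To fix ideas about the Riccati equation mentioned above, here is a sketch in standard notation (my own illustration, not a quotation from the book): for state dynamics $dX_t=(AX_t+Bu_t)\,dt+(CX_t+Du_t)\,dW_t$ and cost $\mathbb{E}\big[\int_0^T (X_t^{\top}QX_t+u_t^{\top}Ru_t)\,dt + X_T^{\top}GX_T\big]$, the associated stochastic Riccati equation is commonly written as

```latex
\dot P + PA + A^{\top}P + C^{\top}PC + Q
 - \big(PB + C^{\top}PD\big)\big(R + D^{\top}PD\big)^{-1}\big(B^{\top}P + D^{\top}PC\big) = 0,
 \qquad P(T) = G,
```

and when it is solvable with $R + D^{\top}PD > 0$, a closed-loop optimal control takes the feedback form $u_t = -\big(R + D^{\top}PD\big)^{-1}\big(B^{\top}P + D^{\top}PC\big)X_t$.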

Numerical Methods for Stochastic Control Problems in Continuous Time

Author: Harold Kushner
Publisher: Springer Science & Business Media
ISBN: 146130007X
Category : Mathematics
Languages : en
Pages : 480


Book Description
Stochastic control is a very active area of research. This monograph, written by two leading authorities in the field, has been updated to reflect the latest developments. It covers effective numerical methods for stochastic control problems in continuous time on two levels, that of practice and that of mathematical development. It is broadly accessible for graduate students and researchers.
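For concreteness, here is a minimal, self-contained sketch (my own toy illustration, not code from the book) of the Markov chain approximation method that Kushner's work is known for: the controlled diffusion is replaced by a locally consistent controlled Markov chain on a grid, and the resulting dynamic programming equation is solved by value iteration. The toy problem chooses a drift control u in {-1, 0, +1} for dX = u dt + sigma dW on (0, 1) so as to maximize the probability of hitting 1 before 0.

```python
import numpy as np

sigma = 1.0
N = 99                         # interior grid points, spacing h = 0.01
h = 1.0 / (N + 1)
controls = [-1.0, 0.0, 1.0]

# V[0] = 0 and V[-1] = 1 encode the absorbing boundaries.
V = np.linspace(0.0, 1.0, N + 2)

for _ in range(200_000):
    candidates = []
    for u in controls:
        # Locally consistent transition probabilities (Kushner-style
        # upwind scheme): the chain steps to a neighboring grid point.
        Q = sigma**2 + h * abs(u)                     # normalizer
        p_up = (sigma**2 / 2 + h * max(u, 0.0)) / Q   # P(step right)
        p_dn = (sigma**2 / 2 + h * max(-u, 0.0)) / Q  # P(step left)
        candidates.append(p_up * V[2:] + p_dn * V[:-2])
    V_new = V.copy()
    V_new[1:-1] = np.max(candidates, axis=0)          # maximize over u
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

# Pushing right (u = +1) is optimal everywhere here, so the continuous
# limit solves (sigma^2/2) V'' + V' = 0 with V(0) = 0, V(1) = 1,
# i.e. V(x) = (1 - e^{-2x}) / (1 - e^{-2}); V(0.5) is about 0.7311.
print(V[50])   # approximate value at x = 0.5
```

The grid solution converges to the exact hitting probability as h shrinks, which is the essential convergence statement of the method.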

Stochastic Controls

Author: Jiongmin Yong
Publisher: Springer Science & Business Media
ISBN: 1461214661
Category : Mathematics
Languages : en
Pages : 459


Book Description
As is well known, Pontryagin's maximum principle and Bellman's dynamic programming are the two principal and most commonly used approaches for solving stochastic optimal control problems. An interesting phenomenon one can observe from the literature is that these two approaches have been developed separately and independently. Since both methods are used to investigate the same problems, a natural question is the following: (Q) What is the relationship between the maximum principle and dynamic programming in stochastic optimal control? There did exist some research (prior to the 1980s) on the relationship between the two. Nevertheless, the results were usually stated in heuristic terms and proved under rather restrictive assumptions, which were not satisfied in most cases. In the statement of a Pontryagin-type maximum principle there is an adjoint equation, which is an ordinary differential equation (ODE) in the (finite-dimensional) deterministic case and a stochastic differential equation (SDE) in the stochastic case. The system consisting of the adjoint equation, the original state equation, and the maximum condition is referred to as an (extended) Hamiltonian system. On the other hand, in Bellman's dynamic programming there is a partial differential equation (PDE), of first order in the (finite-dimensional) deterministic case and of second order in the stochastic case. This is known as a Hamilton-Jacobi-Bellman (HJB) equation.
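The two objects being compared can be written side by side in one common sign convention (an illustrative sketch, not a quotation from the book): for $dX_t=b(X_t,u_t)\,dt+\sigma(X_t,u_t)\,dW_t$ and the cost $\mathbb{E}\big[\int_0^T f(X_t,u_t)\,dt+g(X_T)\big]$ to be minimized, the adjoint SDE of the maximum principle and the HJB equation of dynamic programming read

```latex
dp_t = -\big( b_x^{\top}\,p_t + \sigma_x^{\top}\,q_t + f_x \big)\,dt + q_t\,dW_t,
\qquad p_T = g_x(X_T),
```

```latex
\partial_t V + \inf_{u}\Big\{\, b(x,u)\,\partial_x V
  + \tfrac{1}{2}\,\sigma^{2}(x,u)\,\partial_{xx} V + f(x,u) \,\Big\} = 0,
\qquad V(T,x) = g(x),
```

and, roughly speaking, the relationship question (Q) asks when $p_t = \partial_x V(t,X_t)$ holds along an optimal trajectory.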

Stochastic Optimization in Insurance

Author: Pablo Azcue
Publisher: Springer
ISBN: 1493909959
Category : Mathematics
Languages : en
Pages : 153


Book Description
The main purpose of the book is to show how a viscosity approach can be used to tackle control problems in insurance. The problems covered are the maximization of the survival probability as well as the maximization of dividends in the classical collective risk model. The authors consider the possibility of controlling the risk process by reinsurance as well as by investments. They show that the optimal value functions are characterized as either the unique or the smallest viscosity solution of the associated Hamilton-Jacobi-Bellman equation; they also study the structure of the optimal strategies and show how to find them. The viscosity approach has been widely used in control problems related to mathematical finance, but until quite recently it was not used to solve control problems in actuarial science. This book is designed to familiarize the reader with this approach. The intended audience is graduate students as well as researchers in this area.
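For orientation, the dividend problem in the classical collective risk model leads to an HJB equation of the following type (a commonly used formulation in my own notation, not necessarily the book's): with premium rate $c$, claims arriving at Poisson rate $\lambda$ with distribution $F$, discount rate $\delta$, and unrestricted dividend payments, the value function $V$ is expected to satisfy

```latex
\max\Big\{\, c\,V'(x) + \lambda \int_0^{x} V(x-y)\,dF(y) - (\lambda+\delta)\,V(x),
\;\; 1 - V'(x) \,\Big\} \;=\; 0 .
```

Because $V$ need not be smooth, equations of this kind are exactly where the viscosity-solution characterization emphasized by the book becomes essential.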

An Introduction to Optimal Control of FBSDE with Incomplete Information

Author: Guangchen Wang
Publisher: Springer
ISBN: 3319790390
Category : Mathematics
Languages : en
Pages : 124


Book Description
This book focuses on the maximum principle and verification theorem for incomplete-information forward-backward stochastic differential equations (FBSDEs) and their applications in linear-quadratic optimal control and mathematical finance. Many interesting phenomena arising in mathematical finance can be described by FBSDEs. Optimal control problems for FBSDEs are theoretically important and practically relevant. A standard assumption in the literature is that the stochastic noises in the model are completely observed. However, this is rarely the case in real-world situations. Optimal control problems under complete information have been studied extensively; nevertheless, very little is known about these problems when the information is incomplete. The aim of this book is to fill this gap. The book is written in a style suitable for graduate students and researchers in mathematics and engineering with basic knowledge of stochastic processes, optimal control, and mathematical finance.