Description : Stochastic optimization problems arise in decision-making under uncertainty, and find various applications in economics and finance. On the other hand, problems in finance have recently led to new developments in the theory of stochastic control. This volume provides a systematic treatment of stochastic optimization problems applied to finance by presenting the different existing methods: dynamic programming, viscosity solutions, backward stochastic differential equations, and martingale duality methods. The theory is discussed in the context of recent developments in this field, with complete and detailed proofs, and is illustrated by means of concrete examples from the world of finance: portfolio allocation, option hedging, real options, optimal investment, etc. This book is directed towards graduate students and researchers in mathematical finance, and will also benefit applied mathematicians interested in financial applications and practitioners wishing to know more about the use of stochastic optimization methods in finance.
Description : The goal of this textbook is to introduce students to the stochastic analysis tools that play an increasing role in the probabilistic approach to optimization problems, including stochastic control and stochastic differential games. While optimal control is taught in many graduate programs in applied mathematics and operations research, the author was intrigued by the lack of coverage of the theory of stochastic differential games. This is the first title in SIAM's Financial Mathematics book series and is based on the author's lecture notes. It will be helpful to students who are interested in stochastic differential equations (forward, backward, forward-backward); the probabilistic approach to stochastic control (dynamic programming and the stochastic maximum principle); and mean field games and control of McKean–Vlasov dynamics. The theory is illustrated by applications to models of systemic risk, macroeconomic growth, flocking/schooling, crowd behavior, and predatory trading, among others.
Description : Backward stochastic differential equations with jumps can be used to solve problems in both finance and insurance. Part I of this book presents the theory of BSDEs with Lipschitz generators driven by a Brownian motion and a compensated random measure, with an emphasis on those generated by step processes and Lévy processes. It discusses key results and techniques (including numerical algorithms) for BSDEs with jumps and studies filtration-consistent nonlinear expectations and g-expectations. Part I also focuses on the mathematical tools and proofs which are crucial for understanding the theory. Part II investigates actuarial and financial applications of BSDEs with jumps. It considers a general financial and insurance model and deals with pricing and hedging of insurance equity-linked claims and asset-liability management problems. It additionally investigates perfect hedging, superhedging, quadratic optimization, utility maximization, indifference pricing, ambiguity risk minimization, no-good-deal pricing and dynamic risk measures. Part III presents some other useful classes of BSDEs and their applications. This book will make BSDEs more accessible to those who are interested in applying these equations to actuarial and financial problems. It will be beneficial to students and researchers in mathematical finance, risk measures, and portfolio optimization, as well as to actuarial practitioners.
Description : This book contains an introduction to three topics in stochastic control: discrete-time stochastic control, i.e., stochastic dynamic programming (Chapter 1), piecewise-deterministic control problems (Chapter 3), and control of Ito diffusions (Chapter 4). The chapters include treatments of optimal stopping problems. An Appendix recalls material from elementary probability theory and gives heuristic explanations of certain more advanced tools in probability theory. The book will hopefully be of interest to students in several fields: economics, engineering, operations research, finance, business, and mathematics. In economics and business administration, graduate students should readily be able to read it, and the mathematical level can be suitable for advanced undergraduates in mathematics and science. The prerequisites for reading the book are only a calculus course and a course in elementary probability. (Certain technical comments may demand a slightly better background.) As this book perhaps (and hopefully) will be read by readers with widely differing backgrounds, some general advice may be useful: don't be put off if paragraphs, comments, or remarks contain material of a seemingly more technical nature that you don't understand. Just skip such material and continue reading; it will surely not be needed in order to understand the main ideas and results. The presentation avoids the use of measure theory.
Description : In recent years there has been a significant increase of interest in continuous-time Principal-Agent models, or contract theory, and their applications. Continuous-time models provide a powerful and elegant framework for solving stochastic optimization problems of finding the optimal contracts between two parties, under various assumptions on the information they have access to, and the effect they have on the underlying "profit/loss" values. This monograph surveys recent results of the theory in a systematic way, using the approach of the so-called Stochastic Maximum Principle, in models driven by Brownian Motion. Optimal contracts are characterized via a system of Forward-Backward Stochastic Differential Equations. In a number of interesting special cases these can be solved explicitly, enabling derivation of many qualitative economic conclusions.
Description : The theory of Markov decision processes focuses on controlled Markov chains in discrete time. The authors establish the theory for general state and action spaces and at the same time show its application by means of numerous examples, mostly taken from the fields of finance and operations research. By using a structural approach many technicalities (concerning measure theory) are avoided. They cover problems with finite and infinite horizons, as well as partially observable Markov decision processes, piecewise deterministic Markov decision processes and stopping problems. The book presents Markov decision processes in action and includes various state-of-the-art applications with a particular view towards finance. It is useful for upper-level undergraduates, Master's students and researchers in both applied probability and finance, and provides exercises (without solutions).
Description : This volume provides a general overview of discrete- and continuous-time Markov control processes and stochastic games, along with a look at the range of applications of stochastic control and some of its recent theoretical developments. These topics include various aspects of dynamic programming, approximation algorithms, and infinite-dimensional linear programming. In all, the work comprises 18 carefully selected papers written by experts in their respective fields. Optimization, Control, and Applications of Stochastic Systems will be a valuable resource for all practitioners, researchers, and professionals in applied mathematics and operations research who work in the areas of stochastic control, mathematical finance, queueing theory, and inventory systems. It may also serve as a supplemental text for graduate courses in optimal control and dynamic games.
Description : This monograph develops the Hamilton-Jacobi-Bellman theory via the dynamic programming principle for a class of optimal control problems for stochastic hereditary differential equations (SHDEs) driven by a standard Brownian motion and with a bounded or an infinite but fading memory. These equations represent a class of stochastic infinite-dimensional systems that have become increasingly important and have a wide range of applications in physics, chemistry, biology, engineering and economics/finance. This monograph can be used as a reference for those who have a special interest in optimal control theory and applications of stochastic hereditary systems.
Description : This book gives a systematic treatment of singularly perturbed systems that naturally arise in control and optimization, queueing networks, manufacturing systems, and financial engineering. It presents results on asymptotic expansions of solutions of Kolmogorov forward and backward equations, properties of functional occupation measures, exponential upper bounds, and functional limit results for Markov chains with weak and strong interactions. To bridge the gap between theory and applications, a large portion of the book is devoted to applications in controlled dynamic systems, production planning, and numerical methods for controlled Markovian systems with large-scale and complex structures in real-world problems. This second edition has been updated throughout and includes two new chapters on asymptotic expansions of solutions for backward equations and hybrid LQG problems. The chapters on analytic and probabilistic properties of two-time-scale Markov chains have been almost completely rewritten and the notation has been streamlined and simplified. This book is written for applied mathematicians, engineers, operations researchers, and applied scientists. Selected material from the book can also be used for a one-semester advanced graduate-level course in applied probability and stochastic processes.