Description : The general theory of stochastic processes and the more specialized theory of Markov processes evolved enormously in the second half of the last century. In parallel, the theory of controlled Markov chains (or Markov decision processes) was being pioneered by control engineers and operations researchers. Researchers in Markov processes and controlled Markov chains have long been aware of the synergies between these two subject areas. However, this may be the first volume dedicated to highlighting these synergies and, almost certainly, it is the first volume that emphasizes the contributions of the vibrant and growing Chinese school of probability. The chapters in this book reflect both the maturity and the vitality of modern-day Markov processes and controlled Markov chains. They also provide an opportunity to trace the connections that have emerged between the work of members of the Chinese school of probability and that of European, US, Central and South American, and Asian scholars.
Description : Markov processes are processes with limited memory: their dependence on the past is only through the previous state. They are used to model the behavior of many systems, including communications systems, transportation networks, image segmentation and analysis, biological systems and DNA sequence analysis, random atomic motion and diffusion in physics, social mobility, population studies, epidemiology, animal and insect migration, queueing systems, resource management, dams, financial engineering, actuarial science, and decision systems. Covering a wide range of areas of application of Markov processes, this second edition has been revised to highlight the most important aspects as well as the most recent trends and applications of Markov processes. The author spent over 16 years in industry before returning to academia, and he has applied many of the principles covered in this book in multiple research projects. It is therefore an applications-oriented book that also includes enough theory to give the reader a solid grounding in the subject. Presents both the theory and applications of the different aspects of Markov processes. Includes numerous solved examples as well as detailed diagrams that make it easier to understand the principle being presented. Discusses different applications of hidden Markov models, such as DNA sequence analysis and speech analysis.
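The "limited memory" (Markov) property described above is easy to demonstrate in a few lines of code. The following sketch simulates a two-state discrete-time chain; the transition matrix, state names, and seed are invented for illustration and do not come from the book:

```python
import random

# Illustrative two-state chain: 0 = "idle", 1 = "busy".
# Row s of P gives the distribution of the next state given the
# current state s; the past matters only through the current state.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(state, rng):
    """Sample the next state using only the current state."""
    return 0 if rng.random() < P[state][0] else 1

def simulate(n_steps, start=0, seed=42):
    rng = random.Random(seed)
    path = [start]
    for _ in range(n_steps):
        path.append(step(path[-1], rng))
    return path

path = simulate(10_000)
# For this P the stationary distribution (solve pi = pi * P) puts
# probability 1/6 on state 1, so the long-run fraction of time
# spent "busy" should be close to 0.1667.
frac_busy = sum(path) / len(path)
```

The convergence of the long-run fraction of time to the stationary probability is exactly the kind of ergodic behavior that books such as this one develop in detail.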
Description : This book is an introduction to optimal stochastic control for continuous-time Markov processes and to the theory of viscosity solutions. It covers dynamic programming for deterministic optimal control problems, as well as the corresponding theory of viscosity solutions. New chapters in this second edition introduce the role of stochastic optimal control in portfolio optimization and in pricing derivatives in incomplete markets, as well as two-controller, zero-sum differential games.
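For context, the dynamic programming approach this blurb refers to leads, for controlled diffusions, to the Hamilton-Jacobi-Bellman (HJB) equation, of which the value function is in general only a viscosity solution. In standard textbook notation (a generic form, not a quotation from the book):

```latex
% Finite-horizon stochastic control: maximize
% E[ \int_t^T f(s, X_s, a_s)\,ds + g(X_T) ].
% The value function v satisfies, in the viscosity sense,
v_t(t,x) + \sup_{a \in A}\Big\{ \mathcal{L}^{a} v(t,x) + f(t,x,a) \Big\} = 0,
\qquad v(T,x) = g(x),
% where \mathcal{L}^{a} is the controlled generator
\mathcal{L}^{a} v = b(t,x,a) \cdot D_x v
  + \tfrac{1}{2}\,\mathrm{tr}\!\big(\sigma\sigma^{\top}(t,x,a)\, D_x^{2} v\big).
```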
Description : This book concerns continuous-time controlled Markov chains, also known as continuous-time Markov decision processes. They form a class of stochastic control problems in which a single decision-maker wishes to optimize a given objective function. The book is also concerned with Markov games, in which two decision-makers (or players) each try to optimize their own objective function. Both decision-making processes appear in a large number of applications in economics, operations research, engineering, and computer science, among other areas. An extensive, self-contained, up-to-date analysis is presented of basic optimality criteria (such as discounted and average reward) and advanced optimality criteria (e.g., bias, overtaking, sensitive discount, and Blackwell optimality). Particular emphasis is placed on the application of the results herein: algorithmic and computational issues are discussed, and applications to population models and epidemic processes are shown. The book is addressed to students and researchers in the fields of stochastic control and stochastic games. Moreover, it may also be of interest to undergraduate and beginning graduate students, because the reader is not assumed to have an advanced mathematical background: a working knowledge of calculus, linear algebra, probability, and continuous-time Markov chains should suffice to understand the contents of the book. Contents: Introduction; Controlled Markov Chains; Basic Optimality Criteria; Policy Iteration and Approximation Theorems; Overtaking, Bias, and Variance Optimality; Sensitive Discount Optimality; Blackwell Optimality; Constrained Controlled Markov Chains; Applications; Zero-Sum Markov Games; Bias and Overtaking Equilibria for Markov Games. Readership: Graduate students and researchers in the fields of stochastic control and stochastic analysis.
Keywords: Markov Decision Processes; Continuous-Time Controlled Markov Chains; Stochastic Dynamic Programming; Stochastic Games. Key Features: This book presents a reader-friendly, extensive, self-contained, and up-to-date analysis of advanced optimality criteria for continuous-time controlled Markov chains and Markov games. Most of the material herein is quite recent (it has been published in high-impact journals during the last five years), and it appears in book form for the first time. The book introduces approximation theorems which, in particular, allow the reader to obtain numerical approximations of the solution to several control problems of practical interest. To the best of our knowledge, this is the first time that such computational issues have been studied for denumerable-state continuous-time controlled Markov chains; hence, the book strikes an adequate balance between theoretical results on the one hand and applications and computational issues on the other. Books that analyze continuous-time controlled Markov chains usually restrict themselves to the case of bounded transition and reward rates, which can be reduced to discrete-time models by using the uniformization technique. Here, however, the transition and reward rates may be unbounded, so the uniformization technique cannot be used; indeed, in models of practical interest the transition and reward rates are typically unbounded. Reviews: "The book contains a large number of recent research results on CMCs and Markov games and puts them in perspective. It is written in a very conscious manner, contains detailed proofs of all main results, as well as extensive bibliographic remarks. The book is a very valuable piece of work for researchers on continuous-time CMCs and Markov games." (Zentralblatt MATH)
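Among the contents listed above, policy iteration is the most directly algorithmic. A minimal sketch in the simpler discrete-time, finite-state, discounted setting follows; the book itself treats continuous-time chains, possibly with unbounded rates, and the two-state MDP below is invented purely for illustration:

```python
import numpy as np

# Illustrative finite MDP (not from the book): 2 states, 2 actions.
# P[a, s] is the distribution of the next state when taking action a
# in state s; r[s, a] is the one-step reward.
P = np.array([[[0.8, 0.2], [0.3, 0.7]],   # action 0
              [[0.5, 0.5], [0.1, 0.9]]])  # action 1
r = np.array([[1.0, 0.0],
              [2.0, 3.0]])
gamma = 0.9                                # discount factor

def policy_iteration(P, r, gamma):
    n_states, n_actions = r.shape
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = np.array([P[policy[s], s] for s in range(n_states)])
        r_pi = np.array([r[s, policy[s]] for s in range(n_states)])
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: act greedily with respect to v.
        q = np.array([[r[s, a] + gamma * P[a, s] @ v
                       for a in range(n_actions)]
                      for s in range(n_states)])
        new_policy = q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy

policy, v = policy_iteration(P, r, gamma)
```

Policy evaluation here solves the linear system exactly; for large or denumerable state spaces one would instead truncate or approximate, which is where approximation theorems of the kind the book describes come in.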
Description : Presents a number of new and potentially useful self-learning (adaptive) control algorithms, together with theoretical and practical results, for both unconstrained and constrained finite Markov chains; new information is processed efficiently by adjusting the control strategies directly or indirectly.
Description : "This well-written book provides a clear and accessible treatment of the theory of discrete and continuous-time Markov chains, with an emphasis towards applications. The mathematical treatment is precise and rigorous without superfluous details, and the results are immediately illustrated in illuminating examples. This book will be extremely useful to anybody teaching a course on Markov processes." Jean-François Le Gall, Professor at Université de Paris-Orsay, France. Markov processes form the class of stochastic processes whose past and future are conditionally independent, given their present state. They constitute important models in many applied fields. After an introduction to the Monte Carlo method, this book describes discrete-time Markov chains, the Poisson process and continuous-time Markov chains. It also presents numerous applications, including Markov chain Monte Carlo, simulated annealing, hidden Markov models, annotation and alignment of genomic sequences, control and filtering, phylogenetic tree reconstruction, and queueing networks. The last chapter is an introduction to stochastic calculus and mathematical finance. Features include: the Monte Carlo method, discrete-time Markov chains, the Poisson process and continuous-time jump Markov processes; an introduction to diffusion processes, mathematical finance and stochastic calculus; applications of Markov processes to various fields, ranging from mathematical biology to financial engineering and computer science; and numerous exercises and problems, with solutions to most of them.
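The Monte Carlo method that opens this book can be illustrated with the classic estimate of pi; this is a generic sketch of the idea, not code from the text:

```python
import random

def monte_carlo_pi(n_samples, seed=0):
    """Estimate pi by drawing uniform points in the unit square and
    counting the fraction that lands inside the quarter disc."""
    rng = random.Random(seed)
    inside = sum(
        1
        for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    # P(point inside) = area of quarter disc = pi / 4.
    return 4 * inside / n_samples

estimate = monte_carlo_pi(100_000)
```

Markov chain Monte Carlo, one of the applications the book presents, extends this idea by drawing the samples from a Markov chain whose stationary distribution is the target distribution.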
Description : In this rigorous account the author studies both discrete-time and continuous-time chains. A distinguishing feature is an introduction to more advanced topics such as martingales and potentials, in the established context of Markov chains. There are applications to simulation, economics, optimal control, genetics, queues and many other topics, and a careful selection of exercises and examples drawn both from theory and practice. This is an ideal text for seminars on random processes or for those that are more oriented towards applications, for advanced undergraduates or graduate students with some background in basic probability theory.