Description : Markov processes are among the most important stochastic processes for both theory and applications. This book develops the general theory of these processes and applies this theory to various special examples. The initial chapter is devoted to the most important classical example: one-dimensional Brownian motion. This, together with a chapter on continuous-time Markov chains, provides the motivation for the general setup based on semigroups and generators. Chapters on stochastic calculus and probabilistic potential theory give an introduction to some of the key areas of application of Brownian motion and its relatives. A chapter on interacting particle systems treats a more recently developed class of Markov processes that have their origin in problems in physics and biology. This is a textbook for a graduate course that can follow one covering basic probabilistic limit theorems and discrete-time processes.
Description : This book gives a systematic treatment of singularly perturbed systems that naturally arise in control and optimization, queueing networks, manufacturing systems, and financial engineering. It presents results on asymptotic expansions of solutions of Kolmogorov forward and backward equations, properties of functional occupation measures, exponential upper bounds, and functional limit results for Markov chains with weak and strong interactions. To bridge the gap between theory and applications, a large portion of the book is devoted to applications in controlled dynamic systems, production planning, and numerical methods for controlled Markovian systems with large-scale and complex structures in real-world problems. This second edition has been updated throughout and includes two new chapters on asymptotic expansions of solutions for backward equations and hybrid LQG problems. The chapters on analytic and probabilistic properties of two-time-scale Markov chains have been almost completely rewritten, and the notation has been streamlined and simplified. This book is written for applied mathematicians, engineers, operations researchers, and applied scientists. Selected material from the book can also be used for a one-semester advanced graduate-level course in applied probability and stochastic processes.
Description : Continuous-time Markov decision processes (MDPs), also known as controlled Markov chains, are used for modeling decision-making problems that arise in operations research (for instance, inventory, manufacturing, and queueing systems), computer science, communications engineering, control of populations (such as fisheries and epidemics), and management science, among many other fields. This volume provides a unified, systematic, self-contained presentation of recent developments on the theory and applications of continuous-time MDPs. The MDPs in this volume include most of the cases that arise in applications, because they allow unbounded transition and reward/cost rates. Much of the material appears for the first time in book form.
Description : This book concerns continuous-time controlled Markov chains and Markov games. The former, also known as continuous-time Markov decision processes, form a class of stochastic control problems in which a single decision-maker wishes to optimize a given objective function. In a Markov game, by contrast, two or more decision-makers (or players, or controllers) each try to optimize their own objective functions. Both decision-making processes appear in a large number of applications in economics, operations research, engineering, and computer science, among other areas. The main features of the control and game models studied in the book are the continuous time variable, the denumerable state space, and the fact that the control (or action) sets are Borel spaces. Moreover, the transition and reward rates of the dynamical system may be unbounded. The authors are interested in such aspects of controlled Markov chains and Markov games as characterizing the optimal reward functions and determining optimal policies for each of the optimality criteria studied here. The main focus is on advanced optimality criteria (such as bias, variance, sensitive discount, and Blackwell optimality), though they also deal with the basic optimality criteria (discounted and average reward). Particular emphasis is placed on the application of the results presented in this book. One of the main concerns is to propose assumptions on the control and game models that are easily verifiable (and verified) in practice. Moreover, algorithmic and computational issues are also analyzed. In particular, the authors propose approximation results that allow precise numerical approximations of the solution to some problems of practical interest. Applications to population models and epidemic processes are also shown.
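To make the discounted criterion mentioned above concrete, the following sketch solves a tiny continuous-time Markov decision process by uniformization: the rate matrix is converted to an equivalent discrete-time discounted problem, which is then solved by value iteration. All rates, rewards, and the discount rate here are made-up illustrative numbers, not data from the book; the model is finite (three states, two actions), whereas the book's models allow denumerable state spaces and unbounded rates.

```python
import numpy as np

# Hypothetical 3-state, 2-action continuous-time MDP (illustrative numbers only).
n_states, n_actions = 3, 2
# q[a][i][j]: transition rate from state i to j under action a (rows sum to 0).
q = np.zeros((n_actions, n_states, n_states))
q[0] = [[-2.0, 1.5, 0.5], [1.0, -3.0, 2.0], [0.5, 0.5, -1.0]]
q[1] = [[-1.0, 0.5, 0.5], [2.0, -4.0, 2.0], [1.0, 1.0, -2.0]]
r = np.array([[1.0, 0.0], [2.0, 3.0], [0.0, 1.0]])  # r[i][a]: reward rate
alpha = 0.1  # continuous-time discount rate

# Uniformization: C bounds the total exit rate of every (state, action) pair.
C = max(-q[a][i][i] for a in range(n_actions) for i in range(n_states))
beta = C / (C + alpha)       # equivalent discrete-time discount factor
P = np.eye(n_states) + q / C  # P[a] is a stochastic transition matrix

# Value iteration on the uniformized discrete-time problem.
V = np.zeros(n_states)
for _ in range(5000):
    Q = r.T / (C + alpha) + beta * P @ V  # Q[a][i]: value of action a in state i
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new
policy = Q.argmax(axis=0)  # a stationary policy attaining the maximum
```

Because the exit rates are bounded here, the uniformized chain is exactly equivalent to the original: the fixed point V is the alpha-discounted optimal reward, and `policy` is a stationary optimal policy.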
Description : Continuous-time parameter Markov chains have been useful for modeling various random phenomena occurring in queueing theory, genetics, demography, epidemiology, and competing populations. This is the first book about those aspects of the theory of continuous-time Markov chains which are useful in applications to such areas. It studies continuous-time Markov chains through the transition function and corresponding q-matrix, rather than sample paths. The book includes an extensive discussion of birth and death processes, covering the Stieltjes moment problem and the Karlin-McGregor method of solution, as well as multidimensional population processes, and there is an extensive bibliography. Virtually all of this material is appearing in book form for the first time.
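The transition-function/q-matrix viewpoint described above can be illustrated with a small sketch: given a birth-death q-matrix Q, the transition function P(t) = exp(Qt) is computed by uniformization (a Poisson-weighted series in a stochastic matrix) rather than by simulating sample paths. The birth and death rates below are hypothetical, and the state space is truncated to five states for illustration; the book treats the general denumerable case.

```python
import numpy as np

# Hypothetical birth-death q-matrix on states {0, ..., 4}:
# birth rate 1.0, death rate 2.0 (truncated state space, illustrative only).
N = 5
lam, mu = 1.0, 2.0
Q = np.zeros((N, N))
for i in range(N):
    if i + 1 < N:
        Q[i, i + 1] = lam   # birth: i -> i + 1
    if i - 1 >= 0:
        Q[i, i - 1] = mu    # death: i -> i - 1
    Q[i, i] = -Q[i].sum()   # diagonal makes each row sum to 0

def transition_function(Q, t, n_terms=400):
    """P(t) = exp(Qt) via uniformization:
    exp(Qt) = sum_k e^{-Ct} (Ct)^k / k! * P^k, with P = I + Q/C."""
    C = max(-np.diag(Q))            # uniformization constant
    P = np.eye(len(Q)) + Q / C      # stochastic jump matrix
    w = np.exp(-C * t)              # Poisson(Ct) weight for k = 0
    Pk = np.eye(len(Q))
    total = w * Pk
    for k in range(1, n_terms):
        w *= C * t / k              # update Poisson weight
        Pk = Pk @ P                 # update k-step jump probabilities
        total += w * Pk
    return total

Pt = transition_function(Q, 3.0)    # Pt[i, j] approximates P_ij(3)
```

Each row of the result is a probability distribution, and as t grows the rows converge to the stationary distribution of the chain, which for a birth-death process is proportional to the products of birth-over-death rate ratios.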