Description : Markov processes are among the most important stochastic processes for both theory and applications. This book develops the general theory of these processes and applies this theory to various special examples. The initial chapter is devoted to the most important classical example - one-dimensional Brownian motion. This, together with a chapter on continuous time Markov chains, provides the motivation for the general setup based on semigroups and generators. Chapters on stochastic calculus and probabilistic potential theory give an introduction to some of the key areas of application of Brownian motion and its relatives. A chapter on interacting particle systems treats a more recently developed class of Markov processes that have as their origin problems in physics and biology. This is a textbook for a graduate course that can follow one that covers basic probabilistic limit theorems and discrete time processes.
Description : Using a singular perturbation approach, this is a systematic treatment of those systems that naturally arise in queuing theory, control and optimisation, and manufacturing, gathering a number of ideas which were previously scattered throughout the literature. The book presents results on asymptotic expansions of the corresponding probability distributions, functional occupation measures, exponential upper bounds, and asymptotic normality. To bridge the gap between theory and applications, a large portion of the book is devoted to various applications, reducing the dimensionality of problems under Markovian disturbances and providing tools for dealing with large-scale and complex real-world situations. Much of this stems from the authors' recent research, presenting results which have not appeared elsewhere. An important reference for researchers in applied mathematics, probability and stochastic processes, operations research, control theory, and optimisation.
Description : This textbook, now in its third edition, offers a rigorous and self-contained introduction to the theory of continuous-time stochastic processes, stochastic integrals, and stochastic differential equations. Expertly balancing theory and applications, the work features concrete examples of modeling real-world problems from biology, medicine, industrial applications, finance, and insurance using stochastic methods. No previous knowledge of stochastic processes is required. Key topics include: Markov processes; stochastic differential equations; arbitrage-free markets and financial derivatives; insurance risk; population dynamics and epidemics; and agent-based models. New to the third edition: infinitely divisible distributions; random measures; Lévy processes; fractional Brownian motion; ergodic theory; the Karhunen-Loève expansion; the Smoluchowski approximation of Langevin systems; and additional applications and exercises. An Introduction to Continuous-Time Stochastic Processes, Third Edition will be of interest to a broad audience of students, pure and applied mathematicians, and researchers and practitioners in mathematical finance, biomathematics, biotechnology, and engineering. Suitable as a textbook for graduate or undergraduate courses, as well as European Masters courses (according to the two-year-long second cycle of the “Bologna Scheme”), the work may also be used for self-study or as a reference. Prerequisites include knowledge of calculus and some analysis; exposure to probability would be helpful but is not required, since the necessary fundamentals of measure and integration are provided. From reviews of previous editions: "The book is ... an account of fundamental concepts as they appear in relevant modern applications and literature. ... 
The book addresses three main groups: first, mathematicians working in a different field; second, other scientists and professionals from a business or academic background; third, graduate or advanced undergraduate students of a quantitative subject related to stochastic theory and/or applications." -Zentralblatt MATH
Description : Continuous-time Markov decision processes (MDPs), also known as controlled Markov chains, are used for modeling decision-making problems that arise in operations research (for instance, inventory, manufacturing, and queueing systems), computer science, communications engineering, control of populations (such as fisheries and epidemics), and management science, among many other fields. This volume provides a unified, systematic, self-contained presentation of recent developments on the theory and applications of continuous-time MDPs. The MDPs in this volume include most of the cases that arise in applications, because they allow unbounded transition and reward/cost rates. Much of the material appears for the first time in book form.
Description : Continuous time parameter Markov chains have been useful for modeling various random phenomena occurring in queueing theory, genetics, demography, epidemiology, and competing populations. This is the first book about those aspects of the theory of continuous time Markov chains which are useful in applications to such areas. It studies continuous time Markov chains through the transition function and corresponding q-matrix, rather than sample paths. An extensive discussion of birth and death processes is included, covering the Stieltjes moment problem and the Karlin-McGregor method of solution, as well as multidimensional population processes, and there is an extensive bibliography. Virtually all of this material appears in book form for the first time.
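The relation between a q-matrix and the transition function mentioned above can be sketched numerically: for a finite-state chain, the transition function is the matrix exponential P(t) = exp(tQ). A minimal sketch follows; the three-state birth-and-death rates are illustrative assumptions, not taken from the book.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical q-matrix of a three-state birth-and-death chain:
# off-diagonal entries are jump rates, and each row sums to zero.
Q = np.array([[-1.0,  1.0,  0.0],   # birth rate 1 out of state 0
              [ 2.0, -3.0,  1.0],   # death rate 2, birth rate 1 out of state 1
              [ 0.0,  2.0, -2.0]])  # death rate 2 out of state 2

# Transition function P(t) = exp(tQ); entry (i, j) is the probability
# of being in state j at time t, given the chain started in state i.
P_t = expm(0.5 * Q)

# Each row of P(t) is a probability distribution over the states.
print(P_t.sum(axis=1))
```

For a q-matrix of this finite, conservative form the rows of P(t) sum to one for every t, which is a quick sanity check on any numerically computed transition function.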
Description : This book concerns continuous-time controlled Markov chains, also known as continuous-time Markov decision processes. They form a class of stochastic control problems in which a single decision-maker wishes to optimize a given objective function. This book is also concerned with Markov games, where two decision-makers (or players) each try to optimize their own objective function. Both decision-making processes appear in a large number of applications in economics, operations research, engineering, and computer science, among other areas. An extensive, self-contained, up-to-date analysis of basic optimality criteria (such as discounted and average reward) and advanced optimality criteria (e.g., bias, overtaking, sensitive discount, and Blackwell optimality) is presented. Particular emphasis is placed on the application of the results herein: algorithmic and computational issues are discussed, and applications to population models and epidemic processes are shown. This book is addressed to students and researchers in the fields of stochastic control and stochastic games. Moreover, it could also be of interest to undergraduate and beginning graduate students, because the reader is not assumed to have an advanced mathematical background: a working knowledge of calculus, linear algebra, probability, and continuous-time Markov chains should suffice to understand the contents of the book. Contents: Introduction; Controlled Markov Chains; Basic Optimality Criteria; Policy Iteration and Approximation Theorems; Overtaking, Bias, and Variance Optimality; Sensitive Discount Optimality; Blackwell Optimality; Constrained Controlled Markov Chains; Applications; Zero-Sum Markov Games; Bias and Overtaking Equilibria for Markov Games. Readership: Graduate students and researchers in the fields of stochastic control and stochastic analysis. 
Keywords: Markov Decision Processes; Continuous-Time Controlled Markov Chains; Stochastic Dynamic Programming; Stochastic Games. Key Features: This book presents a reader-friendly, extensive, self-contained, and up-to-date analysis of advanced optimality criteria for continuous-time controlled Markov chains and Markov games. Most of the material herein is quite recent (it has been published in high-impact journals during the last five years) and appears in book form for the first time. The book introduces approximation theorems which, in particular, allow the reader to obtain numerical approximations of the solution to several control problems of practical interest. To the best of our knowledge, this is the first time that such computational issues have been studied for denumerable-state continuous-time controlled Markov chains. Hence, the book has an adequate balance between theoretical results on the one hand and applications and computational issues on the other. The books that analyze continuous-time controlled Markov chains usually restrict themselves to the case of bounded transition and reward rates, which can be reduced to discrete-time models by using the uniformization technique. In our case, however, the transition and reward rates may be unbounded, so the uniformization technique cannot be used; let us mention that in models of practical interest the transition and reward rates are typically unbounded. Reviews: “The book contains a large number of recent research results on CMCs and Markov games and puts them in perspective. It is written in a very conscious manner, contains detailed proofs of all main results, as well as extensive bibliographic remarks. The book is a very valuable piece of work for researchers on continuous-time CMCs and Markov games.” -Zentralblatt MATH
Description : An introduction to stochastic processes through the use of R. Introduction to Stochastic Processes with R is an accessible and well-balanced presentation of the theory of stochastic processes, with an emphasis on real-world applications of probability theory in the natural and social sciences. The use of simulation, by means of the popular statistical freeware R, makes theoretical results come alive with practical, hands-on demonstrations. A highly qualified expert in the field, the author presents numerous examples from a wide array of disciplines, which are used to illustrate concepts and highlight computational and theoretical results. Developing readers’ problem-solving skills and mathematical maturity, Introduction to Stochastic Processes with R features: over 200 examples and 600 end-of-chapter exercises; a tutorial for getting started with R, along with appendices that contain review material in probability and matrix algebra; discussions of many timely and interesting supplemental topics, including Markov chain Monte Carlo, random walk on graphs, card shuffling, Black-Scholes options pricing, applications in biology and genetics, cryptography, martingales, and stochastic calculus; introductions to mathematics as needed, in order to suit readers at many mathematical levels; and a companion website that includes relevant data files as well as all R code and scripts used throughout the book. Introduction to Stochastic Processes with R is an ideal textbook for an introductory course in stochastic processes. The book is aimed at undergraduate and beginning graduate-level students in the science, technology, engineering, and mathematics disciplines. It is also an excellent reference for applied mathematicians and statisticians who are interested in a review of the topic.
Description : Markov processes are processes that have limited memory. In particular, their dependence on the past is only through the previous state. They are used to model the behavior of many systems including communications systems, transportation networks, image segmentation and analysis, biological systems and DNA sequence analysis, random atomic motion and diffusion in physics, social mobility, population studies, epidemiology, animal and insect migration, queueing systems, resource management, dams, financial engineering, actuarial science, and decision systems. Covering a wide range of areas of application of Markov processes, this second edition is revised to highlight the most important aspects as well as the most recent trends and applications of Markov processes. The author spent over 16 years in industry before returning to academia, and he has applied many of the principles covered in this book in multiple research projects. Therefore, this is an applications-oriented book that also includes enough theory to provide a solid grounding in the subject for the reader. Presents both the theory and applications of the different aspects of Markov processes. Includes numerous solved examples as well as detailed diagrams that make it easier to understand the principle being presented. Discusses different applications of hidden Markov models, such as DNA sequence analysis and speech analysis.
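The Markov property described above - dependence on the past only through the current state - can be illustrated with a minimal simulation. The two-state chain and its transition probabilities below are illustrative assumptions, not taken from the book.

```python
import random

# Hypothetical two-state chain: 0 = "up", 1 = "down".
# P[i][j] is the probability of moving from state i to state j;
# the next state depends only on the current one (Markov property).
P = [[0.9, 0.1],
     [0.5, 0.5]]

def simulate(p, start, steps, seed=42):
    """Run the chain and return the fraction of time spent in each state."""
    rng = random.Random(seed)
    state, counts = start, [0, 0]
    for _ in range(steps):
        state = 0 if rng.random() < p[state][0] else 1
        counts[state] += 1
    return [c / steps for c in counts]

freq = simulate(P, start=0, steps=100_000)
print(freq)  # long-run frequencies approach the stationary distribution (5/6, 1/6)
```

Solving pi = pi P for this chain gives the stationary distribution (5/6, 1/6), and the simulated long-run frequencies converge to it regardless of the starting state - a useful way to see how limited memory still produces stable long-run behavior.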
Description : The purpose, level, and style of this new edition conform to the tenets set forth in the original preface. The authors continue their approach of developing theory and applications simultaneously, intertwined so that they refurbish and elucidate each other. The authors have made three main kinds of changes. First, they have enlarged on the topics treated in the first edition. Second, they have added many exercises and problems at the end of each chapter. Third, and most important, they have supplied, in new chapters, broad introductory discussions of several classes of stochastic processes not dealt with in the first edition, notably martingales, renewal and fluctuation phenomena associated with random sums, stationary stochastic processes, and diffusion theory.