## 09 Mar Markov Chains and Decision Processes for Engineers and Managers


CRC Press, 2010, 492 pp.

Writing yet another book about Markov chains and Markov decision processes needs, without doubt, some justification. The author of the present textbook reveals his motives in the preface: most books on these topics are either highly theoretical or merely provide algorithms for solving particular problems without explaining the intuition behind the individual steps of those algorithms. This book was written with the explicit intention of embracing a bit of both ends of the spectrum.

The author introduces the basics of Markov chains, Markov chains with rewards, and Markov decision processes for finite state spaces. (Keep this restriction in mind when you come across the statement that all irreducible Markov chains are recurrent; it holds for finite state spaces, but an irreducible chain on an infinite state space can be transient.) Many standard quantities associated with these processes are discussed, such as stationary distributions, properties of first passage times, and expected average rewards. There is also a section on state reduction techniques and hidden Markov chains. As promised in the preface, almost no proofs are given. Instead, the author often justifies a general formula by carrying out the corresponding calculations for quite concrete Markov chains, either with just a few states or with a simplifying structure in the transition probability matrix.

With this approach, the book does, on the one hand, provide practitioners (engineers and managers) with concrete formulae to tackle specific questions involving Markov chains. On the other hand, the author tries to give the reader some insight into how these results are derived; since non-mathematicians are the target audience, this is done by means of solid heuristics rather than mathematical proofs. These are noble motives: isn't that exactly what we expect of a good textbook? To develop a solid piece of theory (of course not too technical, please), clearly explained and well motivated, and illustrated with many, many interesting applications? As with every textbook, the author has to strike a difficult balance between all these objectives, and where that balance lies is, at the end of the day, probably a matter of taste. I agree that it is helpful for the student to calculate the formula for the stationary distribution of a general two- or three-state Markov chain and to work through some examples with explicit numbers. But how rewarding can it be to go through the reward evaluation equations for a general five-state Markov chain with rewards? Probably not as rewarding as the name of these equations would suggest. With modern software, solving linear equations is very easy, and many of the calculations in the book therefore seem dispensable.
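To illustrate the reviewer's point about software, here is a short sketch (not from the book under review): the stationary distribution of a general two-state chain has a well-known closed form, and for larger chains a few lines of code, here using simple power iteration, do the numerical work.

```python
# Stationary distribution of a two-state Markov chain with transition matrix
#   P = [[1-p, p],
#        [q, 1-q]]
# Solving pi P = pi together with pi_0 + pi_1 = 1 gives the closed form
#   pi = (q/(p+q), p/(p+q)).
def two_state_stationary(p, q):
    return (q / (p + q), p / (p + q))

# For a general finite chain, repeatedly multiplying a starting distribution
# by P (power iteration) converges to the stationary distribution when the
# chain is irreducible and aperiodic.
def stationary_by_iteration(P, tol=1e-12, max_iter=100_000):
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(max_iter):
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(new[j] - pi[j]) for j in range(n)) < tol:
            return new
        pi = new
    return pi

if __name__ == "__main__":
    print(two_state_stationary(0.3, 0.1))                 # (0.25, 0.75)
    print(stationary_by_iteration([[0.7, 0.3],
                                   [0.1, 0.9]]))          # close to (0.25, 0.75)
```

In practice one would hand the linear system to a numerical library rather than iterate by hand, which is precisely why grinding through five-state systems on paper feels dispensable.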

So, would I use this textbook for my course on Markov chains? I will certainly take it off my shelf should I need to provide my students with more exercises, as there are plenty of interesting hands-on examples in this book. But to explain to them the abstract concepts behind Markov chains and Markov decision processes? Maybe not.

**Adrian Röllin.**

National University of Singapore.

*Source: Asia Pacific Mathematics Newsletter, Volume 1, No. 3 (July 2011).*

*It has been republished here with special permission from World Scientific.*

