E**U
Classic book, new print.
Prof. Bertsekas’ books are always good to read. It’s an old book but the printing is very new (2021).
E**E
Great book
Great book. Bear in mind this is an advanced book in reinforcement learning.
D**H
Awesome and definitive!
The updated version is a definitive treatise on the math that goes along with the ideas of reinforcement learning and approximate dynamic programming. Thoroughly enjoying the book.
Y**N
Excellent
Excellent text book for those who are interested in mathematical foundations of reinforcement learning.
H**N
Five Stars
Great book!
B**B
I like it.
New one. I like it.
W**L
From the author of Approximate Dynamic Programming
Neuro-Dynamic Programming was, and is, a foundational reference for anyone wishing to work in the field that goes under names such as approximate dynamic programming, adaptive dynamic programming, reinforcement learning or, as a result of this book, neuro-dynamic programming. This is a clearly written treatment of the theory behind methods for solving dynamic programs by approximating the value function (in this book, the cost-to-go function). The book is primarily for doctoral students and researchers. It provides descriptions of many solution strategies, but not at the level of detailed recipes. The presentation focuses on theory, but at a very readable level. Building on the prior work of the authors, this is the first book that brings together approximation methods in dynamic programming with the theory of stochastic approximation methods (with its origins in Robbins and Monro), which provides the foundation for the convergence proofs. This book, along with Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning) by Sutton and Barto, was a major reference when I started my own work in this field, leading up to my book Approximate Dynamic Programming: Solving the Curses of Dimensionality (Wiley Series in Probability and Statistics), published by John Wiley and Sons. My students still use Neuro-Dynamic Programming as a reference for their research.
Warren Powell
Professor, Operations Research and Financial Engineering
Princeton University
A**G
One of my favorites
This was one of my favorite books as a student and it still is. The book presents much of the theory underlying reinforcement learning (the authors like to call it neuro-dynamic programming) in a clear and compact manner. While there are detailed mathematical proofs in many chapters, the book is actually pretty easy to read! It for sure helped me build an intuitive understanding of the mechanisms of reinforcement learning when I was a student. The book is certainly useful for beginners to this area; Ph.D. students are likely to get even more out of it. Some special features of the book are:
* A clear discussion of the connection between classical dynamic programming and reinforcement learning (RL), along with the links to the Robbins-Monro algorithm
* Discussion of numerous types of approximate policy iteration
* A clear discussion of TD(lambda)
* Extensions to average reward (cost) problems
* A rigorous discussion of how function approximation works within this framework
* Treatment of the stochastic shortest path problem
* Detailed proofs of convergence of numerous NDP/RL algorithms
* Material/ideas on numerous topics in NDP/RL (for future research)
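For readers curious what the TD(lambda) and Robbins-Monro connections mentioned above look like in practice, here is a minimal sketch (not code from the book): TD(lambda) policy evaluation with linear features on a toy 5-state random walk, using a diminishing, Robbins-Monro-style step size. The environment, feature map, and step-size schedule are illustrative assumptions chosen only to make the example self-contained.

```python
"""Illustrative sketch of TD(lambda) with linear features (not from the book).

Evaluates the random policy on a 5-state random walk; the weight vector
approaches the true state values 1/6, 2/6, ..., 5/6.
"""
import numpy as np

N_STATES = 5      # non-terminal states 0..4; terminals to the left of 0 and right of 4
GAMMA = 1.0       # undiscounted episodic task
LAM = 0.9         # eligibility-trace decay, the "lambda" in TD(lambda)


def phi(s):
    """One-hot feature vector for state s (tabular case viewed as linear features)."""
    x = np.zeros(N_STATES)
    x[s] = 1.0
    return x


def random_walk_step(s, rng):
    """Move left or right with equal probability; reward 1 only on exiting to the right."""
    s2 = s + (1 if rng.random() < 0.5 else -1)
    if s2 == N_STATES:
        return None, 1.0   # right terminal
    if s2 < 0:
        return None, 0.0   # left terminal
    return s2, 0.0


def td_lambda(num_episodes=5000, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros(N_STATES)               # weights of the linear cost-to-go approximation
    for k in range(1, num_episodes + 1):
        alpha = 1.0 / (1.0 + 0.01 * k)       # diminishing (Robbins-Monro style) step size
        s = N_STATES // 2                    # each episode starts in the middle state
        z = np.zeros(N_STATES)               # eligibility trace
        while s is not None:
            s2, r = random_walk_step(s, rng)
            v_s = theta @ phi(s)
            v_s2 = 0.0 if s2 is None else theta @ phi(s2)
            delta = r + GAMMA * v_s2 - v_s   # temporal-difference error
            z = GAMMA * LAM * z + phi(s)     # accumulate trace
            theta += alpha * delta * z       # TD(lambda) update
            s = s2
    return theta


if __name__ == "__main__":
    print(td_lambda())
```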