Bayesian Estimation of Finite-Horizon Discrete Choice Dynamic Programming Models

Author: Masakazu Ishihara
Language: English
Pages: 29

Book Description
We develop a Bayesian Markov chain Monte Carlo (MCMC) algorithm for estimating finite-horizon discrete choice dynamic programming (DDP) models. The proposed algorithm has the potential to reduce the computational burden significantly when some of the state variables are continuous. In a conventional approach to estimating such a finite-horizon DDP model, researchers achieve a reduction in estimation time by evaluating value functions at only a subset of state points and applying an interpolation method to approximate value functions at the remaining state points (e.g., Keane and Wolpin 1994). Although this approach has proven to be effective, the computational burden could still be high if the model has multiple continuous state variables or the number of periods in the time horizon is large. We propose a new estimation algorithm to reduce the computational burden for estimating this class of models. It extends the Bayesian MCMC algorithm for stationary infinite-horizon DDP models proposed by Imai, Jain and Ching (2009) (IJC). In our algorithm, we solve value functions at only one randomly chosen state point per time period, store those partially solved value functions period by period, and approximate expected value functions nonparametrically using the set of those partially solved value functions. We conduct Monte Carlo exercises and show that our algorithm is able to recover the true parameter values well. Finally, similar to IJC, our algorithm allows researchers to incorporate flexible unobserved heterogeneity, which is often computationally infeasible in the conventional two-step estimation approach (e.g., Hotz and Miller 1993; Aguirregabiria and Mira 2002).
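The core idea described above can be sketched in a toy model. The sketch below is not the authors' implementation: the flow utility, discount factor, bandwidth, and three-period horizon are all illustrative assumptions. It solves the value function at a single randomly chosen state point per period, stores those partially solved values period by period, and approximates expected values nonparametrically with a kernel-weighted average over the stored pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 3            # number of periods (toy horizon)
N_ITER = 2000    # solution-estimation iterations
BW = 0.1         # kernel bandwidth over the continuous state

# store[t] holds (state, value) pairs accumulated over past iterations:
# the "partially solved value functions" kept period by period.
store = [[] for _ in range(T)]

def emax_approx(t, s):
    """Kernel-weighted average of stored period-t values near state s,
    standing in for the nonparametric expected-value approximation."""
    if t >= T or not store[t]:
        return 0.0
    pts = np.asarray(store[t])
    w = np.exp(-((pts[:, 0] - s) / BW) ** 2)
    return float(np.dot(w, pts[:, 1]) / (w.sum() + 1e-12))

theta, beta = 1.0, 0.9   # hypothetical utility weight and discount factor
for _ in range(N_ITER):
    # Solve the value function at ONE random state point per period,
    # working backward from the terminal period.
    for t in range(T - 1, -1, -1):
        s = rng.uniform(0.0, 1.0)
        # choice 0: exit (utility 0); choice 1: stay, getting theta*s plus
        # a discounted continuation value approximated from the store.
        v = max(0.0, theta * s + beta * emax_approx(t + 1, s))
        store[t].append((s, v))

# After many iterations the stored pairs trace out each period's value
# function over the whole state space.
v_mid = emax_approx(0, 0.5)
```

Here the Gaussian kernel over stored state points plays the role of the paper's nonparametric approximation; in the actual algorithm the stored values also vary with the parameter draws of the MCMC chain, which this single-parameter toy omits.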

A Practitioner's Guide to Bayesian Estimation of Discrete Choice Dynamic Programming Models

Author: Andrew T. Ching
Language: English

Book Description
This paper provides a step-by-step guide to estimating infinite horizon discrete choice dynamic programming (DDP) models using a new Bayesian estimation algorithm (Imai, Jain and Ching, Econometrica 77:1865-1899, 2009) (IJC). In the conventional nested fixed point algorithm, most of the information obtained in the past iterations remains unused in the current iteration. In contrast, the IJC algorithm extensively uses the computational results obtained from the past iterations to help solve the DDP model at the current iterated parameter values. Consequently, it has the potential to significantly alleviate the computational burden of estimating DDP models. To illustrate this new estimation method, we use a simple dynamic store choice model where stores offer "frequent-buyer" type reward programs. We show that the parameters of this model, including the discount factor, are well-identified. Our Monte Carlo results demonstrate that the IJC method is able to recover the true parameter values of this model quite precisely. We also show that the IJC method could reduce the estimation time significantly when estimating DDP models with unobserved heterogeneity, especially when the discount factor is close to 1.
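The contrast with the nested fixed point algorithm can be sketched as follows; everything here (the flow utility, bandwidth, and window length) is a made-up toy, not the paper's store choice model. Instead of iterating the Bellman operator to convergence at each parameter draw, one Bellman step is taken per iteration, and the continuation value is recycled from past iterations through a kernel in parameter space.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy infinite-horizon problem: for a scalar parameter theta, the value
# satisfies v(theta) = u(theta) + beta * v(theta), so the exact fixed
# point is v*(theta) = u(theta) / (1 - beta). IJC-style, we apply ONE
# Bellman update per iteration and approximate the continuation value by
# a kernel-weighted average over past iterations' (parameter, value) pairs.
beta = 0.9
u = lambda th: th ** 2       # hypothetical flow utility

history = []                 # (theta, pseudo-value) from past iterations
H = 200                      # reuse only the most recent iterations
BW = 0.05                    # kernel bandwidth in parameter space

def v_approx(th):
    """Kernel-weighted average of recent pseudo-values near theta."""
    if not history:
        return 0.0
    pts = np.asarray(history[-H:])
    w = np.exp(-((pts[:, 0] - th) / BW) ** 2)
    return float(np.dot(w, pts[:, 1]) / (w.sum() + 1e-12))

for _ in range(5000):
    th = rng.uniform(0.8, 1.2)          # stands in for an MCMC draw
    # one Bellman step, reusing past computational results:
    history.append((th, u(th) + beta * v_approx(th)))

# v_approx(1.0) approaches the exact fixed point u(1)/(1-beta) = 10
# as iterations accumulate.
```

The recency window `H` mirrors the paper's point that only recent iterations, whose parameter draws are close to the current one, carry useful information; older values reflect a poorly converged approximation.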

Bayesian Estimation of Dynamic Discrete Choice Models

Author: Susumu Imai
Language: English

Book Description
We propose a new methodology for the structural estimation of infinite-horizon dynamic discrete choice models. We combine the dynamic programming (DP) solution algorithm with a Bayesian Markov chain Monte Carlo algorithm into a single algorithm that solves the DP problem and estimates the parameters simultaneously. As a result, the computational burden of estimating a dynamic model becomes comparable to that of a static model. Another feature of our algorithm is that, even though the number of grid points on the state variable is small in any single solution-estimation iteration, the number of effective grid points increases with the number of estimation iterations; this is how the algorithm eases the "curse of dimensionality." We simulate and estimate several versions of a simple model of entry and exit to illustrate our methodology. We also prove that, under standard conditions, the distribution of the parameter draws converges to the true posterior distribution, regardless of the starting values.
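A minimal sketch of interleaving the two algorithms, under strong simplifying assumptions: the model and data are invented, and the stand-in likelihood below does not actually depend on the value-function update, whereas in the real method the two are linked through the choice probabilities. The point is only the loop structure, in which each Metropolis draw triggers a single Bellman step rather than a full DP solution.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random-walk Metropolis sampler for a scalar parameter theta in which
# each iteration performs ONE Bellman update of the value function
# instead of solving the DP problem to convergence.
beta = 0.9
data = rng.normal(1.0, 0.5, size=200)    # hypothetical observations

def log_like(th):
    # Stand-in Gaussian log-likelihood (flat prior); in a real DDP model
    # this would be built from choice probabilities that use v.
    return -0.5 * np.sum((data - th) ** 2) / 0.25

v = 0.0          # scalar stand-in for the value-function approximation
theta = 0.5
draws = []
for _ in range(3000):
    v = theta ** 2 + beta * v            # one Bellman step at current theta
    prop = theta + 0.1 * rng.normal()    # random-walk proposal
    if np.log(rng.uniform()) < log_like(prop) - log_like(theta):
        theta = prop
    draws.append(theta)

post_mean = float(np.mean(draws[1000:]))  # posterior mean after burn-in
```

Because the DP solution is amortized across the chain, the per-iteration cost is that of one Bellman update plus one likelihood evaluation, which is the sense in which estimation becomes comparable to the static case.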

Bayesian Procedures as a Numerical Tool for the Estimation of Dynamic Discrete Choice Models

Author: Peter Haan
Language: English

Discrete Choice Methods with Simulation

Author: Kenneth Train
Publisher: Cambridge University Press
ISBN: 0521766559
Category: Business & Economics
Language: English
Pages: 399

Book Description
This book describes the new generation of discrete choice methods, focusing on the many advances that are made possible by simulation. Researchers use these statistical methods to examine the choices that consumers, households, firms, and other agents make. Each of the major models is covered: logit, generalized extreme value, or GEV (including nested and cross-nested logits), probit, and mixed logit, plus a variety of specifications that build on these basics. Simulation-assisted estimation procedures are investigated and compared, including maximum simulated likelihood, the method of simulated moments, and the method of simulated scores. Procedures for drawing from densities are described, including variance-reduction techniques such as antithetic and Halton draws. Recent advances in Bayesian procedures are explored, including the use of the Metropolis-Hastings algorithm and Gibbs sampling. The second edition adds chapters on endogeneity and expectation-maximization (EM) algorithms. No other book incorporates all these fields, which have arisen in the past 25 years. The procedures are applicable in many fields, including energy, transportation, environmental studies, health, labor, and marketing.
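As an illustration of the variance-reduction material, here is a sketch of Halton draws used to simulate a mixed logit choice probability. The model (a binary choice with one normal random coefficient) is a made-up example chosen for brevity, not one from the book.

```python
import numpy as np
from statistics import NormalDist

def halton(n, base):
    """First n points of the one-dimensional Halton sequence in (0, 1)."""
    seq = np.empty(n)
    for i in range(1, n + 1):
        f, r, k = 1.0, 0.0, i
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        seq[i - 1] = r
    return seq

# Toy binary mixed logit: utility b*x with random coefficient b ~ N(1, 1).
# The choice probability E_b[ 1 / (1 + exp(-b*x)) ] has no closed form, so
# it is simulated by averaging over draws of b. Halton points spread more
# evenly over (0, 1) than pseudo-random uniforms, so the simulated
# probability settles down with fewer draws.
x, R = 1.0, 200
u = halton(R, base=2)
b = np.array([NormalDist(1.0, 1.0).inv_cdf(ui) for ui in u])  # N(1,1) draws
p_sim = float(np.mean(1.0 / (1.0 + np.exp(-b * x))))
```

In higher dimensions one Halton sequence per random coefficient is used, each with a distinct prime base, to avoid the correlation that would arise from reusing a single base.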

Bayesian Inference in Dynamic Discrete Choice Models

Author: Andriy Norets
Language: English

Semiparametric Bayesian Estimation of Discrete Choice Models

Author: Sylvie Tchumtchoua
Category: Mathematical statistics
Language: English
Pages: 62

Economic Dynamics in Discrete Time

Author: Jianjun Miao
Publisher: MIT Press
ISBN: 0262325608
Category: Business & Economics
Language: English
Pages: 737

Book Description
This book offers a unified, comprehensive, and up-to-date treatment of analytical and numerical tools for solving dynamic economic problems. The focus is on introducing recursive methods, an important part of every economist's toolkit, and readers learn to apply them to a variety of dynamic economic problems. The book is notable for its combination of theoretical foundations and numerical methods: each topic is first described in theoretical terms, with explicit definitions and rigorous proofs, followed by numerical methods and computer code to implement them. Drawing on the latest research, the book covers such cutting-edge topics as asset price bubbles, recursive utility, robust control, policy analysis in dynamic New Keynesian models with the zero lower bound on interest rates, and Bayesian estimation of dynamic stochastic general equilibrium (DSGE) models.

The book first introduces the theory of dynamical systems and numerical methods for solving them, then discusses the theory and applications of dynamic optimization. It goes on to treat equilibrium analysis, covering a variety of core macroeconomic models along with additional topics such as recursive utility (increasingly used in finance and macroeconomics), dynamic games, and recursive contracts. It also introduces Dynare, a widely used software platform for handling a range of economic models; readers learn to use Dynare to solve DSGE models numerically and to perform Bayesian estimation of them. Mathematical appendixes present all the necessary mathematical concepts and results, and the Matlab codes used to solve examples are indexed and downloadable from the book's website. A solutions manual for students is available for sale from the MIT Press; a downloadable instructor's manual is available to qualified instructors.
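The recursive methods the book centers on can be illustrated with a short value function iteration. The cake-eating model below is a standard textbook example chosen for brevity, not code from the book (which works in Matlab and Dynare).

```python
import numpy as np

# Cake-eating problem with log utility:
#   V(k) = max_{0 < c <= k} log(c) + beta * V(k - c)
# solved by discretized value function iteration. With log utility the
# exact solution is V(k) = log(k)/(1-beta) + constant, which the
# iteration approaches as the grid is refined.
beta = 0.9
grid = np.linspace(1e-3, 1.0, 200)   # grid over the remaining cake
V = np.zeros_like(grid)

for _ in range(500):
    V_new = np.empty_like(V)
    for i, k in enumerate(grid):
        c = grid[grid <= k]               # feasible consumption levels
        Vp = np.interp(k - c, grid, V)    # interpolated continuation value
        V_new[i] = np.max(np.log(c) + beta * Vp)
    if np.max(np.abs(V_new - V)) < 1e-8:  # sup-norm stopping rule
        V = V_new
        break
    V = V_new
```

Because the Bellman operator is a contraction with modulus `beta`, the sup-norm stopping rule is guaranteed to trigger; the resulting `V` is increasing in the cake size, as theory requires.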