Least Squares Model Averaging by Prediction Criterion

Author: Tian Xie
Languages: en
Pages: 40


Generalized Least Squares Model Averaging

Author: Qingfeng Liu
Languages: en
Pages: 54


Book Description
In this paper, we propose a method of averaging generalized least squares estimators for linear regression models with heteroskedastic errors. The averaging weights are chosen to minimize Mallows' Cp-like criterion. We show that the weight vector selected by our method is optimal. It is also shown that this optimality holds even when the variances of the error terms are estimated and the feasible generalized least squares estimators are averaged. The variances can be estimated parametrically or nonparametrically. Monte Carlo simulation results are encouraging. An empirical example illustrates that the proposed method is useful for predicting a measure of firms' performance.
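The paper's feasible-GLS criterion and optimality proofs are not reproduced in this description. As a rough illustration of the general recipe it describes — stack the candidate models' fitted values, then choose simplex weights minimizing a Mallows-type penalized residual criterion — here is a minimal OLS-based sketch; the function name, the simulated heteroskedastic data, and the choice to estimate the error variance from the largest model are all illustrative assumptions, not the authors' code:

```python
import numpy as np
from scipy.optimize import minimize

def mallows_weights(y, X, model_cols):
    """Model-average weights minimizing a Mallows-type criterion
    C(w) = ||y - F w||^2 + 2 * sigma2 * k'w over the unit simplex."""
    n = len(y)
    fits, ks = [], []
    for cols in model_cols:
        Xm = X[:, cols]
        beta, *_ = np.linalg.lstsq(Xm, y, rcond=None)
        fits.append(Xm @ beta)
        ks.append(len(cols))
    F = np.column_stack(fits)          # n x M matrix of candidate fitted values
    k = np.array(ks, dtype=float)      # parameter count of each candidate model
    # Error variance estimated from the largest candidate model (one common choice)
    big = max(range(len(model_cols)), key=lambda m: ks[m])
    sigma2 = np.sum((y - F[:, big]) ** 2) / (n - ks[big])

    def crit(w):
        r = y - F @ w
        return r @ r + 2.0 * sigma2 * (k @ w)

    M = F.shape[1]
    res = minimize(crit, np.full(M, 1.0 / M), method="SLSQP",
                   bounds=[(0.0, 1.0)] * M,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    return res.x

# Small demo on simulated data with heteroskedastic errors
rng = np.random.default_rng(0)
n, K = 200, 5
X = rng.normal(size=(n, K))
beta = np.array([1.0, 0.8, 0.4, 0.0, 0.0])
e = rng.normal(size=n) * (1.0 + 0.5 * np.abs(X[:, 0]))   # variance depends on x1
y = X @ beta + e
models = [list(range(m + 1)) for m in range(K)]          # nested candidate models
w = mallows_weights(y, X, models)
```

The heteroskedastic-errors case treated in the paper would replace the OLS fits with (feasible) GLS fits; the weight-selection step has the same convex-optimization shape.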

Essays on Least Squares Model Averaging

Author: Tian Xie
Languages: en
Pages: 246


Book Description
This dissertation adds to the literature on least squares model averaging by studying and extending current least squares model averaging techniques. The first chapter reviews the existing literature and discusses the contributions of this dissertation.

The second chapter proposes a new estimator for least squares model averaging. A model average estimator is a weighted average of common estimates obtained from a set of models. I propose computing weights by minimizing a model average prediction criterion (MAPC). I prove that the MAPC estimator is asymptotically optimal in the sense of achieving the lowest possible mean squared error. For statistical inference, I derive asymptotic tests on the average coefficients for the "core" regressors. These regressors are of primary interest to researchers and are included in every approximation model.

In Chapter Three, two empirical applications of the MAPC method are conducted. In the first, I revisit the economic growth models in Barro (1991); my results provide significant evidence supporting Barro's (1991) findings. In the second, I revisit the work of Durlauf, Kourtellos and Tan (2008) (hereafter DKT). Many of my results are consistent with DKT's findings, and some provide an alternative explanation to those outlined by DKT.

In the fourth chapter, I propose using model averaging to construct optimal instruments for IV estimation when there are many potential instrument sets. The empirical weights are computed by minimizing the model averaging IV (MAIV) criterion through convex optimization. I propose a new loss function to evaluate the performance of the estimator and prove that the instrument set obtained by the MAIV estimator is asymptotically optimal in the sense of achieving the lowest possible value of the loss function.

The fifth chapter develops a new forecast combination method based on MAPC, with empirical weights obtained through convex optimization of MAPC. I prove that with stationary observations, the MAPC estimator is asymptotically optimal for forecast combination in that it achieves the lowest possible one-step-ahead second-order mean squared forecast error (MSFE). I also show that MAPC is asymptotically equivalent to the in-sample mean squared error (MSE) and the MSFE.
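The exact form of the MAPC criterion is given in the dissertation and is not reproduced in this abstract. The sketch below only illustrates the generic convex forecast-combination step the fifth chapter builds on — choosing simplex weights by minimizing in-sample squared forecast error, then evaluating the one-step-ahead MSFE out of sample. The two candidate forecasters (unconditional mean and AR(1)) and all names are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
# Simulate a stationary AR(1) series
T = 400
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.6 * y[t - 1] + rng.normal()

# Two candidate one-step-ahead forecasts built on a training window
train = 300
y_tr = y[:train]
f_mean = np.full(train - 1, y_tr.mean())                  # unconditional-mean forecast
phi = np.dot(y_tr[1:], y_tr[:-1]) / np.dot(y_tr[:-1], y_tr[:-1])
f_ar = phi * y_tr[:-1]                                    # estimated AR(1) forecast
F = np.column_stack([f_mean, f_ar])                       # (train-1) x 2 forecast matrix
target = y_tr[1:]

def insample_mse(w):
    r = target - F @ w
    return np.mean(r ** 2)

# Convex optimization of the combination weights over the unit simplex
w = minimize(insample_mse, np.array([0.5, 0.5]), method="SLSQP",
             bounds=[(0.0, 1.0)] * 2,
             constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}]).x

# Out-of-sample one-step-ahead MSFE of the combined forecast
f_oos = w[0] * y_tr.mean() + w[1] * phi * y[train - 1:T - 1]
msfe = np.mean((y[train:] - f_oos) ** 2)
```

MAPC replaces the raw in-sample MSE objective with a penalized prediction criterion; the simplex-constrained convex optimization is the same.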

Least Squares Model Averaging

Author: Xinyu Zhang
Languages: en
Pages: 9


Book Description
This note responds to a recent paper by Hansen (2007, Econometrica), who proposed an optimal model average estimator with weights selected by minimizing a Mallows criterion. The main contribution of Hansen's paper is a demonstration that the Mallows criterion is asymptotically equivalent to the squared error, so the model average estimator that minimizes the Mallows criterion also minimizes the squared error in large samples. We are concerned with two assumptions that accompany Hansen's approach. The first is that the approximating models are strictly nested in a way that depends on the ordering of regressors; often there is no clear basis for the ordering, and the approach does not permit non-nested models, which are more realistic in practice. The second is that, for the optimality result to hold, the model weights must lie within a special discrete set. In fact, Hansen (2007) noted both difficulties and called for extensions of the proof techniques. We provide an alternative proof showing that the optimality of the Mallows criterion in fact holds for continuous model weights and under a non-nested set-up that allows any linear combination of regressors in the approximating models making up the model average estimator. These are important extensions, and our results strengthen the theoretical basis for the use of the Mallows criterion in model averaging.
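The note's central point — continuous weights can only do at least as well as a discrete weight grid, since the grid is a subset of the simplex — can be checked numerically. A toy two-model sketch (the data, grid size, and variable names are illustrative assumptions, not the note's construction):

```python
import numpy as np
from scipy.optimize import minimize_scalar  # two models: the weight is a scalar in [0, 1]

rng = np.random.default_rng(2)
n = 150
X = rng.normal(size=(n, 2))
y = X @ np.array([1.0, 0.3]) + rng.normal(size=n)

def ols_fit(cols):
    Xm = X[:, cols]
    b, *_ = np.linalg.lstsq(Xm, y, rcond=None)
    return Xm @ b

f1, f2 = ols_fit([0]), ols_fit([0, 1])          # nested candidate models
sigma2 = np.sum((y - f2) ** 2) / (n - 2)        # variance estimate from the larger model

def mallows(w):
    """Mallows criterion with weight w on model 1 and 1 - w on model 2."""
    fit = w * f1 + (1 - w) * f2
    k = w * 1 + (1 - w) * 2                     # averaged parameter count
    return np.sum((y - fit) ** 2) + 2 * sigma2 * k

# Discrete weight set {0, 1/N, ..., 1} as in Hansen (2007), here N = 10
grid = np.linspace(0.0, 1.0, 11)
best_discrete = min(mallows(w) for w in grid)

# Continuous weights on [0, 1] -- the extension the note proves optimal
res = minimize_scalar(mallows, bounds=(0.0, 1.0), method="bounded")
best_continuous = res.fun
```

Because the criterion is a convex quadratic in the scalar weight, the continuous minimizer attains a criterion value no larger than the best grid point.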

Least Squares Model Combining by Mallows Criterion

Author: Xinyu Zhang
Languages: en
Pages: 11


Book Description
This note responds to a recent paper by Hansen (2007, Econometrica), who proposed an optimal model average estimator with weights selected by minimizing a Mallows criterion. The main contribution of Hansen's paper is a demonstration that the Mallows criterion is asymptotically equivalent to the squared error, so the model average estimator that minimizes the Mallows criterion also minimizes the squared error in large samples. We are concerned with two assumptions that accompany Hansen's approach. The first is that the approximating models are strictly nested in a way that depends on the ordering of regressors; often there is no clear basis for the ordering, and the approach does not permit non-nested models, which are more realistic in practice. The second is that, for the optimality result to hold, the model weights must lie within a special discrete set. In fact, Hansen (2007) noted both difficulties and called for extensions of the proof techniques. We provide an alternative proof showing that the optimality of the Mallows criterion in fact holds for continuous model weights and under a non-nested set-up that allows any linear combination of regressors in the approximating models making up the model average estimator. These are important extensions, and our results strengthen the theoretical basis for the use of the Mallows criterion in model averaging.

A New Study on Asymptotic Optimality of Least Squares Model Averaging

Author: Xinyu Zhang
Languages: en
Pages: 23


Book Description
In this paper, we present a comprehensive study of the asymptotic optimality of least squares model averaging methods. Asymptotic optimality means that, in a large-sample sense, the method yields the model averaging estimator with the smallest possible prediction loss among all such estimators. In the literature, asymptotic optimality is usually proved under specific weight restrictions or under assumptions that are difficult to interpret. This paper provides a new approach to proving asymptotic optimality in which a general weight set is adopted and only easily interpretable assumptions are imposed. In particular, we impose no assumptions on the maximum selection risk and allow a larger number of regressors than existing studies do.

The Oxford Handbook of Applied Nonparametric and Semiparametric Econometrics and Statistics

Author: Jeffrey Racine
Publisher: Oxford University Press
ISBN: 0199857946
Category: Business & Economics
Languages: en
Pages: 562


Book Description
This volume, edited by Jeffrey Racine, Liangjun Su, and Aman Ullah, contains the latest research on nonparametric and semiparametric econometrics and statistics. Chapters by leading international econometricians and statisticians highlight the interface between econometrics and statistical methods for nonparametric and semiparametric procedures.

Selecting Models from Data

Author: P. Cheeseman
Publisher: Springer Science & Business Media
ISBN: 1461226600
Category: Mathematics
Languages: en
Pages: 475


Book Description
This volume is a selection of papers presented at the Fourth International Workshop on Artificial Intelligence and Statistics, held in January 1993. These biennial workshops have succeeded in bringing together researchers from artificial intelligence and from statistics to discuss problems of mutual interest. The exchange has broadened research in both fields and has strongly encouraged interdisciplinary work. The theme of the 1993 AI and Statistics workshop was "Selecting Models from Data". The papers in this volume attest to the diversity of approaches to model selection and to the ubiquity of the problem. Both statistics and artificial intelligence have independently developed approaches to model selection and the corresponding algorithms to implement them. But as these papers make clear, there is a high degree of overlap between the different approaches. In particular, there is agreement that the fundamental problem is the avoidance of "overfitting", i.e., where a model fits the given data very closely but is a poor predictor for new data; in other words, the model has partly fitted the "noise" in the original data.

Model Averaging

Author: David Fletcher
Publisher: Springer
ISBN: 3662585413
Category: Mathematics
Languages: en
Pages: 107


Book Description
This book provides a concise and accessible overview of model averaging, with a focus on applications. Model averaging is a common means of allowing for model uncertainty when analysing data, and has been used in a wide range of application areas, such as ecology, econometrics, meteorology and pharmacology. The book presents an overview of the methods developed in this area, illustrating many of them with examples from the life sciences involving real-world data. It also includes an extensive list of references and suggestions for further research. Further, it clearly demonstrates the links between the methods developed in statistics, econometrics and machine learning, as well as the connection between the Bayesian and frequentist approaches to model averaging. The book appeals to statisticians and scientists interested in what methods are available, how they differ and what is known about their properties. It is assumed that readers are familiar with the basic concepts of statistical theory and modelling, including probability, likelihood and generalized linear models.

Elements of Large-Sample Theory

Author: E.L. Lehmann
Publisher: Springer Science & Business Media
ISBN: 0387227296
Category: Mathematics
Languages: en
Pages: 640


Book Description
Written by one of the main figures in twentieth-century statistics, this book provides a unified treatment of first-order large-sample theory. It discusses a broad range of applications, including introductions to density estimation, the bootstrap, and the asymptotics of survey methodology. The book is written at an elementary level, making it accessible to most readers.