Generalized Least Squares Model Averaging

Author: Qingfeng Liu
Language: en
Pages: 54

Book Description
In this paper, we propose a method of averaging generalized least squares estimators for linear regression models with heteroskedastic errors. The averaging weights are chosen to minimize a Mallows' Cp-like criterion. We show that the weight vector selected by our method is optimal, and that this optimality holds even when the error variances are estimated and the resulting feasible generalized least squares estimators are averaged. The variances can be estimated parametrically or nonparametrically. Monte Carlo simulation results are encouraging, and an empirical example illustrates that the proposed method is useful for predicting a measure of firms' performance.
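
A minimal sketch may help fix ideas. The example below is an illustration, not the paper's exact procedure: each candidate model is fitted by feasible GLS under estimated heteroskedastic variances, and the averaging weights are chosen on the probability simplex by minimizing a Mallows/Cp-style penalized weighted fit. The data-generating process, the first-stage variance regression, and the exact penalty (twice the averaged model dimension) are all invented for illustration.

```python
# Hedged sketch of GLS model averaging: not the paper's exact criterion.
# Candidate models are fitted by feasible GLS and the averaging weights
# minimize a Mallows/Cp-style criterion over the probability simplex.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 200, 4
X = rng.normal(size=(n, p))
beta = np.array([1.0, 0.5, 0.25, 0.0])
sigma2 = np.exp(X[:, 0])                       # heteroskedastic variances
y = X @ beta + rng.normal(size=n) * np.sqrt(sigma2)

# First stage: estimate the variance function parametrically by regressing
# log squared OLS residuals on X (an illustrative choice, not the paper's).
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
e2 = (y - X @ b_ols) ** 2
gamma = np.linalg.lstsq(X, np.log(e2 + 1e-12), rcond=None)[0]
sigma2_hat = np.exp(X @ gamma)
W = 1.0 / sigma2_hat                           # GLS weights

# Candidate models: nested regressor sets, for simplicity.
models = [list(range(k)) for k in range(1, p + 1)]
fits, ks = [], []
for idx in models:
    Xm = X[:, idx]
    b = np.linalg.solve(Xm.T @ (Xm * W[:, None]), Xm.T @ (W * y))
    fits.append(Xm @ b)
    ks.append(len(idx))
F, ks = np.column_stack(fits), np.array(ks)

# Cp-style criterion: weighted SSE plus twice the averaged model dimension.
def cp(w):
    r = y - F @ w
    return np.sum(W * r ** 2) + 2.0 * (ks @ w)

M = len(models)
res = minimize(cp, np.full(M, 1.0 / M), bounds=[(0.0, 1.0)] * M,
               constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},))
print("FGLS averaging weights:", res.x.round(3))
```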

Least Squares Model Averaging

Author: Xinyu Zhang
Language: en
Pages: 9

Book Description
This note responds to a recent paper by Hansen (2007, Econometrica), who proposed an optimal model average estimator with weights selected by minimizing a Mallows criterion. The main contribution of Hansen's paper is a demonstration that the Mallows criterion is asymptotically equivalent to the squared error, so the model average estimator that minimizes the Mallows criterion also minimizes the squared error in large samples. We are concerned with two assumptions that accompany Hansen's approach. The first is that the approximating models are strictly nested in a way that depends on the ordering of regressors; often there is no clear basis for the ordering, and the approach does not permit non-nested models, which are more realistic in practice. The second is that, for the optimality result to hold, the model weights must lie within a special discrete set. Hansen (2007) noted both difficulties and called for extensions of the proof techniques. We provide an alternative proof showing that the optimality of the Mallows criterion in fact holds for continuous model weights and under a non-nested setup that allows any linear combination of regressors in the approximating models that make up the model average estimator. These extensions strengthen the existing findings and provide a firmer theoretical basis for the use of the Mallows criterion in model averaging.
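
To make the extension concrete, here is a minimal sketch (with an invented data-generating process) of Mallows model averaging in exactly the setting this note covers: continuous weights on the probability simplex and non-nested candidate models given by arbitrary regressor subsets. The criterion C(w) = ||y - F w||^2 + 2*sigma^2 * k'w follows Hansen (2007), with sigma^2 estimated from the largest candidate model.

```python
# Mallows model averaging (Hansen, 2007) with continuous weights over
# possibly non-nested candidate models. Data and candidate sets are
# invented for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, p = 150, 5
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.0]) + rng.normal(size=n)

# Non-nested candidates: arbitrary regressor subsets are allowed.
models = [[0], [0, 1], [1, 2], [0, 2, 3], list(range(p))]

fits, ks = [], []
for idx in models:
    Xm = X[:, idx]
    H = Xm @ np.linalg.solve(Xm.T @ Xm, Xm.T)   # hat matrix of model m
    fits.append(H @ y)
    ks.append(len(idx))
F, ks = np.column_stack(fits), np.array(ks)

# sigma^2 estimated from the largest candidate model, as in Hansen (2007).
resid_big = y - fits[-1]
sigma2 = resid_big @ resid_big / (n - p)

def mallows(w):
    r = y - F @ w
    return r @ r + 2.0 * sigma2 * (ks @ w)

M = len(models)
res = minimize(mallows, np.full(M, 1.0 / M), bounds=[(0.0, 1.0)] * M,
               constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},))
print("continuous MMA weights:", res.x.round(3))
```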

Bayesian Model Averaging and Weighted Average Least Squares

Author: Giuseppe De Luca
Language: en
Pages: 29

Least Squares Model Averaging by Prediction Criterion

Author: Tian Xie
Language: en
Pages: 40

Essays on Least Squares Model Averaging

Author: Tian Xie
Language: en
Pages: 246

Book Description
This dissertation adds to the literature on least squares model averaging by studying and extending current least squares model averaging techniques. The first chapter reviews the existing literature and discusses the contributions of the dissertation.

The second chapter proposes a new estimator for least squares model averaging. A model average estimator is a weighted average of estimates obtained from a set of candidate models. I propose computing the weights by minimizing a model average prediction criterion (MAPC), and I prove that the MAPC estimator is asymptotically optimal in the sense of achieving the lowest possible mean squared error. For statistical inference, I derive asymptotic tests on the average coefficients of the "core" regressors, which are of primary interest to researchers and are included in every approximating model.

In Chapter Three, two empirical applications of the MAPC method are conducted. In the first, I revisit the economic growth models in Barro (1991); my results provide significant evidence in support of Barro's findings. In the second, I revisit the work of Durlauf, Kourtellos and Tan (2008) (hereafter DKT); many of my results are consistent with DKT's findings, and some provide an alternative explanation to those outlined by DKT.

In the fourth chapter, I propose using the model averaging method to construct optimal instruments for IV estimation when there are many potential instrument sets. The empirical weights are computed by minimizing the model averaging IV (MAIV) criterion through convex optimization. I propose a new loss function to evaluate the performance of the estimator and prove that the instrument set obtained by the MAIV estimator is asymptotically optimal in the sense of achieving the lowest possible value of the loss function.

The fifth chapter develops a new forecast combination method based on MAPC, with empirical weights obtained through a convex optimization of the MAPC criterion. I prove that, with stationary observations, the MAPC estimator is asymptotically optimal for forecast combination in the sense that it achieves the lowest possible one-step-ahead second-order mean squared forecast error (MSFE). I also show that MAPC is asymptotically equivalent to the in-sample mean squared error (MSE) and the MSFE.
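
The MAPC criterion itself is not reproduced in this description, but the convex-optimization structure it shares with other combination schemes is easy to sketch. The toy example below (all data invented) combines candidate forecasts with weights on the probability simplex chosen by minimizing in-sample squared combination error, a simple stand-in for the dissertation's criterion.

```python
# Toy sketch of convex forecast combination: weights on the simplex minimize
# in-sample squared combination error. A stand-in for MAPC, whose exact form
# is not given in the description above.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
T = 120
target = np.sin(np.linspace(0, 8, T)) + rng.normal(scale=0.5, size=T)

# Three hypothetical candidate forecasts with different accuracy.
forecasts = np.column_stack([target + rng.normal(scale=s, size=T)
                             for s in (0.3, 1.0, 2.0)])

def sse(w):
    e = target - forecasts @ w
    return e @ e

M = forecasts.shape[1]
res = minimize(sse, np.full(M, 1.0 / M), bounds=[(0.0, 1.0)] * M,
               constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},))
print("combination weights:", res.x.round(3))  # loads on the accurate model
```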

A New Study on Asymptotic Optimality of Least Squares Model Averaging

Author: Xinyu Zhang
Language: en
Pages: 23

Book Description
In this paper, we present a comprehensive study of the asymptotic optimality of least squares model averaging methods. Asymptotic optimality means that, in a large-sample sense, the method yields the model averaging estimator with the smallest possible prediction loss among all such estimators. In the literature, asymptotic optimality is usually proved under specific weight restrictions or under assumptions that are difficult to interpret. This paper provides a new approach to proving asymptotic optimality in which a general weight set is adopted and easily interpretable assumptions are imposed. In particular, we impose no assumptions on the maximum selection risk and allow a larger number of regressors than existing studies do.

Distribution Theory of the Least Squares Averaging Estimator

Author: Chu-An Liu
Language: en

Book Description
This paper derives the limiting distributions of least squares averaging estimators for linear regression models in a local asymptotic framework. We show that averaging estimators with fixed weights are asymptotically normal, and we develop a plug-in averaging estimator that minimizes the sample analog of the asymptotic mean squared error. We investigate the focused information criterion (Claeskens and Hjort, 2003), the plug-in averaging estimator, the Mallows model averaging estimator (Hansen, 2007), and the jackknife model averaging estimator (Hansen and Racine, 2012). We find that the asymptotic distributions of averaging estimators with data-dependent weights are nonstandard and cannot be approximated by simulation. To address this issue, we propose a simple procedure for constructing valid confidence intervals with improved coverage probability. Monte Carlo simulations show that the plug-in averaging estimator generally has smaller expected squared error than other existing model averaging methods and that the coverage probability of the proposed confidence intervals achieves the nominal level. As an empirical illustration, the proposed methodology is applied to cross-country growth regressions.

Inference After Model Averaging in Linear Regression Models

Author: Xinyu Zhang
Language: en
Pages: 32

Book Description
This paper considers the problem of inference for nested least squares averaging estimators. We study the asymptotic behavior of the Mallows model averaging estimator (MMA; Hansen, 2007) and the jackknife model averaging estimator (JMA; Hansen and Racine, 2012) under the standard fixed-parameter asymptotic setup. We find that both the MMA and JMA estimators asymptotically assign zero weight to under-fitted models, while the MMA and JMA weights of just-fitted and over-fitted models are asymptotically random. Building on this asymptotic behavior of the model weights, we derive the asymptotic distributions of the MMA and JMA estimators and propose a simulation-based confidence interval for the least squares averaging estimator. Monte Carlo simulations show that the coverage probabilities of the proposed confidence intervals achieve the nominal level.
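
For reference, the JMA estimator studied here admits a compact implementation. The sketch below (invented data, nested candidate models as in the paper's setup) computes each model's closed-form leave-one-out residuals and chooses weights on the probability simplex by minimizing the cross-validation criterion of Hansen and Racine (2012).

```python
# Sketch of jackknife model averaging (JMA; Hansen and Racine, 2012):
# weights minimize the leave-one-out criterion CV(w) = ||E w||^2, where
# column m of E holds model m's leave-one-out residuals.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, p = 100, 4
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, 0.4, 0.0, 0.0]) + rng.normal(size=n)

models = [list(range(k)) for k in range(1, p + 1)]  # nested candidates
E = []
for idx in models:
    Xm = X[:, idx]
    H = Xm @ np.linalg.solve(Xm.T @ Xm, Xm.T)       # hat matrix
    e = y - H @ y
    E.append(e / (1.0 - np.diag(H)))    # closed-form leave-one-out residuals
E = np.column_stack(E)

def cv(w):
    r = E @ w
    return r @ r

M = len(models)
res = minimize(cv, np.full(M, 1.0 / M), bounds=[(0.0, 1.0)] * M,
               constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},))
print("JMA weights:", res.x.round(3))
```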

Bayesian Model Averaging with Exponentiated Least Square Loss

Author: Dong Dai
Category: Bayesian statistical decision theory
Language: en
Pages: 116

Book Description
Given a finite family of functions, the goal of model averaging is to construct a procedure that mimics the function from this family that is closest to an unknown regression function. More precisely, we consider a general regression model with fixed design and measure the distance between functions by the mean squared error (MSE) at the design points. In this thesis, we propose a new method, Bayesian model averaging with exponentiated least square loss (BMAX), to solve the model averaging problem optimally in a minimax sense.
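
The description does not spell out the BMAX weights, but the flavor of exponentiated least square loss can be conveyed by the classical exponential-weights aggregate, in which each candidate receives weight proportional to exp(-SSE/beta) with temperature beta = 4*sigma^2, a standard choice in this literature. The family, design, and noise level below are invented for illustration.

```python
# Classical exponential-weights aggregation over a finite family of functions,
# illustrating the exponentiated least square loss idea. This is a standard
# scheme, not the thesis's BMAX procedure itself.
import numpy as np

rng = np.random.default_rng(4)
n, sigma = 100, 0.3
x = np.linspace(0.0, 1.0, n)                    # fixed design
f_true = np.sin(2 * np.pi * x)
y = f_true + rng.normal(scale=sigma, size=n)

# Finite family of candidate functions evaluated at the design points.
family = [np.sin(2 * np.pi * x), np.cos(2 * np.pi * x), x, np.zeros(n)]

beta = 4.0 * sigma ** 2                         # standard temperature choice
sse = np.array([np.sum((y - f) ** 2) for f in family])
logw = -sse / beta
w = np.exp(logw - logw.max())                   # numerically stable softmax
w /= w.sum()

f_hat = sum(wi * fi for wi, fi in zip(w, family))
print("weights:", w.round(3))
print("MSE at design points:", np.mean((f_hat - f_true) ** 2))
```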