Variable Selection by Regularization Methods for Generalized Mixed Models

Variable Selection by Regularization Methods for Generalized Mixed Models PDF Author: Andreas Groll
Publisher:
ISBN: 9783869559636
Category :
Languages : en
Pages : 160

Book Description

Multivariate Statistical Modelling Based on Generalized Linear Models

Multivariate Statistical Modelling Based on Generalized Linear Models PDF Author: Ludwig Fahrmeir
Publisher: Springer Science & Business Media
ISBN: 1489900101
Category : Mathematics
Languages : en
Pages : 440

Book Description
Concerned with the use of generalised linear models for univariate and multivariate regression analysis, this is a detailed introductory survey of the subject, based on the analysis of real data drawn from a variety of subjects such as the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account.

Variable Selection Procedures for Generalized Linear Mixed Models in Longitudinal Data Analysis

Variable Selection Procedures for Generalized Linear Mixed Models in Longitudinal Data Analysis PDF Author: Hongmei Yang
Publisher:
ISBN:
Category :
Languages : en
Pages : 114

Book Description
Keywords: variance component, Laplace approximation, generalized linear mixed model, quasi-likelihood, generalized estimation equation, approximate marginal likelihood, SCAD.

Bayesian Hierarchical Models

Bayesian Hierarchical Models PDF Author: Peter D. Congdon
Publisher: CRC Press
ISBN: 0429532903
Category : Mathematics
Languages : en
Pages : 506

Book Description
An intermediate-level treatment of Bayesian hierarchical models and their applications, this book demonstrates the advantages of a Bayesian approach to data sets involving inferences for collections of related units or variables, and in methods where parameters can be treated as random collections. Through illustrative data analysis and attention to statistical computing, this book facilitates practical implementation of Bayesian hierarchical methods. The new edition is a revision of the book Applied Bayesian Hierarchical Methods. It maintains a focus on applied modelling and data analysis, but now using entirely R-based Bayesian computing options. It has been updated with a new chapter on regression for causal effects, and one on computing options and strategies. This latter chapter is particularly important, due to recent advances in Bayesian computing and estimation, including the development of rjags and rstan. It also features updates throughout with new examples. The examples exploit and illustrate the broader advantages of the R computing environment, while allowing readers to explore alternative likelihood assumptions, regression structures, and assumptions on prior densities. Features:
- Provides a comprehensive and accessible overview of applied Bayesian hierarchical modelling
- Includes many real data examples to illustrate different modelling topics
- R code (based on rjags, jagsUI, R2OpenBUGS, and rstan) is integrated into the book, emphasizing implementation
- Software options and coding principles are introduced in a new chapter on computing
- Programs and data sets are available on the book's website

Linear Mixed Model Selection Via Minimum Approximated Information Criterion

Linear Mixed Model Selection Via Minimum Approximated Information Criterion PDF Author: Olivia Abena Atutey
Publisher:
ISBN:
Category : Linear models (Statistics)
Languages : en
Pages : 110

Book Description
Analyses of correlated, repeated-measures, or multilevel data with a Gaussian response are often based on linear mixed models (LMMs), which combine fixed effects and random effects. Two special cases of the LMM are considered here: the random intercepts (RI) model and the random intercepts and slopes (RIS) model. Our primary focus in this dissertation is an approach for simultaneous selection and estimation of the fixed effects only in LMMs. Inspired by recent research on model selection methods and criteria, this dissertation extends the minimum approximated information criterion (MIC) variable selection procedure of Su et al. (2018) to variable selection and sparse estimation in LMMs. We design a penalized log-likelihood procedure, the minimum approximated information criterion for LMMs (lmmMAIC), which finds a parsimonious model that better generalizes data with a group structure. The proposed lmmMAIC enforces variable selection and sparse estimation simultaneously by adding a penalty term to the negative log-likelihood of the linear mixed model; it differs from existing regularized methods mainly in its penalty parameter and its penalty function.

With regard to the penalty function, lmmMAIC mimics the traditional Bayesian information criterion (BIC)-based best subset selection (BSS) method, but replaces the discrete L0-norm penalty of BSS with a continuous, smooth approximation. Specifically, lmmMAIC performs sparse estimation by optimizing an approximated information criterion in which the L0-norm penalty is approximated by a continuous unit dent function: an even continuous function with range [0, 1], motivated by the bump functions known as mollifiers (Friedrichs, 1944). Among several unit dent functions, the hyperbolic tangent is most preferred: it has a simple form, its derivatives are easy to compute, and it turns the discrete L0-norm penalty into a smooth one, making our method less computationally expensive. This shrinkage-based method fits a linear mixed model containing all p predictors instead of comparing and selecting a correct sub-model across 2^p candidate models, so lmmMAIC remains feasible for high-dimensional data. The smooth replacement alone, however, does not enforce sparsity, because the hyperbolic tangent function is not singular at the origin; to handle this, a reparameterization trick on the regression coefficients is needed to achieve sparsity.

For a finite number of parameters, the numerical investigations of Shi and Tsai (2002) show that a traditional information criterion (IC)-based procedure such as BIC can consistently identify a model. Following these results on consistent variable selection and computational efficiency, we retain the fixed BIC penalty parameter. Our proposed procedure is therefore free of frequently applied practices such as generalized cross-validation (GCV) for selecting an optimal penalty parameter in a penalized likelihood framework, and lmmMAIC requires less computing time than other regularization methods.

We formulate lmmMAIC as a smooth optimization problem and solve for the fixed-effects parameters by minimizing the penalized log-likelihood function. The implementation begins with the simulated annealing algorithm to obtain initial estimates, which then serve as starting values for the modified Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, run until convergence. The estimates from the modified BFGS step are plugged into the reparameterized hyperbolic tangent function to obtain the fixed-effects estimates. Alternatively, the penalized log-likelihood can be optimized by generalized simulated annealing.

We explore the performance and asymptotic properties of lmmMAIC through extensive simulation studies under different model settings. The numerical results for our proposed variable selection and estimation method are compared with standard shrinkage-based methods for LMMs such as the lasso, ridge, and elastic net, and provide evidence that lmmMAIC is more consistent and efficient than the existing shrinkage-based methods under study. Furthermore, two applications with real-life examples illustrate the effectiveness of the lmmMAIC method.
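The tanh surrogate for the L0 penalty described in this abstract can be sketched in a few lines. The following is a minimal illustrative toy in Python, using a plain least-squares objective rather than the dissertation's mixed-model log-likelihood; the function names and the sharpness constant `a` are assumptions for illustration, not the lmmMAIC code:

```python
import numpy as np

def l0_approx(beta, a=100.0):
    """Smooth surrogate for the L0 norm: sum_j tanh(a * beta_j^2).
    As a grows, tanh(a * b^2) approaches the indicator 1{b != 0},
    which is the discrete penalty counted by best subset selection."""
    beta = np.asarray(beta, dtype=float)
    return float(np.sum(np.tanh(a * beta ** 2)))

def penalized_objective(beta, X, y, a=100.0):
    """BIC-style penalized least squares: the fixed penalty parameter
    log(n) multiplies the smooth L0 surrogate, mirroring the MIC idea
    of approximating BIC's discrete penalty with a continuous one."""
    n = len(y)
    resid = y - X @ beta
    return 0.5 * float(resid @ resid) + np.log(n) * l0_approx(beta, a)
```

Because the surrogate is smooth, this objective can be handed to a gradient-based optimizer (the dissertation reports simulated annealing followed by modified BFGS); exact zeros in the estimates additionally require the reparameterization trick mentioned above.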

Effective Statistical Learning Methods for Actuaries I

Effective Statistical Learning Methods for Actuaries I PDF Author: Michel Denuit
Publisher: Springer Nature
ISBN: 3030258203
Category : Business & Economics
Languages : en
Pages : 441

Book Description
This book summarizes the state of the art in generalized linear models (GLMs) and their various extensions: GAMs, mixed models and credibility, and some nonlinear variants (GNMs). In order to deal with tail events, analytical tools from Extreme Value Theory are presented. Going beyond mean modeling, it considers volatility modeling (double GLMs) and the general modeling of location, scale and shape parameters (GAMLSS). Actuaries need these advanced analytical tools to turn the massive data sets now at their disposal into opportunities. The exposition alternates between methodological aspects and case studies, providing numerical illustrations using the R statistical software. The technical prerequisites are kept at a reasonable level in order to reach a broad readership. This is the first of three volumes entitled Effective Statistical Learning Methods for Actuaries. Written by actuaries for actuaries, this series offers a comprehensive overview of insurance data analytics with applications to P&C, life and health insurance. Although closely related to the other two volumes, this volume can be read independently.

Mixed Models

Mixed Models PDF Author: Eugene Demidenko
Publisher: John Wiley & Sons
ISBN: 1118091574
Category : Mathematics
Languages : en
Pages : 768

Book Description
Praise for the First Edition: "This book will serve to greatly complement the growing number of texts dealing with mixed models, and I highly recommend including it in one's personal library." —Journal of the American Statistical Association. Mixed modeling is a crucial area of statistics, enabling the analysis of clustered and longitudinal data. Mixed Models: Theory and Applications with R, Second Edition fills a gap in existing literature between mathematical and applied statistical books by presenting a powerful examination of mixed model theory and application with special attention given to the implementation in R. The new edition provides in-depth mathematical coverage of mixed models' statistical properties and numerical algorithms, as well as nontraditional applications, such as regrowth curves, shapes, and images. The book features the latest topics in statistics including modeling of complex clustered or longitudinal data, modeling data with multiple sources of variation, modeling biological variety and heterogeneity, healthy Akaike information criterion (HAIC), parameter multidimensionality, and statistics of image processing. Mixed Models: Theory and Applications with R, Second Edition features unique applications of mixed model methodology, as well as:
- Comprehensive theoretical discussions illustrated by examples and figures
- Over 300 exercises, end-of-section problems, updated data sets, and R subroutines
- Problems and extended projects requiring simulations in R intended to reinforce material
- Summaries of major results and general points of discussion at the end of each chapter
- Open problems in mixed modeling methodology, which can be used as the basis for research or PhD dissertations
Ideal for graduate-level courses in mixed statistical modeling, the book is also an excellent reference for professionals in a range of fields, including cancer research, computer science, and engineering.

Statistical Learning with Sparsity

Statistical Learning with Sparsity PDF Author: Trevor Hastie
Publisher: CRC Press
ISBN: 1498712177
Category : Business & Economics
Languages : en
Pages : 354

Book Description
Discover new methods for dealing with high-dimensional data. A sparse statistical model has only a small number of nonzero parameters or weights and is therefore much easier to estimate and interpret than a dense model. Statistical Learning with Sparsity: The Lasso and Generalizations presents methods that exploit sparsity to help recover the underlying signal in a set of data.
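The sparsity this book studies comes from penalties that set coefficients exactly to zero; the core device behind the lasso is the soft-thresholding operator. A minimal sketch, illustrative rather than code from the book:

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding: shrink z toward zero by t, mapping any value
    with |z| <= t exactly to 0. This is the coordinate-wise update in
    lasso coordinate descent and the source of exact zeros (sparsity)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
```

For an orthonormal design matrix, the lasso solution is obtained by applying `soft_threshold` componentwise to the least-squares coefficients, which makes the shrink-and-select behavior easy to see.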

Regression

Regression PDF Author: Ludwig Fahrmeir
Publisher: Springer Nature
ISBN: 3662638827
Category : Mathematics
Languages : en
Pages : 759

Book Description
Now in its second edition, this textbook provides an applied and unified introduction to parametric, nonparametric and semiparametric regression that closes the gap between theory and application. The most important models and methods in regression are presented on a solid formal basis, and their appropriate application is shown through numerous examples and case studies. The most important definitions and statements are concisely summarized in boxes, and the underlying data sets and code are available online on the book’s dedicated website. Availability of (user-friendly) software has been a major criterion for the methods selected and presented. The chapters address the classical linear model and its extensions, generalized linear models, categorical regression models, mixed models, nonparametric regression, structured additive regression, quantile regression and distributional regression models. Two appendices describe the required matrix algebra, as well as elements of probability calculus and statistical inference. In this substantially revised and updated new edition the overview on regression models has been extended, and now includes the relation between regression models and machine learning, additional details on statistical inference in structured additive regression models have been added and a completely reworked chapter augments the presentation of quantile regression with a comprehensive introduction to distributional regression models. Regularization approaches are now more extensively discussed in most chapters of the book. The book primarily targets an audience that includes students, teachers and practitioners in social, economic, and life sciences, as well as students and teachers in statistics programs, and mathematicians and computer scientists with interests in statistical modeling and data analysis. It is written at an intermediate mathematical level and assumes only knowledge of basic probability, calculus, matrix algebra and statistics.

Shrinkage-Based Variable Selection Methods for Linear Regression and Mixed-Effects Models

Shrinkage-Based Variable Selection Methods for Linear Regression and Mixed-Effects Models PDF Author:
Publisher:
ISBN:
Category :
Languages : en
Pages :

Book Description
KRISHNA, ARUN. Shrinkage-Based Variable Selection Methods for Linear Regression and Mixed-Effects Models. (Under the direction of Professors H.D. Bondell and S.K. Ghosh.)

In this dissertation we propose two new shrinkage-based variable selection approaches. We first propose a Bayesian selection technique for linear regression models that allows highly correlated predictors to enter or exit the model simultaneously. The second variable selection method is for linear mixed-effects models, where we develop a new technique to jointly select the important fixed-effects and random-effects parameters. We briefly summarize each of these methods below.

The problem of selecting the correct subset of predictors within a linear model has received much attention in recent literature. Within the Bayesian framework, a popular choice of prior has been Zellner's g-prior, which is based on the inverse of the empirical covariance matrix of the predictors. We propose an extension of Zellner's g-prior that allows a power parameter on the empirical covariance of the predictors. The power parameter helps control the degree to which correlated predictors are smoothed towards or away from one another. In addition, the empirical covariance of the predictors is used to obtain suitable priors over model space; in this manner, the power parameter also helps determine whether models containing highly collinear predictors are preferred or avoided. The proposed power parameter can be chosen via an empirical Bayes method, which leads to a data-adaptive choice of prior. Simulation studies and a real data example show that the power parameter is well determined by the degree of cross-correlation among the predictors. The proposed modification compares favorably to the standard Zellner's g-prior and to an intrinsic prior in these examples.

We also propose a new method of simultaneously identifying the important predictors that correspond to both the fixed and random effects.
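The powered-covariance idea in the first method can be sketched as follows. This is only an illustrative guess at the structure: the function names, the use of X'X rather than a correlation matrix, and the scaling are assumptions, not the dissertation's exact prior.

```python
import numpy as np

def matrix_power_sym(A, p):
    """Real matrix power of a symmetric positive-definite matrix via
    its eigendecomposition: A^p = V diag(w^p) V'."""
    w, V = np.linalg.eigh(A)
    return (V * w ** p) @ V.T

def powered_prior_cov(X, g=10.0, power=-1.0):
    """Illustrative prior covariance g * (X'X)^power for the regression
    coefficients. power = -1 recovers the structure of the usual Zellner
    g-prior; other powers smooth correlated predictors toward or away
    from one another, the role the abstract assigns to the power
    parameter. (Assumed names and scaling, not the exact formulation.)"""
    return g * matrix_power_sym(X.T @ X, power)
```

Fractional or negative powers of the empirical covariance are well defined as long as X'X is positive definite, which is why the eigendecomposition route is the natural way to compute them.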