Missing and Modified Data in Nonparametric Estimation
Author: Sam Efromovich
Publisher: CRC Press
ISBN: 135167983X
Category: Mathematics
Languages: en
Pages : 867
Book Description
This book presents a systematic and unified approach to the modern nonparametric treatment of missing and modified data, via examples of density and hazard rate estimation, nonparametric regression, signal filtering, and time series analysis. All basic types of missingness at random and not at random, biasing, truncation, censoring, and measurement error are discussed, and their treatment is explained. The book's ten chapters cover basic cases of direct data, biased data, nondestructive and destructive missingness, survival data modified by truncation and censoring, missing survival data, stationary and nonstationary time series and processes, and ill-posed modifications. The coverage is suitable for self-study or a one-semester graduate course, with a standard course in introductory probability as the only prerequisite. Exercises of varying difficulty will be helpful for both instructors and self-study. The book focuses primarily on the practically important case of small samples. It explains when consistent estimation is possible, and why in some cases missing data can be ignored while in others it must be taken into account. If missingness or data modification makes consistent estimation impossible, the author explains what action is needed to restore the lost information. The book contains more than a hundred figures with simulated data that illustrate virtually every setting, claim, and development. The companion R software package allows the reader to verify, reproduce, and modify every simulation and every estimator used, making the material fully transparent and allowing one to study it interactively. Sam Efromovich is the Endowed Professor of Mathematical Sciences and the Head of the Actuarial Program at the University of Texas at Dallas. He is well known for his work on the theory and application of nonparametric curve estimation and is the author of Nonparametric Curve Estimation: Methods, Theory, and Applications.
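To illustrate the book's central point that data missing completely at random can safely be ignored, here is a minimal complete-case sketch (a plain Gaussian kernel density estimate, not the book's series estimators; the sample size, missingness rate, and bandwidth are all illustrative):

```python
import numpy as np

def gaussian_kde(sample, grid, bandwidth):
    """Plain Gaussian kernel density estimate (illustrative, not the book's E-estimator)."""
    u = (grid[:, None] - sample[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=5000)        # full (unobserved) sample from N(0, 1)
observed = x[rng.random(5000) > 0.3]       # MCAR: each value is missing with prob. 0.3
grid = np.linspace(-3, 3, 61)
fhat = gaussian_kde(observed, grid, bandwidth=0.3)
# the complete-case estimate remains a consistent estimate of the N(0, 1) density
```

Because the missingness mechanism is independent of the values, the observed subsample is still an i.i.d. draw from the same density, which is exactly why the complete-case analysis stays consistent here.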
Professor Sam Efromovich is a Fellow of the Institute of Mathematical Statistics and the American Statistical Association.
Combining, Modelling and Analyzing Imprecision, Randomness and Dependence
Author: Jonathan Ansari
Publisher: Springer Nature
ISBN: 3031659937
Category:
Languages: en
Pages : 579
Book Description
Nonparametric Models for Longitudinal Data
Author: Colin O. Wu
Publisher: CRC Press
ISBN: 0429939086
Category: Mathematics
Languages: en
Pages : 583
Book Description
Nonparametric Models for Longitudinal Data with Implementations in R presents a comprehensive summary of major advances in nonparametric models and smoothing methods for longitudinal data. It covers methods, theories, and applications that are particularly useful for biomedical studies in the era of big data and precision medicine. It also provides flexible tools to describe the temporal trends, covariate effects, and correlation structures of repeated measurements in longitudinal data. This book is intended for graduate students in statistics, data scientists, and statisticians in biomedical sciences and public health. As experts in this area, the authors present extensive material that balances theoretical and practical topics. The statistical applications in real-life examples lead to meaningful interpretations and inferences. Features:
• Provides an overview of parametric and semiparametric methods
• Shows smoothing methods for unstructured nonparametric models
• Covers structured nonparametric models with time-varying coefficients
• Discusses nonparametric shared-parameter and mixed-effects models
• Presents nonparametric models for conditional distributions and functionals
• Illustrates implementations using R software packages
• Includes datasets and code on the authors' website
• Contains asymptotic results and theoretical derivations
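The kind of smoothing the book develops can be sketched, under strong simplifications, with a local-constant (Nadaraya-Watson) estimate of the mean time trend from pooled repeated measurements. This toy version ignores within-subject correlation and uses Python rather than the book's R implementations; the trend, noise level, and bandwidth are invented for illustration:

```python
import numpy as np

def kernel_smooth(t, y, t_grid, h):
    """Local-constant (Nadaraya-Watson) estimate of the mean trend mu(t)."""
    w = np.exp(-0.5 * ((t_grid[:, None] - t[None, :]) / h) ** 2)  # Gaussian weights
    return (w * y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(1)
n_subj, n_obs = 50, 10
t = rng.uniform(0, 1, size=(n_subj, n_obs))   # observation times, 10 per subject
mu = np.sin(2 * np.pi * t)                    # true (illustrative) time trend
y = mu + rng.normal(0, 0.3, size=t.shape)     # noisy repeated measurements
t_grid = np.linspace(0.1, 0.9, 9)
mu_hat = kernel_smooth(t.ravel(), y.ravel(), t_grid, h=0.05)
```

Pooling all subject-time pairs and smoothing is the simplest "unstructured" estimator; the book's structured models (time-varying coefficients, mixed effects) build on this basic idea.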
Sequential Change Detection and Hypothesis Testing
Author: Alexander Tartakovsky
Publisher: CRC Press
ISBN: 1498757596
Category: Mathematics
Languages: en
Pages : 321
Book Description
Statistical methods for sequential hypothesis testing and changepoint detection have applications across many fields, including quality control, biomedical engineering, communication networks, econometrics, image processing, and security. This book presents an overview of methodology in these related areas, providing a synthesis of research from the last few decades. The methods are illustrated through real data examples, and software is referenced where possible. The emphasis is on providing all the theoretical details in a unified framework, with pointers to new research directions.
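A minimal example of the sequential ideas involved is Page's CUSUM rule for detecting a shift in mean. The sketch below is a generic textbook version in Python, not code from the book; the simulated change point, shift size, and threshold are illustrative:

```python
import numpy as np

def cusum(x, mu0, mu1, sigma, threshold):
    """Page's CUSUM for a Gaussian mean shift mu0 -> mu1; returns alarm index or -1."""
    llr = (mu1 - mu0) / sigma**2 * (np.asarray(x, dtype=float) - (mu0 + mu1) / 2)
    s = 0.0
    for i, z in enumerate(llr):
        s = max(0.0, s + z)      # statistic resets to 0 while the data look pre-change
        if s > threshold:
            return i             # alarm: first threshold crossing
    return -1                    # no alarm raised

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 100)])  # change at i = 200
alarm = cusum(x, mu0=0.0, mu1=1.0, sigma=1.0, threshold=8.0)
```

The threshold trades off false-alarm rate against detection delay, which is precisely the optimality question the book's theory addresses.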
The Statistical Analysis of Multivariate Failure Time Data
Author: Ross L. Prentice
Publisher: CRC Press
ISBN: 0429529708
Category: Mathematics
Languages: en
Pages : 110
Book Description
The Statistical Analysis of Multivariate Failure Time Data: A Marginal Modeling Approach provides an innovative look at methods for the analysis of correlated failure times. The focus is on the use of marginal single and marginal double failure hazard rate estimators for the extraction of regression information. For example, in the context of randomized trials or cohort studies, the results go beyond those obtained by analyzing each failure time outcome in a univariate fashion. The book is addressed to researchers, practitioners, and graduate students, and can be used as a reference or as a graduate course text. Much of the literature on the analysis of censored correlated failure time data uses frailty or copula models to allow for residual dependencies among failure times, given covariates. In contrast, this book provides a detailed account of recently developed methods for the simultaneous estimation of marginal single and dual outcome hazard rate regression parameters, with emphasis on multiplicative (Cox) models. The utility of these methods is illustrated using Women's Health Initiative randomized controlled trial data on menopausal hormones and on a low-fat dietary pattern intervention. As byproducts, these methods provide flexible semiparametric estimators of pairwise bivariate survivor functions at specified covariate histories, as well as semiparametric estimators of cross ratio and concordance functions given covariates. The presentation also describes how these innovative methods may extend to handle issues of dependent censorship, missing and mismeasured covariates, and joint modeling of failure times and covariates, setting the stage for additional theoretical and applied developments. This book extends and continues the style of the classic Statistical Analysis of Failure Time Data by Kalbfleisch and Prentice. Ross L. Prentice is Professor of Biostatistics at the Fred Hutchinson Cancer Research Center and the University of Washington in Seattle, Washington. He is the recipient of the COPSS Presidents' and Fisher Awards and the AACR Epidemiology/Prevention and Team Science Awards, and is a member of the National Academy of Medicine. Shanshan Zhao is a Principal Investigator at the National Institute of Environmental Health Sciences in Research Triangle Park, North Carolina.
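As a point of reference for the univariate analyses the book goes beyond, here is a minimal Kaplan-Meier survivor estimate for right-censored data (a generic sketch in Python, not the book's marginal modeling methods; for simplicity it does not handle tied event times):

```python
import numpy as np

def kaplan_meier(time, event):
    """Univariate Kaplan-Meier survivor estimate; event=1 means failure, 0 censored.
    Assumes no tied times (illustrative sketch only)."""
    order = np.argsort(time)
    time, event = np.asarray(time, float)[order], np.asarray(event)[order]
    n = len(time)
    at_risk = n - np.arange(n)                              # risk set size at each time
    factors = np.where(event == 1, 1.0 - 1.0 / at_risk, 1.0)  # drop only at failures
    return time, np.cumprod(factors)

# failures at times 1, 2, 4; censoring at time 3
t, s = kaplan_meier([2.0, 3.0, 1.0, 4.0], [1, 0, 1, 1])
```

The book's marginal single and double failure hazard estimators generalize this building block to correlated failure time pairs with covariates.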
Multistate Models for the Analysis of Life History Data
Author: Richard J Cook
Publisher: CRC Press
ISBN: 1498715613
Category: Mathematics
Languages: en
Pages : 441
Book Description
Multistate Models for the Analysis of Life History Data provides the first comprehensive treatment of multistate modeling and analysis, including parametric, nonparametric, and semiparametric methods applicable to many types of life history data. Special models such as illness-death, competing risks, and progressive processes are considered, as well as more complex models. The book provides both theoretical development and illustrations of analysis based on data from randomized trials and observational cohort studies in health research. Features:
• Discusses a wide range of applications of multistate models
• Presents methods for both continuously and intermittently observed life history processes
• Gives a thorough discussion of conditionally independent censoring and observation processes
• Discusses models with random effects and joint models for two or more multistate processes
• Discusses and illustrates software for multistate analysis that is available in R
The target audience includes those engaged in research and applications involving multistate models.
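A toy illustration of the illness-death structure mentioned above: a discrete-time three-state Markov chain with an absorbing death state. The transition probabilities are invented for illustration, and the book treats far more general continuous-time processes:

```python
import numpy as np

# One-step transition matrix for a discrete-time illness-death model:
# states 0 = healthy, 1 = ill, 2 = dead (absorbing).
P = np.array([[0.90, 0.08, 0.02],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])

def occupancy(P, start, steps):
    """State-occupancy probabilities after `steps` transitions from state `start`."""
    p = np.zeros(P.shape[0])
    p[start] = 1.0
    for _ in range(steps):
        p = p @ P                  # one step of the Chapman-Kolmogorov recursion
    return p

p10 = occupancy(P, start=0, steps=10)   # distribution over states after 10 steps
```

State-occupancy and transition probabilities like these are the basic estimands of multistate analysis; the statistical problem is estimating them from censored or intermittently observed life histories.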
Dynamic Treatment Regimes
Author: Anastasios A. Tsiatis
Publisher: CRC Press
ISBN: 1498769780
Category: Mathematics
Languages: en
Pages : 619
Book Description
Dynamic Treatment Regimes: Statistical Methods for Precision Medicine provides a comprehensive introduction to statistical methodology for the evaluation and discovery of dynamic treatment regimes from data. Researchers and graduate students in statistics, data science, and related quantitative disciplines with a background in probability and statistical inference and popular statistical modeling techniques will be prepared for further study of this rapidly evolving field. A dynamic treatment regime is a set of sequential decision rules, each corresponding to a key decision point in a disease or disorder process, where each rule takes as input patient information and returns the treatment option he or she should receive. Thus, a treatment regime formalizes how a clinician synthesizes patient information and selects treatments in practice. Treatment regimes are of obvious relevance to precision medicine, which involves tailoring treatment selection to patient characteristics in an evidence-based way. Of critical importance to precision medicine is estimation of an optimal treatment regime, one that, if used to select treatments for the patient population, would lead to the most beneficial outcome on average. Key methods for estimation of an optimal treatment regime from data are motivated and described in detail. A dedicated companion website presents full accounts of application of the methods using a comprehensive R package developed by the authors. The authors’ website www.dtr-book.com includes updates, corrections, new papers, and links to useful websites.
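For a single decision point, regression-based estimation of an optimal regime (Q-learning) can be sketched in a few lines. The data-generating model below is invented for illustration and is not from the book; in it, treatment 1 helps exactly when the covariate is positive:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
x = rng.uniform(-1, 1, n)              # patient covariate
a = rng.integers(0, 2, n)              # randomized treatment, 0 or 1
# outcome: treatment 1 is beneficial iff x > 0 (true optimal rule: treat when x > 0)
y = 1.0 + 0.5 * x + a * x + rng.normal(0, 0.5, n)

# Q-learning with one decision: regress Y on (1, X, A, A*X) by least squares
X = np.column_stack([np.ones(n), x, a, a * x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def rule(xi):
    """Estimated regime: treat when the fitted treatment contrast is positive."""
    return int(beta[2] + beta[3] * xi > 0)
```

The fitted contrast beta[2] + beta[3]*x estimates the gain from treatment given x, so the estimated regime treats exactly those patients for whom that gain is positive, which is the core idea behind the regression-based methods the book motivates.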
Large Covariance and Autocovariance Matrices
Author: Arup Bose
Publisher: CRC Press
ISBN: 1351398164
Category: Mathematics
Languages: en
Pages : 297
Book Description
Topics covered include:
• Estimation of large dispersion and autocovariance matrices using banding and tapering
• Joint convergence of high-dimensional generalized dispersion matrices
• Limiting spectral distribution of symmetric polynomials in sample autocovariance matrices and normality of traces
• Application of free probability in high-dimensional time series
• Estimation of coefficient matrices in high-dimensional autoregressive processes
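Banding, the first technique listed, can be sketched in a few lines: zero out the entries of a sample covariance matrix far from the diagonal, which helps when the true covariances decay off the diagonal. The AR(1)-type truth and band width below are illustrative, not from the book:

```python
import numpy as np

def band(S, k):
    """Banding: zero out entries of S more than k diagonals from the main one."""
    p = S.shape[0]
    i, j = np.indices((p, p))
    return np.where(np.abs(i - j) <= k, S, 0.0)

rng = np.random.default_rng(4)
p, n = 30, 200
# AR(1)-type truth: Sigma[i, j] = 0.5**|i-j|, which decays quickly off the diagonal
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = np.cov(X, rowvar=False)      # sample covariance: noisy in all p*(p+1)/2 entries
S_band = band(S, k=5)            # keep only entries within 5 diagonals of the main
```

Zeroing the far-off-diagonal entries discards cells whose true values are near zero but whose sample estimates are pure noise, which is the variance-reduction idea behind banding (and, with smooth down-weighting instead of a hard cutoff, tapering).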
Sufficient Dimension Reduction
Author: Bing Li
Publisher: CRC Press
ISBN: 1351645730
Category: Mathematics
Languages: en
Pages : 362
Book Description
Sufficient dimension reduction is a rapidly developing research field with wide applications in regression diagnostics, data visualization, machine learning, genomics, image processing, pattern recognition, and medicine, because these fields produce large datasets with many variables. Sufficient Dimension Reduction: Methods and Applications with R introduces the basic theories and main methodologies, provides practical and easy-to-use algorithms and computer code to implement them, and surveys recent advances at the frontiers of the field. Features:
• Provides comprehensive coverage of this emerging research field
• Synthesizes a wide variety of dimension reduction methods under a few unifying principles, such as projection in Hilbert spaces, kernel mapping, and von Mises expansion
• Reflects the most recent advances, such as nonlinear sufficient dimension reduction, dimension folding for tensorial data, and sufficient dimension reduction for functional data
• Includes a set of computer codes written in R that are easily implemented by readers
• Uses real data sets available online to illustrate the usage and power of the described methods
Sufficient dimension reduction has undergone momentous development in recent years, partly due to increased demand for techniques to process high-dimensional data, a hallmark of our age of big data. This book will serve as a perfect entry into the field for beginning researchers and a handy reference for advanced ones. The author, Bing Li, obtained his Ph.D. from the University of Chicago. He is currently a Professor of Statistics at the Pennsylvania State University. His research interests cover sufficient dimension reduction, statistical graphical models, functional data analysis, machine learning, estimating equations and quasi-likelihood, and robust statistics. He is a Fellow of the Institute of Mathematical Statistics and the American Statistical Association, and an Associate Editor for The Annals of Statistics and the Journal of the American Statistical Association.
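One of the field's founding methods, sliced inverse regression (SIR), gives a flavour of sufficient dimension reduction. The sketch below is a generic textbook version in Python (the book's code is in R), applied to a simulated single-index model in which the response depends on the predictors only through one direction:

```python
import numpy as np

def sir_direction(X, y, n_slices=10):
    """Sliced inverse regression: leading direction of Cov(E[Z | y-slice])."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    L = np.linalg.cholesky(np.cov(X, rowvar=False))   # Sigma = L @ L.T
    Z = Xc @ np.linalg.inv(L.T)                       # whitened predictors
    M = np.zeros((p, p))
    for s in np.array_split(np.argsort(y), n_slices): # slice observations on y
        m = Z[s].mean(axis=0)                         # within-slice mean of Z
        M += len(s) / n * np.outer(m, m)
    eta = np.linalg.eigh(M)[1][:, -1]                 # top eigenvector in Z scale
    b = np.linalg.solve(L.T, eta)                     # back-transform to X scale
    return b / np.linalg.norm(b)

rng = np.random.default_rng(5)
n, p = 1000, 5
X = rng.normal(size=(n, p))
beta = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
y = (X @ beta) ** 3 + rng.normal(0, 0.1, n)   # y depends on X only through X @ beta
b_hat = sir_direction(X, y)                    # should align with beta (up to sign)
```

The within-slice means of the (whitened) predictors fall, under the linearity condition, in the central subspace, so the top eigenvectors of their covariance recover the reduction directions.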
Multivariate Kernel Smoothing and Its Applications
Author: José E. Chacón
Publisher: CRC Press
ISBN: 0429939132
Category: Mathematics
Languages: en
Pages : 255
Book Description
Kernel smoothing has greatly evolved since its inception to become an essential methodology in the data science toolkit for the 21st century. Its widespread adoption is due to its fundamental role in multivariate exploratory data analysis, as well as the crucial role it plays in composite solutions to complex data challenges. Multivariate Kernel Smoothing and Its Applications offers a comprehensive overview of both aspects. It begins with a thorough exposition of approaches for achieving the two basic goals of estimating probability density functions and their derivatives. The focus then turns to the application of these approaches to more complex data analysis goals, many with a geometric/topological flavour, such as level set estimation, clustering (unsupervised learning), principal curves, and feature significance. Other topics, while not direct applications of density (derivative) estimation but sharing many commonalities with the previous settings, include classification (supervised learning), nearest neighbour estimation, and deconvolution for data observed with error. For the data scientist, each chapter contains illustrative open-data examples analysed with the most appropriate kernel smoothing method. The emphasis is always placed on an intuitive understanding of the data provided by the accompanying statistical visualisations. For the reader wishing to investigate the details of the underlying statistical reasoning, a graduated exposition of a unified theoretical framework is provided. The algorithms for efficient software implementation are also discussed. José E. Chacón is an associate professor at the Department of Mathematics of the Universidad de Extremadura in Spain. Tarn Duong is a Senior Data Scientist for a start-up which provides short-distance carpooling services in France. Both authors have made important contributions to kernel smoothing research over the last couple of decades.
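The first of the two basic goals, multivariate density estimation, can be sketched with a bivariate Gaussian kernel estimator and a fixed bandwidth matrix H. This is a generic illustration in Python, not code from the book, and the diagonal bandwidth matrix is chosen by hand rather than by the data-driven selectors the book studies:

```python
import numpy as np

def kde2d(data, grid_pts, H):
    """Bivariate Gaussian KDE with bandwidth matrix H (data: n x 2, grid_pts: m x 2)."""
    Hinv = np.linalg.inv(H)
    det = np.linalg.det(H)
    diffs = grid_pts[:, None, :] - data[None, :, :]        # m x n x 2 differences
    q = np.einsum('mni,ij,mnj->mn', diffs, Hinv, diffs)    # Mahalanobis-type distances
    return np.exp(-0.5 * q).mean(axis=1) / (2 * np.pi * np.sqrt(det))

rng = np.random.default_rng(6)
data = rng.normal(size=(4000, 2))                 # standard bivariate normal sample
H = np.array([[0.09, 0.0],                        # hand-picked diagonal bandwidth
              [0.0, 0.09]])
pts = np.array([[0.0, 0.0], [3.0, 3.0]])
f = kde2d(data, pts, H)                           # density estimates at the two points
```

Replacing the scalar bandwidth of univariate smoothing with a full matrix H is what allows the estimator to adapt its orientation and scale per coordinate, and choosing H well is the central practical problem the book's methodology addresses.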