Minimum Divergence Methods in Statistical Machine Learning

Minimum Divergence Methods in Statistical Machine Learning PDF Author: Shinto Eguchi
Publisher: Springer Nature
ISBN: 4431569227
Category : Mathematics
Languages : en
Pages : 224

Get Book Here

Book Description
This book explores minimum divergence methods of statistical machine learning for estimation, regression, prediction, and so forth, engaging information geometry to elucidate the intrinsic properties of the corresponding loss functions, learning algorithms, and statistical models. One of the most elementary examples is Gauss's least squares estimator in a linear regression model, in which the estimator is given by minimizing the sum of squares between a response vector and a vector in the linear subspace spanned by the explanatory vectors. This is extended to Fisher's maximum likelihood estimator (MLE) for an exponential model, in which the estimator is obtained by minimizing an empirical analogue of the Kullback-Leibler (KL) divergence between a data distribution and a parametric distribution of the exponential model. Thus, we envisage a geometric interpretation of such minimization procedures, in which a right triangle satisfies a Pythagorean identity in the sense of the KL divergence. This understanding reveals a dualistic interplay between statistical estimation and modelling, which requires dual geodesic paths, called m-geodesic and e-geodesic paths, in the framework of information geometry. We extend this dualistic structure of the MLE and the exponential model to that of the minimum divergence estimator and the maximum entropy model, with applications to robust statistics, maximum entropy, density estimation, principal component analysis, independent component analysis, regression analysis, manifold learning, boosting algorithms, clustering, dynamic treatment regimes, and so forth. We consider a variety of information divergence measures, typically including the KL divergence, to express the departure of one probability distribution from another.
An information divergence decomposes into the cross-entropy and the (diagonal) entropy: the entropy is associated with a generative model as a family of maximum entropy distributions, while the cross-entropy is associated with a statistical estimation method via minimization of its empirical analogue based on the given data. Thus any statistical divergence embodies an intrinsic pairing of a generative model and an estimation method. Typically, the KL divergence leads to the exponential model and maximum likelihood estimation. It is shown that any information divergence induces a Riemannian metric and a pair of linear connections in the framework of information geometry. We focus on the class of information divergences generated by an increasing and convex function U, called U-divergences. Any generator function U yields a U-entropy and a U-divergence, between which there is a dualistic structure relating the minimum U-divergence method and the maximum U-entropy model. A specific choice of U leads to a robust statistical procedure via the minimum U-divergence method. If U is the exponential function, the corresponding U-entropy and U-divergence reduce to the Boltzmann-Shannon entropy and the KL divergence, and the minimum U-divergence estimator coincides with the MLE. For robust supervised learning to predict a class label, the U-boosting algorithm performs well under contamination by mislabeled examples when U is appropriately selected. We present such maximum U-entropy and minimum U-divergence methods, in particular selecting a power function as U, to provide flexible performance in statistical machine learning.
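The decomposition described above can be checked numerically. As a minimal sketch (our own illustration, not code from the book), for discrete distributions p and q the KL divergence equals the cross-entropy H(p, q) minus the (diagonal) entropy H(p), and it vanishes exactly when q matches p:

```python
import numpy as np

def entropy(p):
    # (diagonal) Shannon entropy H(p) = -sum p log p
    return -np.sum(p * np.log(p))

def cross_entropy(p, q):
    # cross-entropy H(p, q) = -sum p log q
    return -np.sum(p * np.log(q))

def kl(p, q):
    # KL divergence D(p, q) = sum p log(p / q)
    return np.sum(p * np.log(p / q))

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.3, 0.4, 0.3])

# D(p, q) = H(p, q) - H(p): cross-entropy minus entropy
assert np.isclose(kl(p, q), cross_entropy(p, q) - entropy(p))
# D(p, p) = 0: minimizing cross-entropy in q over the full simplex recovers p
assert np.isclose(kl(p, p), 0.0)
```

Minimizing the cross-entropy term over a parametric family, with p replaced by the empirical distribution, is exactly the empirical-analogue estimation the text describes; for the exponential family this is the MLE.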

Information Theory and Statistical Learning

Information Theory and Statistical Learning PDF Author: Frank Emmert-Streib
Publisher: Springer Science & Business Media
ISBN: 0387848150
Category : Computers
Languages : en
Pages : 443

Get Book Here

Book Description
This interdisciplinary text offers theoretical and practical results of information theoretic methods used in statistical learning. It presents a comprehensive overview of the many different methods that have been developed in numerous contexts.

Introduction to Statistical Machine Learning

Introduction to Statistical Machine Learning PDF Author: Masashi Sugiyama
Publisher: Morgan Kaufmann
ISBN: 0128023503
Category : Mathematics
Languages : en
Pages : 535

Get Book Here

Book Description
Machine learning allows computers to learn and discern patterns without being explicitly programmed. When statistical techniques and machine learning are combined, they form a powerful tool for analysing many kinds of data in computer science and engineering areas including image processing, speech processing, natural language processing, and robot control, as well as in fundamental sciences such as biology, medicine, astronomy, physics, and materials science. Introduction to Statistical Machine Learning provides a general introduction to machine learning that covers a wide range of topics concisely and will help you bridge the gap between theory and practice. Part I discusses the fundamental concepts of statistics and probability used in describing machine learning algorithms. Part II and Part III explain the two major approaches of machine learning: generative methods and discriminative methods. Part IV provides an in-depth look at advanced topics that play essential roles in making machine learning algorithms more useful in practice. The accompanying MATLAB/Octave programs provide you with the practical skills needed to accomplish a wide range of data analysis tasks.
- Provides the background material needed to understand machine learning, such as statistics, probability, linear algebra, and calculus
- Complete coverage of the generative approach to statistical pattern recognition and the discriminative approach to statistical machine learning
- Includes MATLAB/Octave programs so that readers can test the algorithms numerically and acquire both mathematical and practical skills in a wide range of data analysis tasks
- Discusses a wide range of applications in machine learning and statistics, with examples drawn from image processing, speech processing, natural language processing, robot control, as well as biology, medicine, astronomy, physics, and materials science

Geometric Science of Information

Geometric Science of Information PDF Author: Frank Nielsen
Publisher: Springer Nature
ISBN: 3031382714
Category : Computers
Languages : en
Pages : 641

Get Book Here

Book Description
This book constitutes the proceedings of the 6th International Conference on Geometric Science of Information, GSI 2023, held in St. Malo, France, during August 30-September 1, 2023. The 125 full papers presented in this volume were carefully reviewed and selected from 161 submissions. They cover all the main topics and highlights in the domain of geometric science of information, including information geometry manifolds of structured data/information and their advanced applications. The papers are organized in the following topics: geometry and machine learning; divergences and computational information geometry; statistics, topology and shape spaces; geometry and mechanics; geometry, learning dynamics and thermodynamics; quantum information geometry; geometry and biological structures; geometry and applications.

Statistical Inference

Statistical Inference PDF Author: Ayanendranath Basu
Publisher: CRC Press
ISBN: 1420099663
Category : Computers
Languages : en
Pages : 424

Get Book Here

Book Description
In many ways, estimation by an appropriate minimum distance method is one of the most natural ideas in statistics. However, there are many different ways of constructing an appropriate distance between the data and the model: the scope of the study referred to as "minimum distance estimation" is vast. Filling a statistical resource gap, Stati
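To make the idea concrete, here is a small sketch under our own assumptions (an illustration, not code from the book): a Poisson mean is estimated by minimizing the squared Hellinger distance between the empirical pmf of a sample and the model pmf, over a coarse parameter grid.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
data = rng.poisson(3.0, size=500)  # sample from Poisson(3)

# empirical pmf of the sample on its observed support 0..max
support = np.arange(data.max() + 1)
emp = np.bincount(data, minlength=support.size) / data.size

def poisson_pmf(lam):
    # model pmf restricted to the observed support
    return np.array([math.exp(-lam) * lam ** int(k) / math.factorial(int(k))
                     for k in support])

def hellinger_sq(p, q):
    # squared Hellinger distance: 0.5 * sum (sqrt p - sqrt q)^2
    return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

# minimum distance estimate: the grid point whose model pmf is
# closest to the empirical pmf in Hellinger distance
grid = np.linspace(1.0, 6.0, 501)
lam_hat = grid[np.argmin([hellinger_sq(emp, poisson_pmf(l)) for l in grid])]
```

Swapping `hellinger_sq` for another distance (KL, chi-squared, total variation) changes the estimator's robustness and efficiency trade-off, which is precisely the design space the book studies.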

Rank-Based Methods for Shrinkage and Selection

Rank-Based Methods for Shrinkage and Selection PDF Author: A. K. Ehsanes Saleh
Publisher: John Wiley & Sons Incorporated
ISBN: 9781119625438
Category : Mathematics
Languages : en
Pages : 0

Get Book Here

Book Description
"The purpose of this book is to lay the groundwork for robust data science using rank-based methods. The field of machine learning has not yet fully embraced a class of robust estimators that would address issues that limit the value of least-squares estimation. For example, outliers in data sets may produce misleading results that are not suitable for inference. They can also affect results obtained from penalty estimators. We believe that robust estimators for regression problems are well-suited to data science. This book is intended to provide both practical and mathematical foundations in the study of rank-based methods. It will introduce a number of new ideas and approaches to the practice and theory of robust estimation and encourage readers to pursue further investigation in this field. While the main goal of this book is to provide a rigorous treatment of the subject matter, we begin with some introductory material to build insight and intuition about rank-based regression and penalty estimators, especially for those who are new to the topic and those looking to understand key concepts. To motivate the need for such methods, we will start with a discussion of the median as it is the key to rank-based methods and then build on that concept towards the notion of robust data science"--

Algorithmic Learning Theory

Algorithmic Learning Theory PDF Author: Ricard Gavalda
Publisher: Springer Science & Business Media
ISBN: 3540202919
Category : Computers
Languages : en
Pages : 325

Get Book Here

Book Description
This book constitutes the refereed proceedings of the 14th International Conference on Algorithmic Learning Theory, ALT 2003, held in Sapporo, Japan in October 2003. The 19 revised full papers presented together with 2 invited papers and abstracts of 3 invited talks were carefully reviewed and selected from 37 submissions. The papers are organized in topical sections on inductive inference, learning and information extraction, learning with queries, learning with non-linear optimization, learning from random examples, and online prediction.

Data Analysis and Related Applications 4

Data Analysis and Related Applications 4 PDF Author: Yiannis Dimotikalis
Publisher: John Wiley & Sons
ISBN: 1786309920
Category : Computers
Languages : en
Pages : 420

Get Book Here

Book Description


Distributed Optimization and Statistical Learning Via the Alternating Direction Method of Multipliers

Distributed Optimization and Statistical Learning Via the Alternating Direction Method of Multipliers PDF Author: Stephen Boyd
Publisher: Now Publishers Inc
ISBN: 160198460X
Category : Computers
Languages : en
Pages : 138

Get Book Here

Book Description
Surveys the theory and history of the alternating direction method of multipliers, and discusses its applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others.
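For the lasso in particular, the standard ADMM splitting alternates a ridge-like least-squares step, a soft-thresholding step, and a dual update. The sketch below is our own minimal illustration of that splitting (helper names like `soft_threshold` are ours, not from the monograph):

```python
import numpy as np

def soft_threshold(v, k):
    # elementwise shrinkage: the proximal operator of k * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    # solves min_x 0.5 * ||A x - b||^2 + lam * ||x||_1
    # via the splitting x = z with scaled dual variable u
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA_rho = A.T @ A + rho * np.eye(n)  # fixed system for the x-update
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA_rho, Atb + rho * (z - u))  # ridge-like x-update
        z = soft_threshold(x + u, lam / rho)               # shrinkage z-update
        u = u + x - z                                      # dual (u) update
    return z  # z is the sparse iterate

# synthetic sparse regression problem
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -3.0, 1.5]
b = A @ x_true + 0.01 * rng.standard_normal(100)
x_hat = lasso_admm(A, b, lam=0.1)
```

Because the x-update solves the same linear system every iteration, a factorization of `AtA_rho` can be cached, which is one reason ADMM scales well in the distributed settings the monograph surveys.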

Machine Learning for Signal Processing

Machine Learning for Signal Processing PDF Author: Max A. Little
Publisher: Oxford University Press
ISBN: 0191024317
Category : Computers
Languages : en
Pages : 378

Get Book Here

Book Description
This book describes in detail the fundamental mathematics and algorithms of machine learning (an example of artificial intelligence) and signal processing, two of the most important and exciting technologies in the modern information economy. Taking a gradual approach, it builds up concepts in a solid, step-by-step fashion so that the ideas and algorithms can be implemented in practical software applications. Digital signal processing (DSP) is one of the 'foundational' engineering topics of the modern world, without which technologies such as the mobile phone, television, CD and MP3 players, WiFi, and radar would not be possible. A relative newcomer by comparison, statistical machine learning is the theoretical backbone of exciting technologies such as automatic car registration plate recognition, speech recognition, stock market prediction, defect detection on assembly lines, robot guidance, and autonomous car navigation. Statistical machine learning exploits the analogy between intelligent information processing in biological brains and sophisticated statistical modelling and inference. DSP and statistical machine learning are of such wide importance to the knowledge economy that both have undergone rapid changes and seen radical improvements in scope and applicability. Both make use of key topics in applied mathematics such as probability and statistics, algebra, calculus, graphs, and networks. Intimate formal links exist between the two subjects, and the resulting overlaps can be exploited to produce new DSP tools of surprising utility, highly suited to the contemporary world of pervasive digital sensors and high-powered yet cheap computing hardware. This book gives a solid mathematical foundation to, and details the key concepts and algorithms in, this important topic.