The Science of High Performance Algorithms for Hierarchical Matrices

Author: Chen-Han Yu (Ph.D.)
Publisher:
ISBN:
Category:
Languages: en
Pages: 230

Book Description
Many matrices in scientific computing, statistical inference, and machine learning exhibit sparse and low-rank structure. Typically, this structure is exposed by an appropriate permutation of the rows and columns and exploited by constructing a hierarchical approximation: the matrix is written as a sum of sparse and low-rank matrices, and this structure repeats recursively. Matrices that admit such an approximation are known as hierarchical matrices (H-matrices for short). H-matrix approximation methods are more general and scalable than using a sparse or low-rank approximation alone. Classical numerical linear algebra operations on H-matrices (multiplication, factorization, and eigenvalue decomposition) can be accelerated by many orders of magnitude. Although the literature on H-matrices for problems in computational physics (low dimensions) is vast, there is less work on generalizations and on problems arising in machine learning, and there is limited work on high-performance computing algorithms for purely algebraic H-matrix methods. This dissertation addresses these open problems in building hierarchical approximations of kernel matrices and of generic symmetric positive definite (SPD) matrices. We propose a general tree-based framework (GOFMM) for appropriately permuting a matrix to expose its hierarchical structure. GOFMM supports both static and dynamic scheduling, shared-memory and distributed-memory architectures, and hardware accelerators. The supported algorithms include kernel methods and approximate matrix multiplication and factorization for large sparse and dense matrices.
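
The recursive "sparse plus low-rank" structure described above can be illustrated in a few lines. The sketch below is a minimal, single-level toy example and not the GOFMM implementation: it builds a Gaussian kernel matrix on sorted 1-D points (the sorting stands in for the permutation step) and replaces the two off-diagonal blocks with rank-k truncated SVD factors; the kernel bandwidth, point count, and rank are illustrative choices.

```python
import numpy as np

def gaussian_kernel(X, Y, h=0.5):
    """Dense Gaussian kernel block K[i, j] = exp(-||x_i - y_j||^2 / (2 h^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * h * h))

def low_rank(A, k):
    """Rank-k truncated SVD: returns U, V with A ~= U @ V.T."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k].T

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(size=(256, 1)), axis=0)  # sorted 1-D points stand in for a good permutation
mid = X.shape[0] // 2
K = gaussian_kernel(X, X)

# One level of the hierarchy: keep the diagonal blocks dense ("sparse" part),
# compress the off-diagonal blocks to rank k ("low-rank" part).
k = 8
U12, V12 = low_rank(K[:mid, mid:], k)
U21, V21 = low_rank(K[mid:, :mid], k)

K_h = K.copy()
K_h[:mid, mid:] = U12 @ V12.T
K_h[mid:, :mid] = U21 @ V21.T
print("relative error:", np.linalg.norm(K - K_h) / np.linalg.norm(K))
```

In a full H-matrix code this split is applied recursively to the diagonal blocks, guided by a spatial or algebraic tree.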

Hierarchical Matrices: Algorithms and Analysis

Author: Wolfgang Hackbusch
Publisher: Springer
ISBN: 3662473240
Category: Mathematics
Languages: en
Pages: 532

Book Description
This self-contained monograph presents hierarchical matrix algorithms and their analysis. The numerical treatment of fully populated large-scale matrices is usually rather costly; the technique of hierarchical matrices, however, makes it possible to store such matrices and to perform matrix operations on them approximately, with almost linear cost and a controllable approximation error. For important classes of matrices, the computational cost grows only logarithmically as the approximation error is decreased. The technique enables not only the solution of linear systems but also the approximation of matrix functions (e.g., the matrix exponential) and the solution of matrix equations (e.g., the Lyapunov or Riccati equation); the operations provided include matrix inversion and LU decomposition. The required mathematical background can be found in the appendix. Since large-scale linear algebra problems are standard in scientific computing, the subject of hierarchical matrices is of interest to scientists in computational mathematics, physics, chemistry, and engineering.
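
The "almost linear cost" claim rests on a simple counting argument: a rank-k block of size m by m is stored as two m-by-k factors (2mk numbers instead of m^2), and multiplying it by a vector costs O(mk) instead of O(m^2). A minimal sketch of this trade-off, with illustrative sizes and ranks not taken from the book:

```python
import numpy as np

m, k = 2000, 10
rng = np.random.default_rng(1)

# A rank-k block stored in factored form B ~= U @ V.T.
U = rng.standard_normal((m, k))
V = rng.standard_normal((m, k))
x = rng.standard_normal(m)

dense_storage = m * m            # entries needed for the explicit block
factored_storage = 2 * m * k     # entries needed for the two factors
print(dense_storage, "vs", factored_storage)

# Matrix-vector product without ever forming the dense block: O(mk) work.
y = U @ (V.T @ x)

# Same result as the dense product, up to rounding error.
print(np.linalg.norm(y - (U @ V.T) @ x))
```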

Computational Science — ICCS 2001

Author: Vassil N. Alexandrov
Publisher: Springer
ISBN: 3540455450
Category: Computers
Languages: en
Pages: 1294

Book Description
LNCS volumes 2073 and 2074 contain the proceedings of the International Conference on Computational Science, ICCS 2001, held in San Francisco, California, May 27-31, 2001. The two volumes consist of more than 230 contributed and invited papers that reflect the aims of the conference: to bring together researchers and scientists from mathematics and computer science as the basic computing disciplines, researchers from application areas who are pioneering the advanced application of computational methods in physics, chemistry, the life sciences, engineering, the arts, and the humanities, and software developers and vendors, in order to discuss problems and solutions, identify new issues, shape future directions for research, and help industrial users apply advanced computational techniques.

High Performance Algorithms for Structured Matrix Problems

Author: Peter Arbenz
Publisher: Nova Publishers
ISBN: 9781560725947
Category: Business & Economics
Languages: en
Pages: 228

Book Description
Comprises 10 contributions that summarize the state of the art in the areas of high performance solutions of structured linear systems and structured eigenvalue and singular-value problems. Topics covered range from parallel solvers for sparse or banded linear systems to parallel computation of eigenvalues and singular values of tridiagonal and bidiagonal matrices. Specific paper topics include: the stable parallel solution of general narrow banded linear systems; efficient algorithms for reducing banded matrices to bidiagonal and tridiagonal form; a numerical comparison of look-ahead Levinson and Schur algorithms for non-Hermitian Toeplitz systems; and parallel CG-methods automatically optimized for PC and workstation clusters.
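
As a concrete illustration of the Levinson-type solvers mentioned in the description, SciPy's solve_toeplitz applies Levinson recursion to a Toeplitz system in O(n^2) work rather than the O(n^3) of a dense factorization; the small symmetric positive definite matrix below is purely illustrative, and the result is checked against a dense solve.

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# First column of a symmetric positive definite Toeplitz matrix (illustrative values).
c = np.array([4.0, 1.0, 0.5, 0.25, 0.1])
b = np.ones(5)

# Levinson-recursion solve, using only the first column of the matrix.
x_fast = solve_toeplitz(c, b)

# Reference: form the dense Toeplitz matrix and solve it directly.
x_dense = np.linalg.solve(toeplitz(c), b)
print(np.allclose(x_fast, x_dense))
```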

Matrix Computations

Author: Gene H. Golub
Publisher: JHU Press
ISBN: 1421408597
Category: Mathematics
Languages: en
Pages: 781

Book Description
A comprehensive treatment of numerical linear algebra from the standpoint of both theory and practice. The fourth edition of Gene H. Golub and Charles F. Van Loan's classic is an essential reference for computational scientists and engineers in addition to researchers in the numerical linear algebra community. Anyone whose work requires the solution to a matrix problem and an appreciation of its mathematical properties will find this book to be an indispensable tool. This revision is a cover-to-cover expansion and renovation of the third edition. It now includes an introduction to tensor computations and brand-new sections on fast transforms, parallel LU, discrete Poisson solvers, pseudospectra, structured linear equation problems, structured eigenvalue problems, large-scale SVD methods, and polynomial eigenvalue problems. Matrix Computations is packed with challenging problems, insightful derivations, and pointers to the literature: everything needed to become a matrix-savvy developer of numerical methods and software. The second most cited math book of 2012 according to MathSciNet, it has placed in the top 10 since 2005.

Computational Science - ICCS 2004

Author: Marian Bubak
Publisher: Springer Science & Business Media
ISBN: 3540221158
Category: Computers
Languages: en
Pages: 810

Book Description
The International Conference on Computational Science (ICCS 2004), held in Kraków, Poland, June 6-9, 2004, was a follow-up to the highly successful ICCS 2003, held at two locations, in Melbourne, Australia and St. Petersburg, Russia; ICCS 2002 in Amsterdam, The Netherlands; and ICCS 2001 in San Francisco, USA. As computational science is still evolving in its quest for subjects of investigation and efficient methods, ICCS 2004 was devised as a forum for scientists from mathematics and computer science, as the basic computing disciplines and application areas, interested in advanced computational methods for physics, chemistry, the life sciences, engineering, the arts and humanities, as well as computer system vendors and software developers. The main objective of this conference was to discuss problems and solutions in all areas, to identify new issues, to shape future directions of research, and to help users apply various advanced computational techniques. The event harvested recent developments in computational grids and next-generation computing systems, tools, advanced numerical methods, data-driven systems, and novel application fields such as complex systems, finance, econo-physics, and population evolution.

A Parallel Algorithm Synthesis Procedure for High-Performance Computer Architectures

Author: Ian N. Dunn
Publisher: Springer Science & Business Media
ISBN: 1441986502
Category: Computers
Languages: en
Pages: 114

Book Description
Despite five decades of research, parallel computing remains an exotic, frontier technology on the fringes of mainstream computing. Its much-heralded triumph over sequential computing has yet to materialize. This is in spite of the fact that the processing needs of many signal processing applications continue to eclipse the capabilities of sequential computing. The culprit is largely the software development environment. Fundamental shortcomings in the development environment of many parallel computer architectures thwart the adoption of parallel computing. Foremost, parallel computing has no unifying model to accurately predict the execution time of algorithms on parallel architectures. Cost and scarce programming resources prohibit deploying multiple algorithms and partitioning strategies in an attempt to find the fastest solution. As a consequence, algorithm design is largely an intuitive art form dominated by practitioners who specialize in a particular computer architecture. This, coupled with the fact that parallel computer architectures rarely last more than a couple of years, makes for a complex and challenging design environment. To navigate this environment, algorithm designers need a road map, a detailed procedure they can use to efficiently develop high performance, portable parallel algorithms. The focus of this book is to draw such a road map. The Parallel Algorithm Synthesis Procedure can be used to design reusable building blocks of adaptable, scalable software modules from which high performance signal processing applications can be constructed. The hallmark of the procedure is a semi-systematic process for introducing parameters to control the partitioning and scheduling of computation and communication. This facilitates the tailoring of software modules to exploit different configurations of multiple processors, multiple floating-point units, and hierarchical memories. To showcase the efficacy of this procedure, the book presents three case studies requiring various degrees of optimization for parallel execution.
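
The idea of exposing partitioning and scheduling as tunable parameters can be illustrated, in a much simplified form, with the chunk size of a process pool: the same map is computed either as many small work units or a few large ones, and the best choice depends on the per-item cost and the number of workers. The function and sizes below are hypothetical stand-ins, not material from the book.

```python
from concurrent.futures import ProcessPoolExecutor
import math

def work(x):
    """A stand-in for one unit of signal-processing work (hypothetical)."""
    return sum(math.sin(x * i) for i in range(1000))

def parallel_map(data, workers, chunksize):
    # chunksize is the partitioning parameter: how many items each scheduled
    # task receives. Small chunks balance load; large chunks reduce
    # scheduling and communication overhead.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(work, data, chunksize=chunksize))

if __name__ == "__main__":
    data = list(range(10_000))
    fine = parallel_map(data, workers=4, chunksize=1)
    coarse = parallel_map(data, workers=4, chunksize=500)
    print(fine == coarse)   # same result, different partitioning
```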

Algorithms for Memory Hierarchies

Author: Ulrich Meyer
Publisher: Springer Science & Business Media
ISBN: 3540008837
Category: Computers
Languages: en
Pages: 443

Book Description
Algorithms that process large data sets must take into account that the cost of a memory access depends on where the data is stored. Traditional algorithm design is based on the von Neumann model, in which all memory accesses have uniform cost. Actual machines increasingly deviate from this model: while waiting for a memory access, modern microprocessors can in principle execute 1000 register additions, and for hard-disk access this factor can reach six orders of magnitude. The 16 coherent chapters in this monograph-like tutorial book introduce and survey algorithmic techniques used to achieve high performance on memory hierarchies; the emphasis is on methods that are interesting from a theoretical point of view and important from a practical one.
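
The non-uniform cost of memory access described above is easy to observe even from Python: on a row-major array, traversing rows streams through memory, while traversing columns jumps by a full row at every step. The sketch below times both; the array size is an arbitrary illustrative choice and the absolute numbers depend on the machine.

```python
import time
import numpy as np

# 4096 x 4096 doubles (about 128 MB), stored in row-major (C) order.
A = np.random.default_rng(0).standard_normal((4096, 4096))

def sum_by_rows(M):
    total = 0.0
    for i in range(M.shape[0]):
        total += M[i, :].sum()   # contiguous slice: streams through the cache
    return total

def sum_by_cols(M):
    total = 0.0
    for j in range(M.shape[1]):
        total += M[:, j].sum()   # strided slice: consecutive elements one row (32 KiB) apart
    return total

for name, f in [("row-wise", sum_by_rows), ("column-wise", sum_by_cols)]:
    t0 = time.perf_counter()
    f(A)
    print(f"{name} traversal: {time.perf_counter() - t0:.2f} s")
```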

High-Performance Scientific Computing

Author: Michael W. Berry
Publisher: Springer Science & Business Media
ISBN: 1447124367
Category: Computers
Languages: en
Pages: 351

Book Description
This book presents the state of the art in parallel numerical algorithms, applications, architectures, and system software. The book examines various solutions for issues of concurrency, scale, energy efficiency, and programmability, which are discussed in the context of a diverse range of applications. Features: includes contributions from an international selection of world-class authorities; examines parallel algorithm-architecture interaction through issues of computational capacity-based codesign and automatic restructuring of programs using compilation techniques; reviews emerging applications of numerical methods in information retrieval and data mining; discusses the latest issues in dense and sparse matrix computations for modern high-performance systems, multicores, manycores and GPUs, and several perspectives on the Spike family of algorithms for solving linear systems; presents outstanding challenges and developing technologies, and puts these in their historical context.