A Perspective on Single-Channel Frequency-Domain Speech Enhancement

Author: Jacob Benesty
Publisher: Springer Nature
ISBN: 303102561X
Category: Technology & Engineering
Languages: en
Pages: 101

Book Description
This book focuses on a class of single-channel noise reduction methods that are performed in the frequency domain via the short-time Fourier transform (STFT). The simplicity and relative effectiveness of this class of approaches make them the dominant choice in practical systems. Even though many popular algorithms have been proposed through more than four decades of continuous research, there are a number of critical areas where our understanding and capabilities still remain quite rudimentary, especially with respect to the relationship between noise reduction and speech distortion. All existing frequency-domain algorithms, no matter how they are developed, have one feature in common: the solution is eventually expressed as a gain function applied to the STFT of the noisy signal in the current frame only. As a result, the narrowband signal-to-noise ratio (SNR) cannot be improved, and any gains in noise reduction on the fullband basis come at a price, namely speech distortion. In this book, we present a new perspective on the problem by exploiting the differences between speech and typical noise in circularity and interframe self-correlation, which were ignored in the past. By gathering the STFT of the microphone signal in the current frame, its complex conjugate, and the STFTs of the previous frames, we construct several new, multiple-observation signal models analogous to a microphone array system: there are multiple noisy speech observations whose speech components are correlated but not completely coherent, while their noise components are presumably uncorrelated. Therefore, the multichannel Wiener filter and the minimum variance distortionless response (MVDR) filter, usually associated with microphone arrays, are developed here for single-channel noise reduction. This may instigate a paradigm shift toward noise reduction techniques that are free of speech distortion.
Table of Contents: Introduction / Problem Formulation / Performance Measures / Linear and Widely Linear Models / Optimal Filters with Model 1 / Optimal Filters with Model 2 / Optimal Filters with Model 3 / Optimal Filters with Model 4 / Experimental Study
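
To make the multiple-observation idea more concrete, the sketch below stacks the current STFT coefficient, its complex conjugate, and a few previous frames of the same frequency bin into one observation vector and applies an MVDR-style filter to it. This is only a rough illustration under simplifying assumptions: the function name, the stack length L, the crude running-average covariance estimate, the user-supplied noise covariance, and the regularization term are all choices made for this sketch, not the book's actual derivation.

```python
import numpy as np

def interframe_mvdr_estimate(Y, Phi_v, k, n, L=3):
    """Illustrative MVDR-style estimate of the clean STFT coefficient X(k, n)
    from a stacked (widely linear, interframe) observation vector.

    Y     : STFT of the noisy signal, shape (num_bins, num_frames), complex
    Phi_v : assumed covariance of the stacked noise vector, shape (L + 1, L + 1)
    Assumes n is large enough that all previous frames used below exist.
    """
    def stack(m):
        # current coefficient, its complex conjugate, and L - 1 previous frames
        return np.array([Y[k, m], np.conj(Y[k, m])] +
                        [Y[k, m - i] for i in range(1, L)])

    y = stack(n)
    # crude covariance estimate of the noisy stacked vector from recent frames
    frames = [stack(m) for m in range(n - 10, n + 1)]
    Phi_y = np.mean([np.outer(f, f.conj()) for f in frames], axis=0)
    Phi_x = Phi_y - Phi_v                    # speech covariance (model assumption)
    # normalized interframe speech correlation vector ("steering" vector)
    d = Phi_x[:, 0] / np.real(Phi_x[0, 0])
    # MVDR: minimize residual noise power subject to the constraint h^H d = 1
    Phi_v_inv = np.linalg.inv(Phi_v + 1e-8 * np.eye(L + 1))
    h = Phi_v_inv @ d / np.real(np.conj(d) @ Phi_v_inv @ d)
    return np.conj(h) @ y

# toy usage with synthetic data and a weak white-noise assumption
rng = np.random.default_rng(0)
Y = rng.standard_normal((257, 100)) + 1j * rng.standard_normal((257, 100))
print(interframe_mvdr_estimate(Y, Phi_v=0.5 * np.eye(4), k=50, n=60))
```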

Speech Enhancement in the STFT Domain

Author: Jacob Benesty
Publisher: Springer Science & Business Media
ISBN: 3642232507
Category: Technology & Engineering
Languages: en
Pages: 112

Book Description
This work addresses the speech enhancement problem in the short-time Fourier transform (STFT) domain. We divide the general problem into five basic categories depending on the number of microphones being used and whether the interframe or interband correlation is considered. The first category deals with the single-channel problem where STFT coefficients at different frames and frequency bands are assumed to be independent. In this case, the noise reduction filter in each frequency band is basically a real gain. Since a gain does not improve the signal-to-noise ratio (SNR) for any given subband and frame, noise reduction is essentially achieved by emphasizing the subbands and frames that are less noisy while attenuating those that are more noisy. The second category also concerns the single-channel problem. The difference is that now the interframe correlation is taken into account and a filter is applied in each subband instead of just a gain. The advantage of using the interframe correlation is that we can improve not only the long-time fullband SNR, but the frame-wise subband SNR as well. The third and fourth categories address the problem of multichannel noise reduction in the STFT domain with and without interframe correlation, respectively. In the last category, we consider the interband correlation in the design of the noise reduction filters. We illustrate the basic principle for the single-channel case as an example, though the concept can be generalized to other scenarios. In all categories, we propose different optimization cost functions from which the optimal filters are derived, and we define the performance measures that help in analyzing them.
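
As a hedged illustration of the first category, the following sketch applies a real gain per subband and frame, derived from a crude power-subtraction estimate of the a priori SNR; the gain floor, the SNR estimator, and the function name are assumptions made for this example and are not taken from the book. Because the same gain multiplies speech and noise in a given bin and frame, the subband SNR there is unchanged, which is exactly the limitation the interframe filtering of the second category addresses.

```python
import numpy as np

def wiener_gain_enhance(Y, noise_psd, gain_floor=0.1):
    """Minimal sketch of the first category: a real gain per subband and frame.

    Y         : STFT of the noisy signal, shape (num_bins, num_frames), complex
    noise_psd : estimated noise power per frequency bin, shape (num_bins,)
    The gain scales speech and noise identically in each bin and frame, so the
    subband SNR there is unchanged; the benefit comes from boosting reliable
    bins/frames relative to the noisier ones across the whole signal.
    """
    noisy_psd = np.abs(Y) ** 2
    # a posteriori SNR and a crude a priori SNR estimate (power subtraction)
    snr_post = noisy_psd / noise_psd[:, None]
    snr_prio = np.maximum(snr_post - 1.0, 0.0)
    gain = snr_prio / (1.0 + snr_prio)       # Wiener-like gain
    gain = np.maximum(gain, gain_floor)      # floor to limit musical noise
    return gain * Y

# toy usage with synthetic data
rng = np.random.default_rng(0)
Y = rng.standard_normal((257, 50)) + 1j * rng.standard_normal((257, 50))
X_hat = wiener_gain_enhance(Y, noise_psd=np.full(257, 2.0))
```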

Canonical Correlation Analysis in Speech Enhancement

Author: Jacob Benesty
Publisher: Springer
ISBN: 3319670204
Category: Technology & Engineering
Languages: en
Pages: 124

Book Description
This book focuses on the application of canonical correlation analysis (CCA) to speech enhancement using the filtering approach. The authors explain how to derive different classes of time-domain and time-frequency-domain noise reduction filters, which are optimal from the CCA perspective for both single-channel and multichannel speech enhancement. Enhancement of noisy speech has been a challenging problem for many researchers over the past few decades and remains an active research area. Typically, speech enhancement algorithms operate in the short-time Fourier transform (STFT) domain, where the clean speech spectral coefficients are estimated using a multiplicative gain function. A filtering approach, which can be performed in the time domain or in the subband domain, obtains an estimate of the clean speech sample at every time instant or time-frequency bin by applying a filtering vector to the noisy speech vector. Compared to the multiplicative gain approach, the filtering approach more naturally takes into account the correlation of the speech signal in adjacent time frames. In this study, the authors pursue the filtering approach and show how to apply CCA to the speech enhancement problem. They also address the problem of adaptive beamforming from the CCA perspective, and show that the well-known Wiener and minimum variance distortionless response (MVDR) beamformers are particular cases of a general class of CCA-based adaptive beamformers.
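
For readers who want a concrete handle on the linear-algebra side of CCA, here is a minimal numpy sketch that computes canonical correlations and directions for two zero-mean data sets via whitening and an SVD of the cross-covariance. The regularization term, the whitening route, and the variable names are implementation choices of this sketch, not the book's derivation of CCA-based noise reduction filters, in which the first canonical direction of the noisy data against a reference defines the filtering vector.

```python
import numpy as np

def canonical_correlations(X, Y, eps=1e-10):
    """Sketch of CCA between two data sets.

    X : samples of the first vector,  shape (n_samples, dim_x)
    Y : samples of the second vector, shape (n_samples, dim_y)
    Returns the canonical correlations and the two projection matrices.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Rxx = X.T @ X / n + eps * np.eye(X.shape[1])
    Ryy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
    Rxy = X.T @ Y / n
    # whiten each set, then take the SVD of the whitened cross-covariance
    Lx = np.linalg.cholesky(Rxx)
    Ly = np.linalg.cholesky(Ryy)
    M = np.linalg.solve(Lx, Rxy) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(M)
    A = np.linalg.solve(Lx.T, U)      # canonical directions for X
    B = np.linalg.solve(Ly.T, Vt.T)   # canonical directions for Y
    return s, A, B

# toy usage: the two sets share two latent directions
rng = np.random.default_rng(0)
common = rng.standard_normal((1000, 2))
X = np.hstack([common, rng.standard_normal((1000, 3))])
Y = np.hstack([common @ rng.standard_normal((2, 2)), rng.standard_normal((1000, 2))])
corrs, A, B = canonical_correlations(X, Y)
print(corrs)   # the first two canonical correlations are close to 1
```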

Speech Enhancement

Author: Jacob Benesty
Publisher: Elsevier
ISBN: 0128002530
Category: Technology & Engineering
Languages: en
Pages: 143

Book Description
Speech enhancement is a classical problem in signal processing, yet still largely unsolved. Two of the conventional approaches for solving this problem are linear filtering, like the classical Wiener filter, and subspace methods. These approaches have traditionally been treated as different classes of methods and have been introduced in somewhat different contexts. Linear filtering methods originate in stochastic processes, while subspace methods have largely been based on developments in numerical linear algebra and matrix approximation theory. This book bridges the gap between these two classes of methods by showing how the ideas behind subspace methods can be incorporated into traditional linear filtering. In the context of subspace methods, the enhancement problem can then be seen as a classical linear filter design problem. This means that various solutions can more easily be compared and their performance bounded and assessed in terms of noise reduction and speech distortion. The book shows how various filter designs can be obtained in this framework, including the maximum SNR, Wiener, LCMV, and MVDR filters, and how these can be applied in various contexts, like in single-channel and multichannel speech enhancement, and in both the time and frequency domains.
- First short book treating subspace approaches in a unified way for time and frequency domains, single-channel, multichannel, as well as binaural, speech enhancement
- Bridges the gap between optimal filtering methods and subspace approaches
- Includes original presentation of subspace methods from different perspectives
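
The bridge between subspace thinking and filter design can be illustrated with the maximum-SNR filter: it is the dominant generalized eigenvector of the speech and noise correlation matrices, i.e., the first direction of a joint diagonalization of the two matrices. The sketch below assumes the two correlation matrices are given and that the noise matrix is positive definite; the function name, matrix sizes, and scaling convention are illustrative choices, not the book's exact formulation.

```python
import numpy as np
from scipy.linalg import eigh

def max_snr_filter(Rx, Rv):
    """Maximum-SNR filter via the generalized eigenvalue problem Rx h = lam Rv h.

    Rx : speech correlation matrix (L x L), positive semidefinite
    Rv : noise correlation matrix  (L x L), positive definite
    The dominant generalized eigenvector maximizes the output SNR
    h^T Rx h / h^T Rv h, and the corresponding eigenvalue equals that SNR.
    """
    eigvals, eigvecs = eigh(Rx, Rv)   # ascending generalized eigenvalues
    h = eigvecs[:, -1]                # dominant generalized eigenvector
    return h, eigvals[-1]

# toy correlation matrices: low-rank "speech" subspace plus full-rank "noise"
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 2))
Rx = A @ A.T                          # rank-2 speech correlation matrix
Rv = 0.5 * np.eye(8)
h, out_snr = max_snr_filter(Rx, Rv)
print(out_snr, (h @ Rx @ h) / (h @ Rv @ h))   # the two numbers coincide
```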

A Conceptual Framework for Noise Reduction

Author: Jacob Benesty
Publisher: Springer
ISBN: 3319129554
Category: Technology & Engineering
Languages: en
Pages: 95

Book Description
Though noise reduction and speech enhancement problems have been studied for at least five decades, advances in our understanding and the development of reliable algorithms are more important than ever, as they support the design of tailored solutions for clearly defined applications. In this work, the authors propose a conceptual framework that can be applied to the many different aspects of noise reduction, offering a uniform approach to monaural and binaural noise reduction problems, in the time domain and in the frequency domain, and involving a single or multiple microphones. Moreover, the derivation of optimal filters is simplified, as are the performance measures used for their evaluation.

Acoustical Impulse Response Functions of Music Performance Halls

Author: Douglas Frey
Publisher: Springer Nature
ISBN: 3031025652
Category: Technology & Engineering
Languages: en
Pages: 102

Book Description
Digital measurement of the analog acoustical parameters of a music performance hall is difficult. The aim of such work is to create a digital acoustical derivation that is an accurate numerical representation of the complex analog characteristics of the hall. The present study describes the exponential sine sweep (ESS) measurement process in the derivation of an acoustical impulse response function (AIRF) of three music performance halls in Canada. It examines specific difficulties of the process, such as preventing the external effects of the measurement transducers from corrupting the derivation, and provides solutions, such as the use of filtering techniques in order to remove such unwanted effects. In addition, the book presents a novel method of numerical verification through mean-squared error (MSE) analysis in order to determine how accurately the derived AIRF represents the acoustical behavior of the actual hall.
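
As a rough sketch of the measurement chain described above, the following code generates an exponential sine sweep and its inverse filter (the time-reversed sweep with a compensating amplitude envelope) and recovers an impulse response by FFT-based convolution of the recorded signal with that inverse filter. The sweep limits, durations, and function names are illustrative; the book's actual procedure, including transducer compensation and the MSE verification step, is considerably more involved.

```python
import numpy as np

def exponential_sine_sweep(f1, f2, duration, fs):
    """Generate an ESS excitation and its inverse filter (Farina-style method).

    Convolving the recorded room response with the inverse filter yields an
    estimate of the acoustical impulse response; harmonic distortion folds
    into separate peaks before time zero and can be windowed out.
    """
    t = np.arange(int(duration * fs)) / fs
    R = np.log(f2 / f1)
    sweep = np.sin(2 * np.pi * f1 * duration / R * (np.exp(t / duration * R) - 1.0))
    # inverse filter: time-reversed sweep with a -6 dB/octave amplitude envelope
    inv = sweep[::-1] * np.exp(-t * R / duration)
    return sweep, inv

def impulse_response(recorded, inverse_filter):
    """Linear (FFT-based) convolution of the recorded sweep with the inverse filter."""
    n = len(recorded) + len(inverse_filter) - 1
    H = np.fft.rfft(recorded, n) * np.fft.rfft(inverse_filter, n)
    return np.fft.irfft(H, n)

# toy check: deconvolving the sweep itself gives a single pulse-like peak
sweep, inv = exponential_sine_sweep(f1=20.0, f2=20000.0, duration=5.0, fs=48000)
ir = impulse_response(sweep, inv)
print(np.argmax(np.abs(ir)), len(ir))
```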

Speech Recognition Algorithms Using Weighted Finite-State Transducers

Author: Takaaki Hori
Publisher: Springer Nature
ISBN: 3031025628
Category: Technology & Engineering
Languages: en
Pages: 161

Book Description
This book introduces the theory, algorithms, and implementation techniques for efficient decoding in speech recognition, focusing mainly on the Weighted Finite-State Transducer (WFST) approach. The decoding process for speech recognition is viewed as a search problem whose goal is to find the sequence of words that best matches an input speech signal. Since this process becomes computationally more expensive as the system vocabulary size increases, research has long been devoted to reducing the computational cost. Recently, the WFST approach has become an important state-of-the-art speech recognition technology, because it offers improved decoding speed with fewer recognition errors compared with conventional methods. However, it is not easy to understand all the algorithms used in this framework, and they remain a black box for many people. In this book, we review the WFST approach and aim to provide comprehensive interpretations of WFST operations and decoding algorithms to help anyone who wants to understand, develop, and study WFST-based speech recognizers. We also mention recent advances in this framework and its applications to spoken language processing.
Table of Contents: Introduction / Brief Overview of Speech Recognition / Introduction to Weighted Finite-State Transducers / Speech Recognition by Weighted Finite-State Transducers / Dynamic Decoders with On-the-fly WFST Operations / Summary and Perspective
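
Viewed purely as search, decoding over a WFST amounts to finding the lowest-weight accepting path in the tropical semiring (weights added along a path, paths compared by taking the minimum). The toy sketch below runs a Dijkstra-style best-path search over a hand-written transducer; the arc format, labels, and weights are invented for illustration and do not reflect the decoder implementations discussed in the book.

```python
import heapq
from itertools import count

def best_path(arcs, start, finals):
    """Best (lowest-weight) path through a toy WFST in the tropical semiring.

    arcs   : {state: [(next_state, input_label, output_label, weight), ...]}
    finals : {state: final_weight}
    Returns the total weight and the output-label sequence of the best path.
    """
    END = "__final__"                       # virtual super-final state
    tick = count()                          # tie-breaker for the heap
    heap = [(0.0, next(tick), start, ())]
    seen = set()
    while heap:
        w, _, s, out = heapq.heappop(heap)
        if s in seen:
            continue
        seen.add(s)
        if s == END:
            return w, list(out)
        if s in finals:                     # exit arc carrying the final weight
            heapq.heappush(heap, (w + finals[s], next(tick), END, out))
        for nxt, ilab, olab, aw in arcs.get(s, []):
            heapq.heappush(heap, (w + aw, next(tick), nxt,
                                  out + ((olab,) if olab else ())))
    return float("inf"), []

# toy transducer: two competing hypotheses, weights are negative log scores
arcs = {0: [(1, "s_ah", "uh", 1.2), (1, "s_oh", "oh", 0.7)],
        1: [(2, "s_k", "k", 0.3)]}
print(best_path(arcs, start=0, finals={2: 0.0}))   # (1.0, ['oh', 'k'])
```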

Articulatory Speech Synthesis from the Fluid Dynamics of the Vocal Apparatus

Author: Stephen Levinson
Publisher: Springer Nature
ISBN: 3031025636
Category: Technology & Engineering
Languages: en
Pages: 104

Book Description
This book addresses the problem of articulatory speech synthesis based on computed vocal tract geometries and the basic physics of sound production within them. Unlike conventional methods based on analysis/synthesis using the well-known source-filter model, which assumes the independence of the excitation and the filter, we treat the entire vocal apparatus as one mechanical system that produces sound by means of fluid dynamics. The vocal apparatus is represented as a three-dimensional, time-varying mechanism, and sound propagation inside it is modeled as the non-planar propagation of acoustic waves through a viscous, compressible fluid described by the Navier-Stokes equations. We propose a combined minimum-energy and minimum-jerk criterion to compute the dynamics of the vocal tract during articulation. Theoretical error bounds and experimental results show that this method obtains a close match to the phonetic target positions while avoiding abrupt changes in the articulatory trajectory. The vocal folds are set into aerodynamic oscillation by the flow of air from the lungs. The modulated air stream then excites the moving vocal tract. This method shows strong evidence of source-filter interaction. Based on our results, we propose that the articulatory speech production model has the potential to synthesize speech and provide a compact parameterization of the speech signal that can be useful in a wide variety of speech signal processing problems.
Table of Contents: Introduction / Literature Review / Estimation of Dynamic Articulatory Parameters / Construction of Articulatory Model Based on MRI Data / Vocal Fold Excitation Models / Experimental Results of Articulatory Synthesis / Conclusion
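
The minimum-jerk part of the criterion has a well-known closed form for a point-to-point movement, which the sketch below uses to interpolate between two articulatory target configurations. The energy term of the book's combined criterion is deliberately omitted here, and the function name, target values, and sampling rate are arbitrary illustrative choices.

```python
import numpy as np

def minimum_jerk_trajectory(x0, x1, duration, fs):
    """Minimum-jerk trajectory between two target configurations.

    The fifth-order polynomial 10*tau**3 - 15*tau**4 + 6*tau**5 is the classic
    closed-form minimizer of integrated squared jerk with zero velocity and
    acceleration at both endpoints, so the movement from x0 to x1 is smooth
    and free of abrupt changes.
    """
    tau = np.linspace(0.0, 1.0, int(duration * fs))
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return np.asarray(x0) + np.outer(s, np.asarray(x1) - np.asarray(x0))

# example: interpolate a 3-parameter articulatory configuration over 80 ms
traj = minimum_jerk_trajectory([0.2, 1.0, -0.5], [0.6, 0.4, 0.1],
                               duration=0.08, fs=1000)
print(traj.shape)   # (80, 3)
```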

Fundamentals of Speech Enhancement

Author: Jacob Benesty
Publisher: Springer
ISBN: 3319745247
Category: Technology & Engineering
Languages: en
Pages: 112

Book Description
This book presents and develops several important concepts of speech enhancement in a simple but rigorous way. Many of the ideas are new; not only do they shed light on this old problem, but they also offer valuable tips on how to improve on some well-known conventional approaches. The book unifies all aspects of speech enhancement, from single-channel, multichannel, beamforming, time-domain, frequency-domain, and time–frequency-domain processing to binaural processing, in a clear and flexible framework. It starts with an exhaustive discussion of the fundamental best (linear and nonlinear) estimators, showing how they are connected to various important measures such as the coefficient of determination, the correlation coefficient, the conditional correlation coefficient, and the signal-to-noise ratio (SNR). It then goes on to show how to exploit these measures in order to derive all kinds of noise reduction algorithms that can offer an accurate and versatile compromise between noise reduction and speech distortion.
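
The link between the best linear estimator and the correlation-based measures can be shown in a few lines: for zero-mean scalar signals, the MMSE linear estimate of x from y has coefficient E[xy]/E[y^2], and its normalized MSE equals one minus the squared correlation coefficient (the coefficient of determination). The sketch below verifies this numerically; the variable names and noise level are arbitrary and the example is not taken from the book.

```python
import numpy as np

def best_linear_estimator(x, y):
    """MMSE linear estimate of x from y for zero-mean scalar signals.

    The optimal coefficient is E[xy]/E[y^2]; the resulting normalized MSE is
    1 - rho**2, where rho is the correlation coefficient between x and y, so
    the coefficient of determination rho**2 quantifies how much of the desired
    signal's variance the linear estimate explains.
    """
    x = x - x.mean()
    y = y - y.mean()
    h = np.dot(x, y) / np.dot(y, y)                    # Wiener coefficient
    x_hat = h * y
    rho = np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y))
    nmse = np.mean((x - x_hat) ** 2) / np.mean(x ** 2)
    return x_hat, h, rho, nmse                         # nmse == 1 - rho**2

rng = np.random.default_rng(1)
x = rng.standard_normal(10000)
y = x + 0.5 * rng.standard_normal(10000)               # noisy observation of x
_, h, rho, nmse = best_linear_estimator(x, y)
print(h, rho**2, 1 - nmse)                             # rho**2 and 1 - nmse agree
```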

Speech Signal Processing Based on Deep Learning in Complex Acoustic Environments

Author: Xiao-Lei Zhang
Publisher: Elsevier
ISBN: 0443248575
Category: Computers
Languages: en
Pages: 282

Book Description
Speech Signal Processing Based on Deep Learning in Complex Acoustic Environments provides a detailed discussion of deep learning-based robust speech processing and its applications. The book begins with the basics of deep learning and common deep network models, followed by front-end algorithms for deep learning-based speech denoising, speech detection, single-channel speech enhancement, multi-channel speech enhancement, and multi-speaker speech separation, and by the applications of deep learning-based speech denoising in speaker verification and speech recognition.
- Provides a comprehensive introduction to the development of deep learning-based robust speech processing
- Covers speech detection, speech enhancement, dereverberation, multi-speaker speech separation, robust speaker verification, and robust speech recognition
- Focuses on a historical overview and then covers methods that demonstrate outstanding performance in practical applications