Articulatory Speech Synthesis from the Fluid Dynamics of the Vocal Apparatus

Author: Stephen Levinson
Publisher: Springer Nature
ISBN: 3031025636
Category: Technology & Engineering
Languages: en
Pages: 104


Book Description
This book addresses the problem of articulatory speech synthesis based on computed vocal tract geometries and the basic physics of sound production within them. Unlike conventional methods based on analysis/synthesis using the well-known source-filter model, which assumes the independence of the excitation and the filter, we treat the entire vocal apparatus as one mechanical system that produces sound by means of fluid dynamics. The vocal apparatus is represented as a three-dimensional, time-varying mechanism, and sound propagation inside it is modeled as the non-planar propagation of acoustic waves through a viscous, compressible fluid described by the Navier-Stokes equations. We propose a combined minimum-energy and minimum-jerk criterion to compute the dynamics of the vocal tract during articulation. Theoretical error bounds and experimental results show that this method closely matches the phonetic target positions while avoiding abrupt changes in the articulatory trajectory. The vocal folds are set into aerodynamic oscillation by the flow of air from the lungs. The modulated air stream then excites the moving vocal tract. This method shows strong evidence of source-filter interaction. Based on our results, we propose that the articulatory speech production model has the potential to synthesize speech and to provide a compact parameterization of the speech signal that can be useful in a wide variety of speech signal processing problems. Table of Contents: Introduction / Literature Review / Estimation of Dynamic Articulatory Parameters / Construction of Articulatory Model Based on MRI Data / Vocal Fold Excitation Models / Experimental Results of Articulatory Synthesis / Conclusion
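The minimum-jerk component of the trajectory criterion can be illustrated with the standard fifth-order polynomial that moves an articulator between two targets with zero velocity and acceleration at both endpoints. This is a generic sketch of minimum-jerk interpolation only, not the book's combined minimum-energy/minimum-jerk optimization; the function name and parameters are illustrative.

```python
import numpy as np

def min_jerk_trajectory(x0, x1, n_steps=100):
    """Fifth-order minimum-jerk interpolation from x0 to x1.

    Velocity and acceleration vanish at both endpoints, so the
    articulator reaches its target without abrupt changes.
    """
    tau = np.linspace(0.0, 1.0, n_steps)        # normalized time in [0, 1]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5  # minimum-jerk profile
    return x0 + (x1 - x0) * s

# e.g. tongue-tip height moving smoothly between two phonetic targets
traj = min_jerk_trajectory(0.0, 1.0)
```

Because the profile's derivative is 30 tau^2 (1 - tau)^2, the motion is monotone and starts and ends at rest, which is exactly the "no abrupt changes" property the criterion is meant to enforce.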


Speech Recognition Algorithms Using Weighted Finite-State Transducers

Author: Takaaki Hori
Publisher: Springer Nature
ISBN: 3031025628
Category: Technology & Engineering
Languages: en
Pages: 161


Book Description
This book introduces the theory, algorithms, and implementation techniques for efficient decoding in speech recognition, focusing mainly on the Weighted Finite-State Transducer (WFST) approach. The decoding process for speech recognition is viewed as a search problem whose goal is to find the sequence of words that best matches an input speech signal. Since this process becomes computationally more expensive as the system vocabulary grows, research has long been devoted to reducing the computational cost. Recently, the WFST approach has become an important state-of-the-art speech recognition technology because it offers improved decoding speed with fewer recognition errors compared with conventional methods. However, it is not easy to understand all the algorithms used in this framework, and for many people they remain a black box. In this book, we review the WFST approach and aim to provide comprehensive interpretations of WFST operations and decoding algorithms to help anyone who wants to understand, develop, and study WFST-based speech recognizers. We also cover recent advances in this framework and its applications to spoken language processing. Table of Contents: Introduction / Brief Overview of Speech Recognition / Introduction to Weighted Finite-State Transducers / Speech Recognition by Weighted Finite-State Transducers / Dynamic Decoders with On-the-fly WFST Operations / Summary and Perspective
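The core decoding operation the book builds toward, finding the best path through a composed WFST, can be sketched as a shortest-path search in the tropical semiring, where weights are negative log probabilities and combine by addition along a path. The dictionary-of-arcs representation below is a simplified stand-in for a real WFST library such as OpenFst; the arc fields shown (next state, output label, weight) are illustrative.

```python
import heapq
import itertools

def shortest_path(arcs, start, finals):
    """Tropical-semiring shortest path through a toy transducer.

    arcs: {state: [(next_state, output_label, weight), ...]}
    finals: set of accepting states.
    Returns (total_weight, output_label_sequence) of the best path.
    """
    counter = itertools.count()  # tie-breaker so the heap never compares paths
    heap = [(0.0, next(counter), start, [])]
    seen = set()
    while heap:
        w, _, s, out = heapq.heappop(heap)
        if s in seen:
            continue
        seen.add(s)
        if s in finals:
            return w, out
        for nxt, olab, aw in arcs.get(s, []):
            if nxt not in seen:
                heapq.heappush(heap, (w + aw, next(counter), nxt, out + [olab]))
    return float("inf"), []

# two competing word hypotheses; the cheaper path wins
arcs = {0: [(1, "a", 1.0), (1, "b", 3.0)], 1: [(2, "c", 1.0)]}
weight, words = shortest_path(arcs, 0, {2})  # best path emits "a c"
```

A production decoder composes acoustic, lexicon, and language-model transducers and prunes the search with a beam, but the semiring arithmetic is the same as in this sketch.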

DFT-Domain Based Single-Microphone Noise Reduction for Speech Enhancement

Author: Richard C. Hendriks
Publisher: Springer Nature
ISBN: 3031025644
Category: Technology & Engineering
Languages: en
Pages: 70


Book Description
As speech processing devices like mobile phones, voice-controlled devices, and hearing aids have grown in popularity, people expect them to work anywhere and at any time without user intervention. However, the presence of acoustic disturbances limits the use of these applications, degrades their performance, or makes it difficult for the user to understand a conversation or to benefit from the device. A common way to reduce the effects of such disturbances is through the use of single-microphone noise reduction algorithms for speech enhancement, a field with a research history of more than 30 years. In this survey, we demonstrate the significant advances made during the last decade in discrete Fourier transform domain-based single-channel noise reduction for speech enhancement. Furthermore, our goal is to provide a concise description of a state-of-the-art speech enhancement system and to demonstrate the relative importance of its various building blocks. This allows the non-expert DSP practitioner to judge the relevance of each building block and to implement a close-to-optimal enhancement system for the particular application at hand. Table of Contents: Introduction / Single Channel Speech Enhancement: General Principles / DFT-Based Speech Enhancement Methods: Signal Model and Notation / Speech DFT Estimators / Speech Presence Probability Estimation / Noise PSD Estimation / Speech PSD Estimation / Performance Evaluation Methods / Simulation Experiments with Single-Channel Enhancement Systems / Future Directions
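One standard building block of such a DFT-domain system is a per-frequency-bin gain computed from an estimated noise power spectral density. The sketch below uses a plain Wiener gain with a spectral-subtraction estimate of the speech PSD; it is a generic illustration of the idea, not any specific estimator surveyed in the book, and the gain floor value is an assumption.

```python
import numpy as np

def wiener_gain(noisy_psd, noise_psd, floor=1e-3):
    """Per-bin Wiener filter gain from estimated PSDs.

    The clean-speech PSD is approximated by spectral subtraction,
    max(noisy - noise, 0); the gain floor limits over-suppression
    (which otherwise causes audible "musical noise").
    """
    speech_psd = np.maximum(noisy_psd - noise_psd, 0.0)
    gain = speech_psd / (speech_psd + noise_psd + 1e-12)
    return np.maximum(gain, floor)

# the enhanced DFT frame is gain * noisy DFT coefficients,
# followed by an inverse DFT and overlap-add resynthesis
```

In a complete system, the noise PSD would be tracked over time (e.g. during speech pauses or with minimum statistics) rather than assumed known, which is exactly why the book devotes separate chapters to noise and speech PSD estimation.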

Acoustical Impulse Response Functions of Music Performance Halls

Author: Douglas Frey
Publisher: Springer Nature
ISBN: 3031025652
Category: Technology & Engineering
Languages: en
Pages: 102


Book Description
Digital measurement of the analog acoustical parameters of a music performance hall is difficult. The aim of such work is to create a digital acoustical derivation that is an accurate numerical representation of the complex analog characteristics of the hall. The present study describes the exponential sine sweep (ESS) measurement process in the derivation of an acoustical impulse response function (AIRF) of three music performance halls in Canada. It examines specific difficulties of the process, such as preventing the external effects of the measurement transducers from corrupting the derivation, and provides solutions, such as the use of filtering techniques in order to remove such unwanted effects. In addition, the book presents a novel method of numerical verification through mean-squared error (MSE) analysis in order to determine how accurately the derived AIRF represents the acoustical behavior of the actual hall.
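The exponential sine sweep technique used in the study can be sketched as follows: generate a sweep whose instantaneous frequency rises exponentially from f1 to f2, play it in the hall, and convolve the recording with an amplitude-compensated time reversal of the sweep to recover the impulse response. The code follows Farina's standard formulation; the function and variable names are illustrative, not taken from the book.

```python
import numpy as np

def exp_sine_sweep(f1, f2, duration, fs):
    """Sine sweep whose instantaneous frequency rises exponentially
    from f1 to f2 Hz over `duration` seconds at sample rate fs."""
    t = np.arange(int(duration * fs)) / fs
    R = np.log(f2 / f1)  # log of the frequency ratio
    return np.sin(2 * np.pi * f1 * duration / R
                  * (np.exp(t * R / duration) - 1.0))

def inverse_filter(sweep, f1, f2, duration, fs):
    """Time-reversed sweep with an exponentially decaying envelope
    that compensates the sweep's pink spectral tilt. Convolving this
    with the recorded hall response yields the impulse response."""
    t = np.arange(len(sweep)) / fs
    R = np.log(f2 / f1)
    return sweep[::-1] * np.exp(-t * R / duration)
```

A useful property of this method, relevant to the book's concern with transducer effects, is that harmonic distortion products separate from the linear impulse response in time after deconvolution, so they can be windowed out.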

Articulatory Speech Synthesis

Author: Anastasiia Tsukanova
Publisher:
ISBN:
Category:
Languages: en
Pages: 0


Book Description
The thesis is set in the domain of articulatory speech synthesis and consists of three major parts: the first two are dedicated to the development of two articulatory speech synthesizers, and the third addresses how the two can be related to each other. The first is a rule-based approach to articulatory speech synthesis that aims at comprehensive control over the articulators (the jaw, the tongue, the lips, the velum, the larynx and the epiglottis). This approach used a dataset of static mid-sagittal magnetic resonance imaging (MRI) captures showing blocked articulation of French vowels and a set of consonant-vowel syllables; the dataset was encoded with a PCA-based vocal tract model. The system then comprised several components: using the recorded articulatory configurations to drive a rule-based articulatory speech synthesizer as a source of target positions to attain (the main contribution of this first part); adjusting the obtained vocal tract shapes from a phonetic perspective; and running an acoustic simulation unit to obtain the sound. The results of this synthesis were evaluated visually, acoustically and perceptually, and the problems encountered were broken down by their origin: the dataset, its modeling, the algorithm for managing the vocal tract shapes, their translation to area functions, and the acoustic simulation. I concluded that, among the test examples, the articulatory strategies for vowels and stops were the most correct, followed by those for nasals and fricatives. The second approach started from a baseline deep feed-forward neural network speech synthesizer trained with the standard Merlin recipe on the audio recorded during real-time MRI (RT-MRI) acquisitions: denoised (yet still containing a considerable amount of MRI machine noise) speech in French, with force-aligned state labels encoding phonetic and linguistic information.
This synthesizer was augmented with eight parameters representing articulatory information (the lips' opening and protrusion, and the distances between the tongue and the velum, between the velum and the pharyngeal wall, and between the tongue and the pharyngeal wall) that were automatically extracted from the captures and aligned with the audio signal and the linguistic specification. The jointly synthesized speech and articulatory sequences were evaluated objectively with dynamic time warping (DTW) distance, mean mel-cepstral distortion (MCD), band aperiodicity prediction error (BAP), and three measures for F0: root mean square error (RMSE), correlation coefficient (CORR), and frame-level voiced/unvoiced error (V/UV). The consistency of the articulatory parameters with the phonetic labels was analyzed as well. I concluded that the generated articulatory parameter sequences matched the original ones acceptably closely, despite struggling more to attain contact between the articulators, and that the addition of articulatory parameters did not hinder the original acoustic model. The two approaches are linked through their use of two different kinds of MRI speech data. This motivated a search for coarticulation-aware targets, such as those available in the static case, that might be present or absent in the real-time data. To compare static and real-time MRI captures, measures of structural similarity, Earth mover's distance, and SIFT were used; having checked these measures for validity and consistency, I qualitatively and quantitatively studied their temporal behavior, interpreted it, and analyzed the identified similarities. I concluded that SIFT and structural similarity did capture some articulatory information and that their behavior, overall, validated the static MRI dataset. [...].
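The DTW distance used in the objective evaluation can be sketched with the textbook dynamic-programming recurrence. Real MCD evaluation would apply this to sequences of mel-cepstral frames with a Euclidean local cost; the scalar sequences and absolute-difference cost below are a simplification for illustration.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance
    between two 1-D sequences, with absolute-difference local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)  # accumulated-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three allowed predecessor paths
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because the warp aligns frames non-linearly in time before costs are summed, the measure tolerates the timing differences between synthesized and original articulatory trajectories while still penalizing shape mismatches.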

Dynamic Aspects of Speech Production

Author: Masayuki Sawashima
Publisher:
ISBN:
Category: Language Arts & Disciplines
Languages: en
Pages: 442


Book Description


Identification of Control Parameters in an Articulatory Vocal Tract Model, with Applications to the Synthesis of Singing

Author: Perry Raymond Cook
Publisher:
ISBN:
Category: Singing
Languages: en
Pages: 208


Book Description


Dynamics of Speech Production and Perception

Author: P.L. Divenyi
Publisher: IOS Press
ISBN: 1607502038
Category: Language Arts & Disciplines
Languages: en
Pages: 388


Book Description
The idea that speech is a dynamic process is a tautology: whether from the standpoint of the talker, the listener, or the engineer, speech is an action, a sound, or a signal continuously changing in time. Yet, because phonetics and speech science are offspring of classical phonology, speech has been viewed as a sequence of discrete events: positions of the articulatory apparatus, waveform segments, and phonemes. Although this perspective has been mockingly referred to as "beads on a string", from the time of Henry Sweet's 19th-century treatise almost up to the present day, specialists in speech science and speech technology have continued to conceptualize the speech signal as a sequence of static states interleaved with transitional elements reflecting the quasi-continuous nature of vocal production. This book, a collection of papers each of which looks at speech as a dynamic process and highlights one of its particularities, is dedicated to the memory of Ludmilla Andreevna Chistovich. At the outset it was planned as a Chistovich festschrift but, sadly, she passed away a few months before the book went to press. The 24 chapters of this volume testify to the enormous influence that she and her colleagues have had over the four decades since the publication of their 1965 monograph.

Developments in Speech Synthesis

Author: Mark Tatham
Publisher: John Wiley & Sons
ISBN: 0470012595
Category: Technology & Engineering
Languages: en
Pages: 356


Book Description
With a growing need to understand the processes involved in producing and perceiving spoken language, this timely publication answers that need in an accessible reference. Containing material resulting from many years' teaching and research, Developments in Speech Synthesis provides a complete account of the theory of speech. By bringing together the common goals and methods of speech synthesis into a single resource, the book leads the way towards a comprehensive view of the processes involved in human speech. The book includes applications in speech technology and speech synthesis. It is ideal for intermediate students of linguistics and phonetics who wish to proceed further, as well as for researchers and engineers in telecommunications working in speech technology and speech synthesis who need a comprehensive overview of the field and wish to gain an understanding of the objectives and achievements of the study of speech production and perception.

Survey of the State of the Art in Human Language Technology

Author: Giovanni Battista Varile
Publisher: Cambridge University Press
ISBN: 9780521592772
Category: Computers
Languages: en
Pages: 546


Book Description
Languages, in all their forms, are the most efficient and natural means for people to communicate. Enormous quantities of information are produced, distributed and consumed using language. The main purpose of human language technology is to allow the use of automatic systems and tools that assist humans in producing and accessing information, improve communication between humans, and help humans communicate with machines. This book, sponsored by the Directorate General XIII of the European Union and the Information Science and Engineering Directorate of the National Science Foundation, USA, offers the first comprehensive overview of the human language technology field.