Details

Parametric Time-Frequency Domain Spatial Audio


IEEE Press, 1st edition

By: Ville Pulkki, Symeon Delikaris-Manias, Archontis Politis

106,99 €

Publisher: Wiley
Format: EPUB
Published: 11 October 2017
ISBN/EAN: 9781119252610
Language: English
Number of pages: 416

DRM-protected eBook; you will need, for example, Adobe Digital Editions and an Adobe ID to read it.

Description

A comprehensive guide that addresses the theory and practice of spatial audio

This book provides readers with the principles and best practices of spatial audio signal processing. It describes how sound fields and their perceptual attributes are captured and analyzed in the time-frequency domain, how the essential representation parameters are coded, and how such signals are efficiently reproduced for practical applications. The book is split into four parts, starting with an overview of the fundamentals. It then explains the reproduction of spatial sound, examines signal-dependent spatial filtering, and closes with current and future applications and the directions in which spatial audio research is heading.

Parametric Time-Frequency Domain Spatial Audio focuses on applications in entertainment audio, including music, home cinema, and gaming, covering the capture and reproduction of spatial sound as well as its generation, transduction, representation, transmission, and perception. The book teaches readers the tools needed for such processing, provides an overview of existing research, and presents recent projects and commercial applications built on these systems.

- Provides an in-depth presentation of the principles, past developments, state-of-the-art methods, and future research directions of spatial audio technologies
- Includes contributions from leading researchers in the field
- Offers MATLAB code with selected chapters

An advanced book aimed at readers who can digest mathematical expressions about digital signal processing and sound field analysis, Parametric Time-Frequency Domain Spatial Audio is best suited to researchers in academia and in the audio industry.
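To give a flavor of the kind of processing the book covers, below is a minimal, illustrative Python sketch of parametric time-frequency analysis of a first-order (B-format) recording, loosely in the spirit of the Directional Audio Coding analysis discussed in Chapter 5: each time-frequency tile is assigned a direction of arrival and a diffuseness value derived from the active sound intensity. It is not code from the book (the book's examples are in MATLAB); the channel ordering, dropped physical constants, and simple frame averaging are assumptions made purely for illustration.

```python
# Minimal illustrative sketch (not from the book): DirAC-style time-frequency
# analysis of an ideal first-order B-format signal with channels [W, X, Y, Z].
import numpy as np
from scipy.signal import stft

def tf_spatial_analysis(b_format, fs, nperseg=512, avg_frames=8):
    """Estimate per-tile direction of arrival and diffuseness.

    b_format: array of shape (4, n_samples), channel order assumed [W, X, Y, Z].
    Returns (doa, diffuseness) with shapes (3, F, T) and (F, T).
    """
    # STFT of each channel; axis=-1 transforms along time for all channels.
    _, _, S = stft(b_format, fs=fs, nperseg=nperseg, axis=-1)
    w, xyz = S[0], S[1:4]                      # pressure and velocity proxies

    # Active intensity per tile: real part of conj(pressure) * velocity.
    intensity = np.real(np.conj(w)[None, :, :] * xyz)          # (3, F, T)

    # Direction of arrival points opposite to the net energy flow.
    doa = -intensity / (np.linalg.norm(intensity, axis=0) + 1e-12)

    # Energy density per tile (physical constants dropped for simplicity).
    energy = 0.5 * (np.abs(w) ** 2 + np.sum(np.abs(xyz) ** 2, axis=0))

    # Diffuseness: 1 - |time-averaged intensity| / time-averaged energy.
    kernel = np.ones(avg_frames) / avg_frames
    smooth = lambda a: np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="same"), -1, a)
    diffuseness = 1.0 - np.linalg.norm(smooth(intensity), axis=0) / (
        smooth(energy) + 1e-12)
    return doa, np.clip(diffuseness, 0.0, 1.0)
```

In a full parametric pipeline of the kind described in the book, such per-tile parameters would then drive synthesis, for example amplitude panning of the direct part and decorrelated rendering of the diffuse part.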
Contents

List of Contributors xiii
Preface xv
About the Companion Website xix

Part I Analysis and Synthesis of Spatial Sound 1

1 Time–Frequency Processing: Methods and Tools 3
Juha Vilkamo and Tom Bäckström
1.1 Introduction 3
1.2 Time–Frequency Processing 4
1.2.1 Basic Structure 4
1.2.2 Uniform Filter Banks 5
1.2.3 Prototype Filters and Modulation 6
1.2.4 A Robust Complex-Modulated Filter Bank, and Comparison with STFT 8
1.2.5 Overlap-Add and Windowing 12
1.2.6 Example Implementation of a Robust Filter Bank in Matlab 13
1.2.7 Cascaded Filters 15
1.3 Processing of Spatial Audio 16
1.3.1 Stochastic Estimates 17
1.3.2 Decorrelation 18
1.3.3 Optimal and Generalized Solution for Spatial Sound Processing Using Covariance Matrices 19
References 23

2 Spatial Decomposition by Spherical Array Processing 25
David Lou Alon and Boaz Rafaely
2.1 Introduction 25
2.2 Sound Field Measurement by a Spherical Array 26
2.3 Array Processing and Plane-Wave Decomposition 26
2.4 Sensitivity to Noise and Standard Regularization Methods 29
2.5 Optimal Noise-Robust Design 32
2.5.1 PWD Estimation Error Measure 32
2.5.2 PWD Error Minimization 34
2.5.3 R-PWD Simulation Study 35
2.6 Spatial Aliasing and High Frequency Performance Limit 37
2.7 High Frequency Bandwidth Extension by Aliasing Cancellation 39
2.7.1 Spatial Aliasing Error 39
2.7.2 AC-PWD Simulation Study 40
2.8 High Performance Broadband PWD Example 42
2.8.1 Broadband Measurement Model 42
2.8.2 Minimizing Broadband PWD Error 42
2.8.3 BB-PWD Simulation Study 44
2.9 Summary 45
2.10 Acknowledgment 46
References 46

3 Sound Field Analysis Using Sparse Recovery 49
Craig T. Jin, Nicolas Epain, and Tahereh Noohi
3.1 Introduction 49
3.2 The Plane-Wave Decomposition Problem 50
3.2.1 Sparse Plane-Wave Decomposition 51
3.2.2 The Iteratively Reweighted Least-Squares Algorithm 51
3.3 Bayesian Approach to Plane-Wave Decomposition 53
3.4 Calculating the IRLS Noise-Power Regularization Parameter 55
3.4.1 Estimation of the Relative Noise Power 56
3.5 Numerical Simulations 58
3.6 Experiment: Echoic Sound Scene Analysis 59
3.7 Conclusions 65
Appendix 65
References 66

Part II Reproduction of Spatial Sound 69

4 Overview of Time–Frequency Domain Parametric Spatial Audio Techniques 71
Archontis Politis, Symeon Delikaris-Manias, and Ville Pulkki
4.1 Introduction 71
4.2 Parametric Processing Overview 73
4.2.1 Analysis Principles 74
4.2.2 Synthesis Principles 75
4.2.3 Spatial Audio Coding and Up-Mixing 76
4.2.4 Spatial Sound Recording and Reproduction 78
4.2.5 Auralization of Measured Room Acoustics and Spatial Rendering of Room Impulse Responses 81
References 82

5 First-Order Directional Audio Coding (DirAC) 89
Ville Pulkki, Archontis Politis, Mikko-Ville Laitinen, Juha Vilkamo, and Jukka Ahonen
5.1 Representing Spatial Sound with First-Order B-Format Signals 89
5.2 Some Notes on the Evolution of the Technique 92
5.3 DirAC with Ideal B-Format Signals 94
5.4 Analysis of Directional Parameters with Real Microphone Setups 97
5.4.1 DOA Analysis with Open 2D Microphone Arrays 97
5.4.2 DOA Analysis with 2D Arrays with a Rigid Baffle 99
5.4.3 DOA Analysis in Underdetermined Cases 101
5.4.4 DOA Analysis: Further Methods 102
5.4.5 Effect of Spatial Aliasing and Microphone Noise on the Analysis of Diffuseness 103
5.5 First-Order DirAC with Monophonic Audio Transmission 105
5.6 First-Order DirAC with Multichannel Audio Transmission 106
5.6.1 Stream-Based Virtual Microphone Rendering 106
5.6.2 Evaluation of Virtual Microphone DirAC 109
5.6.3 Discussion of Virtual Microphone DirAC 111
5.6.4 Optimized DirAC Synthesis 111
5.6.5 DirAC-Based Reproduction of Spaced-Array Recordings 114
5.7 DirAC Synthesis for Headphones and for Hearing Aids 117
5.7.1 Reproduction of B-Format Signals 117
5.7.2 DirAC in Hearing Aids 118
5.8 Optimizing the Time–Frequency Resolution of DirAC for Critical Signals 119
5.9 Example Implementation 120
5.9.1 Executing DirAC and Plotting Parameter History 122
5.9.2 DirAC Initialization 125
5.9.3 DirAC Runtime 131
5.9.4 Simplistic Binaural Synthesis of Loudspeaker Listening 136
5.10 Summary 137
References 138

6 Higher-Order Directional Audio Coding 141
Archontis Politis and Ville Pulkki
6.1 Introduction 141
6.2 Sound Field Model 144
6.3 Energetic Analysis and Estimation of Parameters 145
6.3.1 Analysis of Intensity and Diffuseness in the Spherical Harmonic Domain 146
6.3.2 Higher-Order Energetic Analysis 147
6.3.3 Sector Profiles 149
6.4 Synthesis of Target Setup Signals 151
6.4.1 Loudspeaker Rendering 152
6.4.2 Binaural Rendering 155
6.5 Subjective Evaluation 157
6.6 Conclusions 157
References 158

7 Multi-Channel Sound Acquisition Using a Multi-Wave Sound Field Model 161
Oliver Thiergart and Emanuël Habets
7.1 Introduction 161
7.2 Parametric Sound Acquisition and Processing 163
7.2.1 Problem Formulation 163
7.2.2 Principal Estimation of the Target Signal 166
7.3 Multi-Wave Sound Field and Signal Model 167
7.3.1 Direct Sound Model 168
7.3.2 Diffuse Sound Model 169
7.3.3 Noise Model 169
7.4 Direct and Diffuse Signal Estimation 170
7.4.1 Estimation of the Direct Signal Ys(k, n) 170
7.4.2 Estimation of the Diffuse Signal Yd(k, n) 176
7.5 Parameter Estimation 179
7.5.1 Estimation of the Number of Sources 179
7.5.2 Direction of Arrival Estimation 181
7.5.3 Microphone Input PSD Matrix 181
7.5.4 Noise PSD Estimation 182
7.5.5 Diffuse Sound PSD Estimation 182
7.5.6 Signal PSD Estimation in Multi-Wave Scenarios 185
7.6 Application to Spatial Sound Reproduction 186
7.6.1 State of the Art 186
7.6.2 Spatial Sound Reproduction Based on Informed Spatial Filtering 187
7.7 Summary 194
References 195

8 Adaptive Mixing of Excessively Directive and Robust Beamformers for Reproduction of Spatial Sound 201
Symeon Delikaris-Manias and Juha Vilkamo
8.1 Introduction 201
8.2 Notation and Signal Model 202
8.3 Overview of the Method 203
8.4 Loudspeaker-Based Spatial Sound Reproduction 204
8.4.1 Estimation of the Target Covariance Matrix Cy 204
8.4.2 Estimation of the Synthesis Beamforming Signals Ws 206
8.4.3 Processing the Synthesis Signals (Wsx) to Obtain the Target Covariance Matrix Cy 206
8.4.4 Spatial Energy Distribution 207
8.4.5 Listening Tests 208
8.5 Binaural-Based Spatial Sound Reproduction 209
8.5.1 Estimation of the Analysis and Synthesis Beamforming Weight Matrices 210
8.5.2 Diffuse-Field Equalization of HRTFs 210
8.5.3 Adaptive Mixing and Decorrelation 211
8.5.4 Subjective Evaluation 211
8.6 Conclusions 212
References 212

9 Source Separation and Reconstruction of Spatial Audio Using Spectrogram Factorization 215
Joonas Nikunen and Tuomas Virtanen
9.1 Introduction 215
9.2 Spectrogram Factorization 217
9.2.1 Mixtures of Sounds 217
9.2.2 Magnitude Spectrogram Models 218
9.2.3 Complex-Valued Spectrogram Models 221
9.2.4 Source Separation by Time–Frequency Filtering 225
9.3 Array Signal Processing and Spectrogram Factorization 226
9.3.1 Spaced Microphone Arrays 226
9.3.2 Model for Spatial Covariance Based on Direction of Arrival 227
9.3.3 Complex-Valued NMF with the Spatial Covariance Model 229
9.4 Applications of Spectrogram Factorization in Spatial Audio 231
9.4.1 Parameterization of Surround Sound: Upmixing by Time–Frequency Filtering 231
9.4.2 Source Separation Using a Compact Microphone Array 233
9.4.3 Reconstruction of Binaural Sound Through Source Separation 238
9.5 Discussion 243
9.6 Matlab Example 243
References 247

Part III Signal-Dependent Spatial Filtering 251

10 Time–Frequency Domain Spatial Audio Enhancement 253
Symeon Delikaris-Manias and Pasi Pertilä
10.1 Introduction 253
10.2 Signal-Independent Enhancement 254
10.3 Signal-Dependent Enhancement 255
10.3.1 Adaptive Beamformers 255
10.3.2 Post-Filters 257
10.3.3 Post-Filter Types 257
10.3.4 Estimating Post-Filters with Machine Learning 259
10.3.5 Post-Filter Design Based on Spatial Parameters 259
References 261

11 Cross-Spectrum-Based Post-Filter Utilizing Noisy and Robust Beamformers 265
Symeon Delikaris-Manias and Ville Pulkki
11.1 Introduction 265
11.2 Notation and Signal Model 267
11.2.1 Virtual Microphone Design Utilizing Pressure Microphones 268
11.3 Estimation of the Cross-Spectrum-Based Post-Filter 269
11.3.1 Post-Filter Estimation Utilizing Two Static Beamformers 270
11.3.2 Post-Filter Estimation Utilizing a Static and an Adaptive Beamformer 272
11.3.3 Smoothing Techniques 277
11.4 Implementation Examples 279
11.4.1 Ideal Conditions 279
11.4.2 Prototype Microphone Arrays 281
11.5 Conclusions and Further Remarks 283
11.6 Source Code 284
References 287

12 Microphone-Array-Based Speech Enhancement Using Neural Networks 291
Pasi Pertilä
12.1 Introduction 291
12.2 Time–Frequency Masks for Speech Enhancement Using Supervised Learning 293
12.2.1 Beamforming with Post-Filtering 293
12.2.2 Overview of Mask Prediction 294
12.2.3 Features for Mask Learning 295
12.2.4 Target Mask Design 297
12.3 Artificial Neural Networks 298
12.3.1 Learning the Weights 299
12.3.2 Generalization 301
12.3.3 Deep Neural Networks 305
12.4 Mask Learning: A Simulated Example 305
12.4.1 Feature Extraction 306
12.4.2 Target Mask Design 306
12.4.3 Neural Network Training 307
12.4.4 Results 308
12.5 Mask Learning: A Real-World Example 310
12.5.1 Brief Description of the Third CHiME Challenge Data 310
12.5.2 Data Processing and Beamforming 312
12.5.3 Description of Network Structure, Features, and Targets 312
12.5.4 Mask Prediction Results and Discussion 314
12.5.5 Speech Enhancement Results 316
12.6 Conclusions 318
12.7 Source Code 318
12.7.1 Matlab Code for Neural-Network-Based Sawtooth Denoising Example 318
12.7.2 Matlab Code for Phase Feature Extraction 321
References 324

Part IV Applications 327

13 Upmixing and Beamforming in Professional Audio 329
Christof Faller
13.1 Introduction 329
13.2 Stereo-to-Multichannel Upmix Processor 329
13.2.1 Product Description 329
13.2.2 Considerations for Professional Audio and Broadcast 331
13.2.3 Signal Processing 332
13.3 Digitally Enhanced Shotgun Microphone 336
13.3.1 Product Description 336
13.3.2 Concept 336
13.3.3 Signal Processing 336
13.3.4 Evaluations and Measurements 339
13.4 Surround Microphone System Based on Two Microphone Elements 341
13.4.1 Product Description 341
13.4.2 Concept 344
13.5 Summary 345
References 345

14 Spatial Sound Scene Synthesis and Manipulation for Virtual Reality and Audio Effects 347
Ville Pulkki, Archontis Politis, Tapani Pihlajamäki, and Mikko-Ville Laitinen
14.1 Introduction 347
14.2 Parametric Sound Scene Synthesis for Virtual Reality 348
14.2.1 Overall Structure 348
14.2.2 Synthesis of Virtual Sources 350
14.2.3 Synthesis of Room Reverberation 352
14.2.4 Augmentation of Virtual Reality with Real Spatial Recordings 352
14.2.5 Higher-Order Processing 353
14.2.6 Loudspeaker-Signal Bus 354
14.3 Spatial Manipulation of Sound Scenes 355
14.3.1 Parametric Directional Transformations 356
14.3.2 Sweet-Spot Translation and Zooming 356
14.3.3 Spatial Filtering 356
14.3.4 Spatial Modulation 357
14.3.5 Diffuse Field Level Control 358
14.3.6 Ambience Extraction 359
14.3.7 Spatialization of Monophonic Signals 360
14.4 Summary 360
References 361

15 Parametric Spatial Audio Techniques in Teleconferencing and Remote Presence 363
Anastasios Alexandridis, Despoina Pavlidi, Nikolaos Stefanakis, and Athanasios Mouchtaris
15.1 Introduction and Motivation 363
15.2 Background 365
15.3 Immersive Audio Communication System (ImmACS) 366
15.3.1 Encoder 366
15.3.2 Decoder 373
15.4 Capture and Reproduction of Crowded Acoustic Environments 376
15.4.1 Sound Source Positioning Based on VBAP 376
15.4.2 Non-Parametric Approach 377
15.4.3 Parametric Approach 379
15.4.4 Example Application 382
15.5 Conclusions 384
References 384

Index 387
VILLE PULKKI, PhD, is an Associate Professor leading the Communication Acoustics Research Group in the Department of Signal Processing and Acoustics, Aalto University, Finland. He has received distinguished medal awards from the Society of Motion Picture and Television Engineers and from the Audio Engineering Society.

SYMEON DELIKARIS-MANIAS is a postdoctoral researcher affiliated with the Communication Acoustics Research Group in the Department of Signal Processing and Acoustics at Aalto University, Finland.

ARCHONTIS POLITIS, PhD, is a postdoctoral researcher affiliated with the Communication Acoustics Research Group in the Department of Signal Processing and Acoustics at Aalto University and at Tampere University of Technology in Finland.
