
Machine Learning for Risk Calculations



A Practitioner's View
The Wiley Finance Series, 1st edition

By: Ignacio Ruiz, Mariano Zeron

61,99 €

Publisher: Wiley
Format: EPUB
Published: 20.12.2021
ISBN/EAN: 9781119791409
Language: English
Pages: 464

DRM-protected eBook; to read it you need e.g. Adobe Digital Editions and an Adobe ID.

Description

<p><b>State-of-the-art algorithmic deep learning and tensoring techniques for financial institutions</b></p> <p>The computational demand of risk calculations in financial institutions has ballooned and shows no sign of stopping. It is no longer viable to simply add more computing power to deal with this increased demand. The solution? Algorithms based on deep learning and Chebyshev tensors offer a practical way to reduce costs while simultaneously increasing risk calculation capabilities. <i>Machine Learning for Risk Calculations: A Practitioner’s View</i> provides an in-depth review of a number of these algorithmic solutions and demonstrates how they can be used to overcome the massive computational burden of risk calculations in financial institutions.</p> <p>This book will get you started by reviewing fundamental techniques, including deep learning and Chebyshev tensors. You’ll then discover algorithmic tools that, in combination with the fundamentals, deliver actual solutions to the real problems financial institutions encounter on a regular basis. Numerical tests and examples demonstrate how these solutions can be applied to practical problems, including XVA and Counterparty Credit Risk, IMM capital, PFE, VaR, FRTB, Dynamic Initial Margin, pricing function calibration, volatility surface parametrisation, portfolio optimisation and others. Finally, you’ll uncover the benefits these techniques provide, the practicalities of implementing them, and the software that can be used.</p> <ul> <li>Review the fundamentals of deep learning and Chebyshev tensors</li> <li>Discover pioneering algorithmic techniques that can create new opportunities in complex risk calculation</li> <li>Learn how to apply the solutions to a wide range of real-life risk calculations</li> <li>Download sample code used in the book, so you can follow along and experiment with your own calculations</li> <li>Realize improved risk management whilst overcoming the burden of limited computational power</li> </ul> <p>Quants, IT professionals, and financial risk managers will benefit from this practitioner-oriented approach to state-of-the-art risk calculation.</p>
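To give a flavour of the Chebyshev techniques the book covers: a smooth, expensive pricing function can be replaced by a cheap polynomial proxy built from its values at Chebyshev points and evaluated with the barycentric interpolation formula (topics treated in Chapter 3). The sketch below is purely illustrative and is not the book's sample code or the MoCaX library; the toy function `f` stands in for a real pricer, and all names are this example's own.

```python
import numpy as np

def chebyshev_points(n, a=-1.0, b=1.0):
    """Chebyshev points of the second kind, mapped from [-1, 1] to [a, b]."""
    k = np.arange(n + 1)
    x = np.cos(np.pi * k / n)               # n + 1 points on [-1, 1]
    return 0.5 * (a + b) + 0.5 * (b - a) * x

def barycentric_eval(x_nodes, f_vals, x):
    """Evaluate the Chebyshev interpolant at x via the barycentric formula."""
    n = len(x_nodes) - 1
    w = np.ones(n + 1)
    w[1::2] = -1.0                          # weights (-1)^k for Chebyshev points...
    w[0] *= 0.5
    w[-1] *= 0.5                            # ...halved at the two endpoints
    diff = x - x_nodes
    if np.any(diff == 0.0):                 # x coincides with a node: return stored value
        return f_vals[int(np.argmin(np.abs(diff)))]
    tmp = w / diff
    return np.dot(tmp, f_vals) / np.sum(tmp)

# Toy stand-in for a pricing function: smooth, so convergence is exponential.
f = lambda s: np.exp(-0.5 * s) * np.sin(3.0 * s)

nodes = chebyshev_points(20, 0.0, 2.0)      # 21 calls to the "expensive" pricer
vals = f(nodes)

x_test = 1.234
approx = barycentric_eval(nodes, vals, x_test)
print(abs(approx - f(x_test)))              # tiny error despite only 21 samples
```

Once the proxy is built, every further evaluation costs a handful of floating-point operations rather than a full pricer call, which is the source of the computational savings the book quantifies for XVA, VaR and the other applications.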
<p>Acknowledgements xvii</p> <p>Foreword xxi</p> <p>Motivation and aim of this book xxiii</p> <p><b>Part One Fundamental Approximation Methods</b></p> <p><b>Chapter 1 Machine Learning 3</b></p> <p>1.1 Introduction to Machine Learning 3</p> <p>1.1.1 A brief history of Machine Learning Methods 4</p> <p>1.1.2 Main sub-categories in Machine Learning 5</p> <p>1.1.3 Applications of interest 7</p> <p>1.2 The Linear Model 7</p> <p>1.2.1 General concepts 8</p> <p>1.2.2 The standard linear model 12</p> <p>1.3 Training and predicting 15</p> <p>1.3.1 The frequentist approach 18</p> <p>1.3.2 The Bayesian approach 21</p> <p>1.3.3 Testing—in search of consistent accurate predictions 25</p> <p>1.3.4 Underfitting and overfitting 25</p> <p>1.3.5 K-fold cross-validation 27</p> <p>1.4 Model complexity 28</p> <p>1.4.1 Regularisation 29</p> <p>1.4.2 Cross-validation for regularisation 31</p> <p>1.4.3 Hyper-parameter optimisation 33</p> <p><b>Chapter 2 Deep Neural Nets 39</b></p> <p>2.1 A brief history of Deep Neural Nets 39</p> <p>2.2 The basic Deep Neural Net model 41</p> <p>2.2.1 Single neuron 41</p> <p>2.2.2 Artificial Neural Net 43</p> <p>2.2.3 Deep Neural Net 46</p> <p>2.3 Universal Approximation Theorems 48</p> <p>2.4 Training of Deep Neural Nets 49</p> <p>2.4.1 Backpropagation 50</p> <p>2.4.2 Backpropagation example 51</p> <p>2.4.3 Optimisation of cost function 55</p> <p>2.4.4 Stochastic gradient descent 57</p> <p>2.4.5 Extensions of stochastic gradient descent 58</p> <p>2.5 More sophisticated DNNs 59</p> <p>2.5.1 Convolution Neural Nets 59</p> <p>2.5.2 Other famous architectures 63</p> <p>2.6 Summary of chapter 64</p> <p><b>Chapter 3 Chebyshev Tensors 65</b></p> <p>3.1 Approximating functions with polynomials 65</p> <p>3.2 Chebyshev Series 66</p> <p>3.2.1 Lipschitz continuity and Chebyshev projections 67</p> <p>3.2.2 Smooth functions and Chebyshev projections 70</p> <p>3.2.3 Analytic functions and Chebyshev projections 70</p> <p>3.3 Chebyshev Tensors and interpolants 72</p> 
<p>3.3.1 Tensors and polynomial interpolants 72</p> <p>3.3.2 Misconception over polynomial interpolation 73</p> <p>3.3.3 Chebyshev points 74</p> <p>3.3.4 Chebyshev interpolants 76</p> <p>3.3.5 Aliasing phenomenon 77</p> <p>3.3.6 Convergence rates of Chebyshev interpolants 77</p> <p>3.3.7 High-dimensional Chebyshev interpolants 79</p> <p>3.4 Ex ante error estimation 82</p> <p>3.5 What makes Chebyshev points unique 85</p> <p>3.6 Evaluation of Chebyshev interpolants 89</p> <p>3.6.1 Clenshaw algorithm 90</p> <p>3.6.2 Barycentric interpolation formula 91</p> <p>3.6.3 Evaluating high-dimensional tensors 93</p> <p>3.6.4 Example of numerical stability 94</p> <p>3.7 Derivative approximation 95</p> <p>3.7.1 Convergence of Chebyshev derivatives 95</p> <p>3.7.2 Computation of Chebyshev derivatives 96</p> <p>3.7.3 Derivatives in high dimensions 97</p> <p>3.8 Chebyshev Splines 99</p> <p>3.8.1 Gibbs phenomenon 99</p> <p>3.8.2 Splines 100</p> <p>3.8.3 Splines of Chebyshev 101</p> <p>3.8.4 Chebyshev Splines in high dimensions 101</p> <p>3.9 Algebraic operations with Chebyshev Tensors 101</p> <p>3.10 Chebyshev Tensors and Machine Learning 103</p> <p>3.11 Summary of chapter 104</p> <p><b>Part Two The toolkit — plugging in approximation methods</b></p> <p><b>Chapter 4 Introduction: why is a toolkit needed 107</b></p> <p>4.1 The pricing problem 107</p> <p>4.2 Risk calculation with proxy pricing 109</p> <p>4.3 The curse of dimensionality 110</p> <p>4.4 The techniques in the toolkit 112</p> <p><b>Chapter 5 Composition techniques 113</b></p> <p>5.1 Leveraging from existing parametrisations 114</p> <p>5.1.1 Risk factor generating models 114</p> <p>5.1.2 Pricing functions and model risk factors 115</p> <p>5.1.3 The tool obtained 116</p> <p>5.2 Creating a parametrisation 117</p> <p>5.2.1 Principal Component Analysis 117</p> <p>5.2.2 Autoencoders 119</p> <p>5.3 Summary of chapter 120</p> <p><b>Chapter 6 Tensors in TT format and Tensor Extension Algorithms 123</b></p> <p>6.1 Tensors in TT 
format 123</p> <p>6.1.1 Motivating example 124</p> <p>6.1.2 General case 124</p> <p>6.1.3 Basic operations 126</p> <p>6.1.4 Evaluation of Chebyshev Tensors in TT format 127</p> <p>6.2 Tensor Extension Algorithms 129</p> <p>6.3 Step 1—Optimising over tensors of fixed rank 129</p> <p>6.3.1 The Fundamental Completion Algorithm 131</p> <p>6.4 Step 2—Optimising over tensors of varying rank 133</p> <p>6.4.1 The Rank Adaptive Algorithm 134</p> <p>6.5 Step 3—Adapting the sampling set 135</p> <p>6.5.1 The Sample Adaptive Algorithm 136</p> <p>6.6 Summary of chapter 137</p> <p><b>Chapter 7 Sliding Technique 139</b></p> <p>7.1 Slide 139</p> <p>7.2 Slider 140</p> <p>7.3 Evaluating a slider 141</p> <p>7.3.1 Relation to Taylor approximation 142</p> <p>7.4 Summary of chapter 142</p> <p><b>Chapter 8 The Jacobian projection technique 143</b></p> <p>8.1 Setting the background 144</p> <p>8.2 What we can recover 145</p> <p>8.2.1 Intuition behind g and its derivative dg 146</p> <p>8.2.2 Using the derivative of f 147</p> <p>8.2.3 When <i>k</i> < n becomes a problem 149</p> <p>8.3 Partial derivatives via projections onto the Jacobian 149</p> <p><b>Part Three Hybrid solutions — approximation methods and the toolkit</b></p> <p><b>Chapter 9 Introduction 155</b></p> <p>9.1 The dimensionality problem revisited 155</p> <p>9.2 Exploiting the Composition Technique 156</p> <p><b>Chapter 10 The Toolkit and Deep Neural Nets 159</b></p> <p>10.1 Building on P using the image of<i> g</i> 159</p> <p>10.2 Building on<i> f</i> 160</p> <p><b>Chapter 11 The Toolkit and Chebyshev Tensors 161</b></p> <p>11.1 Full Chebyshev Tensor 161</p> <p>11.2 TT-format Chebyshev Tensor 162</p> <p>11.3 Chebyshev Slider 162</p> <p>11.4 A final note 163</p> <p><b>Chapter 12 Hybrid Deep Neural Nets and Chebyshev Tensors Frameworks 165</b></p> <p>12.1 The fundamental idea 165</p> <p>12.1.1 Factorable Functions 167</p> <p>12.2 DNN+CT with Static Training Set 168</p> <p>12.3 DNN+CT with Dynamic Training Set 171</p> <p>12.4 
Numerical Tests 172</p> <p>12.4.1 Cost Function Minimisation 172</p> <p>12.4.2 Maximum Error 174</p> <p>12.5 Enhanced DNN+CT architectures and further research 174</p> <p><b>Part Four Applications</b></p> <p><b>Chapter 13 The aim 179</b></p> <p>13.1 Suitability of the approximation methods 179</p> <p>13.2 Understanding the variables at play 181</p> <p><b>Chapter 14 When to use Chebyshev Tensors and when to use Deep Neural Nets 185</b></p> <p>14.1 Speed and convergence 185</p> <p>14.1.1 Speed of evaluation 186</p> <p>14.1.2 Convergence 186</p> <p>14.1.3 Convergence Rate in Real-Life Contexts 187</p> <p>14.2 The question of dimension 190</p> <p>14.2.1 Taking into account the application 192</p> <p>14.3 Partial derivatives and ex ante error estimation 195</p> <p>14.4 Summary of chapter 197</p> <p><b>Chapter 15 Counterparty credit risk 199</b></p> <p>15.1 Monte Carlo simulations for CCR 200</p> <p>15.1.1 Scenario diffusion 200</p> <p>15.1.2 Pricing step—computational bottleneck 200</p> <p>15.2 Solution 201</p> <p>15.2.1 Popular solutions 201</p> <p>15.2.2 The hybrid solution 202</p> <p>15.2.3 Variables at play 203</p> <p>15.2.4 Optimal setup 207</p> <p>15.2.5 Possible proxies 207</p> <p>15.2.6 Portfolio calculations 209</p> <p>15.2.7 If the model space is not available 209</p> <p>15.3 Tests 211</p> <p>15.3.1 Trade types, risk factors and proxies 212</p> <p>15.3.2 Proxy at each time point 213</p> <p>15.3.3 Proxy for all time points 223</p> <p>15.3.4 Adding non-risk-driving variables 228</p> <p>15.3.5 High-dimensional problems 235</p> <p>15.4 Results Analysis and Conclusions 236</p> <p>15.5 Summary of chapter 239</p> <p><b>Chapter 16 Market Risk 241</b></p> <p>16.1 VaR-like calculations 242</p> <p>16.1.1 Common techniques in the computation of VaR 243</p> <p>16.2 Enhanced Revaluation Grids 245</p> <p>16.3 Fundamental Review of the Trading Book 246</p> <p>16.3.1 Challenges 247</p> <p>16.3.2 Solution 248</p> <p>16.3.3 The intuition behind Chebyshev Sliders 252</p> <p>16.4 
Proof of concept 255</p> <p>16.4.1 Proof of concept specifics 255</p> <p>16.4.2 Test specifics 257</p> <p>16.4.3 Results for swap 260</p> <p>16.4.4 Results for swaptions 10-day liquidity horizon 262</p> <p>16.4.5 Results for swaptions 60-day liquidity horizon 265</p> <p>16.4.6 Daily computation and reusability 268</p> <p>16.4.7 Beyond regulatory minimum calculations 271</p> <p>16.5 Stability of technique 272</p> <p>16.6 Results beyond vanilla portfolios—further research 272</p> <p>16.7 Summary of chapter 273</p> <p><b>Chapter 17 Dynamic sensitivities 275</b></p> <p>17.1 Simulating sensitivities 276</p> <p>17.1.1 Scenario diffusion 276</p> <p>17.1.2 Computing sensitivities 276</p> <p>17.1.3 Computational cost 276</p> <p>17.1.4 Methods available 277</p> <p>17.2 The Solution 278</p> <p>17.2.1 Hybrid method 279</p> <p>17.3 An important use of dynamic sensitivities 282</p> <p>17.4 Numerical tests 283</p> <p>17.4.1 FX Swap 283</p> <p>17.4.2 European Spread Option 284</p> <p>17.5 Discussion of results 291</p> <p>17.6 Alternative methods 293</p> <p>17.7 Summary of chapter 294</p> <p><b>Chapter 18 Pricing model calibration 295</b></p> <p>18.1 Introduction 295</p> <p>18.1.1 Examples of pricing models 297</p> <p>18.2 Solution 298</p> <p>18.2.1 Variables at play 299</p> <p>18.2.2 Possible proxies 299</p> <p>18.2.3 Domain of approximation 300</p> <p>18.3 Test description 301</p> <p>18.3.1 Test setup 301</p> <p>18.4 Results with Chebyshev Tensors 304</p> <p>18.4.1 Rough Bergomi model with constant forward variance 304</p> <p>18.4.2 Rough Bergomi model with piece-wise constant forward variance 307</p> <p>18.5 Results with Deep Neural Nets 309</p> <p>18.6 Comparison of results via CT and DNN 310</p> <p>18.7 Summary of chapter 311</p> <p><b>Chapter 19 Approximation of the implied volatility function 313</b></p> <p>19.1 The computation of implied volatility 314</p> <p>19.1.1 Available methods 315</p> <p>19.2 Solution 316</p> <p>19.2.1 Reducing the dimension of the problem 317</p> 
<p>19.2.2 Two-dimensional CTs 318</p> <p>19.2.3 Domain of approximation 321</p> <p>19.2.4 Splitting the domain 323</p> <p>19.2.5 Scaling the time-scaled implied volatility 325</p> <p>19.2.6 Implementation 328</p> <p>19.3 Results 330</p> <p>19.3.1 Parameters used for CTs 330</p> <p>19.3.2 Comparisons to other methods 331</p> <p>19.4 Summary of chapter 334</p> <p><b>Chapter 20 Optimisation Problems 335</b></p> <p>20.1 Balance sheet optimisation 335</p> <p>20.2 Minimisation of margin funding cost 339</p> <p>20.3 Generalisation—currently “impossible” calculations 345</p> <p>20.4 Summary of chapter 346</p> <p><b>Chapter 21 Pricing Cloning 347</b></p> <p>21.1 Pricing function cloning 347</p> <p>21.1.1 Other benefits 352</p> <p>21.1.2 Software vendors 352</p> <p>21.2 Summary of chapter 353</p> <p><b>Chapter 22 XVA sensitivities 355</b></p> <p>22.1 Finite differences and proxy pricers 355</p> <p>22.1.1 Multiple proxies 356</p> <p>22.1.2 Single proxy 357</p> <p>22.2 Proxy pricers and AAD 358</p> <p><b>Chapter 23 Sensitivities of exotic derivatives 359</b></p> <p>23.1 Benchmark sensitivities computation 360</p> <p>23.2 Sensitivities via Chebyshev Tensors 361</p> <p><b>Chapter 24 Software libraries relevant to the book 365</b></p> <p>24.1 Relevant software libraries 365</p> <p>24.2 The MoCaX Suite 366</p> <p>24.2.1 MoCaX Library 366</p> <p>24.2.2 MoCaXExtend Library 377</p> <p><b>Appendices</b></p> <p>Appendix A Families of Orthogonal Polynomials 385</p> <p>Appendix B Exponential Convergence of Chebyshev Tensors 387</p> <p>Appendix C Chebyshev Splines on Functions with No Singularity Points 391</p> <p><b>Appendix D Computational savings details for CCR 395</b></p> <p>D.1 Barrier option 395</p> <p>D.2 Cross-currency swap 395</p> <p>D.3 Bermudan Swaption 397</p> <p>D.3.1 Using full Chebyshev Tensors 397</p> <p>D.3.2 Using Chebyshev Tensors in TT format 397</p> <p>D.3.3 Using Deep Neural Nets 399</p> <p>D.4 American option 399</p> <p>D.4.1 Using Chebyshev Tensors in TT format 
400</p> <p>D.4.2 Using Deep Neural Nets 401</p> <p><b>Appendix E Computational savings details for dynamic sensitivities 403</b></p> <p>E.1 FX Swap 403</p> <p>E.2 European Spread Option 404</p> <p><b>Appendix F Dynamic sensitivities on the market space 407</b></p> <p>F.1 The parametrisation 408</p> <p>F.2 Numerical tests 410</p> <p>F.3 Future work . . . when <i>k</i> > 1 412</p> <p>Appendix G Dynamic sensitivities and IM via Jacobian Projection technique 415</p> <p>Appendix H MVA optimisation — further computational enhancement 419</p> <p>Bibliography 421</p> <p>Index 425 </p>
<p><b>IGNACIO RUIZ, PhD,</b> is the head of Counterparty Credit Risk Measurement and Analytics at Scotiabank. Prior to that, he was head quant for Counterparty Credit Risk Exposure Analytics at Credit Suisse and head of Equity Risk Analytics at BNP Paribas, and he founded MoCaX Intelligence, through which he offered his services as an independent consultant. He holds a PhD in Physics from the University of Cambridge.</p> <p><b>MARIANO ZERON, PhD,</b> is Head of Research and Development at MoCaX Intelligence. Prior to that, he was a quant researcher at Areski Capital. He has extensive experience with Chebyshev Tensors and Deep Neural Nets applied to risk calculations. He holds a PhD in Mathematics from the University of Cambridge.</p>
<p><b>MACHINE LEARNING FOR RISK CALCULATIONS</b></p> <p><b>Reduce computational overload and improve risk calculations at your financial institution with deep learning and tensoring techniques</b></p> <p>Few techniques offer as much potential for reducing the computational demand of risk calculations as algorithmic solutions based on Deep Learning and Chebyshev Tensors. These practical strategies significantly reduce costs while increasing risk calculation capabilities.</p> <p><i>Machine Learning for Risk Calculations: A Practitioner’s View</i> offers readers an in-depth review of many of these algorithmic solutions and demonstrates how to use them in real-world situations.</p> <p>You’ll find applicable and concrete solutions to practical problems, including XVA and counterparty credit risk, IMM capital, PFE, Market Risk VaR and FRTB, dynamic initial margin simulation, pricing function calibration, volatility surface parametrisation, portfolio optimisation, exotic pricer sensitivities and more.</p> <p>The book comments on existing software libraries for Deep Learning and Chebyshev Tensors. In particular, a companion website (mocaxintelligence.org) offers a software suite for Chebyshev Tensors that you can download; together with your favourite Deep Learning library, it can be used to follow along with the book and experiment with your own calculations as you review fundamental and advanced topics.</p> <p>Ideal for quants, IT professionals, and financial risk managers, <i>Machine Learning for Risk Calculations</i> is an indispensable, practitioner-oriented guide to state-of-the-art risk calculation.</p>
