Details

Linear Models and Time-Series Analysis

Regression, ANOVA, ARMA and GARCH
Wiley Series in Probability and Statistics, 1st edition

By: Marc S. Paolella

€101.99

Publisher: Wiley
Format: EPUB
Published: 10 October 2018
ISBN/EAN: 9781119431985
Language: English
Number of pages: 896

DRM-protected e-book. To read it you will need, for example, Adobe Digital Editions and an Adobe ID.

Description

A comprehensive and timely edition on an emerging new trend in time series

Linear Models and Time-Series Analysis: Regression, ANOVA, ARMA and GARCH sets a strong foundation, in terms of distribution theory, for the linear model (regression and ANOVA), univariate time-series analysis (ARMAX and GARCH), and some multivariate models associated primarily with modeling financial asset returns (copula-based structures and the discrete mixed normal and Laplace). It builds on the author's previous book, Fundamental Statistical Inference: A Computational Approach, which introduced the major concepts of statistical inference. Attention is explicitly paid to application and numeric computation, with examples of MATLAB code throughout. The code offers a framework for discussion and illustration of numerics, and shows the mapping from theory to computation.

The topic of time-series analysis is on firm footing, with numerous textbooks and research journals dedicated to it. Many chapters of Linear Models and Time-Series Analysis cover firmly entrenched topics (regression and ARMA). Several others are dedicated to very modern methods, as used in empirical finance, asset pricing, risk management, and portfolio optimization, in order to address the severe change in performance of many pension funds and the changes in how fund managers work.

- Covers traditional time-series analysis with new guidelines
- Provides access to cutting-edge topics at the forefront of financial econometrics and industry
- Includes the latest developments and topics, such as financial returns data, notably also in a multivariate context
- Written by a leading expert in time-series analysis
- Extensively classroom tested
- Includes a tutorial on SAS
- Supplemented with a companion website containing numerous MATLAB programs
- Solutions to most exercises are provided in the book

Linear Models and Time-Series Analysis: Regression, ANOVA, ARMA and GARCH is suitable for advanced master's students in statistics and quantitative finance, as well as doctoral students in economics and finance. It is also useful for quantitative financial practitioners in large financial institutions and smaller finance outlets.
Preface xiii

Part I Linear Models: Regression and ANOVA 1

1 The Linear Model 3
1.1 Regression, Correlation, and Causality 3
1.2 Ordinary and Generalized Least Squares 7
1.2.1 Ordinary Least Squares Estimation 7
1.2.2 Further Aspects of Regression and OLS 8
1.2.3 Generalized Least Squares 12
1.3 The Geometric Approach to Least Squares 17
1.3.1 Projection 17
1.3.2 Implementation 22
1.4 Linear Parameter Restrictions 26
1.4.1 Formulation and Estimation 27
1.4.2 Estimability and Identifiability 30
1.4.3 Moments and the Restricted GLS Estimator 32
1.4.4 Testing With h = 0 34
1.4.5 Testing With Nonzero h 37
1.4.6 Examples 37
1.4.7 Confidence Intervals 42
1.5 Alternative Residual Calculation 47
1.6 Further Topics 51
1.7 Problems 56
1.A Appendix: Derivation of the BLUS Residual Vector 60
1.B Appendix: The Recursive Residuals 64
1.C Appendix: Solutions 66

2 Fixed Effects ANOVA Models 77
2.1 Introduction: Fixed, Random, and Mixed Effects Models 77
2.2 Two Sample t-Tests for Differences in Means 78
2.3 The Two Sample t-Test with Ignored Block Effects 84
2.4 One-Way ANOVA with Fixed Effects 87
2.4.1 The Model 87
2.4.2 Estimation and Testing 88
2.4.3 Determination of Sample Size 91
2.4.4 The ANOVA Table 93
2.4.5 Computing Confidence Intervals 97
2.4.6 A Word on Model Assumptions 103
2.5 Two-Way Balanced Fixed Effects ANOVA 107
2.5.1 The Model and Use of the Interaction Terms 107
2.5.2 Sums of Squares Decomposition without Interaction 108
2.5.3 Sums of Squares Decomposition with Interaction 113
2.5.4 Example and Codes 117

3 Introduction to Random and Mixed Effects Models 127
3.1 One-Factor Balanced Random Effects Model 128
3.1.1 Model and Maximum Likelihood Estimation 128
3.1.2 Distribution Theory and ANOVA Table 131
3.1.3 Point Estimation, Interval Estimation, and Significance Testing 137
3.1.4 Satterthwaite's Method 139
3.1.5 Use of SAS 142
3.1.6 Approximate Inference in the Unbalanced Case 143
3.1.6.1 Point Estimation in the Unbalanced Case 144
3.1.6.2 Interval Estimation in the Unbalanced Case 150
3.2 Crossed Random Effects Models 152
3.2.1 Two Factors 154
3.2.1.1 With Interaction Term 154
3.2.1.2 Without Interaction Term 157
3.2.2 Three Factors 157
3.3 Nested Random Effects Models 162
3.3.1 Two Factors 162
3.3.1.1 Both Effects Random: Model and Parameter Estimation 162
3.3.1.2 Both Effects Random: Exact and Approximate Confidence Intervals 167
3.3.1.3 Mixed Model Case 170
3.3.2 Three Factors 174
3.3.2.1 All Effects Random 174
3.3.2.2 Mixed: Classes Fixed 176
3.3.2.3 Mixed: Classes and Subclasses Fixed 177
3.4 Problems 177
3.A Appendix: Solutions 178

Part II Time-Series Analysis: ARMAX Processes 185

4 The AR(1) Model 187
4.1 Moments and Stationarity 188
4.2 Order of Integration and Long-Run Variance 195
4.3 Least Squares and ML Estimation 196
4.3.1 OLS Estimator of a 196
4.3.2 Likelihood Derivation I 196
4.3.3 Likelihood Derivation II 198
4.3.4 Likelihood Derivation III 198
4.3.5 Asymptotic Distribution 199
4.4 Forecasting 200
4.5 Small Sample Distribution of the OLS and ML Point Estimators 204
4.6 Alternative Point Estimators of a 208
4.6.1 Use of the Jackknife for Bias Reduction 208
4.6.2 Use of the Bootstrap for Bias Reduction 209
4.6.3 Median-Unbiased Estimator 211
4.6.4 Mean-Bias Adjusted Estimator 211
4.6.5 Mode-Adjusted Estimator 212
4.6.6 Comparison 213
4.7 Confidence Intervals for a 215
4.8 Problems 219

5 Regression Extensions: AR(1) Errors and Time-varying Parameters 223
5.1 The AR(1) Regression Model and the Likelihood 223
5.2 OLS Point and Interval Estimation of a 225
5.3 Testing a = 0 in the ARX(1) Model 229
5.3.1 Use of Confidence Intervals 229
5.3.2 The Durbin–Watson Test 229
5.3.3 Other Tests for First-order Autocorrelation 231
5.3.4 Further Details on the Durbin–Watson Test 236
5.3.4.1 The Bounds Test, and Critique of Use of p-Values 236
5.3.4.2 Limiting Power as a → ±1 239
5.4 Bias-Adjusted Point Estimation 243
5.5 Unit Root Testing in the ARX(1) Model 246
5.5.1 Null is a = 1 248
5.5.2 Null is a < 1 256
5.6 Time-Varying Parameter Regression 259
5.6.1 Motivation and Introductory Remarks 260
5.6.2 The Hildreth–Houck Random Coefficient Model 261
5.6.3 The TVP Random Walk Model 269
5.6.3.1 Covariance Structure and Estimation 271
5.6.3.2 Testing for Parameter Constancy 274
5.6.4 Rosenberg Return to Normalcy Model 277

6 Autoregressive and Moving Average Processes 281
6.1 AR(p) Processes 281
6.1.1 Stationarity and Unit Root Processes 282
6.1.2 Moments 284
6.1.3 Estimation 287
6.1.3.1 Without Mean Term 287
6.1.3.2 Starting Values 290
6.1.3.3 With Mean Term 292
6.1.3.4 Approximate Standard Errors 293
6.2 Moving Average Processes 294
6.2.1 MA(1) Process 294
6.2.2 MA(q) Processes 299
6.3 Problems 301
6.A Appendix: Solutions 302

7 ARMA Processes 311
7.1 Basics of ARMA Models 311
7.1.1 The Model 311
7.1.2 Zero Pole Cancellation 312
7.1.3 Simulation 313
7.1.4 The ARIMA(p, d, q) Model 314
7.2 Infinite AR and MA Representations 315
7.3 Initial Parameter Estimation 317
7.3.1 Via the Infinite AR Representation 318
7.3.2 Via Infinite AR and Ordinary Least Squares 318
7.4 Likelihood-Based Estimation 322
7.4.1 Covariance Structure 322
7.4.2 Point Estimation 324
7.4.3 Interval Estimation 328
7.4.4 Model Mis-specification 330
7.5 Forecasting 331
7.5.1 AR(p) Model 331
7.5.2 MA(q) and ARMA(p, q) Models 335
7.5.3 ARIMA(p, d, q) Models 339
7.6 Bias-Adjusted Point Estimation: Extension to the ARMAX(1, q) Model 339
7.7 Some ARIMAX Model Extensions 343
7.7.1 Stochastic Unit Root 344
7.7.2 Threshold Autoregressive Models 346
7.7.3 Fractionally Integrated ARMA (ARFIMA) 347
7.8 Problems 349
7.A Appendix: Generalized Least Squares for ARMA Estimation 351
7.B Appendix: Multivariate AR(p) Processes and Stationarity, and General Block Toeplitz Matrix Inversion 357

8 Correlograms 359
8.1 Theoretical and Sample Autocorrelation Function 359
8.1.1 Definitions 359
8.1.2 Marginal Distributions 365
8.1.3 Joint Distribution 371
8.1.3.1 Support 371
8.1.3.2 Asymptotic Distribution 372
8.1.3.3 Small-Sample Joint Distribution Approximation 375
8.1.4 Conditional Distribution Approximation 381
8.2 Theoretical and Sample Partial Autocorrelation Function 384
8.2.1 Partial Correlation 384
8.2.2 Partial Autocorrelation Function 389
8.2.2.1 TPACF: First Definition 389
8.2.2.2 TPACF: Second Definition 390
8.2.2.3 Sample Partial Autocorrelation Function 392
8.3 Problems 396
8.A Appendix: Solutions 397

9 ARMA Model Identification 405
9.1 Introduction 405
9.2 Visual Correlogram Analysis 407
9.3 Significance Tests 412
9.4 Penalty Criteria 417
9.5 Use of the Conditional SACF for Sequential Testing 421
9.6 Use of the Singular Value Decomposition 436
9.7 Further Methods: Pattern Identification 439

Part III Modeling Financial Asset Returns 443

10 Univariate GARCH Modeling 445
10.1 Introduction 445
10.2 Gaussian GARCH and Estimation 450
10.2.1 Basic Properties 451
10.2.2 Integrated GARCH 452
10.2.3 Maximum Likelihood Estimation 453
10.2.4 Variance Targeting Estimator 459
10.3 Non-Gaussian ARMA-APARCH, QMLE, and Forecasting 459
10.3.1 Extending the Volatility, Distribution, and Mean Equations 459
10.3.2 Model Mis-specification and QMLE 464
10.3.3 Forecasting 467
10.4 Near-Instantaneous Estimation of NCT-APARCH(1,1) 468
10.5 S_{α,β}-APARCH and Testing the IID Stable Hypothesis 473
10.6 Mixed Normal GARCH 477
10.6.1 Introduction 477
10.6.2 The MixN(k)-GARCH(r, s) Model 478
10.6.3 Parameter Estimation and Model Features 479
10.6.4 Time-Varying Weights 482
10.6.5 Markov Switching Extension 484
10.6.6 Multivariate Extensions 484

11 Risk Prediction and Portfolio Optimization 487
11.1 Value at Risk and Expected Shortfall Prediction 487
11.2 MGARCH Constructs Via Univariate GARCH 493
11.2.1 Introduction 493
11.2.2 The Gaussian CCC and DCC Models 494
11.2.3 Morana Semi-Parametric DCC Model 497
11.2.4 The COMFORT Class 499
11.2.5 Copula Constructions 503
11.3 Introducing Portfolio Optimization 504
11.3.1 Some Trivial Accounting 504
11.3.2 Markowitz and DCC 510
11.3.3 Portfolio Optimization Using Simulation 513
11.3.4 The Univariate Collapsing Method 516
11.3.5 The ES Span 521

12 Multivariate t Distributions 525
12.1 Multivariate Student's t 525
12.2 Multivariate Noncentral Student's t 530
12.3 Jones Multivariate t Distribution 534
12.4 Shaw and Lee Multivariate t Distributions 538
12.5 The Meta-Elliptical t Distribution 540
12.5.1 The FaK Distribution 541
12.5.2 The AFaK Distribution 542
12.5.3 FaK and AFaK Estimation: Direct Likelihood Optimization 546
12.5.4 FaK and AFaK Estimation: Two-Step Estimation 548
12.5.5 Sums of Margins of the AFaK 555
12.6 MEST: Marginally Endowed Student's t 556
12.6.1 SMESTI Distribution 557
12.6.2 AMESTI Distribution 558
12.6.3 MESTI Estimation 561
12.6.4 AoN_m-MEST 564
12.6.5 MEST Distribution 573
12.7 Some Closing Remarks 574
12.A ES of Convolution of AFaK Margins 575
12.B Covariance Matrix for the FaK 581

13 Weighted Likelihood 587
13.1 Concept 587
13.2 Determination of Optimal Weighting 592
13.3 Density Forecasting and Backtest Overfitting 594
13.4 Portfolio Optimization Using (A)FaK 600

14 Multivariate Mixture Distributions 611
14.1 The Mix_k N_d Distribution 611
14.1.1 Density and Simulation 612
14.1.2 Motivation for Use of Mixtures 612
14.1.3 Quasi-Bayesian Estimation and Choice of Prior 614
14.1.4 Portfolio Distribution and Expected Shortfall 620
14.2 Model Diagnostics and Forecasting 623
14.2.1 Assessing Presence of a Mixture 623
14.2.2 Component Separation and Univariate Normality 625
14.2.3 Component Separation and Multivariate Normality 629
14.2.4 Mixed Normal Weighted Likelihood and Density Forecasting 631
14.2.5 Density Forecasting: Optimal Shrinkage 633
14.2.6 Moving Averages of λ 640
14.3 MCD for Robustness and Mix_2 N_d Estimation 645
14.4 Some Thoughts on Model Assumptions and Estimation 647
14.5 The Multivariate Laplace and Mix_k Lap_d Distributions 649
14.5.1 The Multivariate Laplace and EM Algorithm 650
14.5.2 The Mix_k Lap_d and EM Algorithm 654
14.5.3 Estimation via MCD Split and Forecasting 658
14.5.4 Estimation of Parameter b 660
14.5.5 Portfolio Distribution and Expected Shortfall 662
14.5.6 Fast Evaluation of the Bessel Function 663

Part IV Appendices 667

Appendix A Distribution of Quadratic Forms 669
A.1 Distribution and Moments 669
A.1.1 Probability Density and Cumulative Distribution Functions 669
A.1.2 Positive Integer Moments 671
A.1.3 Moment Generating Functions 673
A.2 Basic Distributional Results 677
A.3 Ratios of Quadratic Forms in Normal Variables 679
A.3.1 Calculation of the CDF 680
A.3.2 Calculation of the PDF 681
A.3.2.1 Numeric Differentiation 682
A.3.2.2 Use of Geary's Formula 682
A.3.2.3 Use of Pan's Formula 683
A.3.2.4 Saddlepoint Approximation 685
A.4 Problems 689
A.A Appendix: Solutions 690

Appendix B Moments of Ratios of Quadratic Forms 695
B.1 For X ∼ N_n(0, σ²I) and B = I 695
B.2 For X ∼ N(0, Σ) 708
B.3 For X ∼ N(μ, I) 713
B.4 For X ∼ N(μ, Σ) 720
B.5 Useful Matrix Algebra Results 725
B.6 Saddlepoint Equivalence Result 729

Appendix C Some Useful Multivariate Distribution Theory 733
C.1 Student's t Characteristic Function 733
C.2 Sphericity and Ellipticity 739
C.2.1 Introduction 739
C.2.2 Sphericity 740
C.2.3 Ellipticity 748
C.2.4 Testing Ellipticity 768

Appendix D Introducing the SAS Programming Language 773
D.1 Introduction to SAS 774
D.1.1 Background 774
D.1.2 Working with SAS on a PC 775
D.1.3 Introduction to the Data Step and the Program Data Vector 777
D.2 Basic Data Handling 783
D.2.1 Method 1 784
D.2.2 Method 2 785
D.2.3 Method 3 786
D.2.4 Creating Data Sets from Existing Data Sets 787
D.2.5 Creating Data Sets from Procedure Output 788
D.3 Advanced Data Handling 790
D.3.1 String Input and Missing Values 790
D.3.2 Using set with first.var and last.var 791
D.3.3 Reading in Text Files 795
D.3.4 Skipping over Headers 796
D.3.5 Variable and Value Labels 796
D.4 Generating Charts, Tables, and Graphs 797
D.4.1 Simple Charting and Tables 798
D.4.2 Date and Time Formats/Informats 801
D.4.3 High Resolution Graphics 803
D.4.3.1 The GPLOT Procedure 803
D.4.3.2 The GCHART Procedure 805
D.4.4 Linear Regression and Time-Series Analysis 806
D.5 The SAS Macro Processor 809
D.5.1 Introduction 809
D.5.2 Macro Variables 810
D.5.3 Macro Programs 812
D.5.4 A Useful Example 814
D.5.4.1 Method 1 814
D.5.4.2 Method 2 816
D.6 Problems 817
D.7 Appendix: Solutions 819

Bibliography 825
Index 875
Marc S. Paolella is Professor of Empirical Finance at the University of Zurich, Switzerland. He is also the Editor of Econometrics and an Associate Editor of the Royal Statistical Society Journal Series. With almost 20 years of teaching experience, he is a frequent contributor to journals and a member of many editorial boards and societies.
Linear Models and Time-Series Analysis
Regression, ANOVA, ARMA and GARCH

A comprehensive and timely edition on an emerging new trend in time-series analysis

Linear Models and Time-Series Analysis: Regression, ANOVA, ARMA and GARCH sets a strong foundation, in terms of distribution theory, for the linear model (regression and ANOVA), univariate time-series analysis (ARMAX and GARCH), and some multivariate models associated primarily with modeling financial asset returns (copula-based structures and the discrete mixed normal and Laplace). It builds on the author's previous book, Fundamental Statistical Inference: A Computational Approach, which introduced the major concepts of statistical inference. Attention is explicitly paid to application and numeric computation, with examples of MATLAB code throughout.

The chapters on regression, ANOVA, and ARMA cover much standard ground, with the emphasis on clear, detailed derivations of all major results, as well as development of MATLAB code and use of simulations to illustrate the theory. Important topics such as unit root tests and time-varying parameter regression models (including the associated small-sample distribution theory), and order selection in ARMAX models, are also discussed in detail.

Several chapters are dedicated to very modern methods in empirical finance, risk management, and portfolio optimization. In particular, various methods for non-Gaussian, non-elliptic, large-scale portfolio optimization are developed, along with MATLAB code and the incorporation of transaction costs. The use of such models is shown, with real data, to substantially outperform common models such as Markowitz, Gaussian CCC, and DCC.

Further highlights include:

- An extensive appendix that discusses and derives all major results associated with ellipticity
- Two appendix chapters that detail the theory and computation of the distribution and moments of Gaussian quadratic forms and their ratios, including the use of exact methods and saddlepoint approximations
- A tutorial on the use of SAS for data manipulation
- Solutions to most exercises, provided within the book

This book is suitable for advanced master's students in statistics and quantitative finance, as well as doctoral students in economics and finance. It is also useful for quantitative financial practitioners in large financial institutions and smaller finance outlets.
