Details

Bayesian Networks

An Introduction
Wiley Series in Probability and Statistics, 1st edition

by: Timo Koski, John Noble

€80.99

Publisher: Wiley
Format: PDF
Published: 24.09.2009
ISBN/EAN: 9780470684030
Language: English
Number of pages: 368

DRM-protected eBook; you need, for example, Adobe Digital Editions and an Adobe ID to read it.

Description

Bayesian Networks: An Introduction provides a self-contained introduction to the theory and applications of Bayesian networks, a topic of interest and importance for statisticians, computer scientists and those involved in modelling complex data sets. The material has been extensively tested in classroom teaching and assumes a basic knowledge of probability, statistics and mathematics. All notions are carefully explained, and exercises are provided throughout.

Features include:

- An introduction to the Dirichlet distribution, exponential families and their applications.
- A detailed description of learning algorithms and conditional Gaussian distributions using junction tree methods.
- A discussion of Pearl's intervention calculus, with an introduction to the notions of 'see' and 'do' conditioning.
- Clear definitions of all concepts, illustrated with examples and exercises. Solutions are provided online.

This book will prove a valuable resource for postgraduate students of statistics, computer engineering, mathematics, data mining, artificial intelligence, and biology.

Researchers and users of comparable modelling or statistical techniques such as neural networks will also find this book of interest.
Table of Contents

Preface.

1 Graphical models and probabilistic reasoning
1.1 Introduction.
1.2 Axioms of probability and basic notations.
1.3 The Bayes update of probability.
1.4 Inductive learning.
1.5 Interpretations of probability and Bayesian networks.
1.6 Learning as inference about parameters.
1.7 Bayesian statistical inference.
1.8 Tossing a thumb-tack.
1.9 Multinomial sampling and the Dirichlet integral.
Notes.
Exercises: Probabilistic theories of causality, Bayes' rule, multinomial sampling and the Dirichlet density.

2 Conditional independence, graphs and d-separation
2.1 Joint probabilities.
2.2 Conditional independence.
2.3 Directed acyclic graphs and d-separation.
2.4 The Bayes ball.
2.5 Potentials.
2.6 Bayesian networks.
2.7 Object oriented Bayesian networks.
2.8 d-Separation and conditional independence.
2.9 Markov models and Bayesian networks.
2.10 I-maps and Markov equivalence.
Notes.
Exercises: Conditional independence and d-separation.

3 Evidence, sufficiency and Monte Carlo methods
3.1 Hard evidence.
3.2 Soft evidence and virtual evidence.
3.3 Queries in probabilistic inference.
3.4 Bucket elimination.
3.5 Bayesian sufficient statistics and prediction sufficiency.
3.6 Time variables.
3.7 A brief introduction to Markov chain Monte Carlo methods.
3.8 The one-dimensional discrete Metropolis algorithm.
Notes.
Exercises: Evidence, sufficiency and Monte Carlo methods.

4 Decomposable graphs and chain graphs
4.1 Definitions and notations.
4.2 Decomposable graphs and triangulation of graphs.
4.3 Junction trees.
4.4 Markov equivalence.
4.5 Markov equivalence, the essential graph and chain graphs.
Notes.
Exercises: Decomposable graphs and chain graphs.

5 Learning the conditional probability potentials
5.1 Initial illustration: maximum likelihood estimate for a fork connection.
5.2 The maximum likelihood estimator for multinomial sampling.
5.3 MLE for the parameters in a DAG: the general setting.
5.4 Updating, missing data, fractional updating.
Notes.
Exercises: Learning the conditional probability potentials.

6 Learning the graph structure
6.1 Assigning a probability distribution to the graph structure.
6.2 Markov equivalence and consistency.
6.3 Reducing the size of the search.
6.4 Monte Carlo methods for locating the graph structure.
6.5 Women in mathematics.
Notes.
Exercises: Learning the graph structure.

7 Parameters and sensitivity
7.1 Changing parameters in a network.
7.2 Measures of divergence between probability distributions.
7.3 The Chan-Darwiche distance measure.
7.4 Parameter changes to satisfy query constraints.
7.5 The sensitivity of queries to parameter changes.
Notes.
Exercises: Parameters and sensitivity.

8 Graphical models and exponential families
8.1 Introduction to exponential families.
8.2 Standard examples of exponential families.
8.3 Graphical models and exponential families.
8.4 Noisy 'or' as an exponential family.
8.5 Properties of the log partition function.
8.6 Fenchel-Legendre conjugate.
8.7 Kullback-Leibler divergence.
8.8 Mean field theory.
8.9 Conditional Gaussian distributions.
Notes.
Exercises: Graphical models and exponential families.

9 Causality and intervention calculus
9.1 Introduction.
9.2 Conditioning by observation and by intervention.
9.3 The intervention calculus for a Bayesian network.
9.4 Properties of intervention calculus.
9.5 Transformations of probability.
9.6 A note on the order of 'see' and 'do' conditioning.
9.7 The 'Sure Thing' principle.
9.8 Back door criterion, confounding and identifiability.
Notes.
Exercises: Causality and intervention calculus.

10 The junction tree and probability updating
10.1 Probability updating using a junction tree.
10.2 Potentials and the distributive law.
10.3 Elimination and domain graphs.
10.4 Factorization along an undirected graph.
10.5 Factorizing along a junction tree.
10.6 Local computation on junction trees.
10.7 Schedules.
10.8 Local and global consistency.
10.9 Message passing for conditional Gaussian distributions.
10.10 Using a junction tree with virtual evidence and soft evidence.
Notes.
Exercises: The junction tree and probability updating.

11 Factor graphs and the sum product algorithm
11.1 Factorization and local potentials.
11.2 The sum product algorithm.
11.3 Detailed illustration of the algorithm.
Notes.
Exercise: Factor graphs and the sum product algorithm.

References.
Index.
"It assumes only a basic knowledge of probability, statistics and mathematics and is well suited for classroom teaching . . . Each chapter of the book is concluded with short notes on the literature and a set of helpful exercises." (Mathematical Reviews, 2011)<br /> <br /> <p>"Extensively tested in classroom teaching … .The authors clearly define all concepts and provide numerous examples and exercises." (<i>Book News</i>, December 2009)</p>
About the Authors

Timo Koski, Professor of Mathematical Statistics, Department of Mathematics, Royal Institute of Technology, Stockholm, Sweden.

John M. Noble, Department of Mathematics, University of Linköping, Sweden.

You might also be interested in these products:

Statistics for Microarrays
by: Ernst Wit, John McClure
PDF eBook
€90.99