Details

Deep Learning for Physical Scientists

Accelerating Research with Machine Learning
1st edition

by: Edward O. Pyzer-Knapp, Matthew Benatan

63,99 €

Publisher: Wiley
Format: PDF
Published: 20.09.2021
ISBN/EAN: 9781119408321
Language: English
Number of pages: 208

DRM-protected eBook; to read it you will need, for example, Adobe Digital Editions and an Adobe ID.

Description

Discover the power of machine learning in the physical sciences with this one-stop resource from a leading voice in the field

Deep Learning for Physical Scientists: Accelerating Research with Machine Learning delivers an insightful analysis of the transformative techniques being used in deep learning within the physical sciences. The book gives readers the ability to understand, select, and apply the best deep learning techniques for their individual research problems and to interpret the outcomes.

Designed to teach researchers to think in useful new ways about how to achieve results in their research, the book provides scientists with new avenues for attacking problems and avoiding common pitfalls. Practical case studies and problems are presented, giving readers an opportunity to put what they have learned into practice, with exemplar coding approaches provided to assist the reader.

From modelling basics to feed-forward networks, the book offers a broad cross-section of machine learning techniques to improve physical science research. Readers will also enjoy:

- A thorough introduction to basic classification and regression with perceptrons
- An exploration of training algorithms, including back propagation, stochastic gradient descent, and the parallelization of training
- An examination of multi-layer perceptrons for learning from descriptors and de-noising data
- Discussions of recurrent neural networks for learning from sequences and convolutional neural networks for learning from images
- A treatment of Bayesian optimization for tuning deep learning architectures

Perfect for academic and industrial research professionals in the physical sciences, Deep Learning for Physical Scientists: Accelerating Research with Machine Learning will also earn a place in the libraries of industrial researchers who have access to large amounts of data but have yet to learn the techniques to fully exploit that access.
Contents

About the Companion Website xi

1 Prefix – Learning to "Think Deep" 1
1.1 So What Do I Mean by Changing the Way You Think? 2

2 Setting Up a Python Environment for Deep Learning Projects 5
2.1 Python Overview 5
2.2 Why Use Python for Data Science? 6
2.3 Anaconda Python 7
2.3.1 Why Use Anaconda? 7
2.3.2 Downloading and Installing Anaconda Python 7
2.3.2.1 Installing TensorFlow 9
2.4 Jupyter Notebooks 10
2.4.1 Why Use a Notebook? 10
2.4.2 Starting a Jupyter Notebook Server 11
2.4.3 Adding Markdown to Notebooks 12
2.4.4 A Simple Plotting Example 14
2.4.5 Summary 16

3 Modelling Basics 17
3.1 Introduction 17
3.2 Start Where You Mean to Go On – Input Definition and Creation 17
3.3 Loss Functions 18
3.3.1 Classification and Regression 19
3.3.2 Regression Loss Functions 19
3.3.2.1 Mean Absolute Error 19
3.3.2.2 Root Mean Squared Error 19
3.3.3 Classification Loss Functions 20
3.3.3.1 Precision 21
3.3.3.2 Recall 21
3.3.3.3 F1 Score 22
3.3.3.4 Confusion Matrix 22
3.3.3.5 (Area Under) Receiver Operator Curve (AU-ROC) 23
3.3.3.6 Cross Entropy 25
3.4 Overfitting and Underfitting 28
3.4.1 Bias–Variance Trade-Off 29
3.5 Regularisation 31
3.5.1 Ridge Regression 31
3.5.2 LASSO Regularisation 33
3.5.3 Elastic Net 34
3.5.4 Bagging and Model Averaging 34
3.6 Evaluating a Model 35
3.6.1 Holdout Testing 35
3.6.2 Cross Validation 36
3.7 The Curse of Dimensionality 37
3.7.1 Normalising Inputs and Targets 37
3.8 Summary 39
Notes 39

4 Feedforward Networks and Multilayered Perceptrons 41
4.1 Introduction 41
4.2 The Single Perceptron 41
4.2.1 Training a Perceptron 41
4.2.2 Activation Functions 42
4.2.3 Back Propagation 43
4.2.3.1 Weight Initialisation 45
4.2.3.2 Learning Rate 46
4.2.4 Key Assumptions 46
4.2.5 Putting It All Together in TensorFlow 47
4.3 Moving to a Deep Network 49
4.4 Vanishing Gradients and Other "Deep" Problems 53
4.4.1 Gradient Clipping 54
4.4.2 Non-saturating Activation Functions 54
4.4.2.1 ReLU 54
4.4.2.2 Leaky ReLU 56
4.4.2.3 ELU 57
4.4.3 More Complex Initialisation Schemes 57
4.4.3.1 Xavier 58
4.4.3.2 He 58
4.4.4 Mini Batching 59
4.4.5 Batch Normalisation 60
4.5 Improving the Optimisation 60
4.5.1 Bias 60
4.5.2 Momentum 63
4.5.3 Nesterov Momentum 63
4.5.4 (Adaptive) Learning Rates 63
4.5.5 AdaGrad 64
4.5.6 RMSProp 65
4.5.7 Adam 65
4.5.8 Regularisation 66
4.5.9 Early Stopping 67
4.5.10 Dropout 67
4.6 Parallelisation of Learning 69
4.6.1 Hogwild! 69
4.7 High- and Low-Level TensorFlow APIs 70
4.8 Architecture Implementations 72
4.9 Summary 73
4.10 Papers to Read 73

5 Recurrent Neural Networks 77
5.1 Introduction 77
5.2 Basic Recurrent Neural Networks 77
5.2.1 Training a Basic RNN 78
5.2.2 Putting It All Together in TensorFlow 79
5.2.3 The Problem with Vanilla RNNs 81
5.3 Long Short-Term Memory (LSTM) Networks 82
5.3.1 Forget Gate 82
5.3.2 Input Gate 84
5.3.3 Output Gate 84
5.3.4 Peephole Connections 85
5.3.5 Putting It All Together in TensorFlow 86
5.4 Gated Recurrent Units 87
5.4.1 Putting It All Together in TensorFlow 88
5.5 Using Keras for RNNs 88
5.6 Real World Implementations 89
5.7 Summary 89
5.8 Papers to Read 90

6 Convolutional Neural Networks 93
6.1 Introduction 93
6.2 Fundamental Principles of Convolutional Neural Networks 94
6.2.1 Convolution 94
6.2.2 Pooling 95
6.2.2.1 Why Use Pooling? 95
6.2.2.2 Types of Pooling 96
6.2.3 Stride and Padding 99
6.2.4 Sparse Connectivity 101
6.2.5 Parameter Sharing 101
6.2.6 Convolutional Neural Networks with TensorFlow 102
6.3 Graph Convolutional Networks 103
6.3.1 Graph Convolutional Networks in Practice 104
6.4 Real World Implementations 107
6.5 Summary 108
6.6 Papers to Read 108

7 Auto-Encoders 111
7.1 Introduction 111
7.1.1 Auto-Encoders for Dimensionality Reduction 111
7.2 Getting a Good Start – Stacked Auto-Encoders and Pretraining 115
7.2.1 Restricted Boltzmann Machines 115
7.2.2 Stacking Restricted Boltzmann Machines 118
7.3 Denoising Auto-Encoders 120
7.4 Variational Auto-Encoders 121
7.5 Sequence to Sequence Learning 125
7.6 The Attention Mechanism 126
7.7 Application in Chemistry: Building a Molecular Generator 127
7.8 Summary 132
7.9 Real World Implementations 132
7.10 Papers to Read 132

8 Optimising Models Using Bayesian Optimisation 135
8.1 Introduction 135
8.2 Defining Our Function 135
8.3 Grid and Random Search 136
8.4 Moving Towards an Intelligent Search 137
8.5 Exploration and Exploitation 137
8.6 Greedy Search 138
8.6.1 Key Fact One – Exploitation Heavy Search is Susceptible to Initial Data Bias 139
8.7 Diversity Search 141
8.8 Bayesian Optimisation 142
8.8.1 Domain Knowledge (or Prior) 142
8.8.2 Gaussian Processes 145
8.8.3 Kernels 146
8.8.3.1 Stationary Kernels 146
8.8.3.2 Noise Kernel 147
8.8.4 Combining Gaussian Process Prediction and Optimisation 149
8.8.4.1 Probability of Improvement 149
8.8.4.2 Expected Improvement 150
8.8.5 Balancing Exploration and Exploitation 151
8.8.6 Upper and Lower Confidence Bound Algorithm 151
8.8.7 Maximum Entropy Sampling 152
8.8.8 Optimising the Acquisition Function 153
8.8.9 Cost Sensitive Bayesian Optimisation 155
8.8.10 Constrained Bayesian Optimisation 158
8.8.11 Parallel Bayesian Optimisation 158
8.8.11.1 qEI 158
8.8.11.2 Constant Liar and Kriging Believer 160
8.8.11.3 Local Penalisation 162
8.8.11.4 Parallel Thompson Sampling 162
8.8.11.5 K-Means Batch Bayesian Optimisation 162
8.9 Summary 163
8.10 Papers to Read 163

Case Study 1 Solubility Prediction Case Study 167
Step 1 – Import Packages 167
Step 2 – Importing the Data 168
Step 3 – Creating the Inputs 168
Step 4 – Splitting into Training and Testing 168
Step 5 – Defining Our Model 169
Step 6 – Running Our Model 169
Step 7 – Automatically Finding an Optimised Architecture Using Bayesian Optimisation 170

Case Study 2 Time Series Forecasting with LSTMs 173
Simple LSTM 173
Sequence-to-Sequence LSTM 177

Case Study 3 Deep Embeddings for Auto-Encoder-Based Featurisation 185

Index
Dr Edward O. Pyzer-Knapp is the worldwide lead for AI Enriched Modelling and Simulation at IBM Research. He obtained his PhD from the University of Cambridge, using state-of-the-art computational techniques to accelerate materials design, before moving to Harvard, where he was in charge of the day-to-day running of the Harvard Clean Energy Project, a collaboration with IBM that combined massive distributed computing, quantum-mechanical simulations, and machine learning to accelerate the discovery of the next generation of organic photovoltaic materials. He is also Visiting Professor of Industrially Applied AI at the University of Liverpool and Editor-in-Chief of Applied AI Letters, a journal with a focus on real-world application and validation of AI.

Dr Matt Benatan received his PhD in Audio-Visual Speech Processing from the University of Leeds, after which he pursued a career in industrial AI research. His work to date has involved the research and development of AI techniques for a broad variety of domains, from applications in audio processing through to materials discovery. His research interests include computer vision, signal processing, Bayesian optimization, and scalable Bayesian inference.

You may also be interested in these products:

Hot-Melt Extrusion
by: Dennis Douroumis
PDF ebook
126,99 €

Hot-Melt Extrusion
by: Dennis Douroumis
EPUB ebook
126,99 €

Kunststoffe
by: Wilhelm Keim
PDF ebook
99,99 €