Details

Fundamentals and Methods of Machine and Deep Learning

Algorithms, Tools, and Applications
1st edition

by: Pradeep Singh

190,99 €

Publisher: Wiley
Format: EPUB
Published: 01.02.2022
ISBN/EAN: 9781119821885
Language: English
Number of pages: 480

DRM-protected eBook; to read it you will need e.g. Adobe Digital Editions and an Adobe ID.

Description

<b>FUNDAMENTALS AND METHODS OF MACHINE AND DEEP LEARNING</b> <p><b>The book takes a practical approach, explaining the concepts of machine learning and deep learning algorithms, evaluating advances in methodology, and demonstrating the algorithms with applications.</b> <p>Over the past two decades, the field of machine learning and its subfield deep learning have played a major role in software application development. In recent research studies, they are also regarded as disruptive technologies that will transform our future lives, business, and the global economy. The recent explosion of digital data in a wide variety of domains, including science, engineering, the Internet of Things, biomedicine, healthcare, and many business sectors, has ushered in the era of big data, which cannot be analysed with classical statistics but calls for the more modern, robust machine learning and deep learning techniques. Since machine learning learns from data rather than from hard-coded decision rules, attempts are being made to use it to build computers that can solve problems like human experts in the field. <p>The goal of this book is to present a practical approach by explaining the concepts of machine learning and deep learning algorithms with applications. Supervised machine learning algorithms, ensemble machine learning algorithms, feature selection, deep learning techniques, and their applications are discussed. The eighteen chapters also include unique material that provides a clear understanding of the concepts through algorithms and case studies, illustrated with applications of machine learning and deep learning in different domains, including disease prediction, software defect prediction, online television analysis, and medical image processing. Each of the chapters briefly described below presents both a chosen approach and its implementation.
<p><b>Audience</b> <p>Researchers and engineers in artificial intelligence, computer scientists, and software developers.
<p>Preface xix</p> <p><b>1 Supervised Machine Learning: Algorithms and Applications 1<br /></b><i>Shruthi H. Shetty, Sumiksha Shetty, Chandra Singh and Ashwath Rao</i></p> <p>1.1 History 2</p> <p>1.2 Introduction 2</p> <p>1.3 Supervised Learning 4</p> <p>1.4 Linear Regression (LR) 5</p> <p>1.4.1 Learning Model 6</p> <p>1.4.2 Predictions With Linear Regression 7</p> <p>1.5 Logistic Regression 8</p> <p>1.6 Support Vector Machine (SVM) 9</p> <p>1.7 Decision Tree 11</p> <p>1.8 Machine Learning Applications in Daily Life 12</p> <p>1.8.1 Traffic Alerts (Maps) 12</p> <p>1.8.2 Social Media (Facebook) 13</p> <p>1.8.3 Transportation and Commuting (Uber) 13</p> <p>1.8.4 Products Recommendations 13</p> <p>1.8.5 Virtual Personal Assistants 13</p> <p>1.8.6 Self-Driving Cars 14</p> <p>1.8.7 Google Translate 14</p> <p>1.8.8 Online Video Streaming (Netflix) 14</p> <p>1.8.9 Fraud Detection 14</p> <p>1.9 Conclusion 15</p> <p>References 15</p> <p><b>2 Zonotic Diseases Detection Using Ensemble Machine Learning Algorithms 17<br /></b><i>Bhargavi K.</i></p> <p>2.1 Introduction 18</p> <p>2.2 Bayes Optimal Classifier 19</p> <p>2.3 Bootstrap Aggregating (Bagging) 21</p> <p>2.4 Bayesian Model Averaging (BMA) 22</p> <p>2.5 Bayesian Classifier Combination (BCC) 24</p> <p>2.6 Bucket of Models 26</p> <p>2.7 Stacking 27</p> <p>2.8 Efficiency Analysis 29</p> <p>2.9 Conclusion 30</p> <p>References 30</p> <p><b>3 Model Evaluation 33<br /></b><i>Ravi Shekhar Tiwari</i></p> <p>3.1 Introduction 34</p> <p>3.2 Model Evaluation 34</p> <p>3.2.1 Assumptions 36</p> <p>3.2.2 Residual 36</p> <p>3.2.3 Error Sum of Squares (<i>Sse</i>) 37</p> <p>3.2.4 Regression Sum of Squares (<i>Ssr</i>) 37</p> <p>3.2.5 Total Sum of Squares (<i>Ssto</i>) 37</p> <p>3.3 Metric Used in Regression Model 38</p> <p>3.3.1 Mean Absolute Error (<i>Mae</i>) 38</p> <p>3.3.2 Mean Square Error (<i>Mse</i>) 39</p> <p>3.3.3 Root Mean Square Error (<i>Rmse</i>) 41</p> <p>3.3.4 Root Mean Square Logarithm Error (<i>Rmsle</i>) 42</p> <p>3.3.5 
R-Square (<i>R</i><sup>2</sup>) 45</p> <p>3.3.5.1 Problem With R-Square (<i>R</i><sup>2</sup>) 46</p> <p>3.3.6 Adjusted R-Square (<i>R</i><sup>2</sup>) 46</p> <p>3.3.7 Variance 47</p> <p>3.3.8 AIC 48</p> <p>3.3.9 BIC 49</p> <p>3.3.10 ACP, Press, and <i>R</i><sup>2</sup>-Predicted 49</p> <p>3.3.11 Solved Examples 51</p> <p>3.4 Confusion Metrics 52</p> <p>3.4.1 How to Interpret the Confusion Metric? 53</p> <p>3.4.2 Accuracy 55</p> <p>3.4.2.1 Why Do We Need the Other Metric Along With Accuracy? 56</p> <p>3.4.3 True Positive Rate (TPR) 56</p> <p>3.4.4 False Negative Rate (FNR) 57</p> <p>3.4.5 True Negative Rate (TNR) 57</p> <p>3.4.6 False Positive Rate (FPR) 58</p> <p>3.4.7 Precision 58</p> <p>3.4.8 Recall 59</p> <p>3.4.9 Recall-Precision Trade-Off 60</p> <p>3.4.10 F1-Score 61</p> <p>3.4.11 F-Beta Score 61</p> <p>3.4.12 Thresholding 63</p> <p>3.4.13 AUC - ROC 64</p> <p>3.4.14 AUC - PRC 65</p> <p>3.4.15 Derived Metric From Recall, Precision, and F1-Score 67</p> <p>3.4.16 Solved Examples 68</p> <p>3.5 Correlation 70</p> <p>3.5.1 Pearson Correlation 70</p> <p>3.5.2 Spearman Correlation 71</p> <p>3.5.3 Kendall’s Rank Correlation 73</p> <p>3.5.4 Distance Correlation 74</p> <p>3.5.5 Biweight Mid-Correlation 75</p> <p>3.5.6 Gamma Correlation 76</p> <p>3.5.7 Point Biserial Correlation 77</p> <p>3.5.8 Biserial Correlation 78</p> <p>3.5.9 Partial Correlation 78</p> <p>3.6 Natural Language Processing (NLP) 78</p> <p>3.6.1 N-Gram 79</p> <p>3.6.2 BLEU Score 79</p> <p>3.6.2.1 BLEU Score With N-Gram 80</p> <p>3.6.3 Cosine Similarity 81</p> <p>3.6.4 Jaccard Index 83</p> <p>3.6.5 ROUGE 84</p> <p>3.6.6 NIST 85</p> <p>3.6.7 SQUAD 85</p> <p>3.6.8 MACRO 86</p> <p>3.7 Additional Metrics 86</p> <p>3.7.1 Mean Reciprocal Rank (MRR) 86</p> <p>3.7.2 Cohen Kappa 87</p> <p>3.7.3 Gini Coefficient 87</p> <p>3.7.4 Scale-Dependent Errors 87</p> <p>3.7.5 Percentage Errors 88</p> <p>3.7.6 Scale-Free Errors 88</p> <p>3.8 Summary of Metric Derived from Confusion Metric 89</p> <p>3.9 Metric Usage
90</p> <p>3.10 Pro and Cons of Metrics 94</p> <p>3.11 Conclusion 95</p> <p>References 96</p> <p><b>4 Analysis of M-SEIR and LSTM Models for the Prediction of COVID-19 Using RMSLE 101<br /></b><i>Archith S., Yukta C., Archana H.R. and Surendra H.H.</i></p> <p>4.1 Introduction 101</p> <p>4.2 Survey of Models 103</p> <p>4.2.1 SEIR Model 103</p> <p>4.2.2 Modified SEIR Model 103</p> <p>4.2.3 Long Short-Term Memory (LSTM) 104</p> <p>4.3 Methodology 106</p> <p>4.3.1 Modified SEIR 106</p> <p>4.3.2 LSTM Model 108</p> <p>4.3.2.1 Data Pre-Processing 108</p> <p>4.3.2.2 Data Shaping 109</p> <p>4.3.2.3 Model Design 109</p> <p>4.4 Experimental Results 111</p> <p>4.4.1 Modified SEIR Model 111</p> <p>4.4.2 LSTM Model 113</p> <p>4.5 Conclusion 116</p> <p>4.6 Future Work 116</p> <p>References 118</p> <p><b>5 The Significance of Feature Selection Techniques in Machine Learning 121<br /></b><i>N. Bharathi, B.S. Rishiikeshwer, T. Aswin Shriram, B. Santhi and G.R. Brindha</i></p> <p>5.1 Introduction 122</p> <p>5.2 Significance of Pre-Processing 122</p> <p>5.3 Machine Learning System 123</p> <p>5.3.1 Missing Values 123</p> <p>5.3.2 Outliers 123</p> <p>5.3.3 Model Selection 124</p> <p>5.4 Feature Extraction Methods 124</p> <p>5.4.1 Dimension Reduction 125</p> <p>5.4.1.1 Attribute Subset Selection 126</p> <p>5.4.2 Wavelet Transforms 127</p> <p>5.4.3 Principal Components Analysis 127</p> <p>5.4.4 Clustering 128</p> <p>5.5 Feature Selection 128</p> <p>5.5.1 Filter Methods 129</p> <p>5.5.2 Wrapper Methods 129</p> <p>5.5.3 Embedded Methods 130</p> <p>5.6 Merits and Demerits of Feature Selection 131</p> <p>5.7 Conclusion 131</p> <p>References 132</p> <p><b>6 Use of Machine Learning and Deep Learning in Healthcare—A Review on Disease Prediction System 135<br /></b><i>Radha R. 
and Gopalakrishnan R.</i></p> <p>6.1 Introduction to Healthcare System 136</p> <p>6.2 Causes for the Failure of the Healthcare System 137</p> <p>6.3 Artificial Intelligence and Healthcare System for Predicting Diseases 138</p> <p>6.3.1 Monitoring and Collection of Data 140</p> <p>6.3.2 Storing, Retrieval, and Processing of Data 141</p> <p>6.4 Facts Responsible for Delay in Predicting the Defects 142</p> <p>6.5 Pre-Treatment Analysis and Monitoring 143</p> <p>6.6 Post-Treatment Analysis and Monitoring 145</p> <p>6.7 Application of ML and DL 145</p> <p>6.7.1 ML and DL for Active Aid 145</p> <p>6.7.1.1 Bladder Volume Prediction 147</p> <p>6.7.1.2 Epileptic Seizure Prediction 148</p> <p>6.8 Challenges and Future of Healthcare Systems Based on ML and DL 148</p> <p>6.9 Conclusion 149</p> <p>References 150</p> <p><b>7 Detection of Diabetic Retinopathy Using Ensemble Learning Techniques 153<br /></b><i>Anirban Dutta, Parul Agarwal, Anushka Mittal, Shishir Khandelwal and Shikha Mehta</i></p> <p>7.1 Introduction 153</p> <p>7.2 Related Work 155</p> <p>7.3 Methodology 155</p> <p>7.3.1 Data Pre-Processing 155</p> <p>7.3.2 Feature Extraction 161</p> <p>7.3.2.1 Exudates 161</p> <p>7.3.2.2 Blood Vessels 161</p> <p>7.3.2.3 Microaneurysms 162</p> <p>7.3.2.4 Hemorrhages 162</p> <p>7.3.3 Learning 163</p> <p>7.3.3.1 Support Vector Machines 163</p> <p>7.3.3.2 K-Nearest Neighbors 163</p> <p>7.3.3.3 Random Forest 164</p> <p>7.3.3.4 AdaBoost 164</p> <p>7.3.3.5 Voting Technique 164</p> <p>7.4 Proposed Models 165</p> <p>7.4.1 AdaNaive 165</p> <p>7.4.2 AdaSVM 166</p> <p>7.4.3 AdaForest 166</p> <p>7.5 Experimental Results and Analysis 167</p> <p>7.5.1 Dataset 167</p> <p>7.5.2 Software and Hardware 167</p> <p>7.5.3 Results 168</p> <p>7.6 Conclusion 173</p> <p>References 174</p> <p><b>8 Machine Learning and Deep Learning for Medical Analysis—A Case Study on Heart Disease Data 177<br /></b><i>Swetha A.M., Santhi B. 
and Brindha G.R.</i></p> <p>8.1 Introduction 178</p> <p>8.2 Related Works 179</p> <p>8.3 Data Pre-Processing 181</p> <p>8.3.1 Data Imbalance 181</p> <p>8.4 Feature Selection 182</p> <p>8.4.1 Extra Tree Classifier 182</p> <p>8.4.2 Pearson Correlation 183</p> <p>8.4.3 Forward Stepwise Selection 183</p> <p>8.4.4 Chi-Square Test 184</p> <p>8.5 ML Classifiers Techniques 184</p> <p>8.5.1 Supervised Machine Learning Models 185</p> <p>8.5.1.1 Logistic Regression 185</p> <p>8.5.1.2 SVM 186</p> <p>8.5.1.3 Naive Bayes 186</p> <p>8.5.1.4 Decision Tree 186</p> <p>8.5.1.5 K-Nearest Neighbors (KNN) 187</p> <p>8.5.2 Ensemble Machine Learning Model 187</p> <p>8.5.2.1 Random Forest 187</p> <p>8.5.2.2 AdaBoost 188</p> <p>8.5.2.3 Bagging 188</p> <p>8.5.3 Neural Network Models 189</p> <p>8.5.3.1 Artificial Neural Network (ANN) 189</p> <p>8.5.3.2 Convolutional Neural Network (CNN) 189</p> <p>8.6 Hyperparameter Tuning 190</p> <p>8.6.1 Cross-Validation 190</p> <p>8.7 Dataset Description 190</p> <p>8.7.1 Data Pre-Processing 193</p> <p>8.7.2 Feature Selection 195</p> <p>8.7.3 Model Selection 196</p> <p>8.7.4 Model Evaluation 197</p> <p>8.8 Experiments and Results 197</p> <p>8.8.1 Study 1: Survival Prediction Using All Clinical Features 198</p> <p>8.8.2 Study 2: Survival Prediction Using Age, Ejection Fraction and Serum Creatinine 198</p> <p>8.8.3 Study 3: Survival Prediction Using Time, Ejection Fraction, and Serum Creatinine 199</p> <p>8.8.4 Comparison Between Study 1, Study 2, and Study 3 203</p> <p>8.8.5 Comparative Study on Different Sizes of Data 204</p> <p>8.9 Analysis 206</p> <p>8.10 Conclusion 206</p> <p>References 207</p> <p><b>9 A Novel Convolutional Neural Network Model to Predict Software Defects 211<br /></b><i>Kumar Rajnish, Vandana Bhattacharjee and Mansi Gupta</i></p> <p>9.1 Introduction 212</p> <p>9.2 Related Works 213</p> <p>9.2.1 Software Defect Prediction Based on Deep Learning 213</p> <p>9.2.2 Software Defect Prediction Based on Deep Features 214</p> <p>9.2.3 Deep 
Learning in Software Engineering 214</p> <p>9.3 Theoretical Background 215</p> <p>9.3.1 Software Defect Prediction 215</p> <p>9.3.2 Convolutional Neural Network 216</p> <p>9.4 Experimental Setup 218</p> <p>9.4.1 Data Set Description 218</p> <p>9.4.2 Building Novel Convolutional Neural Network (NCNN) Model 219</p> <p>9.4.3 Evaluation Parameters 222</p> <p>9.4.4 Results and Analysis 224</p> <p>9.5 Conclusion and Future Scope 230</p> <p>References 233</p> <p><b>10 Predictive Analysis on Online Television Videos Using Machine Learning Algorithms 237<br /></b><i>Rebecca Jeyavadhanam B., Ramalingam V.V., Sugumaran V. and Rajkumar D.</i></p> <p>10.1 Introduction 238</p> <p>10.1.1 Overview of Video Analytics 241</p> <p>10.1.2 Machine Learning Algorithms 242</p> <p>10.1.2.1 Decision Tree C4.5 243</p> <p>10.1.2.2 J48 Graft 243</p> <p>10.1.2.3 Logistic Model Tree 244</p> <p>10.1.2.4 Best First Tree 244</p> <p>10.1.2.5 Reduced Error Pruning Tree 244</p> <p>10.1.2.6 Random Forest 244</p> <p>10.2 Proposed Framework 245</p> <p>10.2.1 Data Collection 246</p> <p>10.2.2 Feature Extraction 246</p> <p>10.2.2.1 Block Intensity Comparison Code 247</p> <p>10.2.2.2 Key Frame Rate 248</p> <p>10.3 Feature Selection 249</p> <p>10.4 Classification 250</p> <p>10.5 Online Incremental Learning 251</p> <p>10.6 Results and Discussion 253</p> <p>10.7 Conclusion 255</p> <p>References 256</p> <p><b>11 A Combinational Deep Learning Approach to Visually Evoked EEG-Based Image Classification 259<br /></b><i>Nandini Kumari, Shamama Anwar and Vandana Bhattacharjee</i></p> <p>11.1 Introduction 260</p> <p>11.2 Literature Review 262</p> <p>11.3 Methodology 264</p> <p>11.3.1 Dataset Acquisition 264</p> <p>11.3.2 Pre-Processing and Spectrogram Generation 265</p> <p>11.3.3 Classification of EEG Spectrogram Images With Proposed CNN Model 266</p> <p>11.3.4 Classification of EEG Spectrogram Images With Proposed Combinational CNN+LSTM Model 268</p> <p>11.4 Result and Discussion 270</p> <p>11.5 Conclusion 272</p> 
<p>References 273</p> <p><b>12 Application of Machine Learning Algorithms With Balancing Techniques for Credit Card Fraud Detection: A Comparative Analysis 277<br /></b><i>Shiksha</i></p> <p>12.1 Introduction 278</p> <p>12.2 Methods and Techniques 280</p> <p>12.2.1 Research Approach 280</p> <p>12.2.2 Dataset Description 282</p> <p>12.2.3 Data Preparation 283</p> <p>12.2.4 Correlation Between Features 284</p> <p>12.2.5 Splitting the Dataset 285</p> <p>12.2.6 Balancing Data 285</p> <p>12.2.6.1 Oversampling of Minority Class 286</p> <p>12.2.6.2 Under-Sampling of Majority Class 286</p> <p>12.2.6.3 Synthetic Minority Over Sampling Technique 286</p> <p>12.2.6.4 Class Weight 287</p> <p>12.2.7 Machine Learning Algorithms (Models) 288</p> <p>12.2.7.1 Logistic Regression 288</p> <p>12.2.7.2 Support Vector Machine 288</p> <p>12.2.7.3 Decision Tree 290</p> <p>12.2.7.4 Random Forest 292</p> <p>12.2.8 Tuning of Hyperparameters 294</p> <p>12.2.9 Performance Evaluation of the Models 294</p> <p>12.3 Results and Discussion 298</p> <p>12.3.1 Results Using Balancing Techniques 299</p> <p>12.3.2 Result Summary 299</p> <p>12.4 Conclusions 305</p> <p>12.4.1 Future Recommendations 305</p> <p>References 306</p> <p><b>13 Crack Detection in Civil Structures Using Deep Learning 311<br /></b><i>Bijimalla Shiva Vamshi Krishna, Rishiikeshwer B.S., J. Sanjay Raju, N. Bharathi, C. Venkatasubramanian and G.R. Brindha</i></p> <p>13.1 Introduction 312</p> <p>13.2 Related Work 312</p> <p>13.3 Infrared Thermal Imaging Detection Method 314</p> <p>13.4 Crack Detection Using CNN 314</p> <p>13.4.1 Model Creation 316</p> <p>13.4.2 Activation Functions (AF) 317</p> <p>13.4.3 Optimizers 322</p> <p>13.4.4 Transfer Learning 322</p> <p>13.5 Results and Discussion 322</p> <p>13.6 Conclusion 323</p> <p>References 323</p> <p><b>14 Measuring Urban Sprawl Using Machine Learning 327<br /></b><i>Keerti Kulkarni and P. A. 
Vijaya</i></p> <p>14.1 Introduction 327</p> <p>14.2 Literature Survey 328</p> <p>14.3 Remotely Sensed Images 329</p> <p>14.4 Feature Selection 331</p> <p>14.4.1 Distance-Based Metric 331</p> <p>14.5 Classification Using Machine Learning Algorithms 332</p> <p>14.5.1 Parametric vs. Non-Parametric Algorithms 332</p> <p>14.5.2 Maximum Likelihood Classifier 332</p> <p>14.5.3 k-Nearest Neighbor Classifiers 334</p> <p>14.5.4 Evaluation of the Classifiers 334</p> <p>14.5.4.1 Precision 334</p> <p>14.5.4.2 Recall 335</p> <p>14.5.4.3 Accuracy 335</p> <p>14.5.4.4 F1-Score 335</p> <p>14.6 Results 335</p> <p>14.7 Discussion and Conclusion 338</p> <p>Acknowledgements 338</p> <p>References 338</p> <p><b>15 Application of Deep Learning Algorithms in Medical Image Processing: A Survey 341<br /></b><i>Santhi B., Swetha A.M. and Ashutosh A.M.</i></p> <p>15.1 Introduction 342</p> <p>15.2 Overview of Deep Learning Algorithms 343</p> <p>15.2.1 Supervised Deep Neural Networks 343</p> <p>15.2.1.1 Convolutional Neural Network 343</p> <p>15.2.1.2 Transfer Learning 344</p> <p>15.2.1.3 Recurrent Neural Network 344</p> <p>15.2.2 Unsupervised Learning 345</p> <p>15.2.2.1 Autoencoders 345</p> <p>15.2.2.2 GANs 345</p> <p>15.3 Overview of Medical Images 346</p> <p>15.3.1 MRI Scans 346</p> <p>15.3.2 CT Scans 347</p> <p>15.3.3 X-Ray Scans 347</p> <p>15.3.4 PET Scans 347</p> <p>15.4 Scheme of Medical Image Processing 348</p> <p>15.4.1 Formation of Image 348</p> <p>15.4.2 Image Enhancement 349</p> <p>15.4.3 Image Analysis 349</p> <p>15.4.4 Image Visualization 349</p> <p>15.5 Anatomy-Wise Medical Image Processing With Deep Learning 349</p> <p>15.5.1 Brain Tumor 352</p> <p>15.5.2 Lung Nodule Cancer Detection 357</p> <p>15.5.3 Breast Cancer Segmentation and Detection 362</p> <p>15.5.4 Heart Disease Prediction 364</p> <p>15.5.5 COVID-19 Prediction 370</p> <p>15.6 Conclusion 372</p> <p>References 372</p> <p><b>16 Simulation of Self-Driving Cars Using Deep Learning 379<br /></b><i>Rahul M. K., Praveen L. 
Uppunda, Vinayaka Raju S., Sumukh B. and C. Gururaj</i></p> <p>16.1 Introduction 380</p> <p>16.2 Methodology 380</p> <p>16.2.1 Behavioral Cloning 380</p> <p>16.2.2 End-to-End Learning 380</p> <p>16.3 Hardware Platform 381</p> <p>16.4 Related Work 382</p> <p>16.5 Pre-Processing 382</p> <p>16.5.1 Lane Feature Extraction 382</p> <p>16.5.1.1 Canny Edge Detector 383</p> <p>16.5.1.2 Hough Transform 383</p> <p>16.5.1.3 Raw Image Without Pre-Processing 384</p> <p>16.6 Model 384</p> <p>16.6.1 CNN Architecture 385</p> <p>16.6.2 Multilayer Perceptron Model 385</p> <p>16.6.3 Regression vs. Classification 385</p> <p>16.6.3.1 Regression 386</p> <p>16.6.3.2 Classification 386</p> <p>16.7 Experiments 387</p> <p>16.8 Results 387</p> <p>16.9 Conclusion 394</p> <p>References 394</p> <p><b>17 Assistive Technologies for Visual, Hearing, and Speech Impairments: Machine Learning and Deep Learning Solutions 397<br /></b><i>Shahira K. C., Sruthi C. J. and Lijiya A.</i></p> <p>17.1 Introduction 397</p> <p>17.2 Visual Impairment 398</p> <p>17.2.1 Conventional Assistive Technology for the VIP 399</p> <p>17.2.1.1 Way Finding 399</p> <p>17.2.1.2 Reading Assistance 402</p> <p>17.2.2 The Significance of Computer Vision and Deep Learning in AT of VIP 403</p> <p>17.2.2.1 Navigational Aids 403</p> <p>17.2.2.2 Scene Understanding 405</p> <p>17.2.2.3 Reading Assistance 406</p> <p>17.2.2.4 Wearables 408</p> <p>17.3 Verbal and Hearing Impairment 410</p> <p>17.3.1 Assistive Listening Devices 410</p> <p>17.3.2 Alerting Devices 411</p> <p>17.3.3 Augmentative and Alternative Communication Devices 411</p> <p>17.3.3.1 Sign Language Recognition 412</p> <p>17.3.4 Significance of Machine Learning and Deep Learning in Assistive Communication Technology 417</p> <p>17.4 Conclusion and Future Scope 418</p> <p>References 418</p> <p><b>18 Case Studies: Deep Learning in Remote Sensing 425<br /></b><i>Emily Jenifer A. 
and Sudha N.</i></p> <p>18.1 Introduction 426</p> <p>18.2 Need for Deep Learning in Remote Sensing 427</p> <p>18.3 Deep Neural Networks for Interpreting Earth Observation Data 427</p> <p>18.3.1 Convolutional Neural Network 427</p> <p>18.3.2 Autoencoder 428</p> <p>18.3.3 Restricted Boltzmann Machine and Deep Belief Network 429</p> <p>18.3.4 Generative Adversarial Network 430</p> <p>18.3.5 Recurrent Neural Network 431</p> <p>18.4 Hybrid Architectures for Multi-Sensor Data Processing 432</p> <p>18.5 Conclusion 434</p> <p>References 434</p> <p>Index 439</p>
<p><b>Pradeep Singh, PhD,</b> is an assistant professor in the Department of Computer Science Engineering, National Institute of Technology, Raipur, India. His current research interests include machine learning, deep learning, evolutionary computing, empirical studies on software quality, and software fault prediction models. He has more than 15 years of teaching experience and many publications in reputed international journals, conferences, and book chapters.</p>
