Details

Deep Learning Approaches for Security Threats in IoT Environments




1st edition

By: Mohamed Abdel-Basset, Nour Moustafa, Hossam Hawash

107,99 €

Publisher: Wiley
Format: EPUB
Published: 22.11.2022
ISBN/EAN: 9781119884163
Language: English
Number of pages: 384

DRM-protected eBook; you will need e.g. Adobe Digital Editions and an Adobe ID to read it.

Description

<b>Deep Learning Approaches for Security Threats in IoT Environments</b> <p><b>An expert discussion of the application of deep learning methods in the IoT security environment</b> <p>In <i>Deep Learning Approaches for Security Threats in IoT Environments</i>, a team of distinguished cybersecurity educators delivers an insightful and robust exploration of how to approach and measure the security of Internet-of-Things (IoT) systems and networks. In this book, readers will examine critical concepts in artificial intelligence (AI) and IoT, and apply effective strategies to help secure and protect IoT networks. The authors discuss supervised, semi-supervised, and unsupervised deep learning techniques, as well as reinforcement and federated learning methods for privacy preservation. <p>This book applies deep learning approaches to IoT networks, solves the security problems that professionals frequently encounter when working in the field of IoT, and provides ways in which smart devices can solve cybersecurity issues. <p>Readers will also get access to a companion website with PowerPoint presentations, links to supporting videos, and additional resources. 
They’ll also find: <ul><li> A thorough introduction to artificial intelligence and the Internet of Things, including key concepts like deep learning, security, and privacy</li> <li> Comprehensive discussions of the architectures, protocols, and standards that form the foundation of deep learning for securing modern IoT systems and networks</li> <li> In-depth examinations of the architectural design of cloud, fog, and edge computing networks</li> <li> Detailed presentations of the security requirements, threats, and countermeasures relevant to IoT networks</li></ul> <p>Perfect for professionals working in the AI, cybersecurity, and IoT industries, <i>Deep Learning Approaches for Security Threats in IoT Environments</i> will also earn a place in the libraries of undergraduate and graduate students studying deep learning, cybersecurity, privacy preservation, and the security of IoT networks.
Author Biography
About the Companion Website

1. Chapter 1: Introducing Deep Learning for IoT Security
1.1. Introduction
1.2. Internet of Things (IoT) Architectures
1.2.1. Physical Layer
1.2.2. Network Layer
1.2.3. Application Layer
1.3. Internet of Things Vulnerabilities and Attacks
1.3.1. Passive Attacks
1.3.2. Active Attacks
1.4. Artificial Intelligence
1.5. Deep Learning
1.6. Taxonomy of Deep Learning Models
1.6.1. Supervision Criterion
1.6.1.1. Supervised Deep Learning
1.6.1.2. Unsupervised Deep Learning
1.6.1.3. Semi-supervised Deep Learning
1.6.1.4. Deep Reinforcement Learning
1.6.2. Incrementality Criterion
1.6.2.1. Batch Learning
1.6.2.2. Online Learning
1.6.3. Generalization Criterion
1.6.3.1. Model-based Learning
1.6.3.2. Instance-based Learning
1.7. Supplementary Materials

2. Chapter 2: Deep Neural Networks
2.1. Introduction
2.2. From Biological Neurons to Artificial Neurons
2.2.1. Biological Neurons
2.2.2. Artificial Neurons
2.3. Artificial Neural Network (ANN)
2.4. Activation Functions
2.4.1. Types of Activation
2.4.1.1. Binary Step Function
2.4.1.2. Linear Activation Function
2.4.1.3. Non-Linear Activation Functions
2.5. The Learning Process of an ANN
2.5.1. Forward Propagation
2.5.2. Backpropagation (Gradient Descent)
2.6. Loss Functions
2.6.1. Regression Loss Functions
2.6.1.1. Mean Absolute Error (MAE) Loss
2.6.1.2. Mean Squared Error (MSE) Loss
2.6.1.3. Huber Loss
2.6.1.4. Mean Bias Error (MBE) Loss
2.6.1.5. Mean Squared Logarithmic Error (MSLE)
2.6.2. Classification Loss Functions
2.6.2.1. Binary Cross-Entropy (BCE) Loss
2.6.2.2. Categorical Cross-Entropy (CCE) Loss
2.6.2.3. Hinge Loss
2.6.2.4. Kullback-Leibler Divergence (KL) Loss
2.7. Supplementary Materials

3. Chapter 3: Training Deep Neural Networks
3.1. Introduction
3.2. Gradient Descent Revisited
3.2.1. Gradient Descent
3.2.2. Stochastic Gradient Descent
3.2.3. Mini-batch Gradient Descent
3.3. Vanishing and Exploding Gradients
3.4. Gradient Clipping
3.5. Parameter Initialization
3.5.1. Random Initialization
3.5.2. LeCun Initialization
3.5.3. Xavier Initialization
3.5.4. Kaiming (He) Initialization
3.6. Faster Optimizers
3.6.1. Momentum Optimization
3.6.2. Nesterov Accelerated Gradient
3.6.3. AdaGrad
3.6.4. RMSProp
3.6.5. Adam Optimizer
3.7. Model Training Issues
3.7.1. Bias
3.7.2. Variance
3.7.3. Overfitting Issues
3.7.4. Underfitting Issues
3.7.5. Model Capacity
3.8. Supplementary Materials

4. Chapter 4: Evaluating Deep Neural Networks
4.1. Introduction
4.2. Validation Dataset
4.3. Regularization Methods
4.3.1. Early Stopping
4.3.2. L1 & L2 Regularization
4.3.3. Dropout
4.3.4. Max-Norm Regularization
4.3.5. Data Augmentation
4.4. Cross-Validation
4.4.1. Hold-out Cross-Validation
4.4.2. K-fold Cross-Validation
4.4.3. Repeated K-fold Cross-Validation
4.4.4. Leave-One-Out Cross-Validation
4.4.5. Leave-P-Out Cross-Validation
4.4.6. Time Series Cross-Validation
4.4.7. Block Cross-Validation
4.5. Performance Metrics
4.5.1. Regression Metrics
4.5.1.1. Mean Absolute Error (MAE)
4.5.1.2. Root Mean Squared Error (RMSE)
4.5.1.3. Coefficient of Determination (R-Squared)
4.5.1.4. Adjusted R2
4.5.2. Classification Metrics
4.5.2.1. Confusion Matrix
4.5.2.2. Accuracy
4.5.2.3. Precision
4.5.2.4. Recall
4.5.2.5. Precision-Recall Curve
4.5.2.6. F1-score
4.5.2.7. Beta F1-score
4.5.2.8. False Positive Rate (FPR)
4.5.2.9. Specificity
4.5.2.10. Receiver Operating Characteristic (ROC) Curve
4.6. Supplementary Materials

5. Chapter 5
5.1. Introduction
5.2. Shift from Fully Connected to Convolutional
5.3. Basic Architecture
5.3.1. The Cross-Correlation Operation
5.3.2. Convolution Operation
5.3.3. Receptive Field
5.3.4. Padding and Stride
5.3.4.1. Padding
5.3.4.2. Stride
5.4. Multiple Channels
5.4.1. Multi-channel Inputs
5.4.2. Multi-channel Outputs
5.4.3. 1×1 Convolutional Kernel
5.5. Pooling Layers
5.5.1. Max Pooling
5.5.2. Average Pooling
5.6. Normalization Layers
5.6.1. Batch Normalization
5.6.2. Layer Normalization
5.6.3. Instance Normalization
5.6.4. Group Normalization
5.6.5. Weight Normalization
5.7. Convolutional Neural Networks (LeNet)
5.8. Case Studies
5.8.1. Handwritten Digit Classification (One-Channel Input)
5.8.2. Dog vs. Cat Image Classification (Multi-Channel Input)
5.9. Supplementary Materials

6. Chapter 6: Dive into Convolutional Neural Networks
6.1. Introduction
6.2. One-Dimensional Convolutional Network
6.2.1. One-Dimensional Convolution
6.2.2. One-Dimensional Pooling
6.3. Three-Dimensional Convolutional Network
6.3.1. Three-Dimensional Convolution
6.3.2. Three-Dimensional Pooling
6.4. Transposed Convolution Layer
6.5. Atrous/Dilated Convolution
6.6. Separable Convolutions
6.6.1. Spatially Separable Convolutions
6.6.2. Depth-wise Separable (DS) Convolutions
6.7. Grouped Convolution
6.8. Shuffled Grouped Convolution
6.9. Supplementary Materials

7. Chapter 7: Advanced Convolutional Neural Networks
7.1. Introduction
7.2. AlexNet
7.3. Block-wise Convolutional Network (VGG)
7.4. Network-in-Network
7.5. Inception Networks
7.5.1. GoogLeNet
7.5.2. Inception Network V2 (Inception V2)
7.5.3. Inception Network V3 (Inception V3)
7.6. Residual Convolutional Networks
7.7. Dense Convolutional Networks
7.8. Temporal Convolutional Network
7.8.1. One-Dimensional Convolutional Network
7.8.2. Causal and Dilated Convolution
7.8.3. Residual Blocks
7.9. Supplementary Materials

8. Chapter 8: Introducing Recurrent Neural Networks
8.1. Introduction
8.2. Recurrent Neural Networks
8.2.1. Recurrent Neurons
8.2.2. Memory Cell
8.2.3. Recurrent Neural Network
8.3. Different Categories of RNNs
8.3.1. One-to-One RNN
8.3.2. One-to-Many RNN
8.3.3. Many-to-One RNN
8.3.4. Many-to-Many RNN
8.4. Backpropagation Through Time
8.5. Challenges Facing Simple RNNs
8.5.1. Vanishing Gradient
8.5.2. Exploding Gradient
8.5.2.1. Truncated Backpropagation Through Time (TBPTT)
8.5.3. Clipping Gradients
8.6. Case Study: Malware Detection
8.7. Supplementary Materials

9. Chapter 9: Dive into Recurrent Neural Networks
9.1. Introduction
9.2. Long Short-Term Memory (LSTM)
9.2.1. LSTM Gates
9.2.2. Candidate Memory Cells
9.2.3. Memory Cell
9.2.4. Hidden State
9.3. LSTM with Peephole Connections
9.4. Gated Recurrent Units (GRU)
9.4.1. GRU Cell Gates
9.4.2. Candidate State
9.4.3. Hidden State
9.5. ConvLSTM
9.6. Unidirectional vs. Bidirectional Recurrent Networks
9.7. Deep Recurrent Network
9.8. Insights
9.9. Case Study: Malware Detection
9.10. Supplementary Materials

10. Chapter 10: Attention Neural Networks
10.1. Introduction
10.2. From Biological to Computerized Attention
10.2.1. Biological Attention
10.2.2. Queries, Keys, and Values
10.3. Attention Pooling: Nadaraya-Watson Kernel Regression
10.4. Attention Scoring Functions
10.4.1. Masked Softmax Operation
10.4.2. Additive Attention (AA)
10.4.3. Scaled Dot-Product Attention
10.5. Multi-Head Attention (MHA)
10.6. Self-Attention Mechanism
10.6.1. Self-Attention (SA) Mechanism
10.6.2. Positional Encoding
10.7. Transformer Network
10.8. Supplementary Materials

11. Chapter 11: Autoencoder Networks
11.1. Introduction
11.2. Introducing Autoencoders
11.2.1. Definition of Autoencoder
11.2.2. Structural Design
11.3. Convolutional Autoencoder
11.4. Denoising Autoencoder
11.5. Sparse Autoencoders
11.6. Contractive Autoencoders
11.7. Variational Autoencoders
11.8. Case Study
11.9. Supplementary Materials

12. Chapter 12: Generative Adversarial Networks (GANs)
12.1. Introduction
12.2. Foundation of Generative Adversarial Networks
12.3. Deep Convolutional GAN
12.4. Conditional GAN
12.5. Supplementary Materials

13. Chapter 13: Dive into Generative Adversarial Networks
13.1. Introduction
13.2. Wasserstein GAN
13.2.1. Distance Functions
13.2.2. Distance Functions in GANs
13.2.3. Wasserstein Loss
13.3. Least-Squares GAN (LSGAN)
13.4. Auxiliary Classifier GAN (ACGAN)
13.5. Supplementary Materials

14. Chapter 14: Disentangled Representation GANs
14.1. Introduction
14.2. Disentangled Representations
14.3. InfoGAN
14.4. StackedGAN
14.5. Supplementary Materials

15. Chapter 15: Introducing Federated Learning for the Internet of Things (IoT)
15.1. Introduction
15.2. Federated Learning in the Internet of Things
15.3. Taxonomic View of Federated Learning
15.3.1. Network Structure
15.3.1.1. Centralized Federated Learning
15.3.1.2. Decentralized Federated Learning
15.3.1.3. Hierarchical Federated Learning
15.3.2. Data Partition
15.3.3. Horizontal Federated Learning
15.3.4. Vertical Federated Learning
15.3.5. Federated Transfer Learning
15.4. Open-Source Frameworks
15.4.1. TensorFlow Federated
15.4.2. FedML
15.4.3. LEAF
15.4.4. PaddleFL
15.4.5. Federated AI Technology Enabler (FATE)
15.4.6. OpenFL
15.4.7. IBM Federated Learning
15.4.8. NVIDIA FLARE
15.4.9. Flower
15.4.10. Sherpa.ai
15.5. Supplementary Materials

16. Chapter 16: Privacy-Preserved Federated Learning
16.1. Introduction
16.2. Statistical Challenges in Federated Learning
16.2.1. Non-Independent and Identically Distributed (Non-IID) Data
16.2.1.1. Class Imbalance
16.2.1.2. Distribution Imbalance
16.2.1.3. Size Imbalance
16.2.2. Model Heterogeneity
16.2.3. Block Cycles
16.3. Security Challenges in Federated Learning
16.3.1. Untargeted Attacks
16.3.2. Targeted Attacks
16.4. Privacy Challenges in Federated Learning
16.4.1. Secure Aggregation
16.4.1.1. Homomorphic Encryption (HE)
16.4.1.2. Secure Multiparty Computation
16.4.1.3. Blockchain
16.4.2. Perturbation Method
16.5. Supplementary Materials
<p><b>Mohamed Abdel-Basset, PhD,</b> is an Associate Professor in the Faculty of Computers and Informatics at Zagazig University, Egypt. He is a Senior Member of the IEEE. <p><b>Nour Moustafa, PhD,</b> is a Postgraduate Discipline Coordinator (Cyber) and Senior Lecturer in Cybersecurity and Computing at the School of Engineering and Information Technology at the University of New South Wales, UNSW Canberra, Australia. <p><b>Hossam Hawash</b> is an Assistant Lecturer in the Department of Computer Science, Faculty of Computers and Informatics at Zagazig University, Egypt.
