Details

High-Performance Computing on Complex Environments


Wiley Series on Parallel and Distributed Computing, Volume 96, 1st edition

by: Emmanuel Jeannot, Julius Zilinskas

104,99 €

Publisher: Wiley
Format: EPUB
Published: April 10, 2014
ISBN/EAN: 9781118712078
Language: English
Number of pages: 512

DRM-protected eBook; to read it you need, for example, Adobe Digital Editions and an Adobe ID.

Description

<p>With recent changes in multicore and general-purpose computing on graphics processing units, the way parallel computers are used and programmed has drastically changed, making a comprehensive study of how to use such machines, written by specialists in the domain, all the more important. The book provides recent research results in high-performance computing on complex environments, information on how to efficiently exploit heterogeneous and hierarchical architectures and distributed systems, detailed studies on the impact of applying heterogeneous computing practices to real problems, and applications ranging from remote sensing to tomography. The content spans topics such as Numerical Analysis for Heterogeneous and Multicore Systems; Optimization of Communication for High-Performance Heterogeneous and Hierarchical Platforms; Efficient Exploitation of Heterogeneous Architectures, Hybrid CPU+GPU, and Distributed Systems; Energy Awareness in High-Performance Computing; and Applications of Heterogeneous High-Performance Computing.<br /> <br /> • Covers cutting-edge research in HPC on complex environments, resulting from an international collaboration of members of the ComplexHPC network<br /> <br /> • Explains how to efficiently exploit heterogeneous and hierarchical architectures and distributed systems<br /> <br /> • Twenty-three chapters and over 100 illustrations cover domains such as numerical analysis, communication and storage, applications, GPUs and accelerators, and energy efficiency</p>
<p>Contributors xxiii</p> <p>Preface xxvii</p> <p><b>Part I Introduction 1</b></p> <p><b>1. Summary of the Open European Network for High-Performance Computing in Complex Environments 3<br /></b><i>Emmanuel Jeannot and Julius Žilinskas</i></p> <p>1.1 Introduction and Vision 4</p> <p>1.2 Scientific Organization 6</p> <p>1.2.1 Scientific Focus 6</p> <p>1.2.2 Working Groups 6</p> <p>1.3 Activities of the Project 6</p> <p>1.3.1 Spring Schools 6</p> <p>1.3.2 International Workshops 7</p> <p>1.3.3 Working Groups Meetings 7</p> <p>1.3.4 Management Committee Meetings 7</p> <p>1.3.5 Short-Term Scientific Missions 7</p> <p>1.4 Main Outcomes of the Action 7</p> <p>1.5 Contents of the Book 8</p> <p>Acknowledgment 10</p> <p><b>Part II Numerical Analysis for Heterogeneous and Multicore Systems 11</b></p> <p><b>2. On the Impact of the Heterogeneous Multicore and Many-Core Platforms on Iterative Solution Methods and Preconditioning Techniques 13<br /></b><i>Dimitar Lukarski and Maya Neytcheva</i></p> <p>2.1 Introduction 14</p> <p>2.2 General Description of Iterative Methods and Preconditioning 16</p> <p>2.2.1 Basic Iterative Methods 16</p> <p>2.2.2 Projection Methods: CG and GMRES 18</p> <p>2.3 Preconditioning Techniques 20</p> <p>2.4 Defect-Correction Technique 21</p> <p>2.5 Multigrid Method 22</p> <p>2.6 Parallelization of Iterative Methods 22</p> <p>2.7 Heterogeneous Systems 23</p> <p>2.7.1 Heterogeneous Computing 24</p> <p>2.7.2 Algorithm Characteristics and Resource Utilization 25</p> <p>2.7.3 Exposing Parallelism 26</p> <p>2.7.4 Heterogeneity in Matrix Computation 26</p> <p>2.7.5 Setup of Heterogeneous Iterative Solvers 27</p> <p>2.8 Maintenance and Portability 29</p> <p>2.9 Conclusion 30</p> <p>Acknowledgments 31</p> <p>References 31</p> <p><b>3. Efficient Numerical Solution of 2D Diffusion Equation on Multicore Computers 33<br /></b><i>Matjaž 
Depolli, Gregor Kosec, and Roman Trobec</i></p> <p>3.1 Introduction 34</p> <p>3.2 Test Case 35</p> <p>3.2.1 Governing Equations 35</p> <p>3.2.2 Solution Procedure 36</p> <p>3.3 Parallel Implementation 39</p> <p>3.3.1 Intel PCM Library 39</p> <p>3.3.2 OpenMP 40</p> <p>3.4 Results 41</p> <p>3.4.1 Results of Numerical Integration 41</p> <p>3.4.2 Parallel Efficiency 42</p> <p>3.5 Discussion 45</p> <p>3.6 Conclusion 47</p> <p>Acknowledgment 47</p> <p>References 47</p> <p><b>4. Parallel Algorithms for Parabolic Problems on Graphs in Neuroscience 51<br /></b><i>Natalija Tumanova and Raimondas Ciegis</i></p> <p>4.1 Introduction 51</p> <p>4.2 Formulation of the Discrete Model 53</p> <p>4.2.1 The <i>𝜃</i>-Implicit Discrete Scheme 55</p> <p>4.2.2 The Predictor–Corrector Algorithm I 57</p> <p>4.2.3 The Predictor–Corrector Algorithm II 58</p> <p>4.3 Parallel Algorithms 59</p> <p>4.3.1 Parallel <i>𝜃</i>-Implicit Algorithm 59</p> <p>4.3.2 Parallel Predictor–Corrector Algorithm I 62</p> <p>4.3.3 Parallel Predictor–Corrector Algorithm II 63</p> <p>4.4 Computational Results 63</p> <p>4.4.1 Experimental Comparison of Predictor–Corrector Algorithms 66</p> <p>4.4.2 Numerical Experiment of Neuron Excitation 68</p> <p>4.5 Conclusions 69</p> <p>Acknowledgments 70</p> <p>References 70</p> <p><b>Part III Communication and Storage Considerations in High-Performance Computing 73</b></p> <p><b>5. 
An Overview of Topology Mapping Algorithms and Techniques in High-Performance Computing 75<br /></b><i>Torsten Hoefler, Emmanuel Jeannot, and Guillaume Mercier</i></p> <p>5.1 Introduction 76</p> <p>5.2 General Overview 76</p> <p>5.2.1 A Key to Scalability: Data Locality 77</p> <p>5.2.2 Data Locality Management in Parallel Programming Models 77</p> <p>5.2.3 Virtual Topology: Definition and Characteristics 78</p> <p>5.2.4 Understanding the Hardware 79</p> <p>5.3 Formalization of the Problem 79</p> <p>5.4 Algorithmic Strategies for Topology Mapping 81</p> <p>5.4.1 Greedy Algorithm Variants 81</p> <p>5.4.2 Graph Partitioning 82</p> <p>5.4.3 Schemes Based on Graph Similarity 82</p> <p>5.4.4 Schemes Based on Subgraph Isomorphism 82</p> <p>5.5 Mapping Enforcement Techniques 82</p> <p>5.5.1 Resource Binding 83</p> <p>5.5.2 Rank Reordering 83</p> <p>5.5.3 Other Techniques 84</p> <p>5.6 Survey of Solutions 85</p> <p>5.6.1 Algorithmic Solutions 85</p> <p>5.6.2 Existing Implementations 85</p> <p>5.7 Conclusion and Open Problems 89</p> <p>Acknowledgment 90</p> <p>References 90</p> <p><b>6. 
Optimization of Collective Communication for Heterogeneous HPC Platforms 95<br /></b><i>Kiril Dichev and Alexey Lastovetsky</i></p> <p>6.1 Introduction 95</p> <p>6.2 Overview of Optimized Collectives and Topology-Aware Collectives 97</p> <p>6.3 Optimizations of Collectives on Homogeneous Clusters 98</p> <p>6.4 Heterogeneous Networks 99</p> <p>6.4.1 Comparison to Homogeneous Clusters 99</p> <p>6.5 Topology- and Performance-Aware Collectives 100</p> <p>6.6 Topology as Input 101</p> <p>6.7 Performance as Input 102</p> <p>6.7.1 Homogeneous Performance Models 103</p> <p>6.7.2 Heterogeneous Performance Models 105</p> <p>6.7.3 Estimation of Parameters of Heterogeneous Performance Models 106</p> <p>6.7.4 Other Performance Models 106</p> <p>6.8 Non-MPI Collective Algorithms for Heterogeneous Networks 106</p> <p>6.8.1 Optimal Solutions with Multiple Spanning Trees 107</p> <p>6.8.2 Adaptive Algorithms for Efficient Large-Message Transfer 107</p> <p>6.8.3 Network Models Inspired by BitTorrent 108</p> <p>6.9 Conclusion 111</p> <p>Acknowledgments 111</p> <p>References 111</p> <p><b>7. Effective Data Access Patterns on Massively Parallel Processors 115<br /></b><i>Gabriele Capannini, Ranieri Baraglia, Fabrizio Silvestri, and Franco Maria Nardini</i></p> <p>7.1 Introduction 115</p> <p>7.2 Architectural Details 116</p> <p>7.3 <i>K</i>-Model 117</p> <p>7.3.1 The Architecture 117</p> <p>7.3.2 Cost and Complexity Evaluation 118</p> <p>7.3.3 Efficiency Evaluation 119</p> <p>7.4 Parallel Prefix Sum 120</p> <p>7.4.1 Experiments 125</p> <p>7.5 Bitonic Sorting Networks 126</p> <p>7.5.1 Experiments 131</p> <p>7.6 Final Remarks 132</p> <p>Acknowledgments 133</p> <p>References 133</p> <p><b>8. 
Scalable Storage I/O Software for Blue Gene Architectures 135<br /></b><i>Florin Isaila, Javier Garcia, and Jesús Carretero</i></p> <p>8.1 Introduction 135</p> <p>8.2 Blue Gene System Overview 136</p> <p>8.2.1 Blue Gene Architecture 136</p> <p>8.2.2 Operating System Architecture 136</p> <p>8.3 Design and Implementation 138</p> <p>8.3.1 The Client Module 139</p> <p>8.3.2 The I/O Module 141</p> <p>8.4 Conclusions and Future Work 142</p> <p>Acknowledgments 142</p> <p>References 142</p> <p><b>Part IV Efficient Exploitation of Heterogeneous Architectures 145</b></p> <p><b>9. Fair Resource Sharing for Dynamic Scheduling of Workflows on Heterogeneous Systems 147<br /></b><i>Hamid Arabnejad, Jorge G. Barbosa, and Frédéric Suter</i></p> <p>9.1 Introduction 148</p> <p>9.1.1 Application Model 148</p> <p>9.1.2 System Model 151</p> <p>9.1.3 Performance Metrics 152</p> <p>9.2 Concurrent Workflow Scheduling 153</p> <p>9.2.1 Offline Scheduling of Concurrent Workflows 154</p> <p>9.2.2 Online Scheduling of Concurrent Workflows 155</p> <p>9.3 Experimental Results and Discussion 160</p> <p>9.3.1 DAG Structure 160</p> <p>9.3.2 Simulated Platforms 160</p> <p>9.3.3 Results and Discussion 162</p> <p>9.4 Conclusions 165</p> <p>Acknowledgments 166</p> <p>References 166</p> <p><b>10. Systematic Mapping of Reed–Solomon Erasure Codes on Heterogeneous Multicore Architectures 169<br /></b><i>Roman Wyrzykowski, Marcin Wozniak, and Lukasz Kuczynski</i></p> <p>10.1 Introduction 169</p> <p>10.2 Related Works 171</p> <p>10.3 Reed–Solomon Codes and Linear Algebra Algorithms 172</p> <p>10.4 Mapping Reed–Solomon Codes on Cell/B.E. Architecture 173</p> <p>10.4.1 Cell/B.E. 
Architecture 173</p> <p>10.4.2 Basic Assumptions for Mapping 174</p> <p>10.4.3 Vectorization Algorithm and Increasing its Efficiency 175</p> <p>10.4.4 Performance Results 177</p> <p>10.5 Mapping Reed–Solomon Codes on Multicore GPU Architectures 178</p> <p>10.5.1 Parallelization of Reed–Solomon Codes on GPU Architectures 178</p> <p>10.5.2 Organization of GPU Threads 180</p> <p>10.6 Methods of Increasing the Algorithm Performance on GPUs 181</p> <p>10.6.1 Basic Modifications 181</p> <p>10.6.2 Stream Processing 182</p> <p>10.6.3 Using Shared Memory 184</p> <p>10.7 GPU Performance Evaluation 185</p> <p>10.7.1 Experimental Results 185</p> <p>10.7.2 Performance Analysis using the Roofline Model 187</p> <p>10.8 Conclusions and Future Works 190</p> <p>Acknowledgments 191</p> <p>References 191</p> <p><b>11. Heterogeneous Parallel Computing Platforms and Tools for Compute-Intensive Algorithms: A Case Study 193<br /></b><i>Daniele D’Agostino, Andrea Clematis, and Emanuele Danovaro</i></p> <p>11.1 Introduction 194</p> <p>11.2 A Low-Cost Heterogeneous Computing Environment 196</p> <p>11.2.1 Adopted Computing Environment 199</p> <p>11.3 First Case Study: The <i>N</i>-Body Problem 200</p> <p>11.3.1 The Sequential <i>N</i>-Body Algorithm 201</p> <p>11.3.2 The Parallel <i>N</i>-Body Algorithm for Multicore Architectures 203</p> <p>11.3.3 The Parallel <i>N</i>-Body Algorithm for CUDA Architectures 204</p> <p>11.4 Second Case Study: The Convolution Algorithm 206</p> <p>11.4.1 The Sequential Convolver Algorithm 206</p> <p>11.4.2 The Parallel Convolver Algorithm for Multicore Architectures 207</p> <p>11.4.3 The Parallel Convolver Algorithm for GPU Architectures 208</p> <p>11.5 Conclusions 211</p> <p>Acknowledgments 212</p> <p>References 212</p> <p><b>12. Efficient Application of Hybrid Parallelism in Electromagnetism Problems 215<br /></b><i>Alejandro Álvarez-Melcón, Fernando D. 
Quesada, Domingo Giménez, Carlos Pérez-Alcaraz, José-Ginés Picón, and Tomás Ramírez</i></p> <p>12.1 Introduction 215</p> <p>12.2 Computation of Green’s functions in Hybrid Systems 216</p> <p>12.2.1 Computation in a Heterogeneous Cluster 217</p> <p>12.2.2 Experiments 218</p> <p>12.3 Parallelization in Numa Systems of a Volume Integral Equation Technique 222</p> <p>12.3.1 Experiments 222</p> <p>12.4 Autotuning Parallel Codes 226</p> <p>12.4.1 Empirical Autotuning 227</p> <p>12.4.2 Modeling the Linear Algebra Routines 229</p> <p>12.5 Conclusions and Future Research 230</p> <p>Acknowledgments 231</p> <p>References 232</p> <p><b>Part V CPU + GPU Coprocessing 235</b></p> <p><b>13. Design and Optimization of Scientific Applications for Highly Heterogeneous and Hierarchical HPC Platforms Using Functional Computation Performance Models 237<br /></b><i>David Clarke, Aleksandar Ilic, Alexey Lastovetsky, Vladimir Rychkov, Leonel Sousa, and Ziming Zhong</i></p> <p>13.1 Introduction 238</p> <p>13.2 Related Work 241</p> <p>13.3 Data Partitioning Based on Functional Performance Model 243</p> <p>13.4 Example Application: Heterogeneous Parallel Matrix Multiplication 245</p> <p>13.5 Performance Measurement on CPUs/GPUs System 247</p> <p>13.6 Functional Performance Models of Multiple Cores and GPUs 248</p> <p>13.7 FPM-Based Data Partitioning on CPUs/GPUs System 250</p> <p>13.8 Efficient Building of Functional Performance Models 251</p> <p>13.9 FPM-Based Data Partitioning on Hierarchical Platforms 253</p> <p>13.10 Conclusion 257</p> <p>Acknowledgments 259</p> <p>References 259</p> <p><b>14. 
Efficient Multilevel Load Balancing on Heterogeneous CPU + GPU Systems 261<br /></b><i>Aleksandar Ilic and Leonel Sousa</i></p> <p>14.1 Introduction: Heterogeneous CPU + GPU Systems 262</p> <p>14.1.1 Open Problems and Specific Contributions 263</p> <p>14.2 Background and Related Work 265</p> <p>14.2.1 Divisible Load Scheduling in Distributed CPU-Only Systems 265</p> <p>14.2.2 Scheduling in Multicore CPU and Multi-GPU Environments 268</p> <p>14.3 Load Balancing Algorithms for Heterogeneous CPU + GPU Systems 269</p> <p>14.3.1 Multilevel Simultaneous Load Balancing Algorithm 270</p> <p>14.3.2 Algorithm for Multi-Installment Processing with Multidistributions 273</p> <p>14.4 Experimental Results 275</p> <p>14.4.1 MSLBA Evaluation: Dense Matrix Multiplication Case Study 275</p> <p>14.4.2 AMPMD Evaluation: 2D FFT Case Study 277</p> <p>14.5 Conclusions 279</p> <p>Acknowledgments 280</p> <p>References 280</p> <p><b>15. The All-Pair Shortest-Path Problem in Shared-Memory Heterogeneous Systems 283<br /></b><i>Hector Ortega-Arranz, Yuri Torres, Diego R. 
Llanos, and Arturo Gonzalez-Escribano</i></p> <p>15.1 Introduction 283</p> <p>15.2 Algorithmic Overview 285</p> <p>15.2.1 Graph Theory Notation 285</p> <p>15.2.2 Dijkstra’s Algorithm 286</p> <p>15.2.3 Parallel Version of Dijkstra’s Algorithm 287</p> <p>15.3 CUDA Overview 287</p> <p>15.4 Heterogeneous Systems and Load Balancing 288</p> <p>15.5 Parallel Solutions to The APSP 289</p> <p>15.5.1 GPU Implementation 289</p> <p>15.5.2 Heterogeneous Implementation 290</p> <p>15.6 Experimental Setup 291</p> <p>15.6.1 Methodology 291</p> <p>15.6.2 Target Architectures 292</p> <p>15.6.3 Input Set Characteristics 292</p> <p>15.6.4 Load-Balancing Techniques Evaluated 292</p> <p>15.7 Experimental Results 293</p> <p>15.7.1 Complete APSP 293</p> <p>15.7.2 512-Source-Node-to-All Shortest Path 295</p> <p>15.7.3 Experimental Conclusions 296</p> <p>15.8 Conclusions 297</p> <p>Acknowledgments 297</p> <p>References 297</p> <p><b>Part VI Efficient Exploitation of Distributed Systems 301</b></p> <p><b>16. Resource Management for HPC on the Cloud 303<br /></b><i>Marc E. Frincu and Dana Petcu</i></p> <p>16.1 Introduction 303</p> <p>16.2 On the Type of Applications for HPC and HPC2 305</p> <p>16.3 HPC on the Cloud 306</p> <p>16.3.1 General PaaS Solutions 306</p> <p>16.3.2 On-Demand Platforms for HPC 310</p> <p>16.4 Scheduling Algorithms for HPC2 311</p> <p>16.5 Toward an Autonomous Scheduling Framework 312</p> <p>16.5.1 Autonomous Framework for RMS 313</p> <p>16.5.2 Self-Management 315</p> <p>16.5.3 Use Cases 317</p> <p>16.6 Conclusions 319</p> <p>Acknowledgment 320</p> <p>References 320</p> <p><b>17. 
Resource Discovery in Large-Scale Grid Systems 323<br /></b><i>Konstantinos Karaoglanoglou and Helen Karatza</i></p> <p>17.1 Introduction and Background 323</p> <p>17.1.1 Introduction 323</p> <p>17.1.2 Resource Discovery in Grids 324</p> <p>17.1.3 Background 325</p> <p>17.2 The Semantic Communities Approach 325</p> <p>17.2.1 Grid Resource Discovery Using Semantic Communities 325</p> <p>17.2.2 Grid Resource Discovery Based on Semantically Linked Virtual Organizations 327</p> <p>17.3 The P2P Approach 329</p> <p>17.3.1 On Fully Decentralized Resource Discovery in Grid Environments Using a P2P Architecture 329</p> <p>17.3.2 P2P Protocols for Resource Discovery in the Grid 330</p> <p>17.4 The Grid-Routing Transferring Approach 333</p> <p>17.4.1 Resource Discovery Based on Matchmaking Routers 333</p> <p>17.4.2 Acquiring Knowledge in a Large-Scale Grid System 335</p> <p>17.5 Conclusions 337</p> <p>Acknowledgment 338</p> <p>References 338</p> <p><b>Part VII Energy Awareness in High-Performance Computing 341</b></p> <p><b>18. Energy-Aware Approaches for HPC Systems 343<br /></b><i>Robert Basmadjian, Georges Da Costa, Ghislain Landry Tsafack Chetsa, Laurent Lefevre, Ariel Oleksiak, and Jean-Marc Pierson</i></p> <p>18.1 Introduction 344</p> <p>18.2 Power Consumption of Servers 345</p> <p>18.2.1 Server Modeling 346</p> <p>18.2.2 Power Prediction Models 347</p> <p>18.3 Classification and Energy Profiles of HPC Applications 354</p> <p>18.3.1 Phase Detection 356</p> <p>18.3.2 Phase Identification 358</p> <p>18.4 Policies and Leverages 359</p> <p>18.5 Conclusion 360</p> <p>Acknowledgments 361</p> <p>References 361</p> <p><b>19. Strategies for Increased Energy Awareness in Cloud Federations 365<br /></b><i>Gabor Kecskemeti, Attila Kertesz, Attila Cs. 
Marosi, and Zsolt Nemeth</i></p> <p>19.1 Introduction 365</p> <p>19.2 Related Work 367</p> <p>19.3 Scenarios 369</p> <p>19.3.1 Increased Energy Awareness Across Multiple Data Centers within a Single Administrative Domain 369</p> <p>19.3.2 Energy Considerations in Commercial Cloud Federations 372</p> <p>19.3.3 Reduced Energy Footprint of Academic Cloud Federations 374</p> <p>19.4 Energy-Aware Cloud Federations 374</p> <p>19.4.1 Availability of Energy-Consumption-Related Information 375</p> <p>19.4.2 Service Call Scheduling at the Meta-Brokering Level of FCM 376</p> <p>19.4.3 Service Call Scheduling and VM Management at the Cloud-Brokering Level of FCM 377</p> <p>19.5 Conclusions 379</p> <p>Acknowledgments 380</p> <p>References 380</p> <p><b>20. Enabling Network Security in HPC Systems Using Heterogeneous CMPs 383<br /></b><i>Ozcan Ozturk and Suleyman Tosun</i></p> <p>20.1 Introduction 384</p> <p>20.2 Related Work 386</p> <p>20.3 Overview of Our Approach 387</p> <p>20.3.1 Heterogeneous CMP Architecture 387</p> <p>20.3.2 Network Security Application Behavior 388</p> <p>20.3.3 High-Level View 389</p> <p>20.4 Heterogeneous CMP Design for Network Security Processors 390</p> <p>20.4.1 Task Assignment 390</p> <p>20.4.2 ILP Formulation 391</p> <p>20.4.3 Discussion 393</p> <p>20.5 Experimental Evaluation 394</p> <p>20.5.1 Setup 394</p> <p>20.5.2 Results 395</p> <p>20.6 Concluding Remarks 397</p> <p>Acknowledgments 397</p> <p>References 397</p> <p><b>Part VIII Applications of Heterogeneous High-Performance Computing 401</b></p> <p><b>21. Toward a High-Performance Distributed CBIR System for Hyperspectral Remote Sensing Data: A Case Study in Jungle Computing 403<br /></b><i>Timo van Kessel, Niels Drost, Jason Maassen, Henri E. Bal, Frank J. Seinstra, and Antonio J. 
Plaza</i></p> <p>21.1 Introduction 404</p> <p>21.2 CBIR For Hyperspectral Imaging Data 407</p> <p>21.2.1 Spectral Unmixing 407</p> <p>21.2.2 Proposed CBIR System 409</p> <p>21.3 Jungle Computing 410</p> <p>21.3.1 Jungle Computing: Requirements 411</p> <p>21.4 IBIS and Constellation 412</p> <p>21.5 System Design and Implementation 415</p> <p>21.5.1 Endmember Extraction 418</p> <p>21.5.2 Query Execution 418</p> <p>21.5.3 Equi-Kernels 419</p> <p>21.5.4 Matchmaking 420</p> <p>21.6 Evaluation 420</p> <p>21.6.1 Performance Evaluation 421</p> <p>21.7 Conclusions 426</p> <p>Acknowledgments 426</p> <p>References 426</p> <p><b>22. Taking Advantage of Heterogeneous Platforms in Image and Video Processing 429<br /></b><i>Sidi A. Mahmoudi, Erencan Ozkan, Pierre Manneback, and Suleyman Tosun</i></p> <p>22.1 Introduction 430</p> <p>22.2 Related Work 431</p> <p>22.2.1 Image Processing on GPU 431</p> <p>22.2.2 Video Processing on GPU 432</p> <p>22.2.3 Contribution 433</p> <p>22.3 Parallel Image Processing on GPU 433</p> <p>22.3.1 Development Scheme for Image Processing on GPU 433</p> <p>22.3.2 GPU Optimization 434</p> <p>22.3.3 GPU Implementation of Edge and Corner Detection 434</p> <p>22.3.4 Performance Analysis and Evaluation 434</p> <p>22.4 Image Processing on Heterogeneous Architectures 437</p> <p>22.4.1 Development Scheme for Multiple Image Processing 437</p> <p>22.4.2 Task Scheduling within Heterogeneous Architectures 438</p> <p>22.4.3 Optimization Within Heterogeneous Architectures 438</p> <p>22.5 Video Processing on GPU 438</p> <p>22.5.1 Development Scheme for Video Processing on GPU 439</p> <p>22.5.2 GPU Optimizations 440</p> <p>22.5.3 GPU Implementations 440</p> <p>22.5.4 GPU-Based Silhouette Extraction 440</p> <p>22.5.5 GPU-Based Optical Flow Estimation 440</p> <p>22.5.6 Result Analysis 443</p> <p>22.6 Experimental Results 444</p> <p>22.6.1 Heterogeneous Computing for Vertebra Segmentation 444</p> <p>22.6.2 GPU Computing for Motion Detection Using a Moving Camera 445</p> 
<p>22.7 Conclusion 447</p> <p>Acknowledgment 448</p> <p>References 448</p> <p><b>23. Real-Time Tomographic Reconstruction Through CPU + GPU Coprocessing 451<br /></b><i>José Ignacio Agulleiro, Francisco Vazquez, Ester M. Garzon, and Jose J. Fernandez</i></p> <p>23.1 Introduction 452</p> <p>23.2 Tomographic Reconstruction 453</p> <p>23.3 Optimization of Tomographic Reconstruction for CPUs and for GPUs 455</p> <p>23.4 Hybrid CPU + GPU Tomographic Reconstruction 457</p> <p>23.5 Results 459</p> <p>23.6 Discussion and Conclusion 461</p> <p>Acknowledgments 463</p> <p>References 463</p> <p>Index 467</p>
<p><b>EMMANUEL JEANNOT</b> is a Senior Research Scientist at INRIA. He received his PhD in computer science from École Normale Supérieure de Lyon. His main research interests are process placement, scheduling for heterogeneous environments and grids, data redistribution, algorithms, and models for parallel machines.</p> <p><b>JULIUS ŽILINSKAS</b> is a Principal Researcher and Head of Department at Vilnius University, Lithuania. His research interests include parallel computing, optimization, data analysis, and visualization.</p>
<p><b>A COMPREHENSIVE STUDY ON THE CHALLENGES AND RESEARCH RESULTS IN HIGH-PERFORMANCE COMPUTING, WITH A FOCUS ON COMPLEX ENVIRONMENTS</b></p>

You may also be interested in these products:

Symbian OS Explained
by: Jo Stichbury
PDF ebook
32,99 €

Symbian OS Internals
by: Jane Sales
PDF ebook
56,99 €

Parallel Combinatorial Optimization
by: El-Ghazali Talbi
PDF ebook
120,99 €