Details

Task Scheduling for Parallel Systems


Wiley Series on Parallel and Distributed Computing, Volume 60, 1st edition

By: Oliver Sinnen

107,99 €

Publisher: Wiley
Format: PDF
Published: May 18, 2007
ISBN/EAN: 9780470121160
Language: English
Pages: 320

DRM-protected eBook. Reader software such as Adobe Digital Editions and an Adobe ID are required to read it.

Description

<b>A new model for task scheduling that dramatically improves the efficiency of parallel systems</b> <p>Task scheduling for parallel systems can become a quagmire of heuristics, models, and methods that have been developed over the past decades. The author of this innovative text cuts through the confusion and complexity by presenting a consistent and comprehensive theoretical framework along with realistic parallel system models. These new models, based on an investigation of the concepts and principles underlying task scheduling, take into account heterogeneity, contention for communication resources, and the involvement of the processor in communications.</p> <p>For readers who may be new to task scheduling, the first chapters are essential. They serve as an excellent introduction to programming parallel systems, and they place task scheduling within the context of the program parallelization process. The author then reviews the basics of graph theory, discussing the major graph models used to represent parallel programs. Next, the author introduces his task scheduling framework. He carefully explains the theoretical background of this framework and provides several examples to enable readers to fully understand how it greatly simplifies and, at the same time, enhances the ability to schedule.</p> <p>The second half of the text examines both basic and advanced scheduling techniques, offering readers a thorough understanding of the principles underlying scheduling algorithms. The final two chapters address communication contention in scheduling and processor involvement in communications.</p> <p>Each chapter features exercises that help readers put their new skills into practice. An extensive bibliography leads to additional information for further research. 
Finally, the use of figures and examples helps readers better visualize and understand complex concepts and processes.</p> <p>Researchers and students in distributed and parallel computer systems will find that this text dramatically improves their ability to schedule tasks accurately and efficiently.</p>
Preface. <p>Acknowledgments.</p>
<p><b>1. Introduction.</b></p> <p>1.1 Overview.</p> <p>1.2 Organization.</p>
<p><b>2. Parallel Systems and Programming.</b></p> <p>2.1 Parallel Architectures.</p> <p>2.1.1 Flynn’s Taxonomy.</p> <p>2.1.2 Memory Architectures.</p> <p>2.1.3 Programming Paradigms and Models.</p> <p>2.2 Communication Networks.</p> <p>2.2.1 Static Networks.</p> <p>2.2.2 Dynamic Networks.</p> <p>2.3 Parallelization.</p> <p>2.4 Subtask Decomposition.</p> <p>2.4.1 Concurrency and Granularity.</p> <p>2.4.2 Decomposition Techniques.</p> <p>2.4.3 Computation Type and Program Formulation.</p> <p>2.4.4 Parallelization Techniques.</p> <p>2.4.5 Target Parallel System.</p> <p>2.5 Dependence Analysis.</p> <p>2.5.1 Data Dependence.</p> <p>2.5.2 Data Dependence in Loops.</p> <p>2.5.3 Control Dependence.</p> <p>2.6 Concluding Remarks.</p> <p>2.7 Exercises.</p>
<p><b>3. Graph Representations.</b></p> <p>3.1 Basic Graph Concepts.</p> <p>3.1.1 Computer Representation of Graphs.</p> <p>3.1.2 Elementary Graph Algorithms.</p> <p>3.2 Graph as a Program Model.</p> <p>3.2.1 Computation and Communication Costs.</p> <p>3.2.2 Comparison Criteria.</p> <p>3.3 Dependence Graph (DG).</p> <p>3.3.1 Iteration Dependence Graph.</p> <p>3.3.2 Summary.</p> <p>3.4 Flow Graph (FG).</p> <p>3.4.1 Data-Driven Execution Model.</p> <p>3.4.2 Summary.</p> <p>3.5 Task Graph (DAG).</p> <p>3.5.1 Graph Transformations and Conversions.</p> <p>3.5.2 Motivations and Limitations.</p> <p>3.5.3 Summary.</p> <p>3.6 Concluding Remarks.</p> <p>3.7 Exercises.</p>
<p><b>4. Task Scheduling.</b></p> <p>4.1 Fundamentals.</p> <p>4.2 With Communication Costs.</p> <p>4.2.1 Schedule Example.</p> <p>4.2.2 Scheduling Complexity.</p> <p>4.3 Without Communication Costs.</p> <p>4.3.1 Schedule Example.</p> <p>4.3.2 Scheduling Complexity.</p> <p>4.4 Task Graph Properties.</p> <p>4.4.1 Critical Path.</p> <p>4.4.2 Node Levels.</p> <p>4.4.3 Granularity.</p> <p>4.5 Concluding Remarks.</p> <p>4.6 Exercises.</p>
<p><b>5. Fundamental Heuristics.</b></p> <p>5.1 List Scheduling.</p> <p>5.1.1 Start Time Minimization.</p> <p>5.1.2 With Dynamic Priorities.</p> <p>5.1.3 Node Priorities.</p> <p>5.2 Scheduling with Given Processor Allocation.</p> <p>5.2.1 Phase Two.</p> <p>5.3 Clustering.</p> <p>5.3.1 Clustering Algorithms.</p> <p>5.3.2 Linear Clustering.</p> <p>5.3.3 Single Edge Clustering.</p> <p>5.3.4 List Scheduling as Clustering.</p> <p>5.3.5 Other Algorithms.</p> <p>5.4 From Clustering to Scheduling.</p> <p>5.4.1 Assigning Clusters to Processors.</p> <p>5.4.2 Scheduling on Processors.</p> <p>5.5 Concluding Remarks.</p> <p>5.6 Exercises.</p>
<p><b>6. Advanced Task Scheduling.</b></p> <p>6.1 Insertion Technique.</p> <p>6.1.1 List Scheduling with Node Insertion.</p> <p>6.2 Node Duplication.</p> <p>6.2.1 Node Duplication Heuristics.</p> <p>6.3 Heterogeneous Processors.</p> <p>6.3.1 Scheduling.</p> <p>6.4 Complexity Results.</p> <p>6.4.1 α|β|γ Classification.</p> <p>6.4.2 Without Communication Costs.</p> <p>6.4.3 With Communication Costs.</p> <p>6.4.4 With Node Duplication.</p> <p>6.4.5 Heterogeneous Processors.</p> <p>6.5 Genetic Algorithms.</p> <p>6.5.1 Basics.</p> <p>6.5.2 Chromosomes.</p> <p>6.5.3 Reproduction.</p> <p>6.5.4 Selection, Complexity, and Flexibility.</p> <p>6.6 Concluding Remarks.</p> <p>6.7 Exercises.</p>
<p><b>7. Communication Contention in Scheduling.</b></p> <p>7.1 Contention Awareness.</p> <p>7.1.1 End-Point Contention.</p> <p>7.1.2 Network Contention.</p> <p>7.1.3 Integrating End-Point and Network Contention.</p> <p>7.2 Network Model.</p> <p>7.2.1 Topology Graph.</p> <p>7.2.2 Routing.</p> <p>7.2.3 Scheduling Network Model.</p> <p>7.3 Edge Scheduling.</p> <p>7.3.1 Scheduling Edge on Route.</p> <p>7.3.2 The Edge Scheduling.</p> <p>7.4 Contention Aware Scheduling.</p> <p>7.4.1 Basics.</p> <p>7.4.2 NP-Completeness.</p> <p>7.5 Heuristics.</p> <p>7.5.1 List Scheduling.</p> <p>7.5.2 Priority Schemes—Task Graph Properties.</p> <p>7.5.3 Clustering.</p> <p>7.5.4 Experimental Results.</p> <p>7.6 Concluding Remarks.</p> <p>7.7 Exercises.</p>
<p><b>8. Processor Involvement in Communication.</b></p> <p>8.1 Processor Involvement—Types and Characteristics.</p> <p>8.1.1 Involvement Types.</p> <p>8.1.2 Involvement Characteristics.</p> <p>8.1.3 Relation to LogP and Its Variants.</p> <p>8.2 Involvement Scheduling.</p> <p>8.2.1 Scheduling Edges on the Processors.</p> <p>8.2.2 Node and Edge Scheduling.</p> <p>8.2.3 Task Graph.</p> <p>8.2.4 NP-Completeness.</p> <p>8.3 Algorithmic Approaches.</p> <p>8.3.1 Direct Scheduling.</p> <p>8.3.2 Scheduling with Given Processor Allocation.</p> <p>8.4 Heuristics.</p> <p>8.4.1 List Scheduling.</p> <p>8.4.2 Two-Phase Heuristics.</p> <p>8.4.3 Experimental Results.</p> <p>8.5 Concluding Remarks.</p> <p>8.6 Exercises.</p>
<p>Bibliography.</p> <p>Author Index.</p> <p>Subject Index.</p>
"The theoretical framework presented and the realistic parallel computing issues make reading this book worthwhile." (<i>Computing Reviews.com</i>, October 1, 2007)
<b>Oliver Sinnen</b>, PhD, is a senior lecturer in the Department of Electrical and Computer Engineering at the University of Auckland, New Zealand.

You may also be interested in these products:

Bandwidth Efficient Coding
By: John B. Anderson
EPUB ebook
114,99 €

Digital Communications with Emphasis on Data Modems
By: Richard W. Middlestead
PDF ebook
171,99 €

Bandwidth Efficient Coding
By: John B. Anderson
PDF ebook
114,99 €