An introduction to parallel programming / Peter S. Pacheco.
Material type: Text
- ISBN: 9780123742605
- Dewey classification: 005.2/75 22
- LC classification: QA76.642 .P29 2011
Item type | Current library | Call number | Copy number | Status | Date due | Barcode | Item holds
---|---|---|---|---|---|---|---
 | UOE Main Library Short Loan Area | QA76.642 .P29 2011 | 20150425 | Available | | 20150425 |
 | UOE Main Library Short Loan Area | QA76.642 .P29 2011 | 20150424 | Available | | 20150424 |
 | UOE Main Library Short Loan Area | QA76.642 .P29 2011 | 20150426 | Available | | 20150426 |
Browsing UOE Main Library shelves, Shelving location: Short Loan Area
- QA76.64 .J63 2007 An Introduction to Java Programming and Object-Oriented Application Development
- QA76.642 .P29 2011 An introduction to parallel programming
- QA76.642 .P29 2011 An introduction to parallel programming
- QA76.642 .P29 2011 An introduction to parallel programming
- QA76.65 D44 2006 Visual Basic 2005
- QA76.65 .G67 2001 Theory and problems of programming with visual basic
- QA76.65 .G67 2001 Theory and problems of programming with visual basic
Includes bibliographical references (p. 357-359) and index.
Machine generated contents note:
- 1 Why Parallel Computing: 1.1 Why We Need Ever-Increasing Performance; 1.2 Why We're Building Parallel Systems; 1.3 Why We Need to Write Parallel Programs; 1.4 How Do We Write Parallel Programs?; 1.5 What We'll Be Doing; 1.6 Concurrent, Parallel, Distributed; 1.7 The Rest of the Book; 1.8 A Word of Warning; 1.9 Typographical Conventions; 1.10 Summary; 1.11 Exercises
- 2 Parallel Hardware and Parallel Software: 2.1 Some Background; 2.2 Modifications to the von Neumann Model; 2.3 Parallel Hardware; 2.4 Parallel Software; 2.5 Input and Output; 2.6 Performance; 2.7 Parallel Program Design; 2.8 Writing and Running Parallel Programs; 2.9 Assumptions; 2.10 Summary; 2.11 Exercises
- 3 Distributed Memory Programming with MPI: 3.1 Getting Started; 3.2 The Trapezoidal Rule in MPI; 3.3 Dealing with I/O; 3.4 Collective Communication; 3.5 MPI Derived Datatypes; 3.7 A Parallel Sorting Algorithm; 3.8 Summary; 3.9 Exercises; 3.10 Programming Assignments
- 4 Shared Memory Programming with Pthreads: 4.1 Processes, Threads and Pthreads; 4.2 Hello, World; 4.3 Matrix-Vector Multiplication; 4.4 Critical Sections; 4.5 Busy-Waiting; 4.6 Mutexes; 4.7 Producer-Consumer Synchronization and Semaphores; 4.8 Barriers and Condition Variables; 4.9 Read-Write Locks; 4.10 Caches, Cache-Coherence, and False Sharing; 4.11 Thread-Safety; 4.12 Summary; 4.13 Exercises; 4.14 Programming Assignments
- 5 Shared Memory Programming with OpenMP: 5.1 Getting Started; 5.2 The Trapezoidal Rule; 5.3 Scope of Variables; 5.4 The Reduction Clause; 5.5 The Parallel For Directive; 5.6 More About Loops in OpenMP: Sorting; 5.7 Scheduling Loops; 5.8 Producers and Consumers; 5.9 Caches, Cache-Coherence, and False Sharing; 5.10 Thread-Safety; 5.11 Summary; 5.12 Exercises; 5.13 Programming Assignments
- 6 Parallel Program Development: 6.1 Two N-Body Solvers; 6.2 Tree Search; 6.3 A Word of Caution; 6.4 Which API?; 6.5 Summary; 6.6 Exercises; 6.7 Programming Assignments
- 7 Where to Go from Here