Postgraduate Course: Message Passing Programming (PGPH11007)
School of Physics and Astronomy
College of Science and Engineering
Not available to visiting students
Credit level (Normal year taken): SCQF Level 11 (Postgraduate)
Home subject area: Postgraduate (School of Physics and Astronomy)
The world's largest supercomputers are used almost exclusively to run applications which are parallelised using Message Passing. This course covers all the basic knowledge required to write parallel programs using this programming model, and is directly applicable to almost every parallel computer architecture.
Parallel programming by definition involves co-operation between many separate tasks to solve a common problem. The programmer has to define the tasks that will be executed by each of the parallel processors, and also how these tasks are to synchronise and exchange data with one another. In the message-passing model, each task is a separate operating system process; these processes communicate and synchronise by explicitly sending each other messages. All these parallel operations are performed via explicit calls to some message-passing library that is entirely responsible for interfacing with the physical communication network linking the processors together.
This course uses the de facto standard for message-passing libraries: the Message Passing Interface (MPI). The syllabus covers: the message-passing model; compiling MPI programs; point-to-point communication; non-blocking operations; derived datatypes; virtual topologies; collective communication; high-level MPI design issues. Each topic is illustrated with a small practical programming example. At the end of the course there is a more substantial Case Study exercise in parallel image processing which uses all the techniques covered in the lectures.
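As a taste of the point-to-point communication covered early in the syllabus, the following sketch (assuming an MPI installation, compiled with `mpicc` and launched with `mpirun -np 2`) has rank 0 send a single integer to rank 1 using the blocking `MPI_Send`/`MPI_Recv` pair:

```c
/* Sketch of MPI point-to-point communication: rank 0 sends an
 * integer to rank 1, which receives it explicitly. Requires an MPI
 * installation; run with at least two processes. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 processes\n");
        MPI_Finalize();
        return 1;
    }

    if (rank == 0) {
        int payload = 42;
        /* blocking send: (buffer, count, datatype, dest, tag, comm) */
        MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int payload;
        MPI_Status status;
        /* blocking receive, matched against the sender's rank and tag */
        MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d from rank 0\n", payload);
    }

    MPI_Finalize();
    return 0;
}
```

The same pattern, extended with non-blocking operations, derived datatypes and collectives, underpins the Case Study exercise at the end of the course.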
It is RECOMMENDED that students also take Fundamental Concepts of High Performance Computing (PGPH11014).
Only postgraduates or final year MPhys students with suitable computational background, subject to space restrictions and agreement with relevant Programme Coordinator.
Course Delivery Information
Delivery period: 2010/11 Semester 1, Not available to visiting students (SS1)
WebCT enabled: Yes
King's Buildings, Lecture, weeks 1-11, 14:00 - 17:00
First class information not currently available
No Exam Information
Summary of Intended Learning Outcomes
On completion of this course students should be able to:
- Understand the message-passing model in detail.
- Implement standard message-passing algorithms in MPI.
- Debug simple MPI codes.
- Measure and comment on the performance of MPI codes.
- Design and implement efficient parallel programs to solve regular-grid problems.
Dr Judy Hardy
Tel: (0131 6)50 6716
Yuhua Lei
Tel: (0131 6) 517067
copyright 2011 The University of Edinburgh - 31 January 2011 8:09 am