Postgraduate Course: Message-passing Programming (INFR11163)
School: School of Informatics
College: College of Science and Engineering
Credit level (Normal year taken): SCQF Level 11 (Postgraduate)
Availability: Available to all students
Summary: Parallel programming by definition involves co-operation between processors to solve a common problem. The programmer has to define the tasks that will be executed by the processors, and also how these tasks exchange data and synchronise with each other.
In the message-passing model the tasks are separate processes that communicate by explicitly sending each other messages. All parallel operations are performed via calls to some message-passing interface that is entirely responsible for interfacing with the physical communication network.
This course uses the de facto standard for message passing, the Message Passing Interface (MPI), which is a library callable from C, C++ or Fortran. Parallel programs written using MPI can run on almost any system from a multicore laptop up to the largest supercomputers in the world.
The course will cover the following topics:
- The message-passing model
- Message-passing parallelisation of a regular domain code
- MPI terminology
- The anatomy of send and receive (synchronous and asynchronous)
- Point-to-point message-passing examples
- Non-blocking operations
- Communicator management
- Derived datatypes (focusing mainly on array subsections)
- Practicalities / Hints and Tips
Entry Requirements (not applicable to Visiting Students)
Other requirements: Ability to program in C, C++ or Fortran.
Information for Visiting Students
Pre-requisites: Ability to program in C, C++ or Fortran.
High Demand Course?
Course Delivery Information
Academic year 2018/19, Available to all students (SV1)
Learning and Teaching activities (Further Info)
Lecture Hours: 15
Supervised Practical/Workshop/Studio Hours: 10
Feedback/Feedforward Hours: 1
Programme Level Learning and Teaching Hours: 2
Directed Learning and Independent Learning Hours
Assessment (Further Info)
Additional Information (Assessment)
This is a practical course and the associated programming exercises are at least as important as the lectures. The course is assessed by a single piece of coursework which requires students to write an MPI program to solve a given problem and produce a report covering the program's design, implementation and performance.
Feedback: Provided on assessed work and through practical classes.
No Exam Information
On completion of this course, the student will be able to:
- Describe the message-passing model in detail.
- Explain the circumstances which cause issues such as deadlock.
- Implement standard message-passing algorithms in MPI.
- Measure and comment on the performance of MPI programs.
- Design, implement and debug efficient parallel programs to solve regular-grid problems.
Using MPI: Portable Parallel Programming with the Message-Passing Interface, by William Gropp, Ewing Lusk and Anthony Skjellum.
Full reading list provided via Learn.
Graduate Attributes and Skills
- Project management skills.
- Effective written and diagrammatic communication.
- Data collection and analysis.
There are limited spaces on this course. Students not on the MSc in High Performance Computing or the MSc in High Performance Computing with Data Science should contact the course secretary to confirm availability, and that they have the required prerequisites, before being enrolled on the course.
Additional Class Delivery Information
Normal class arrangements are: two lectures per week (weeks 1-7), one practical per week (weeks 2-11), with a single lecture in week 9. These details may be subject to change based on staff/room availability in a particular session, but any such changes will be provided via the timetabling system and posted on the Learn course.
Keywords: MPI, MPP, EPCC, HPC, High Performance Computing, Parallel, Programming, Parallelism
Course organiser: Dr David Henty
Tel: (0131 6)50 5960
Course secretary: Mr Ben Morse
Tel: (0131 6)51 3398