Postgraduate Course: Parallel Design Patterns (INFR11168)
|School of Informatics
|College of Science and Engineering
|Credit level (Normal year taken)
|SCQF Level 11 (Postgraduate)
|Available to all students
|This course has been replaced by EPCC11010 Parallel Design Patterns
If you were given a serial problem, conceptually, how would you go about splitting it up into many different parts that could run concurrently on the latest supercomputers?
The good news is that we don't need to reinvent the wheel. Instead, there are many different approaches (called parallel patterns) that have been developed by the community and can be used in a variety of situations. These patterns apply equally well whether our problem is computational or data-driven.
Understanding and being able to apply these patterns also helps in getting to grips with existing parallel codes and in optimising poorly performing computational and data-driven codes. Whilst the lectures take a top-down approach, focusing on the patterns themselves, the practical tutorials give students the opportunity to explore the concepts by implementing pattern-based solutions to problems using common HPC technologies.
Reusability is a key driver of this module, both reusability of knowledge (the patterns themselves) and of parallel codes (via frameworks). The parallel patterns (together known as a pattern language) that we cover are split into two categories. Those closest to the problem domain (and most abstract) are parallel algorithm strategy patterns and include:
- Task Parallelism
- Recursive Splitting
- Geometric Decomposition
- Discrete Event
The other category of patterns is closer to the implementation and drives how the programmer should structure their code and data. These are implementation strategy patterns and include:
- Loop Parallelism
- Shared Data and Queues
- Active Messaging
Patterns are described at an abstract level, and we will also discuss enhancements that can improve performance and scalability, albeit at the cost of code complexity. Implementations of these patterns are explored in depth in the tutorials.
This module also covers parallel frameworks and how we might write generic implementations of patterns that can be applied to numerous problems. Design principles of frameworks, along with popular HPC libraries, will be explored so that students understand, from both a design and an implementation perspective, how to write generic parallel code.
Programming exercises use C and Fortran, with MPI and OpenMP.
Entry Requirements (not applicable to Visiting Students)
| It is RECOMMENDED that students have passed
Message-passing Programming (INFR11163)
| Relevant prerequisite material will be made available to students for self-study, but will not be taught as part of this course.
Ability to program in C, C++ or Fortran.
Information for Visiting Students
|Required: Ability to program in C, C++ or Fortran.
Students are RECOMMENDED to have passed: Message-Passing Programming (INFR11163).
Relevant prerequisite material will be made available to students for reference and self-study, but will not be taught as part of this course.
|High Demand Course?
Course Delivery Information
|Not being delivered
On completion of this course, the student will be able to:
- Describe the elements of a design pattern, and describe the advantages that patterns can provide, as well as their limitations.
- Recognise algorithm strategy patterns for the problem at hand.
- Choose the most appropriate implementation pattern.
- Construct reusable parallel codes, understanding the fundamentals of API design.
- Demonstrate awareness of existing frameworks and libraries used to implement parallel design patterns.
|- 'Patterns for Parallel Programming', Timothy G. Mattson, Beverly Sanders, Berna Massingill
- 'The Art of UNIX Programming', Eric S. Raymond
|Graduate Attributes and Skills
Effective written communication
Reflection on learning and practice
Critical analysis of own work and that of others
Solution exploration and evaluation
|There are limited spaces on this course. Students not on the MSc in High Performance Computing or MSc High Performance Computing with Data Science should contact the course secretary to confirm availability and confirm that they have the required prerequisites before being enrolled on the course.
The course is available to PhD students for class-only study. PhD students requiring a form of assessment (e.g. SUPA/School of Physics and Astronomy CDT students) must contact the course secretary to confirm method of enrolment.
|Additional Class Delivery Information
|Generally: 2 lectures per week (Weeks 1-8), 1 practical per week (Weeks 1-10). Potential for an additional coursework 'surgery' in Week 11.
|PDP,Parallel Design Patterns,Reusability,Parallel Frameworks,HPC,Parallelism,EPCC,Data
|Dr Nicholas Brown
Tel: (0131 6)50 6420
|Mr James Richards
Tel: (0131 6)51 3578