Postgraduate Course: Extreme Computing (INFR11088)
|School||School of Informatics
|College||College of Science and Engineering
|Credit level (Normal year taken)||SCQF Level 11 (Year 4 Undergraduate)
|Availability||Available to all students
|Summary||Extreme Computing deals with the principles, systems and algorithms behind Web-scale problem solving. This touches upon the technologies and techniques used by companies such as Google, Facebook, Amazon, and Microsoft, using warehouse-scale computing and massive datasets. The course will be in three parts: the principles behind extreme computing (cloud computing, scaling, performance, privacy etc.), supporting infrastructure (distributed file systems, replication, Web services etc.) and algorithms (Map/Reduce, case studies from Natural Language Processing, rudimentary data processing, machine learning, data streaming).
The course is conceptually split into three main areas, though each area will not necessarily account for an equal portion of the syllabus. The three areas and the material covered in each are as follows:
* Background: Motivation for new computing paradigms; introduction and differences between cloud and cluster computing; scaling, performance, privacy, economics, security, software as service.
* Infrastructure: Distributed file systems; multi-tier systems, virtualisation; replication; fault tolerance; concurrent programming; web services.
* Data structures and algorithms: decentralised data structures (e.g., peer-to-peer systems); programming frameworks (e.g., Hadoop Map/Reduce); workflow languages (e.g., FlumeJava); design and implementation of Map/Reduce programs; dealing with massive amounts of data; case studies from natural language processing, data processing, machine and deep learning; and computation over infinite streams.
The course will also cover the legal, social, ethical, and professional issues involved in remotely storing data in cloud deployments, along with potential solutions to these problems.
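To give a flavour of the Map/Reduce programming model mentioned above, the following is a minimal, single-process word-count sketch. It imitates the map, shuffle, and reduce phases in plain Python; a real job would of course run on a framework such as Hadoop, and all function names here are illustrative.

```python
from collections import defaultdict

def map_phase(document):
    # Mapper: emit a (word, 1) pair for every word in the input.
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as the framework
    # does between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: sum the counts for each word.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the cat sat", "the dog sat"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'the': 2, 'cat': 1, 'sat': 2, 'dog': 1}
```

The key property the sketch illustrates is that mappers and reducers operate independently per record and per key, which is what allows the framework to distribute them across a cluster.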
Entry Requirements (not applicable to Visiting Students)
||Other requirements|| This course is open to all Informatics students, including those on joint degrees. External students whose DPT does not list this course should seek special permission from the course organiser (lecturer).
Students should have a maths background including basic probability, programming ability, and familiarity with Unix-like systems. Any programming language is fine; past students have found Python sufficient.
Information for Visiting Students
|Pre-requisites||Visiting students are required to have comparable background to that assumed by the course prerequisites listed in the Degree Regulations & Programmes of Study.
If in doubt, consult the course organiser (lecturer).
|High Demand Course?
Course Delivery Information
|Academic year 2018/19, Available to all students (SV1)
|Learning and Teaching activities (Further Info)
Lecture Hours 20,
Summative Assessment Hours 2,
Programme Level Learning and Teaching Hours 2,
Directed Learning and Independent Learning Hours
|Assessment (Further Info)
|Additional Information (Assessment)
||As this is a practical course touching a large number of topics from separate areas, there is a heavier weight on practical work. For proper evaluation, students must be presented with real problems rather than "toy" problems that can be solved in a very limited time. There will be one programming-based piece of coursework, using Map/Reduce as the programming framework to perform a sequence of computations over large-scale datasets.
You should expect to spend approximately 40 hours on the coursework for this course.
If delivered in semester 1, this course will have an option for semester 1 only visiting undergraduate students, providing assessment prior to the end of the calendar year.
||Hours & Minutes
|Main Exam Diet S1 (December)||2:00|
On completion of this course, the student will be able to:
- Demonstrate knowledge of the need for extreme computing by providing motivating examples of the scale of problems only computing at an extreme scale can solve (e.g., problems motivated by the use of large datasets and complex computation).
- Demonstrate knowledge of the problems associated with computing at an extreme scale, such as the need for multi-tier systems, programming models (with a focus on Map/Reduce), and workflow languages (e.g., FlumeJava).
- Demonstrate knowledge of the infrastructure necessary for computing at an extreme scale through enumerating different file system designs, virtualisation techniques, replication, fault-tolerance paradigms, and alternative system designs.
- Demonstrate knowledge of summary-based data structures (e.g., Bloom filters) that can be used to efficiently process large datasets; and cluster-based algorithms for data processing, machine learning, and low latency processing through the use of distributed programming paradigms.
- Demonstrate knowledge of large-scale distributed deep learning networks for the purposes of solving data mining and machine learning problems of extreme scale.
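The summary-based data structures named in the learning outcomes can be illustrated with a small Bloom filter sketch (class and parameter names here are illustrative, not part of any course material). A Bloom filter answers "possibly present" or "definitely absent" for set membership using a fixed-size bit array and k hash functions; it can give false positives but never false negatives.

```python
import hashlib

class BloomFilter:
    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size  # fixed-size bit array

    def _positions(self, item):
        # Derive k array positions from salted SHA-256 digests of the item.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        # Set all k bits for this item.
        for pos in self._positions(item):
            self.bits[pos] = True

    def __contains__(self, item):
        # True means "possibly present"; False means "definitely absent".
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("hadoop")
print("hadoop" in bf)     # True: added items are always reported present
print("flumejava" in bf)  # almost certainly False (small false-positive rate)
```

The space saving is the point: membership for arbitrarily large items is tracked in a constant-size bit array, at the cost of a tunable false-positive probability, which is why such summaries suit large-scale data processing.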
|Data-Intensive Text Processing with MapReduce, Jimmy Lin & Chris Dyer|
Hadoop: The Definitive Guide, Tom White, O'Reilly Media
|Course organiser||Dr Volker Seeker
Tel: (0131 6)50 5191
|Course secretary||Mr Gregor Hall
Tel: (0131 6)50 5194