Postgraduate Course: Large Scale Optimization for Data Science (MATH11147)
|School||School of Mathematics
|College||College of Science and Engineering
|Credit level (Normal year taken)||SCQF Level 11 (Postgraduate)
|Availability||Available to all students
|Summary||The detailed modelling of real-life problems requires a knowledgeable choice of the objective function and constraints, and often leads to very large optimization problems. The efficient solution of such problems is key to the success of optimization in practice.
Data Science provides numerous instances of problems which can be modelled using optimization. The amount of data in some of these models challenges existing optimization techniques and requires the development of new ones.
This course will address methods for constrained optimization, under the assumption that exact second-order information (the Hessian of the Lagrangian), or an approximation of it, is available. The course will cover interior point methods for various classes of optimization problems, addressing their theory and implementation. It will also cover the Benders and Dantzig-Wolfe decomposition techniques.
Successful applications of these techniques to Data Science problems from areas such as statistics, machine learning, engineering, energy and finance will be discussed.
The practical component of this course will consist of computing laboratory work using Matlab (including the Matlab-based CVX system for convex optimization). These exercises will reinforce the theoretical analysis of problems, methods and their implementation.
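As a flavour of the kind of laboratory exercise described above, the sketch below minimizes a tiny convex function by gradient descent. Plain Python is used here purely as a stand-in (the course itself uses Matlab and the CVX system); the objective, step size, and iteration count are invented for the example.

```python
# Toy convex problem: minimize f(x, y) = (x - 3)^2 + 2*(y - 1)^2.
# Gradient descent with a fixed step size; the unique minimizer is (3, 1).

def grad(x, y):
    # Partial derivatives of f with respect to x and y
    return 2.0 * (x - 3.0), 4.0 * (y - 1.0)

x, y = 0.0, 0.0          # arbitrary starting point
step = 0.1               # fixed step size, small enough for convergence
for _ in range(200):
    gx, gy = grad(x, y)
    x -= step * gx
    y -= step * gy

print(round(x, 4), round(y, 4))  # converges to the minimizer (3, 1)
```

For this quadratic, each coordinate contracts geometrically toward the minimizer, so a few hundred iterations reach machine-level accuracy; real lab exercises in CVX would instead state the problem declaratively and let the solver choose the method.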
Unconstrained and Constrained Optimization (modelling issues: constraints in optimization)
Interior Point Methods for linear, quadratic, nonlinear, second-order cone and semidefinite programming (motivation, theory, polynomial complexity, implementation). Newton Method and self-concordant barriers in optimization
Implementational aspects of methods for very large scale optimization (sparse matrices, inexact Newton Method, preconditioning, quasi-Newton)
Decomposition Techniques (Benders and Dantzig-Wolfe decompositions).
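To give a concrete feel for the barrier idea behind the interior point methods listed above, here is a minimal log-barrier sketch (an illustrative toy in Python, not course material): minimize x subject to x >= 1 by applying damped Newton steps to the barrier function phi_mu(x) = x - mu*log(x - 1), driving mu toward zero. The minimizer of phi_mu is x = 1 + mu, which traces the central path toward the true solution x* = 1.

```python
# Log-barrier method for the toy problem: minimize x subject to x >= 1.
# phi_mu(x) = x - mu*log(x - 1);  phi'(x) = 1 - mu/(x-1);  phi''(x) = mu/(x-1)^2.

def newton_on_barrier(x, mu, iters=20):
    for _ in range(iters):
        g = 1.0 - mu / (x - 1.0)       # gradient of the barrier function
        h = mu / (x - 1.0) ** 2        # Hessian (positive on the feasible set)
        step = g / h                   # Newton step
        # Damp the step so the iterate stays strictly feasible (x > 1)
        while x - step <= 1.0:
            step *= 0.5
        x -= step
    return x

x = 2.0                                # strictly feasible starting point
for mu in [1.0, 0.1, 0.01, 0.001]:     # shrink the barrier parameter
    x = newton_on_barrier(x, mu)

print(x)  # close to the constrained minimizer x* = 1 (about 1 + 0.001)
```

Each outer pass warm-starts Newton's method from the previous central-path point; practical interior point codes for linear and quadratic programming follow the same pattern with a primal-dual system in place of this one-dimensional barrier.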
Data Science Applications:
- Statistics: regression, classification, discriminant analysis
- Machine learning: support vector machines
- Engineering: signal and image processing
- Energy: uncertainty of sustainable sources (wind)
- Finance: portfolio optimization, asset and liability management
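As a small taste of "regression as optimization" from the list above, the following sketch (illustrative only; the data points are invented for the example) fits a line y = a*x + b by minimizing the sum of squared residuals, using the closed-form normal equations. The course treats far larger instances, where iterative methods replace the closed form.

```python
# Least-squares line fit: minimize sum_i (a*x_i + b - y_i)^2 over (a, b).

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # lies exactly on y = 2x + 1

n = len(xs)
sx = sum(xs)
sy = sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

# Normal equations:  [sxx sx; sx n] [a; b] = [sxy; sy], solved by Cramer's rule
det = sxx * n - sx * sx
a = (sxy * n - sx * sy) / det
b = (sxx * sy - sx * sxy) / det

print(a, b)  # recovers slope 2 and intercept 1
```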
Entry Requirements (not applicable to Visiting Students)
|Students MUST have passed:||Fundamentals of Optimization (MATH11111)
|Other requirements||None
Information for Visiting Students
Course Delivery Information
|Academic year 2018/19, Available to all students (SV1)
|Learning and Teaching activities (Further Info)||Lecture Hours 18, Seminar/Tutorial Hours 5, Supervised Practical/Workshop/Studio Hours 4, Programme Level Learning and Teaching Hours 2, Directed Learning and Independent Learning Hours
|Assessment (Further Info)
|Additional Information (Assessment)||Coursework 20%, Examination 80%
|Exam Diet||Paper Name||Hours & Minutes|
|Main Exam Diet S2 (April/May)||Large Scale Optimization for Data Science (MATH11147)||2:00|
|Learning Outcomes
- Ability to model real-life problems as optimization problems
- Ability to choose a solution method appropriate to the characteristics of a given problem and obtain a solution using Matlab-based utilities
- Explain how complexity analysis can be used to assess the efficiency of optimization techniques
- Ability to demonstrate the action of optimization methods by solving illustrative problems on paper
- Explain how the implementation of optimization methods yields problems in numerical linear algebra
- Explain how numerical linear algebra for optimization may be implemented efficiently
|Reading List||Numerical Optimization, J. Nocedal and S. Wright, Springer, 2nd edition. ISBN-10: 038730303
Primal-Dual Interior-Point Methods, S. Wright, SIAM, Philadelphia.
|Graduate Attributes and Skills
|Course organiser||Dr Alemseged Weldeyesus
|Course secretary||Miss Gemma Aitchison
Tel: (0131 6)50 9268