
Postgraduate Course: Robot Learning and Sensorimotor Control (INFR11142)

Course Outline
School: School of Informatics
College: College of Science and Engineering
Credit level (Normal year taken): SCQF Level 11 (Postgraduate)
Availability: Available to all students
SCQF Credits: 20
ECTS Credits: 10
Summary: This course is designed as a follow-up to the introductory course Robotics: Science and Systems and will gear students towards advanced topics in applying machine learning to adaptive control and planning in robots, and in using these insights to understand human sensorimotor control. Control of complex, compliant, multi-degree-of-freedom (DOF) sensorimotor systems such as humanoid robots or autonomous vehicles has been pushing the limits of traditional planning and control methods.
This course aims to introduce a machine learning approach to these challenges and will take students through various aspects of motor planning, estimation, prediction, optimal control and learning for adaptation, with an emphasis on the computational perspective. We will learn about statistical machine learning tools and methodologies particularly geared towards problems of real-time, online learning for robot control.
Specific methods for implementing optimal control in real world systems with contacts and learning methods geared towards making them adaptive will be explored. Issues and possible approaches for learning in high dimensions, planning under uncertainty and redundancy, sensorimotor transformations and stochastic optimal control will be discussed. This will be put in context through exposure to topics in human motor control, experimental paradigms and the use of computational methods in understanding biological sensorimotor mechanisms.
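The emphasis on real-time, online learning can be illustrated with the simplest incremental regression rule, a least-mean-squares (LMS) update, where each new sample nudges the model rather than triggering a batch refit. This is a generic illustrative sketch, not code from the course; the function name and data are hypothetical.

```python
def lms_step(w, x, y, eta=0.1):
    """One least-mean-squares update for a linear model y ~ w . x.

    Each sample moves the weights along the negative gradient of the
    squared prediction error, so the model adapts sample by sample --
    the kind of online update needed when data arrives as a stream.
    """
    pred = sum(wi * xi for wi, xi in zip(w, x))
    err = y - pred
    return [wi + eta * err * xi for wi, xi in zip(w, x)]

# Stream noiseless samples of y = 2*x + 1 (bias folded in as a constant feature).
w = [0.0, 0.0]
for _ in range(300):
    for x1 in (-1.0, 0.0, 1.0, 2.0):
        w = lms_step(w, [x1, 1.0], 2.0 * x1 + 1.0)
# w is now close to [2.0, 1.0]
```

Because the update touches only the current sample, its cost is constant per observation, which is what makes this family of methods viable for high-rate robot control loops.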
Course description
Syllabus Information:

1. Machine Learning Tools for Robotics - Regression in High Dimensions - Dimensionality Reduction - Online, incremental learning - Multiple Model Learning
2. Optimal Control Approaches - LQR, LQG, Dynamic Programming, Trajectory Optimization: Direct and Shooting Methods (iLQR, DDP)
3. Adaptive Learning and Control - Predictive Control - Underactuation - Multi-contact Modelling and Optimization - Constrained Operational Space Control - Hierarchical QP and Stack-of-Tasks Formulation - Trajectory-based Optimization Methods - Re-planning in Alternate Spaces
4. Interaction and Robust Control - Cartesian Impedance Control - Passivity Methods - Lyapunov Stability - LQR-Trees and Sum-of-Squares Programming
5. Movement Primitives - Rhythmic vs Point to Point Movements - Dynamical Systems and DMPs - Path Integral Methods - Learning by Demonstration
6. Planning and Optimization - Stochastic Optimal Control - Bayesian Inference Planning - Reinforcement Learning, Apprenticeship Learning and Inverse Optimal Control
7. Understanding Human Sensorimotor Control - Force Field and Adaptation - Optimal control theory for Explaining Sensorimotor Behaviour - Cue Integration and Sensorimotor Adaptation - Impedance Control - Human(oid) Locomotion and Stability.
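As a concrete instance of the optimal control material in topic 2, the scalar finite-horizon LQR problem can be solved with a backward Riccati recursion. This is a minimal sketch assuming scalar linear dynamics x[t+1] = a*x[t] + b*u[t] with stage cost q*x^2 + r*u^2; it is an illustration, not course material, and the function name is hypothetical.

```python
def lqr_scalar(a, b, q, r, horizon=100):
    """Finite-horizon discrete-time LQR for x[t+1] = a*x[t] + b*u[t].

    Runs the Riccati recursion backward on the cost-to-go weight P and
    returns the time-varying gains K[t] for the feedback law
    u[t] = -K[t] * x[t], ordered from t = 0.
    """
    P = q                # terminal cost-to-go weight
    gains = []
    for _ in range(horizon):
        K = (a * b * P) / (r + b * b * P)                        # optimal gain at this stage
        P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)   # Riccati step
        gains.append(K)
    gains.reverse()      # recursion runs backward in time
    return gains, P

# Regulate an open-loop unstable system (a = 1.2) to the origin.
gains, _ = lqr_scalar(a=1.2, b=1.0, q=1.0, r=1.0)
x = 1.0
for K in gains:
    x = 1.2 * x - 1.0 * K * x   # closed-loop step with u = -K*x
# |x| is driven close to zero despite the unstable open-loop dynamics
```

Over a long horizon the gains converge to a constant, recovering the infinite-horizon LQR controller; the matrix-valued version (with iLQR/DDP as its trajectory-optimization extensions) follows the same recursion with matrix inverses in place of scalar division.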
Entry Requirements (not applicable to Visiting Students)
Pre-requisites: It is RECOMMENDED that students have passed Robotics: Science and Systems (INFR11092)
Prohibited Combinations: None
Other requirements: None
Information for Visiting Students
Course Delivery Information
Academic year 2017/18, Available to all students (SV1). Quota: None
Course Start: Semester 2
Timetable
Learning and Teaching activities (Further Info): Total Hours: 200 (Lecture Hours 36, Summative Assessment Hours 2, Programme Level Learning and Teaching Hours 4, Directed Learning and Independent Learning Hours 158)
Assessment (Further Info): Written Exam 60%, Coursework 40%, Practical Exam 0%
Additional Information (Assessment): Assessment Weightings:
Written Examination: 60%
Assessed Coursework: 30%
Oral Presentations: 10%
There will be two written assignments, worth 15% each, each including an implementation in a full physics simulation environment (to be done individually), and an oral presentation of a research-paper review (10%), done in teams of 2 or 3 depending on class size.
Feedback: Not entered
Exam Information
Exam Diet: Main Exam Diet S2 (April/May)
Duration (Hours & Minutes): 2:00
Learning Outcomes
On completion of this course, the student will be able to:
  1. Demonstrate knowledge of key areas of robot dynamics control and kinematic planning.
  2. Analyze and evaluate conceptual and empirical problems in adaptive control and robot learning.
  3. Analyze and implement a subset of established algorithms in dynamics learning and stochastic optimal control.
  4. Demonstrate understanding of issues related to optimality in human motor control; develop ability to frame human motor control problems in an optimization framework.
Reading List
Recommended Reading List:

1. Robert F. Stengel, Optimal Control and Estimation
2. Howie Choset et al., Principles of Robot Motion: Theory, Algorithms, and Implementations
3. Mark W. Spong, Seth Hutchinson and M. Vidyasagar, Robot Modeling and Control
4. Sebastian Thrun, Wolfram Burgard and Dieter Fox, Probabilistic Robotics
5. Betts, Practical Methods for Optimal Control and Estimation Using Nonlinear Programming
6. Siciliano, Khatib (eds.), Springer Handbook of Robotics
Additional Information
Graduate Attributes and Skills: Not entered
Keywords: Not entered
Course organiser: Prof Sethu Vijayakumar
Tel: (0131 6)51 3444
Course secretary: Ms Katey Lee
Tel: (0131 6)50 2701