Postgraduate Course: Robot Learning and Sensorimotor Control (INFR11142)
|School||School of Informatics
|College||College of Science and Engineering
|Credit level (Normal year taken)||SCQF Level 11 (Postgraduate)
|Availability||Available to all students
|Summary||This course is designed as a follow-up to the introductory course Robotics: Science and Systems and gears students towards advanced topics in applying machine learning to adaptive control and planning in robots, and in using these insights to understand human sensorimotor control. Control of complex, compliant, multi-degree-of-freedom (DOF) sensorimotor systems such as humanoid robots or autonomous vehicles has been pushing the limits of traditional planning and control methods.
This course introduces a machine learning approach to these challenges and takes students through the various aspects of motor planning, estimation, prediction, optimal control and learning for adaptation, with an emphasis on the computational perspective. Students will learn about statistical machine learning tools and methodologies geared particularly towards problems of real-time, online learning for robot control.
Specific methods for implementing optimal control in real-world systems with contacts, and learning methods for making them adaptive, will be explored. Issues and possible approaches for learning in high dimensions, planning under uncertainty and redundancy, sensorimotor transformations, and stochastic optimal control will be discussed. This will be put in context through exposure to topics in human motor control, experimental paradigms, and the use of computational methods in understanding biological sensorimotor mechanisms.
1. Machine Learning Tools for Robotics - Regression in High Dimensions - Dimensionality Reduction - Online, incremental learning - Multiple Model Learning
2. Optimal Control Approaches - LQR, LQG, Dynamic Programming, Trajectory Optimization: Direct and Shooting Methods (iLQR, DDP)
3. Adaptive Learning and Control - Predictive Control - Underactuation - Multi-contact Modelling and Optimization - Constrained Operational Space Control - Hierarchical QP and Stack-of-Tasks Formulation - Trajectory-based Optimization Methods - Re-planning in Alternate Spaces
4. Interaction and Robust Control - Cartesian Impedance Control - Passivity Methods - Lyapunov Stability - LQR-Trees and Sum-of-Squares Programming
5. Movement Primitives - Rhythmic vs Point to Point Movements - Dynamical Systems and DMPs - Path Integral Methods - Learning by Demonstration
6. Planning and Optimization - Stochastic Optimal Control - Bayesian Inference Planning - RL Apprenticeship Learning and Inverse Optimal Control
7. Understanding Human Sensorimotor Control - Force Fields and Adaptation - Optimal Control Theory for Explaining Sensorimotor Behaviour - Cue Integration and Sensorimotor Adaptation - Impedance Control - Human(oid) Locomotion and Stability
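Syllabus item 1 lists online, incremental learning. A classic building block for real-time robot model learning is recursive least squares, which updates a linear model one sample at a time without storing past data. The sketch below is purely illustrative and not course material; the class name, prior covariance scale, and forgetting-factor default are this sketch's own assumptions.

```python
import numpy as np

class RecursiveLeastSquares:
    """Online linear regression: update the weight vector one sample
    at a time, without storing past data."""
    def __init__(self, dim, forgetting=1.0):
        self.w = np.zeros(dim)
        self.P = 1e3 * np.eye(dim)   # large prior covariance -> fast initial adaptation
        self.lam = forgetting        # <1 discounts old data (time-varying plants)

    def update(self, x, y):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)              # gain vector
        self.w = self.w + k * (y - x @ self.w)    # correct by the prediction error
        self.P = (self.P - np.outer(k, Px)) / self.lam

# Recover a known linear map from streaming noisy samples.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
model = RecursiveLeastSquares(dim=3)
for _ in range(500):
    x = rng.normal(size=3)
    y = true_w @ x + 0.01 * rng.normal()
    model.update(x, y)
```

With a forgetting factor below 1, older samples are discounted exponentially, which is what makes the same update usable for slowly drifting dynamics.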
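Syllabus item 2 names LQR among the optimal control approaches. As a minimal illustration (not course material), here is the finite-horizon discrete-time LQR solved by the standard backward Riccati recursion, applied to a double-integrator toy system; the function name and example matrices are this sketch's own assumptions.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, Qf, N):
    """Backward Riccati recursion for x_{t+1} = A x_t + B u_t with
    stage cost x'Qx + u'Ru and terminal cost x'Qf x.
    Returns gains K_t (t = 0..N-1) such that u_t = -K_t x_t."""
    P = Qf
    Ks = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        Ks.append(K)
    return Ks[::-1]  # computed backwards; reverse to time order

# Double integrator: state = [position, velocity], input = force.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q, R, Qf = np.eye(2), np.array([[0.1]]), 10 * np.eye(2)
Ks = finite_horizon_lqr(A, B, Q, R, Qf, N=50)

x = np.array([[1.0], [0.0]])     # start 1 m from the origin, at rest
for K in Ks:
    x = A @ x + B @ (-K @ x)     # closed-loop rollout under the LQR policy
```

The same recursion is the inner loop of iLQR/DDP (also listed in item 2), which repeatedly solves a local LQR problem around the current trajectory.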
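Syllabus item 5 covers dynamical systems and DMPs. Below is a minimal single-DoF sketch of an Ijspeert-style discrete (point-to-point) dynamic movement primitive: a stable goal attractor shaped by a phase-dependent radial-basis forcing term. The function name, basis-width heuristic, and default gains are illustrative assumptions, not a reference implementation.

```python
import numpy as np

def dmp_rollout(y0, g, duration, dt, w, alpha=25.0, beta=25.0 / 4, ax=4.0):
    """Single-DoF discrete DMP (time constant tau = 1 omitted).
    Transformation system: ydd = alpha*(beta*(g - y) - yd) + f(x) * x * (g - y0)
    Canonical system:      xd  = -ax * x   (phase decays 1 -> 0)
    f is a normalized radial-basis forcing term with weights w."""
    n = len(w)
    c = np.exp(-ax * np.linspace(0, 1, n))   # basis centres along the phase
    h = n ** 1.5 / c                         # widths (a common heuristic)
    y, yd, x = float(y0), 0.0, 1.0
    traj = [y]
    for _ in range(int(round(duration / dt))):
        psi = np.exp(-h * (x - c) ** 2)
        f = (psi @ w) / (psi.sum() + 1e-10)  # normalized forcing term
        ydd = alpha * (beta * (g - y) - yd) + f * x * (g - y0)
        yd += ydd * dt                       # Euler integration
        y += yd * dt
        x += -ax * x * dt
        traj.append(y)
    return np.array(traj)

# With zero weights the forcing term vanishes and the DMP reduces to a
# critically damped attractor that converges to the goal g:
path = dmp_rollout(y0=0.0, g=1.0, duration=1.0, dt=0.001, w=np.zeros(10))
```

In learning-by-demonstration (also item 5), the weights `w` are fitted by regression so that the forcing term reproduces a demonstrated trajectory, while the attractor guarantees convergence to the goal regardless of the learned shape.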
Entry Requirements (not applicable to Visiting Students)
|It is RECOMMENDED that students have passed||Robotics: Science and Systems (INFR11092)
|Other requirements||None
Information for Visiting Students
Course Delivery Information
|Academic year 2017/18, Available to all students (SV1)
|Learning and Teaching activities (Further Info)
Lecture Hours 36,
Summative Assessment Hours 2,
Programme Level Learning and Teaching Hours 4,
Directed Learning and Independent Learning Hours
|Assessment (Further Info)
|Additional Information (Assessment)
|Assessment Weightings:
Written Examination: 60%
Assessed Coursework: 30%
Oral Presentations: 10%
There will be two written assignments, worth 15% each, each including an implementation in a full physics simulation environment (to be done individually), and an oral presentation of a research paper review (10%), carried out in teams of 2 or 3 depending on class size.
|Exam Diet||Hours & Minutes
|Main Exam Diet S2 (April/May)||2:00
On completion of this course, the student will be able to:
- Demonstrate knowledge of key areas of robot dynamics control and kinematic planning.
- Analyze and evaluate conceptual and empirical problems in adaptive control and robot learning.
- Analyze and implement a subset of established algorithms in dynamics learning and stochastic optimal control.
- Demonstrate understanding of issues related to optimality in human motor control; develop ability to frame human motor control problems in an optimization framework.
|Recommended Reading List:|
1. Robert F. Stengel, Optimal Control and Estimation
2. Howie Choset et al., Principles of Robot Motion: Theory, Algorithms, and Implementations
3. Mark W. Spong, Seth Hutchinson and M. Vidyasagar, Robot Modeling and Control
4. Sebastian Thrun, Wolfram Burgard and Dieter Fox, Probabilistic Robotics
5. John T. Betts, Practical Methods for Optimal Control and Estimation Using Nonlinear Programming
6. Bruno Siciliano and Oussama Khatib (eds.), Springer Handbook of Robotics
|Graduate Attributes and Skills
|Course organiser||Prof Sethu Vijayakumar
Tel: (0131 6)51 3444
|Course secretary||Ms Katey Lee
Tel: (0131 6)50 2701