AI60209: AI/ML for Robot Autonomy

Objective

This course presents AI/ML-based principles for endowing robots with the capability to autonomously learn new skills while keeping robot control safe. The global robotics technology market is expected to surpass 283 billion USD by 2032 (Ref). The aim of this course is to introduce students to the basics of robot autonomy as well as to modern approaches. Students will understand why AI/ML-based methods are required in the modern robotics industry and why traditional AI/ML is not directly applicable. Students will be introduced to the following:

  • a set of control laws that enable motion planning by learning dynamical systems
  • reinforcement learning, its relationship to optimal control for robotics, and imitation learning

Modus Operandi

  • Classroom activities: 10%
    • Participate in classroom discussions and ask quality questions
    • Five-minute teaching: pick any small topic taught in class before your scheduled slot and teach it for five minutes.
    • Attendance
  • Course project: 20%
    • I will provide you with code. Your job will be to run it yourself and understand the algorithms and the associated theory.
  • Mid-sem: 30%
  • End-sem: 40%

Syllabus

Module 1: Foundation for robotics and control

Robot body (introduction and rigid transformations), robot motion (SO(3), screws, twists and inverse kinematics), robot dynamics (angular momentum, rotational inertia, kinetic energy, force, change of frames), cart-pole, quadrotors (simulation using ROS/Gazebo), mobile robots (simulation using ROS/Gazebo), open-loop vs. closed-loop control, notions of safety, chance constraints, stability, stabilizability, controllability and reachability
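
As a brief, hypothetical illustration of the rigid-transformation material in this module (the frame names, angle and point below are made up; this is a sketch, not course code), the following composes a rotation in SO(3) with a translation into a 4x4 homogeneous transform:

```python
# A minimal sketch: composing a rotation in SO(3) with a translation into a
# 4x4 homogeneous rigid-body transform, using only NumPy.
import numpy as np

def rot_z(theta):
    """Rotation about the z-axis; an element of SO(3)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def homogeneous(R, p):
    """Pack rotation R (3x3) and translation p (3,) into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

# Pose of a body frame {b} in the world frame {w}: rotate 90 deg about z,
# then translate by (1, 0, 0).
T_wb = homogeneous(rot_z(np.pi / 2), np.array([1.0, 0.0, 0.0]))

# A point known in {b}, written in homogeneous coordinates and mapped to {w}.
p_b = np.array([1.0, 0.0, 0.0, 1.0])
p_w = T_wb @ p_b
print(p_w[:3])   # approximately [1., 1., 0.]
```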

Module 2: Autonomy with dynamical systems

Learning from demonstration: teaching robots via human demonstration, such as teleoperation, kinesthetic teaching and observational learning

Learning a control law: three classic regression methods for estimating a model and a demonstration of their inability to learn a stable dynamical system (DS), a brief introduction to Lyapunov theory, Gaussian mixture regression, and a physically consistent estimation approach for Gaussian mixture models
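
To make the stability issue concrete, here is a toy sketch, assuming a linear DS and synthetic noisy demonstrations (not the physically consistent GMM approach covered in the module): it fits x_dot = A x by ordinary least squares and then checks the Lyapunov condition on the estimate through the eigenvalues of A.

```python
# Toy illustration: plain regression on demonstration data need not return a
# stable dynamical system. Fit x_dot = A x by least squares, then test whether
# all eigenvalues of the estimate have negative real part (the Lyapunov
# condition for a linear DS).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "demonstrations" from a stable linear DS, with noisy velocities.
A_true = np.array([[-1.0,  2.0],
                   [-2.0, -0.1]])                     # stable: Re(eig) < 0
X = rng.normal(size=(20, 2))                          # few demonstrated states
Xdot = X @ A_true.T + 1.0 * rng.normal(size=X.shape)  # noisy velocity labels

# Ordinary least squares: solve X @ A_hat.T ≈ Xdot.
A_hat = np.linalg.lstsq(X, Xdot, rcond=None)[0].T

# With few, noisy samples the estimate may or may not satisfy the condition.
eigs = np.linalg.eigvals(A_hat)
print("estimated eigenvalues:", np.round(eigs, 3))
print("stable estimate:", bool(np.all(eigs.real < 0)))
```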

Adapting and modulating an existing control law: modulations that act locally to preserve the generic properties of the nominal DS (e.g., asymptotic or global stability)

Module 3: Autonomy under constraints

Obstacle avoidance: modulate the DS to contour obstacles or to remain within a given workspace, Model Predictive Control (MPC), Deep MPC, stochastic MPC and constrained estimator
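
The sketch below gives a rough flavour of DS modulation for a single circular obstacle in 2D; the Gamma function and eigenvalue scalings are illustrative assumptions, not the exact formulation used in the course references.

```python
# Rough sketch of obstacle avoidance by modulating a nominal DS:
# x_dot = M(x) f(x), where M damps the flow along the obstacle normal and
# stretches it tangentially as the robot approaches the obstacle boundary.
import numpy as np

def nominal_ds(x):
    """Nominal linear DS converging to the origin."""
    return -x

def modulated_ds(x, center, radius):
    d = x - center
    gamma = (np.linalg.norm(d) / radius) ** 2      # > 1 outside the obstacle
    n = d / np.linalg.norm(d)                      # outward normal direction
    t = np.array([-n[1], n[0]])                    # tangent direction
    E = np.column_stack([n, t])
    D = np.diag([1.0 - 1.0 / gamma,                # damp the normal component
                 1.0 + 1.0 / gamma])               # amplify the tangential one
    return E @ D @ np.linalg.inv(E) @ nominal_ds(x)

# Euler-integrate a trajectory that starts behind the obstacle.
x = np.array([4.0, 0.3])
for _ in range(2000):
    x = x + 0.01 * modulated_ds(x, center=np.array([2.0, 0.0]), radius=1.0)
print(np.round(x, 3))   # ends near the origin after flowing around the obstacle
```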

Module 4: Autonomy with RL

Robots as Markov decision processes (MDPs), intro to RL: sequential decision-making examples from robotics, the principle of optimality, dynamic programming, examples of uncertainty in robotics, challenges and extensions of dynamic programming

Model-based and model-free RL for robot control: problem formulation, value iteration, policy iteration, Q-learning, policy gradient, actor-critic, deep RL for robotics
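
As a concrete instance of the dynamic-programming side of this list, here is a minimal tabular value-iteration sketch on a hypothetical 3-state, 2-action MDP (the transition and reward numbers are invented for illustration):

```python
# Minimal tabular value iteration on a made-up 3-state, 2-action MDP.
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9

# P[a, s, s'] = transition probability, R[s, a] = expected immediate reward.
P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.9, 0.0], [0.0, 0.1, 0.9]],   # action 0
    [[0.0, 0.9, 0.1], [0.0, 0.1, 0.9], [0.0, 0.0, 1.0]],   # action 1
])
R = np.array([[0.0, 0.0],
              [0.0, 1.0],
              [1.0, 10.0]])

V = np.zeros(n_states)
for _ in range(1000):
    # Bellman optimality backup: Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V(s')
    Q = R + gamma * np.einsum("asp,p->sa", P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

print("V*:", np.round(V, 3))
print("greedy policy:", Q.argmax(axis=1))
```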

Deep RL: DQN (deep Q-learning, deep value networks, training a deep Q-network), unbiased policy gradient methods
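
For the unbiased policy gradient part, a bare-bones REINFORCE-style update on a hypothetical two-armed bandit with a softmax policy looks like the following (the reward means and step size are made up; deep RL versions replace the logits with a network and add baselines):

```python
# REINFORCE-style (unbiased) policy gradient on a two-armed bandit.
import numpy as np

rng = np.random.default_rng(2)
theta = np.zeros(2)                     # one logit per action
true_means = np.array([0.2, 0.8])       # unknown to the agent
alpha = 0.1                             # step size

for _ in range(2000):
    probs = np.exp(theta) / np.exp(theta).sum()   # softmax policy
    a = rng.choice(2, p=probs)                    # sample an action
    r = true_means[a] + 0.1 * rng.normal()        # stochastic reward
    grad_log_pi = -probs                          # new array, probs untouched
    grad_log_pi[a] += 1.0                         # grad of log pi(a | theta)
    theta += alpha * r * grad_log_pi              # unbiased gradient estimate

print("learned action probabilities:",
      np.round(np.exp(theta) / np.exp(theta).sum(), 3))
```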

Advanced on-policy and off-policy RL: practical first-order policy optimization, efficient and stable policy optimization, incremental Monte Carlo value function estimation, trust region methods, deep deterministic policy gradient, troubles and tricks in robotics, soft actor-critic

Imitation learning: behavioral cloning, direct policy learning, inverse RL, learning from comparisons and physical feedback
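
A minimal illustration of behavioral cloning as supervised regression (the expert gain, data size and noise level below are hypothetical):

```python
# Behavioral cloning reduced to regression: fit a linear policy u = K s to
# (state, action) pairs demonstrated by a hypothetical expert controller.
import numpy as np

rng = np.random.default_rng(1)

K_expert = np.array([[-1.5, -0.7]])                       # expert feedback gain
states = rng.normal(size=(200, 2))                        # demonstrated states
actions = states @ K_expert.T + 0.05 * rng.normal(size=(200, 1))

# Least-squares "cloning" of the expert: solve states @ K_bc.T ≈ actions.
K_bc = np.linalg.lstsq(states, actions, rcond=None)[0].T
print("cloned gain:", np.round(K_bc, 3))                  # close to K_expert

# The cloned policy is then used like any other controller.
def policy(s):
    return K_bc @ s

print("action at s = [1, 0]:", policy(np.array([1.0, 0.0])))
```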

Resources

  1. A Mathematical Introduction to Robotic Manipulation by Richard Murray, Zexiang Li and Shankar Sastry, CRC Press
  2. Learning for Adaptive and Reactive Robot Control by Aude Billard, Sina Mirrazavi and Nadia Figueroa, MIT Press
  3. Lecture notes on Machine Learning for Robotics by Hao Su
  4. Lecture notes on Principles of Robot Autonomy II by Marco Pavone, Dorsa Sadigh and Jeannette Bohg
  5. Springer Handbook of Robotics, edited by B. Siciliano and O. Khatib
  6. Robotics, Vision and Control by Peter Corke
  7. Probabilistic Robotics by Sebastian Thrun, Wolfram Burgard and Dieter Fox
  8. Lecture notes on Underactuated Robotics by Russ Tedrake
  9. Lecture notes on Robot Learning by Sanjiban Choudhury

AI61006: Artificial Intelligence for Cyber Physical Systems

Objective

  • How to formulate your domain-specific problem as a CPS
  • How to apply suitable AI techniques to solve a CPS problem

Modus Operandi

  • Classroom activities: 10%
    • Participate in classroom discussions
    • Five-minute teaching: pick any small topic taught in class before your scheduled slot and teach it for five minutes.
    • Attendance
  • Course project: 20%
  • Mid-sem: 30%
  • End-sem: 40%

Syllabus

Module 1: Introduction

Module 2: Physical Systems

Module 3: Sensing and Perception

Module 4: Planning and Acting

Projects for students (Internship, BTP, MTP, Course)

Student projects are divided into three categories. The expected outcome depends on the duration of the project; therefore, students are not expected to complete the project. Instead, their performance will be evaluated on the basis of the effort they put into learning. In addition, my input will be proportional to the student's effort. The three categories of projects are:

  1. Review and tutorial: You can select a few papers relevant to your interests in which both control and learning are present. You can summarize those papers or their problem formulations, try to build a tree connecting them to other related papers, and attempt to provide your own perspective. I will grade you on the basis of consistency, exhaustiveness and completeness. You can write an article on Medium or a post on LinkedIn to get noticed by people or companies working in the related area; in this way, you may be able to secure your next career stop. Please do not use my name in your social media post without my permission. A team may have at most two members.
  2. Reproduce and benchmark: You can select a paper according to your interest and my recommendations, and reproduce its simulation results. If you add some novelty, you can make your Git repository public. This open-source contribution will help you demonstrate your skills to interviewers and boost your resume. Please do not use my name anywhere in your Git repository without my permission. You can work in a team of at most three members.
  3. Research and invent: For the course project, you can choose any topic within the general theme of the course and pursue it in a team of at most four members. You can continue work that you are doing with other faculty members, with their permission. I will provide publishable and applicable research directions to students working with me as interns or as BTP or MTP students.

You should consider papers published in top AI and robotics conferences. You can choose papers written by well-known researchers so that you can understand their research directions. You can also look at work carried out at top universities and in industry.