Linear Algebra for AI/ML
Objective
Linear algebra, optimization techniques, and statistical methods together form the essential toolkit behind most algorithms in artificial intelligence and machine learning. This course builds a background in these mathematical foundations and prepares students for advanced study or research in AI and ML. The objective is to familiarize students with the key concepts and computational techniques of linear algebra that underpin AI and ML models.
Modus Operandi
- Classroom activities: 10%
  - Participate in classroom discussions and ask quality questions.
  - Five-minute teaching: pick any small topic taught in class before your scheduled slot and teach it for five minutes.
  - Attendance
- Course project: 20%
  - Refer to the guidelines.
- Mid-semester examination: 30%
- End-semester examination: 40%
Syllabus
- Basics of vector spaces, norms, distances, and linear functions
- Systems of linear equations and approaches to solving them
- Eigenvalues and eigenvectors of matrices, and matrix decompositions
- Low-rank matrix approximation and structured low-rank approximation
- Convex optimization with gradient descent and accelerated gradient descent
- Missing values and matrix-completion methods
| Module | Topics | Hours (36 total) |
| --- | --- | --- |
| Introduction | Motivation; connections to machine learning and deep learning; case studies: random walk on a graph, the PageRank algorithm, image deblurring | 3 |
| Vector spaces | Vector spaces, linear independence, linear span, basis and dimension, inner product, norm and distance | 3 |
| Systems of linear equations | Linear functions; systems of linear equations (square, underdetermined, and overdetermined) | 3 |
| Matrix decomposition | QR decomposition; existence and uniqueness of solutions; inverse and pseudo-inverse of matrices | 3 |
| Least squares | Least-squares solutions; constrained least squares | 6 |
| Eigenvalues and eigenvectors of matrices | Eigenvalue and eigenvector decomposition of square matrices; spectral theorem for symmetric matrices; power method for computing the dominant eigenvector; inverse power method; inverse eigenvalue problem; singular value decomposition (SVD) | 9 |
| Low-rank matrix approximation | Low-rank approximation and structured low-rank approximation; solution via the SVD; Hankel structured low-rank approximation | 3 |
| Introduction to convex optimization | Convex optimality conditions (differentiable and non-differentiable cases); gradient descent; projected gradient methods; accelerated gradient methods; alternating minimization | 3 |
| Matrix completion | Missing data and the matrix-completion problem; matrix completion via the nuclear norm | 3 |
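A few of the topics above are concrete enough to preview in code. The sketches that follow use Python with NumPy purely for illustration; the course does not prescribe a language, and all function and variable names are our own choices. First, solving an overdetermined system in the least-squares sense, once via the QR decomposition and once via the pseudo-inverse (Least squares and Matrix decomposition modules):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 3))                 # 100 equations, 3 unknowns
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(100)  # noisy right-hand side

# Via the QR decomposition A = QR: solve the triangular system R x = Q^T b.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

# Equivalently via the pseudo-inverse of A.
x_pinv = np.linalg.pinv(A) @ b

print(np.allclose(x_qr, x_pinv))                  # True: same least-squares solution
print(x_qr)                                       # close to x_true
```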
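From the eigenvalue module, here is a minimal sketch of the power method, which repeatedly multiplies by the matrix and renormalizes until the iterate aligns with the dominant eigenvector; the same iteration underlies the PageRank case study. The function name and stopping rule are illustrative choices, not course specifications:

```python
import numpy as np

def power_method(A, num_iters=1000, tol=1e-10):
    """Estimate the dominant eigenvalue and eigenvector of a square matrix A."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    lam = 0.0
    for _ in range(num_iters):
        y = A @ x                  # one matrix-vector product per step
        x = y / np.linalg.norm(y)  # renormalize to keep the iterate bounded
        lam_new = x @ A @ x        # Rayleigh quotient estimate of the eigenvalue
        if abs(lam_new - lam) < tol:
            return lam_new, x
        lam = lam_new
    return lam, x

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])         # eigenvalues 3 and 1
lam, v = power_method(A)
print(round(lam, 6))               # -> 3.0
```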
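For the low-rank approximation module, the Eckart-Young theorem says that the best rank-k approximation of a matrix (in the Frobenius and spectral norms) is obtained by truncating its SVD; a short sketch, with the helper name best_rank_k being ours:

```python
import numpy as np

def best_rank_k(A, k):
    """Best rank-k approximation of A, by truncating the SVD (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]   # scale the first k columns of U by s

A = np.random.default_rng(2).standard_normal((8, 6))
A2 = best_rank_k(A, 2)
print(np.linalg.matrix_rank(A2))            # -> 2
```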
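From the convex optimization module, a sketch of gradient descent on the convex least-squares objective f(x) = 0.5 * ||Ax - b||^2, whose gradient is A^T(Ax - b). The step size 1/L, with L the Lipschitz constant of the gradient, is one standard textbook choice; the iteration count here is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 5))
b = rng.standard_normal(50)

# L = ||A||_2^2 (squared spectral norm) is the Lipschitz constant of the
# gradient; step size 1/L guarantees convergence for this objective.
L = np.linalg.norm(A, 2) ** 2
x = np.zeros(5)
for _ in range(5000):
    x -= (1.0 / L) * (A.T @ (A @ x - b))

x_star = np.linalg.lstsq(A, b, rcond=None)[0]  # direct least-squares solution
print(np.allclose(x, x_star, atol=1e-8))       # True: same minimizer
```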
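Finally, for the matrix completion module, one nuclear-norm-flavored heuristic is a soft-impute style iteration: alternately fill in the missing entries with the current estimate and soft-threshold the singular values. This is a rough sketch under simplifying assumptions (fixed shrinkage level tau, noiseless observations), not necessarily the algorithm the course will present:

```python
import numpy as np

def soft_impute(M, mask, tau=0.5, num_iters=300):
    """Sketch of matrix completion: `mask` is True where M is observed;
    `tau` is an illustrative shrinkage level for the singular values."""
    X = np.zeros_like(M)
    for _ in range(num_iters):
        Y = np.where(mask, M, X)               # keep observed entries of M
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s = np.maximum(s - tau, 0.0)           # soft-threshold singular values
        X = (U * s) @ Vt                       # shrunken low-rank estimate
    return X

# Recover a random rank-2 matrix from roughly 70% of its entries.
rng = np.random.default_rng(4)
M = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
mask = rng.random(M.shape) < 0.7
X = soft_impute(M, mask)
print(np.linalg.norm(X - M) / np.linalg.norm(M))  # relative error; small when
                                                  # recovery succeeds (shrinkage
                                                  # leaves a modest bias)
```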
Resources
- Introduction to Applied Linear Algebra: Vectors, Matrices, and Least Squares, Stephen Boyd and Lieven Vandenberghe, Cambridge University Press, 2018.
- Linear Algebra and Learning from Data, Gilbert Strang, Wellesley-Cambridge Press, 2019.
- Statistical Learning with Sparsity: The Lasso and Generalizations, Trevor Hastie, Robert Tibshirani, and Martin Wainwright, CRC Press, 2015.
Acknowledgement
This wonderful course was designed by Prof. Swanand Khare.