Scientific Machine Learning
Objective
Scientific Machine Learning (SciML) aims to solve complex problems in science and engineering by combining scientific computing with Machine Learning (ML). SciML incorporates uncertainty quantification, uncertainty propagation, and physics-based models to handle high-dimensional, noisy, multi-scale, and sparse data. This course equips students with state-of-the-art tools such as physics-informed neural networks, neural operators, and polynomial chaos theory, preparing them for the next wave of data science.
Modus Operandi
- Class-room activities: 10%
  - Participate in class-room discussions and ask quality questions
  - Five-minute teaching: pick any small topic taught in class before your scheduled slot and teach it for five minutes
  - Attendance
- Course project: 20%
  - Refer to the guidelines
- Mid-sem: 30%
- End-sem: 40%
Prerequisite
Foundations of Machine Learning (AI60203)
Syllabus
- Module 1: Introduction to Scientific ML
- Module 2: Neural Differential Equations
- Module 3: Physics Informed Neural Networks
- Module 4: Neural Operators
- Module 5: Gaussian Process (GP) Regression
- Module 6: Uncertainty Quantification
S.N. | Module | Unit | Hours (48) |
---|---|---|---|
1 | Introduction to Scientific ML | Basics of artificial intelligence, machine learning, deep learning, optimization, and Python libraries | 2 |
2 | | Differences between general ML and scientific ML; motivating applications of scientific ML | 2 |
3 | Neural Differential Equations | Neural ODEs, multistep neural networks | 2 |
4 | | Recurrent neural networks; structure-preserving neural networks (symplectic and Poisson nets, generic framework) | 2 |
5 | Physics-Informed Neural Networks | PINNs for Burgers' equation and boundary value problems | 2 |
6 | | Constrained and variational neural networks | 2 |
7 | | Convergence theory | 2 |
8 | | Extensions (gradient-enhanced PINNs, conservative PINNs, extended PINNs, PINNs for fractional PDEs, PINNs for stochastic PDEs) | 4 |
9 | Neural Operators | Universal approximation theorem for operators | 2 |
10 | | DeepONet (theory, convergence, variations) | 2 |
11 | | Transforms and operators (Fourier, Laplace, and wavelet, and their extensions) | 4 |
12 | Gaussian Process Regression | GP for multi-fidelity data | 2 |
13 | | Physics-informed and NN-induced GP kernels | 4 |
14 | | Deep multi-fidelity and diffusion-manifold-driven GP | 4 |
15 | Uncertainty Quantification | Methods for UQ | 2 |
16 | | Deep ensembles and snapshot ensembles | 2 |
17 | | BNN and BPINN | 4 |
18 | | Unified view (UQ in PINNs, UQ in DeepONet) | 2 |
19 | | Polynomial chaos theory | 2 |
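As a taste of the mechanics behind Module 5, the sketch below implements exact Gaussian Process regression (posterior mean and variance under a zero-mean prior) with a squared-exponential kernel in plain NumPy. The kernel choice, length scale, noise level, and the sin(x) toy data are illustrative assumptions, not course material.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(X_train, y_train, X_test, noise=1e-4, length_scale=1.0):
    """Exact GP regression: posterior mean and variance at test inputs."""
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test, length_scale)
    K_ss = rbf_kernel(X_test, X_test, length_scale)
    L = np.linalg.cholesky(K)                      # K = L L^T for stable solves
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha                           # posterior mean: K_s^T K^{-1} y
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v ** 2, axis=0)   # posterior pointwise variance
    return mean, var

# Toy example: noise-free observations of sin(x), prediction at new points.
X_train = np.linspace(0, 2 * np.pi, 8)
y_train = np.sin(X_train)
X_test = np.array([np.pi / 2, np.pi])
mean, var = gp_posterior(X_train, y_train, X_test)
```

Because the test points lie between training samples, the posterior mean closely tracks sin(x) and the posterior variance stays small; moving the test points far outside [0, 2π] would revert the mean toward the zero prior and inflate the variance.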
Resources
- Deep Learning for Science and Engineering Teaching Kit by NVIDIA and Brown University
- Rackauckas, C., 2022. Parallel computing and scientific machine learning (SciML): Methods and applications [online]
- Lecture notes on “Introduction to Scientific Machine Learning” by Ilias Bilionis
- Smith, E., 2022. Scientific Machine Learning with PyTorch, in Introduction to the Tools of Scientific Computing (Vol. 25). Springer Nature.
- Garikipati, K., 2024. Data-driven Modelling and Scientific Machine Learning in Continuum Physics, Springer.
- Baker, N., Alexander, F., Bremer, T., Hagberg, A., Kevrekidis, Y., Najm, H., Parashar, M., Patra, A., Sethian, J., Wild, S. and Willcox, K., 2019. Workshop report on basic research needs for scientific machine learning: Core technologies for artificial intelligence. USDOE Office of Science (SC), Washington, DC (United States).
- Brunton, S.L. and Kutz, J.N., 2022. Data-driven science and engineering: Machine learning, dynamical systems, and control. Cambridge University Press.
- Géron, A., 2022. Hands-on machine learning with Scikit-Learn, Keras, and TensorFlow. O'Reilly Media, Inc.
- Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M.W. and Gholami, A., 2024. Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. Advances in Neural Information Processing Systems, 36.
- Lagaris, I.E., Likas, A. and Fotiadis, D.I., 1998. Artificial neural networks for solving ordinary and partial differential equations. IEEE Transactions on Neural Networks.
- Raissi, M., Perdikaris, P. and Karniadakis, G.E., 2019. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics.
- Kovachki, N., Li, Z., Liu, B., Azizzadenesheli, K., Bhattacharya, K., Stuart, A. and Anandkumar, A., 2023. Neural operator: Learning maps between function spaces with applications to PDEs. Journal of Machine Learning Research.
- Lu, L., Jin, P., Pang, G., Zhang, Z. and Karniadakis, G.E., 2021. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature Machine Intelligence.
- Mishra, P.K., Paulson, J.A. and Braatz, R.D., 2024. Polynomial Chaos-based Stochastic Model Predictive Control: An Overview and Future Research Directions. arXiv preprint arXiv:2406.10734.
- Fan, D., Jodin, G., Consi, T.R., Bonfiglio, L., Ma, Y., Keyes, L.R., Karniadakis, G.E. and Triantafyllou, M.S., 2019. A robotic intelligent towing tank for learning complex fluid-structure dynamics. Science Robotics, 4(36).
- Yang, L., Zhang, D. and Karniadakis, G.E., 2020. Physics-informed generative adversarial networks for stochastic differential equations. SIAM Journal on Scientific Computing, 42(1)
- Cybenko, G., 1989. Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems, 2(4).