A Gentle Introduction to Sequential Deep Latent Variable Models
In 2024, a major battleground among GenAI start-ups is video generation, and video is a form of sequence data. How, then, do deep generative models for sequences work? Many variants exist; in this tutorial we focus on one specific yet widely used class — Sequential Deep Latent Variable Models (Sequential Deep LVMs) — and show how the principles behind variational auto-encoders for image generation extend to the sequence setting.
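As a taste of the material, the extension the abstract alludes to can be sketched as a sequential evidence lower bound (ELBO). This is one common factorisation, not necessarily the exact one the tutorial will use; the notation (observations x_{1:T}, latents z_{1:T}, decoder p_θ, encoder q_φ) is illustrative:

```latex
\log p_\theta(x_{1:T})
\;\geq\;
\mathbb{E}_{q_\phi(z_{1:T}\mid x_{1:T})}\!\left[
\sum_{t=1}^{T}
\log p_\theta(x_t \mid z_{\leq t}, x_{<t})
+ \log p_\theta(z_t \mid z_{<t})
- \log q_\phi(z_t \mid z_{<t}, x_{1:T})
\right]
```

Setting T = 1 recovers the standard VAE ELBO for a single image; the sequential case adds temporal dependence in both the prior over latents and the decoder.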
Speaker Bio – Dr Yingzhen Li
Dr Yingzhen Li is a Senior Lecturer in Machine Learning at Imperial College London. Before that, she worked at Microsoft Research Cambridge and Disney Research. She received her PhD from the University of Cambridge. Yingzhen is passionate about building reliable machine learning systems with probabilistic methods, and her published work has been applied in industrial systems and implemented in popular deep learning frameworks. She is a regularly invited speaker at international machine learning conferences and summer schools, and she gave an invited tutorial on approximate inference at NeurIPS 2020. Her work on Bayesian ML has also been recognised in AAAI 2023 New Faculty Highlights. She has co-organised many international research workshops on probabilistic inference and deep generative models. She regularly serves as Area Chair for ICML, ICLR and NeurIPS, and currently she is a Program Chair for AISTATS 2024.
Time: 13.00 – 14.00
Date: Tuesday 27 February
Location: Hybrid Event | I-X Conference Room, Level 5
Translation and Innovation Hub (I-HUB)
Imperial White City Campus
84 Wood Lane
If you have any questions, please contact Andreas Joergensen (email@example.com) or Lauren Burton (firstname.lastname@example.org).