Course: Probabilistic Foundations of Generative Models for Machine Learning

Training / Workshop
AI Innovation Center

E: paul.van.son@hightechcampus.com

Language: English
Date: 05 Aug 2022
Time: 13:00 - 16:00
Entry fee: Free

Location:

HTC 5 | AI Innovation Center

DUE TO HIGH DEMAND, IT'S NO LONGER POSSIBLE TO REGISTER FOR THIS EVENT

What are likelihood-based generative models and why do they matter:
The ability to imagine and synthesize is one of the distinct traits of intelligence, integral to human survival, creative problem solving, generating art, discovering new solution directions, and more.

Generative models endow machines with the ability to synthesize new data points, such as audio, text, and images. Applications of generative modelling find their way into diverse domains such as technology, science, and media design. This is probably why generative model-based solutions and services are starting to attract public attention: recently DALL-E 2 [1], developed by OpenAI, made headlines by creating good-quality images from a simple text prompt. Just a few weeks later, Google announced a similar development, Imagen [2].

The field is progressing rapidly, and multiple distinct methodologies have been developed [3]. For the past few years, generative adversarial networks (GANs) [4] provided state-of-the-art synthesis results. More recently, likelihood-based generative models have made a comeback with the remarkable progress in diffusion models [5]. In these models, novel synthetic data points are generated by (implicitly) estimating the density of the data and then drawing samples from the estimated density. A large body of research has emerged over the past decade; see [3, 4, 6, 7, 8] and references therein.
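
For intuition, here is a minimal sketch of that recipe in Python, using scikit-learn's GaussianMixture as a stand-in density estimator (an illustrative assumption on our part, not workshop material): fit a density to toy data, then draw new samples from the fit.

    # Likelihood-based generation in miniature: estimate a density, then sample it.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Toy "dataset": two-dimensional points drawn from two clusters.
    data = np.concatenate([rng.normal(-2.0, 0.5, (500, 2)),
                           rng.normal(+2.0, 0.5, (500, 2))])

    # Estimate the data density with a two-component Gaussian mixture.
    model = GaussianMixture(n_components=2).fit(data)

    # Novel synthetic points are samples from the estimated density.
    synthetic, _ = model.sample(10)
    print(model.score(data))  # average log-likelihood of the data under the fit
    print(synthetic)          # ten brand-new points, not copied from the data

Real generative models replace the Gaussian mixture with a far more expressive, neural-network-based density, but the estimate-then-sample structure is the same.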

Progress in likelihood-based models is not limited to diffusion models. Variational autoencoders (VAEs), another likelihood-based generation method, have also made significant progress; a notable example is NVAE [9] by NVIDIA, which achieved high-quality synthesis results by introducing architectural and modelling improvements. VAEs have the advantage of fast and tractable operation and therefore receive particular attention in this workshop.
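
To make the idea concrete, here is a minimal VAE sketch in PyTorch (an illustrative assumption on our part, deliberately far simpler than NVAE [9]): an encoder maps data to a latent Gaussian, the reparameterization trick keeps sampling differentiable, and the loss is the negative evidence lower bound (ELBO).

    import torch
    import torch.nn as nn

    class VAE(nn.Module):
        def __init__(self, x_dim=784, z_dim=20, h_dim=400):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
            self.mu = nn.Linear(h_dim, z_dim)      # mean of q(z|x)
            self.logvar = nn.Linear(h_dim, z_dim)  # log-variance of q(z|x)
            self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, x_dim), nn.Sigmoid())

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            return self.dec(z), mu, logvar

    def elbo_loss(x, x_hat, mu, logvar):
        # Negative ELBO = reconstruction error + KL(q(z|x) || p(z)).
        rec = nn.functional.binary_cross_entropy(x_hat, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + kl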

The applications of such models are widespread: from synthesizing images that can stimulate the human creative process to time-series forecasting and anomaly detection. Other examples include inpainting corrupted or missing pixels of images, enhancing low-quality images, and interpolating between images (morphing one image into another). There are also applications of these models in engineering [10] as well as in scientific settings [11]. Generative models are likely to impact every aspect of innovation and design that requires human creative processes, augmenting them with valuable tools to go further and achieve more.

What is this workshop about:
The main learning goal of this workshop is to allow a person with a technical background who is new to, or otherwise unfamiliar with, machine learning (ML) to get to know the statistical foundations of machine-learning algorithms and build their own first generative model: a variational autoencoder! More concretely, we provide a gentle introduction to the mathematical ideas that power generative models. There will be open discussion and hands-on experimentation with code in Python; a taste of what that looks like is sketched below.
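
As a hypothetical flavour of that hands-on part, the VAE sketched earlier could be trained with a loop like the following (the random binary batches merely stand in for real images such as MNIST digits):

    import torch

    model = VAE()                        # the class from the sketch above
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(100):
        x = torch.rand(64, 784).round()  # stand-in batch of binary "images"
        x_hat, mu, logvar = model(x)
        loss = elbo_loss(x, x_hat, mu, logvar)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # After training, novel images come from decoding latent samples z ~ N(0, I).
    samples = model.dec(torch.randn(16, 20))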

We will look into:

  • Fundamentals: information, variational Bayesian inference, latent variable models, and a geometrical intuition of these models.

  • Deep-learning architectures, in particular convolutions and residual networks (ResNets).

  • Likelihood-based generative models, with an emphasis on variational autoencoders and normalizing flows, and a brief introduction to diffusion models (a minimal normalizing-flow example follows this list).
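
As a preview of the normalizing-flows item, here is a minimal one-dimensional flow (an illustrative assumption on our part, not workshop code). It shows the change-of-variables formula log p(x) = log p_z(z) + log |dz/dx| for an invertible affine map.

    import numpy as np

    s, b = 0.5, 1.0                  # scale (as a log) and shift; learnable in practice

    def forward(x):                  # data space -> base space
        z = (x - b) * np.exp(-s)
        log_det = -s                 # log |dz/dx| for this 1-D affine map
        return z, log_det

    def log_prob(x):                 # exact log-likelihood via change of variables
        z, log_det = forward(x)
        log_pz = -0.5 * (z**2 + np.log(2 * np.pi))  # standard-normal base density
        return log_pz + log_det

    def sample(n, rng=np.random.default_rng(0)):    # base space -> data space
        return rng.standard_normal(n) * np.exp(s) + b

    print(log_prob(np.array([0.0, 1.0, 2.0])))
    print(sample(5))

Stacking many such invertible maps, each with a tractable log-determinant, yields the expressive flows of [7].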

While machine learning today is becoming ever more empirical, requiring massive amounts of data, training time, and fine-tuning, knowing the fundamental mathematical concepts that govern this kind of information processing allows one to be more creative: to adapt existing methods to new domains and specific requirements, and to identify new solution directions.

Who is the target audience:
Anyone interested in taking their first steps in ML and generative models with a focus on understanding. We expect you to have basic knowledge of linear algebra, probability, and Python. You also need to bring a laptop!


Who are the facilitators:

Iman Mossavat is a signal processing and machine learning expert with more than 10 years of industrial experience in applications related to challenging inference problems, in particular developing algorithms for scatterometry data analysis in the semiconductor equipment industry.

Jan Jitse Venselaar is a mathematician and data scientist who has implemented machine-learning solutions in a wide range of applications, from greenhouse pest detection to digital biomarkers and semiconductor metrology, and is experienced with many different (deep) learning techniques, including deep reinforcement learning, convolutional neural networks, and transformers.

References
[1] DALL-E 2, OpenAI, https://bit.ly/3bJRxiQ
[2] Imagen, Google Research, Brain Team, https://bit.ly/3IibwBr
[3] Sam Bond-Taylor et al. "Deep Generative Modelling: A Comparative Review of VAEs, GANs, Normalizing Flows, Energy-Based and Autoregressive Models." arXiv:2103.04922, 2021.
[4] Jie Gui, Zhenan Sun, Yonggang Wen, Dacheng Tao, and Jieping Ye. "A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications." arXiv:2001.06937, 2020.
[5] Prafulla Dhariwal and Alexander Nichol. "Diffusion Models Beat GANs on Image Synthesis." Advances in Neural Information Processing Systems 34 (2021): 8780-8794.
[6] Benyamin Ghojogh et al. "Factor Analysis, Probabilistic Principal Component Analysis, Variational Inference, and Variational Autoencoder: Tutorial and Survey." arXiv:2101.00734, 2021.
[7] George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. "Normalizing Flows for Probabilistic Modeling and Inference." JMLR, 22(57):1-64, 2021.
[8] Yang Song and Diederik P. Kingma. "How to Train Your Energy-Based Models." arXiv:2101.03288, 2021.
[9] Arash Vahdat and Jan Kautz. "NVAE: A Deep Hierarchical Variational Autoencoder." Advances in Neural Information Processing Systems 33 (2020): 19667-19679.
[10] Lyle Regenwetter, Amin Heyrani Nobari, and Faez Ahmed. "Deep Generative Models in Engineering Design: A Review." Journal of Mechanical Design, July 2022.
[11] François Lanusse, Rachel Mandelbaum, Siamak Ravanbakhsh, Chun-Liang Li, Peter Freeman, and Barnabás Póczos. "Deep Generative Models for Galaxy Image Simulations." Monthly Notices of the Royal Astronomical Society, 504(4), July 2021.
