Few-shot Generative Models

Giorgio Giannone: Few-shot Generalization in Deep Generative Models

Humans are exceptional few-shot learners. We can easily grasp the function of an object we have never encountered before. This is because we carry internal models of the world: we combine prior knowledge about objects' appearance and function to make well-educated inferences from very little data. In contrast, traditional deep learning models are trained tabula rasa and therefore need orders of magnitude more data.

Machine learning has made impressive progress in computer vision, speech recognition, and language modeling over the last decade. When large amounts of data are available, deep learning has enabled massive improvements at scale. However, to solve challenging problems such as safe online learning, where data is scarce and reliable simulation is not feasible, we need methods that can adapt quickly and learn efficiently.

This project aims to develop methods for improving few-shot learning by leveraging deep generative models. We recast the few-shot learning problem as approximate inference in a hierarchical Bayesian model, with the goal of fast adaptation for density estimation, inference, and generalization.
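As a rough illustration of this idea (a common formulation in the few-shot generative modeling literature, not necessarily the exact model developed in the project), a small set of related samples X = {x_1, ..., x_K} can be described with a set-level latent variable c shared across the set and per-sample latent variables z_i:

p(X) = \int p(c) \prod_{i=1}^{K} \int p(x_i \mid z_i, c) \, p(z_i \mid c) \, dz_i \, dc

Training then maximizes a variational lower bound with approximate posteriors q(c \mid X) and q(z_i \mid x_i, c). At test time, conditioning on a new small set through q(c \mid X) amortizes adaptation, so the model can generate samples and estimate densities for a novel concept from only a handful of examples. The notation X, c, z_i, K is illustrative and introduced here only to make the hierarchical structure concrete.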

 

PhD project

By: Giorgio Giannone

Section: Cognitive Systems

Principal supervisor: Ole Winther

Co-supervisor: Søren Hauberg

Project title: Few-shot Generative Models

Term: 01/06/2020 → 30/09/2023

Contact

Giorgio Giannone
Guest
DTU Compute

Contact

Ole Winther
Professor
DTU Compute
+45 45 25 38 95

Contact

Søren Hauberg
Professor
DTU Compute
+45 45 25 38 99