Projects for the
MVA Course: Generative Modeling
(2024-2025)
Master 2 MVA, Télécom Paris
- Projects are done in groups of 2 students.
- Report due by March 21st:
- 4 to 8 pages in single-column format.
- Figures and references can be added in an appendix (not counted in the 8 pages).
- The introduction should set the context and connect with the course sessions.
- The practical part should include experiments that are not in the original paper
(other data, another inverse problem, relevant low-dimensional experiments, etc.).
- Code for your experiments should be sent as a zip archive (not including large databases).
- Each project comes with a suggestion of 3 work tracks (which you may follow or not).
- Defense: 15' presentation followed by 15' questions.
- Project Defense at Télécom Paris between March 24th and March 28th.
List of Projects
- Explain the convergence guarantees obtained with this regularizer.
- What is the benefit of taking multiple gradient steps instead of a single gradient step?
- Compare with Gradient-step Plug-and-Play (TP9); see the sketch below.
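A minimal sketch of one possible gradient-step PnP iteration, for reference when comparing (assumptions: denoising data fidelity f(x) = ||x - y||^2 / 2, and a Gaussian filter standing in for the learned denoiser of TP9; step size and regularization weight are illustrative):

```python
# Hedged sketch of a gradient-step PnP iteration: the denoiser D is parametrized
# so that x - D(x) plays the role of the gradient of a learned regularizer g.
import numpy as np
from scipy.ndimage import gaussian_filter

def denoiser(x, sigma=1.0):
    # Stand-in denoiser (assumption); in the project this is a trained network.
    return gaussian_filter(x, sigma)

def gradient_step_pnp(y, n_iter=100, tau=0.5, lam=0.5):
    # x_{k+1} = x_k - tau * (grad f(x_k) + lam * (x_k - D(x_k)))
    x = y.copy()
    for _ in range(n_iter):
        grad_f = x - y                 # gradient of ||x - y||^2 / 2
        grad_g = x - denoiser(x)       # regularizer gradient via D = Id - grad g
        x = x - tau * (grad_f + lam * grad_g)
    return x

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
restored = gradient_step_pnp(noisy)
print("MSE noisy:", np.mean((noisy - clean) ** 2),
      "MSE restored:", np.mean((restored - clean) ** 2))
```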
- Explain the differences between WGAN and MMD-GAN (in terms of theory and training).
- Explain the gradient bias problem that may arise with GANs or WGANs.
- Train an MMD-GAN on a low-dimensional dataset (sklearn make_moons) and on a large-scale dataset; see the MMD sketch below.
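A minimal sketch for the make_moons part, assuming a generator trained by directly minimizing a Gaussian-kernel MMD (a generative moment matching network; a full MMD-GAN would additionally learn the kernel adversarially). Bandwidths and hyperparameters are illustrative assumptions:

```python
# Train a small generator on make_moons by minimizing a (biased) MMD^2 estimate
# with a mixture of Gaussian kernels.
import torch
import torch.nn as nn
from sklearn.datasets import make_moons

def gaussian_mmd2(x, y, bandwidths=(0.1, 0.5, 1.0)):
    # Biased estimate of MMD^2 (diagonal terms included), mixture of Gaussian kernels.
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2
        return sum(torch.exp(-d2 / (2 * s ** 2)) for s in bandwidths)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

gen = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
data, _ = make_moons(n_samples=2048, noise=0.05)
data = torch.tensor(data, dtype=torch.float32)

for step in range(2000):
    real = data[torch.randint(len(data), (256,))]
    fake = gen(torch.randn(256, 2))          # push Gaussian noise through the generator
    loss = gaussian_mmd2(fake, real)
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 500 == 0:
        print(step, loss.item())
```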
- Explain how weak optimal transport can be used for generative modeling.
- What particular problems may be expected when learning with weak optimal transport?
- Apply the proposed methodology to a synthetic 2D dataset and an image dataset.
- In PnP split Gibbs sampling, can we use generative models with latent variables?
- Compare PnP split Gibbs sampling with Chung et al., 2023 (TP6, Exo3).
- Are there convergence guarantees for the PnP Gibbs sampling algorithm?
- Run experiments using the DDPM model from TP6 (see the split Gibbs sketch below).
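A minimal split Gibbs sampling sketch on a toy Gaussian denoising problem, chosen so that both conditional steps have closed forms (assumption: Gaussian prior; in PnP split Gibbs the prior step is replaced by a learned denoiser, e.g. the TP6 DDPM):

```python
# Split Gibbs sampler for y = x + noise, with splitting variable z coupled to x:
# pi(x, z) ~ exp(-||x - y||^2 / (2 sigma^2) - ||x - z||^2 / (2 rho^2)) * p_prior(z).
import numpy as np

rng = np.random.default_rng(0)
sigma, rho, tau = 0.3, 0.2, 1.0          # noise level, coupling, prior std (assumed)
x_true = tau * rng.standard_normal(16)
y = x_true + sigma * rng.standard_normal(16)

x, z, samples = y.copy(), y.copy(), []
for it in range(5000):
    # Likelihood step: x | z is Gaussian (conjugate likelihood and coupling terms).
    var_x = 1.0 / (1.0 / sigma**2 + 1.0 / rho**2)
    x = var_x * (y / sigma**2 + z / rho**2) + np.sqrt(var_x) * rng.standard_normal(16)
    # Prior step: z | x, exact here for the Gaussian prior N(0, tau^2 I).
    # In PnP split Gibbs this is where a learned denoiser / diffusion model plugs in.
    var_z = 1.0 / (1.0 / tau**2 + 1.0 / rho**2)
    z = var_z * (x / rho**2) + np.sqrt(var_z) * rng.standard_normal(16)
    if it > 1000:                         # discard burn-in
        samples.append(x.copy())

post_mean = np.mean(samples, axis=0)
print("MSE(y):", np.mean((y - x_true)**2),
      "MSE(posterior mean):", np.mean((post_mean - x_true)**2))
```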
- Compare theoretically and experimentally with Chung et al. ICLR 2023 (TP6, Exo 3) for inpainting and Gaussian deblurring.
- Is the method stable when the measurement noise increases for Gaussian deblurring?
- Discuss how well the algorithm matches the theoretical results.
- For inpainting, does the gray color used to fill the holes have an influence on the algorithm?
- For super-resolution, run the algorithm several times, compute the standard deviation of each pixel, and discuss the results (see the sketch below).
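A minimal sketch of the per-pixel standard-deviation study; run_super_resolution is a hypothetical stand-in for the stochastic method under study:

```python
# Run a stochastic reconstruction several times and map pixel-wise variability.
import numpy as np

def run_super_resolution(y, seed):
    # Placeholder (assumption): in the project, call the actual stochastic algorithm.
    rng = np.random.default_rng(seed)
    return y + 0.05 * rng.standard_normal(y.shape)

y = np.zeros((32, 32))                        # dummy input image
runs = np.stack([run_super_resolution(y, s) for s in range(20)])
pixel_std = runs.std(axis=0)                  # (H, W) map of per-pixel std over runs
print("mean std:", pixel_std.mean(), "max std:", pixel_std.max())
# High-std regions typically highlight ambiguous areas (edges, fine textures).
```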
- Compare theoretically and experimentally with Chung et al. ICLR 2023 (TP6, Exo 3) for inpainting and Gaussian deblurring.
- Evaluate the stochasticity of the sampling on some examples and discuss the results.
- Can you extend the model to non-linear problems?
- Train a diffusion model on a 2D toy dataset (e.g. sklearn make_moons), then train a consistency model using the two different approaches (distillation of a diffusion model vs. independent training); a minimal diffusion training sketch is given after this list.
- For inpainting, discuss the quality of the blending between known and unknown pixels.
- Compare experimentally with Chung et al. 2023 (TP6).
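A minimal DDPM training sketch on make_moons (epsilon-prediction objective; architecture, noise schedule, and the crude time embedding are illustrative assumptions). A consistency model would then be distilled from, or trained independently of, this model:

```python
# Train an epsilon-prediction network on 2D data with the standard DDPM loss.
import torch
import torch.nn as nn
from sklearn.datasets import make_moons

T = 200
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

net = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
data, _ = make_moons(n_samples=4096, noise=0.05)
data = torch.tensor(data, dtype=torch.float32)

for step in range(3000):
    x0 = data[torch.randint(len(data), (256,))]
    t = torch.randint(T, (256,))
    eps = torch.randn_like(x0)
    a = alphas_bar[t].unsqueeze(1)                  # (256, 1)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * eps       # closed-form forward noising
    t_in = (t.float() / T).unsqueeze(1)             # crude scalar time embedding
    loss = ((net(torch.cat([xt, t_in], dim=1)) - eps) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 1000 == 0:
        print(step, loss.item())
```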
- Summarize the main results of the paper and explain the benefits brought by the Sliced Wasserstein distance.
- Use Algorithm 1 to learn a generative network for a 2D continuous target distribution (see the Sliced Wasserstein sketch below).
- Use the proposed algorithm to learn a generative network for a large-scale image dataset.
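A minimal Monte Carlo estimate of the squared Sliced Wasserstein-2 distance between two point clouds, using the closed-form 1D optimal coupling between sorted projections (the number of projections is an illustrative assumption):

```python
# SW2^2(mu, nu) = E_theta [ W2^2(theta#mu, theta#nu) ], estimated with random directions.
import numpy as np

def sliced_wasserstein2(x, y, n_proj=100, rng=None):
    # x, y: (n, d) samples with equal n; averages the 1D W2^2 over random directions.
    rng = rng or np.random.default_rng(0)
    theta = rng.standard_normal((n_proj, x.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)   # uniform directions on the sphere
    px, py = x @ theta.T, y @ theta.T                       # (n, n_proj) projections
    # In 1D the optimal coupling is monotone: match sorted projections.
    return np.mean((np.sort(px, axis=0) - np.sort(py, axis=0)) ** 2)

rng = np.random.default_rng(1)
a = rng.standard_normal((500, 2))
b = rng.standard_normal((500, 2)) + np.array([2.0, 0.0])
print(sliced_wasserstein2(a, b, rng=rng))
```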
- How can one constrain a neural network to be the gradient of a convex function? (See the ICNN sketch below.)
- Use the algorithm for Brenier maps to compute a generative network for a low-dimensional example.
- Use the proposed algorithm to learn a generative network for a large-scale image dataset.
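One standard answer is an Input Convex Neural Network (ICNN): hidden-to-hidden weights are constrained non-negative and the activation is convex and non-decreasing, so the scalar output is convex in the input; the candidate Brenier map is then its gradient, obtained by autograd. A minimal sketch (architecture sizes are illustrative assumptions):

```python
# ICNN: z_{k+1} = act(A_k x + W_k^+ z_k), with W_k^+ >= 0 and a convex,
# non-decreasing activation, so x -> f(x) is convex.
import torch
import torch.nn as nn

class ICNN(nn.Module):
    def __init__(self, dim=2, hidden=64, n_layers=3):
        super().__init__()
        self.Ax = nn.ModuleList([nn.Linear(dim, hidden) for _ in range(n_layers)])
        self.Wz = nn.ModuleList([nn.Linear(hidden, hidden, bias=False)
                                 for _ in range(n_layers - 1)])
        self.out = nn.Linear(hidden, 1)
        self.act = nn.Softplus()               # convex and non-decreasing

    def forward(self, x):
        z = self.act(self.Ax[0](x))
        for A, W in zip(self.Ax[1:], self.Wz):
            # clamp keeps hidden-to-hidden weights non-negative -> convexity in x
            z = self.act(A(x) + nn.functional.linear(z, W.weight.clamp(min=0)))
        return nn.functional.linear(z, self.out.weight.clamp(min=0),
                                    self.out.bias).squeeze(-1)

    def gradient(self, x):
        # The candidate Brenier map: gradient of the convex potential, via autograd.
        x = x.requires_grad_(True)
        return torch.autograd.grad(self.forward(x).sum(), x, create_graph=True)[0]

f = ICNN()
x = torch.randn(8, 2)
print(f.gradient(x).shape)    # torch.Size([8, 2])
```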
- Explain the relation and differences between usual OT and GMM OT.
- Exploit the GMM OT cost to learn a generative model for a low-dimensional example (see the sketch below).
- Use the proposed algorithm to learn a generative network for a large-scale image dataset.
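A minimal sketch of the GMM OT cost between two toy mixtures: a discrete OT problem over the components, whose ground cost is the squared W2 distance between Gaussians (assumes the POT library, installable via pip install pot):

```python
# GMM OT: discrete OT between mixture components with Gaussian W2 as ground cost.
import numpy as np
import ot                                  # POT: Python Optimal Transport
from scipy.linalg import sqrtm

def w2_gaussians(m1, S1, m2, S2):
    # Squared W2 between N(m1, S1) and N(m2, S2) (Bures-Wasserstein formula).
    rS2 = sqrtm(S2)
    cross = np.real(sqrtm(rS2 @ S1 @ rS2))
    return float(np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2 * cross))

# Two toy 2D mixtures: (weights, means, covariances) -- illustrative assumptions.
w_a = np.array([0.5, 0.5]); m_a = [np.zeros(2), np.array([3.0, 0.0])]; S_a = [np.eye(2)] * 2
w_b = np.array([0.3, 0.7]); m_b = [np.array([0.0, 1.0]), np.array([3.0, 1.0])]; S_b = [0.5 * np.eye(2)] * 2

C = np.array([[w2_gaussians(m_a[i], S_a[i], m_b[j], S_b[j])
               for j in range(2)] for i in range(2)])
print("GMM OT cost:", ot.emd2(w_a, w_b, C))   # discrete OT value over components
```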
- What is the performance of Algorithm 3 for image restoration if you use another deep neural network denoiser instead of the proposed flow denoiser?
- Can you comment on Proposition 4 and its proof? In particular, is the algorithm deterministic? Can you propose a reformulation of this proposition?
- What are the experimental limits of PnP-Flow? Can you show experiments where PnP-Flow fails to restore? (Suggestion: look at restoration with very noisy images or inpainting with large masked patches.)
- Can you test the error quantification method with a Total Variation (TV) regularization for image inverse problems? (See the TV sketch below.)
- Can you test it with a deep regularization and compare the error quantification results between TV and deep regularization?
- How do you generalize this work to RGB images?
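A minimal TV-regularized denoising sketch via gradient descent on a smoothed isotropic TV, around which the error quantification method could be tested (the epsilon-smoothing, step size, and regularization weight are illustrative assumptions):

```python
# Gradient descent on ||x - y||^2 / 2 + lam * TV_eps(x), with smoothed isotropic TV.
import numpy as np

def grad_smoothed_tv(x, eps=1e-3):
    # Gradient of sum sqrt(|dx|^2 + |dy|^2 + eps^2), forward differences,
    # replicate boundary; the gradient is minus the divergence of the normalized field.
    dx = np.diff(x, axis=0, append=x[-1:, :])
    dy = np.diff(x, axis=1, append=x[:, -1:])
    norm = np.sqrt(dx**2 + dy**2 + eps**2)
    px, py = dx / norm, dy / norm
    div = (np.diff(px, axis=0, prepend=np.zeros((1, x.shape[1])))
           + np.diff(py, axis=1, prepend=np.zeros((x.shape[0], 1))))
    return -div

def tv_denoise(y, lam=0.1, tau=0.2, n_iter=300):
    x = y.copy()
    for _ in range(n_iter):
        x -= tau * ((x - y) + lam * grad_smoothed_tv(x))
    return x

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
print("MSE before:", np.mean((noisy - clean)**2),
      "MSE after:", np.mean((tv_denoise(noisy) - clean)**2))
```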
- Can you test this reconstruction method with a pre-trained diffusion model?
- Can you compare this method with standard diffusion model sampling?
- Comment on the different assumptions of the theoretical part. Can you test the optimality of Theorem 1 in practice?