Hi, welcome to my homepage.
I am a fourth-year PhD student in the Gatsby Computational Neuroscience Unit at UCL and an ELLIS PhD student.
Previously, I was a research assistant at the Istituto Italiano di Tecnologia in the Computational Statistics and Machine Learning team in Genoa, working with Massimiliano Pontil and Carlo Ciliberto. From May 2020 to November 2020, I was a (remote) research intern with Pierre Alquier and Emtiyaz Khan in the Approximate Bayesian Inference Team of the RIKEN Center for Advanced Intelligence Project in Tokyo.
I graduated from the Master MVA (Machine Learning and Computer Vision) at ENS Paris-Saclay and obtained an engineering degree from ENSAE Paris, specialising in Statistics.
Publications
Preprints
- Bozkurt B., Deaner B., Meunier D., Xu L., Gretton A., Density Ratio-based Proxy Causal Learning Without Density Ratios, 2024. Submitted.
- Kim J., Meunier D., Gretton A., Suzuki T., Li Z., Optimality and Adaptivity of Deep Neural Features for Instrumental Variable Regression, 2024. Submitted.
- Meunier D.*, Li Z.*, Gretton A., Kpotufe S., Nonlinear Meta-Learning Can Guarantee Faster Rates, 2023. Available on arXiv:2307.10870. Presented at the NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning (M3L). Submitted.
Journal
- Li Z.*, Meunier D.*, Mollenhauer M., Gretton A., Towards Optimal Sobolev Norm Rates for the Vector-Valued Regularized Least-Squares Algorithm. Journal of Machine Learning Research, 2024, vol. 25, no. 181, pp. 1-51. Available on arXiv:2312.07186.
- Meunier D., Alquier P., Meta-strategy for Learning Tuning Parameters with Guarantees. Entropy, 2021, vol. 23, no. 10, 1257. Part of the special issue on Approximate Bayesian Inference. Available on arXiv:2102.02504. Code.
Conference
- Meunier D.*, Shen Z.*, Mollenhauer M., Gretton A., Li Z., Optimal Rates for Vector-Valued Spectral Regularization Learning Algorithms. To appear in NeurIPS 2024. Available on arXiv:2405.14778.
- Li Z.*, Meunier D.*, Mollenhauer M., Gretton A., Optimal Rates for Regularized Conditional Mean Embedding Learning. Advances in Neural Information Processing Systems 35 (NeurIPS), 2022, pp. 4433–4545. Available on arXiv:2208.01711.
- Meunier D., Pontil M., Ciliberto C., Distribution Regression with Sliced Wasserstein Kernels. Proceedings of the 39th International Conference on Machine Learning (ICML), Proceedings of Machine Learning Research, 2022, vol. 162, pp. 15501–15523. Available on arXiv:2202.03926.
(* Equal contribution)
Teaching
- Gatsby Bridging Programme - 2024
- Advanced Topics in Machine Learning, Kernel Methods - Computational Statistics and Machine Learning MSc - UCL - Fall 2022 & 2023 with Arthur Gretton
- Introduction to Stochastic Processes - Graduate (M1) - ENSAE Paris - Fall 2020 with Nicolas Chopin
- Tutor for first year students in Linear Algebra and Functional Analysis - Université Paris Dauphine - Fall 2017
Education
- MSc in Statistics & Machine Learning, ENS Paris-Saclay, 2019-2020
- MSc in Statistics & Economics, ENSAE Paris, 2018-2020
- BSc in Mathematics, Université Paris Dauphine, 2014-2018
Reading groups
- PIMS online graduate course on Optimal Transport + Gradient Flows, Fall 2023
- Organiser of the Machine Learning Journal Club at Gatsby CNU, UCL, 2022-2023
- High-Dimensional Probability: An Introduction with Applications in Data Science, Roman Vershynin - January 2023 - March 2023
- Introductory Functional Analysis with Applications, Erwin Kreyszig, June 2022 - December 2022
- Learning Theory from First Principles, Francis Bach, April 2021 - September 2021
Attendance
- Learning and Optimization in Luminy - LOL, CIRM, Luminy, 2024
- Workshop on Functional Inference and Machine Intelligence - FIMI, Bristol, UK, 2024
- Machine Learning Summer School, OIST, Okinawa, 2024
- Gradient Flows For Sampling, Inference, and Learning, Royal Statistical Society, 2023
- Meeting in Mathematical Statistics, CIRM, Luminy, 2022
- NeurIPS, New Orleans, 2022
- ICML, Baltimore, 2022
- COLT, London, 2022