
Quentin Bertrand
@qu3ntinb
Researcher at @Inria. Previously, postdoctoral researcher at @Mila_Quebec w/ @SimonLacosteJ and @gauthier_gidel.
ID: 1346422213507952640
https://qb3.github.io/ 05-01-2021 11:44:14
758 Tweets
948 Followers
1.1K Following




Pleased to see that this time, three Czech ladies are on the list of European Research Council (ERC) Advanced Grant recipients, and I am very proud to be among them ;). Congrats to Kateřina Čapková and Anna Durnová!

🎉 It’s official! I’ve been awarded an ERC Advanced Grant for my project on Statistical Analysis of Generative Models. More details 👉 crest.science/arnak-dalalyan… #ERCAdG European Research Council (ERC)

❓ How long does SGD take to reach the global minimum on non-convex functions? With Franck Iutzeler, J. Malick, P. Mertikopoulos, we tackle this fundamental question in our new ICML 2025 paper: "The Global Convergence Time of Stochastic Gradient Descent in Non-Convex Landscapes"
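For intuition, here is a rough sketch of the setting (my own illustration, not the paper's method or analysis): plain SGD with additive Gaussian gradient noise on a simple 1-D non-convex objective with a shallow local minimum and a global minimum, where the question above becomes how long the noisy iterates take to cross the barrier between the two basins. The objective, step size, and noise level are arbitrary choices.

# A rough sketch, not the paper's code: SGD with Gaussian gradient noise on
# f(x) = x^4 - 3x^2 + x, which has a shallow local minimum near x ≈ 1.1
# and the global minimum near x ≈ -1.3.
import numpy as np

rng = np.random.default_rng(0)

def grad_f(x):
    # Exact gradient of f(x) = x^4 - 3x^2 + x.
    return 4 * x**3 - 6 * x + 1

x = 2.0            # initialization in the basin of the *local* minimum
step_size = 1e-2   # illustrative constant step size
noise_std = 0.5    # stochastic-gradient noise level

for t in range(100_000):
    g = grad_f(x) + noise_std * rng.standard_normal()
    x -= step_size * g

print(f"iterate after 100k steps: {x:.3f}")
# How long it takes the noise to push the iterate over the barrier between
# the two basins is exactly the kind of hitting time the paper studies.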


It's hard to plan for AGI without knowing what outcomes are even possible, let alone good. So we’re hosting a workshop! Post-AGI Civilizational Equilibria: Are there any good ones? Vancouver, July 14th. Featuring: Joe Carlsmith, Richard Ngo, Emmett Shear 🧵


🧵(1/6) Delighted to share our ICML 2025 spotlight paper: the Feynman-Kac Correctors (FKCs) in Diffusion. Picture this: it’s inference time and we want to generate new samples from our diffusion model. But we don’t want to just copy the training data – we may want to sample

New paper on the generalization of Flow Matching arxiv.org/abs/2506.03719 🤯 Why does flow matching generalize? Did you know that the flow matching target you're trying to learn **can only generate training points**? With Quentin Bertrand, Anne Gagneux & Rémi Emonet 👇👇👇
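A 1-D toy illustration of that claim (my own sketch, not the paper's code), assuming the standard Gaussian probability path p_t(x | x1) = N(x; t*x1, (1-t)^2) with conditional velocity u_t(x | x1) = (x1 - x) / (1 - t): integrating the exact flow matching target from noise only lands on training points.

# Tiny "training set"; the exact FM target is a posterior-weighted average
# of the conditional velocities toward each training point.
import numpy as np

rng = np.random.default_rng(0)
train = np.array([-2.0, 0.5, 3.0])

def exact_target_velocity(x, t):
    # Weights are proportional to p_t(x | x1_i) for each training point x1_i.
    log_w = -0.5 * ((x - t * train) / (1.0 - t)) ** 2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return np.sum(w * (train - x)) / (1.0 - t)

# Integrate dx/dt = u_t(x) with Euler steps, starting from x_0 ~ N(0, 1).
n_steps, dt = 1000, 1.0 / 1000
for x0 in rng.standard_normal(5):
    x = x0
    for k in range(n_steps - 1):          # stop just short of t = 1
        x += dt * exact_target_velocity(x, k * dt)
    print(f"noise {x0:+.2f}  ->  sample {x:+.4f}")
# Every generated sample ends up essentially on one of the training points,
# so the exact target only reproduces the data; any generalization has to
# come from the learned network not matching this target exactly.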



🚀 IVADO and the Future Skills Centre (Centre des Compétences futures), in collaboration with HEC Montréal's Tech3Lab, are launching a new free #AI training program for professionals in #Québec and #Canada. Read the press release ➡️ lnkd.in/gdDkqj8N





Burny - Effective Omni · Mathurin Massias · Quentin Bertrand: Yes, it is basically a different way of training normalizing flows via a regression objective on the vector field, thus avoiding simulation steps at training time. Meta uses it!
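For concreteness, a minimal simulation-free training loop in the spirit described above (conditional flow matching with straight interpolation paths): the velocity network is trained by plain least-squares regression, with no ODE solver inside the training loop. The toy data, network size, and hyperparameters are my own illustrative choices, not any particular implementation.

# Regress v_theta(x_t, t) onto the path velocity instead of simulating the flow.
import torch
import torch.nn as nn

torch.manual_seed(0)

velocity_net = nn.Sequential(
    nn.Linear(2, 64), nn.SiLU(),
    nn.Linear(64, 64), nn.SiLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(velocity_net.parameters(), lr=1e-3)

def sample_data(n):
    # Toy 1-D target: mixture of two Gaussians at -2 and +2.
    centers = torch.tensor([-2.0, 2.0])[torch.randint(0, 2, (n,))]
    return (centers + 0.2 * torch.randn(n)).unsqueeze(1)

for step in range(2000):
    x1 = sample_data(256)                   # data
    x0 = torch.randn_like(x1)               # noise
    t = torch.rand(x1.shape[0], 1)          # random time in [0, 1]
    xt = (1 - t) * x0 + t * x1              # point on the straight path
    target = x1 - x0                        # conditional velocity of that path
    pred = velocity_net(torch.cat([xt, t], dim=1))
    loss = ((pred - target) ** 2).mean()    # pure regression, no simulation
    opt.zero_grad(); loss.backward(); opt.step()

# Simulation only happens at sampling time: integrate dx/dt = v_theta(x, t).
with torch.no_grad():
    x = torch.randn(1000, 1)
    n_steps = 100
    for k in range(n_steps):
        t = torch.full((x.shape[0], 1), k / n_steps)
        x = x + velocity_net(torch.cat([x, t], dim=1)) / n_steps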

