Felix Draxler (@felixdrrelax)'s Twitter Profile
Felix Draxler

@felixdrrelax

Machine Learning PostDoc at UC Irvine with Stephan Mandt. PhD from Heidelberg University. Generative models, normalizing flows w/ cell bio applications

ID: 512935817

Joined: 03-03-2012 08:20:24

19 Tweets

90 Followers

55 Following

Felix Draxler (@felixdrrelax)

We're publishing the code to our ICML 2018 paper "Essentially No Barriers in Neural Network Energy Landscape" (arxiv.org/abs/1803.00885) at github.com/fdraxler/PyTor…. Check out our talk at ICML Conference on Wed 5pm.

Jonathan Frankle (@jefrankle)

PS. When we examine the optimization landscape, we're looking for a linear form of "mode connectivity." In doing so, we build on foundational work by @feldrify et al. & Timur Garipov et al. showing that neural network optima are connected by nonlinear paths of nonincreasing error.

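To make the "linear form of mode connectivity" concrete: the check amounts to evaluating the loss along the straight line between the weights of two independently trained networks. Below is a minimal sketch under that reading; the model, loss_fn, and loader names are illustrative placeholders, not from the thread.

```python
# Hedged sketch: loss along the linear path theta(t) = (1 - t)*theta_a + t*theta_b
# between two trained PyTorch models. A "barrier" shows up as a bump in the
# returned losses above the endpoint values. (Illustrative names throughout.)
import copy
import torch

def loss_along_linear_path(model_a, model_b, loss_fn, loader, steps=11):
    probe = copy.deepcopy(model_a)
    probe.eval()
    losses = []
    for t in torch.linspace(0.0, 1.0, steps):
        with torch.no_grad():
            # Interpolate every parameter; batch-norm buffers, if any,
            # would need the same treatment (or re-estimation on data).
            for p, pa, pb in zip(probe.parameters(),
                                 model_a.parameters(),
                                 model_b.parameters()):
                p.copy_((1.0 - t) * pa + t * pb)
            total, n = 0.0, 0
            for x, y in loader:
                total += loss_fn(probe(x), y).item() * x.size(0)
                n += x.size(0)
        losses.append(total / n)
    return losses
```

A flat loss curve along this path indicates linear mode connectivity; the work by Draxler et al. and Garipov et al. cited above shows that even when the straight line has a barrier, a nonlinear path of nonincreasing error between the optima still exists.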
Felix Draxler (@felixdrrelax)

Check out our paper on the Role of a Single Affine Layer in Normalizing Flows, awarded an Honorable Mention at GCPR 2020:
Video: youtu.be/rIxY94zPPi0?t=…
Paper: unitc-my.sharepoint.com/:b:/g/personal…
Kyle Cranmer (@kylecranmer)

New paper out from a group in Heidelberg that extends the work Johann Brehmer & I did on manifold-learning flows to unrestricted (aka non-invertible) auto-encoders. They also suggest a different way to avoid a failure mode we identified: arxiv.org/abs/2306.01843

Felix Draxler (@felixdrrelax)

Our new preprint trains any neural network architecture as a generative model via maximum likelihood: arxiv.org/abs/2310.16624 Free-form flows (FFF) work well and sample fast. We showcase this on SBI and molecule generation. Thanks to Peter Sorrenson, Armand, Lea and Ullrich!

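A rough sketch of the free-form flow idea as described in the preprint: an unconstrained encoder/decoder pair trained by maximum likelihood, where the gradient of the intractable log-determinant term is estimated with a Hutchinson-style trace estimator that treats the decoder Jacobian as an approximate inverse of the encoder's. The module names, dimensions, and the beta weight below are illustrative assumptions, not the authors' reference code.

```python
# Hedged sketch of one free-form flow (FFF) training step, assuming:
# z = enc(x) maps data to a standard-normal latent, dec approximates enc^{-1},
# and grad log|det J_enc| is estimated via tr(sg[J_dec] J_enc) with a
# Hutchinson probe v. (Illustrative hyperparameters and names.)
import torch
import torch.nn as nn

dim = 2
enc = nn.Sequential(nn.Linear(dim, 64), nn.SiLU(), nn.Linear(64, dim))
dec = nn.Sequential(nn.Linear(dim, 64), nn.SiLU(), nn.Linear(64, dim))
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)

def fff_step(x, beta=10.0):
    x = x.clone().requires_grad_(True)
    v = torch.randn_like(x)                     # Hutchinson probe vector
    z = enc(x)
    # w = v^T J_enc(x): reverse-mode product, kept in the autograd graph
    w = torch.autograd.grad(z, x, grad_outputs=v, create_graph=True)[0]
    # a = J_dec(z) v, detached: the decoder Jacobian acts as a constant
    _, a = torch.autograd.functional.jvp(dec, z.detach(), v)
    # Surrogate whose parameter gradient matches grad log|det J_enc| in
    # expectation over v (its raw value is not the log-determinant itself).
    logdet_surrogate = (w * a).sum(dim=-1)
    nll = 0.5 * (z ** 2).sum(dim=-1) - logdet_surrogate
    recon = ((dec(z) - x) ** 2).sum(dim=-1)     # keeps dec close to enc^{-1}
    loss = (nll + beta * recon).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Sampling is then just dec(z) with z drawn from a standard normal, one forward pass through an arbitrary architecture, which matches the tweet's point about fast sampling.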
Farrin Marouf Sofian (@farrinsofian)

🚀 News! Our recent #ICML2025 paper “Variational Control for Guidance in Diffusion Models” introduces a simple yet powerful method for guidance in diffusion models — and it doesn’t need model retraining or extra networks. 📄 Paper: arxiv.org/abs/2502.03686 💻 Code: