Denis Blessing (@denbless94)'s Twitter Profile
Denis Blessing

@denbless94

PhD student at Karlsruhe Institute of Technology, Germany - Working on variational inference and sampling

ID: 1866778249520664576

Website: https://denisbless.github.io/ · Joined: 11-12-2024 09:33:21

1 Tweet

24 Followers

63 Following

Lorenz Richter @ICLR'25 (@lorenz_richter)

Our new work arxiv.org/pdf/2503.01006 extends the theory of diffusion bridges to degenerate noise settings, including underdamped Langevin dynamics (with Denis Blessing, Julius Berner). This enables more efficient diffusion-based sampling with substantially fewer discretization steps.

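As background for the tweet above: underdamped Langevin dynamics augments the sampled position with a velocity variable, so noise enters only through the velocity (a degenerate-noise setting). Below is a minimal, illustrative Euler-Maruyama simulation of these dynamics targeting a standard Gaussian; the function name, step size, and friction value are my own choices for the sketch, not anything from the linked paper.

```python
import numpy as np

def underdamped_langevin(grad_U, x0, n_steps=5000, dt=0.01, gamma=1.0, seed=0):
    """Simulate underdamped (kinetic) Langevin dynamics with an
    Euler-Maruyama discretization (unit mass, unit temperature):
        dx = v dt
        dv = (-grad_U(x) - gamma * v) dt + sqrt(2 * gamma) dW
    Note the noise acts only on v, never directly on x.
    Returns the trajectory of positions."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    traj = np.empty((n_steps,) + x.shape)
    for t in range(n_steps):
        noise = rng.standard_normal(x.shape)
        v += dt * (-grad_U(x) - gamma * v) + np.sqrt(2.0 * gamma * dt) * noise
        x += dt * v
        traj[t] = x
    return traj

# Target: standard Gaussian, U(x) = x^2 / 2, so grad_U(x) = x.
# 1000 independent scalar chains run elementwise, all started at x = 3.
traj = underdamped_langevin(lambda x: x, x0=np.full(1000, 3.0))
samples = traj[-1]  # final positions, approximately N(0, 1) after burn-in
```

After enough steps the empirical mean and variance of the final positions are close to 0 and 1, up to Monte Carlo error and the O(dt) discretization bias of this simple integrator.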
Onur Celik (@onclk_)

I am happy to share that our work 'DIME: Diffusion-Based Maximum Entropy Reinforcement Learning' has been accepted to ICML 2025. Many thanks to my colleagues and collaborators Zechu Li, Denis Blessing, Ge Li, Daniel Palenicek, Jan Peters, Georgia Chalvatzaki, Gerhard Neumann

Kirill Neklyudov (@k_neklyudov)

1/ Where do Probabilistic Models, Sampling, Deep Learning, and Natural Sciences meet? 🤔 The workshop we're organizing at #NeurIPS2025! 📢 FPI@NeurIPS 2025: Frontiers in Probabilistic Inference – Learning meets Sampling. Learn more and submit → fpiworkshop.org

Lorenz Richter @ICLR'25 (@lorenz_richter)

Excited for #ICML2025 in Vancouver! On Thursday morning, I'm presenting our paper (arxiv.org/pdf/2506.00962) on a critical issue in reinforcement learning: how to correctly handle random time horizons. We've identified incorrect formulas and offer a solution. Let's chat, write me!

Jiajun He (@jiajunhe614)

When sampling from multimodal distributions, we rely on multiple temperatures to balance exploration and exploitation. Can we bring this idea into the world of diffusion-based neural samplers? 👉Check out our ICML paper to see how this idea can lead to significant improvements!
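The idea referenced above has a classical ancestor: parallel tempering (replica exchange), where chains at several temperatures explore in parallel and periodically swap states, so the hot chains hop between modes and feed those hops to the cold chain. The sketch below is that classical scheme with overdamped Langevin updates, not the diffusion-based sampler from the ICML paper; the temperature ladder and bimodal target are illustrative choices.

```python
import numpy as np

def parallel_tempering(grad_U, U, temps, n_steps=4000, dt=0.05, swap_every=10, seed=0):
    """Overdamped Langevin chains at several temperatures with periodic
    Metropolis swap moves between adjacent temperature levels.
    Returns the samples collected from the coldest chain (temps[0])."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(len(temps))  # one scalar state per temperature
    cold_samples = []
    for t in range(n_steps):
        # Langevin step at each temperature T: noise scale sqrt(2 * dt * T).
        for i, T in enumerate(temps):
            x[i] += -dt * grad_U(x[i]) + np.sqrt(2.0 * dt * T) * rng.standard_normal()
        if t % swap_every == 0:
            for i in range(len(temps) - 1):
                # Accept swap with prob min(1, exp((1/T_i - 1/T_j)(U_i - U_j))).
                delta = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (U(x[i]) - U(x[i + 1]))
                if np.log(rng.uniform()) < delta:
                    x[i], x[i + 1] = x[i + 1], x[i]
        cold_samples.append(x[0])
    return np.array(cold_samples)

# Bimodal target: equal mixture of unit Gaussians at +/-2, so that
# U(x) = -log p(x) = x^2/2 - log(2*cosh(2x)) up to an additive constant.
U = lambda x: x**2 / 2 - np.log(2.0 * np.cosh(2.0 * x))
grad_U = lambda x: x - 2.0 * np.tanh(2.0 * x)
cold = parallel_tempering(grad_U, U, temps=[1.0, 2.0, 4.0])
```

With swaps enabled, the cold chain visits both modes instead of getting trapped near one of them; the balance the tweet describes is exactly this trade between hot-chain exploration and cold-chain exploitation.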