leander (@leanderkur)'s Twitter Profile
leander

@leanderkur

PhD in prob. ML @ APRIL-lab at the University of Edinburgh. Tractable probabilistic ML, closed-form stat. quantities, and probabilistic ML under hard constraints

ID: 95281114

Joined: 07-12-2009 21:20:15

124 Tweets

44 Followers

202 Following

antonio vergari - hiring PhD students (@tetraduzione)

Classical mixture models are limited to positive weights, and this requires learning very large mixtures! Can we learn (deep) mixtures with negative weights? Answer in our #ICLR2024 spotlight by Lorenzo Loconte, Aleks, Martin, Stefan, Nicolas, Arno Solin 📜 openreview.net/forum?id=xIHi5…
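
For intuition, here is a minimal numerical sketch of the squaring idea behind subtractive mixtures (parameters are illustrative, not taken from the paper): squaring a linear combination that includes a negative weight keeps the function non-negative, and renormalizing turns it into a valid density that genuinely subtracts mass near zero.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

# Subtractive mixture via squaring (illustrative parameters):
# c(x) = a1*N(x; 0, 1) - a2*N(x; 0, 0.3); squaring keeps p non-negative.
a1, a2 = 1.0, 0.6  # the second component enters with a negative weight

xs = np.linspace(-8, 8, 4001)
c = a1 * norm.pdf(xs, loc=0, scale=1) - a2 * norm.pdf(xs, loc=0, scale=0.3)
p = c**2
p /= trapezoid(p, xs)  # renormalize so the squared function is a density

print(f"min p(x) = {p.min():.3e}")           # >= 0 by construction
print(f"integral = {trapezoid(p, xs):.6f}")  # ~= 1.0
```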

Andreas Grivas (@andreasgrv)

The softmax bottleneck is an interesting problem; it has many side effects which we do not yet fully understand! If you want to build an intuition for the problem, here is an interactive visualisation I made grv.unargmaxable.ai/static/files/s… (best viewed on desktop).
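
To see one concrete side effect (a toy sketch of mine, not the linked visualisation): with a low-rank softmax layer, a token whose output embedding lies strictly inside the convex hull of the other embeddings can never be the argmax, for any hidden state.

```python
import numpy as np

rng = np.random.default_rng(0)
d, V = 2, 4  # hidden size 2, vocab of 4 tokens (toy sizes)

# Output embeddings: token 3 is placed strictly inside the convex hull
# of tokens 0..2, so its score is an average of theirs and can never
# strictly exceed all of them.
W = rng.normal(size=(V, d))
W[3] = (W[0] + W[1] + W[2]) / 3.0  # convex combination of the others

hits = 0
for _ in range(100_000):
    h = rng.normal(size=d)  # random hidden states
    if np.argmax(W @ h) == 3:
        hits += 1
print(f"token 3 was argmax in {hits} of 100000 draws")  # expect 0
```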

Nicola Branchini (@branchini_nic)

Will be at AISTATS this week, would love to chat if you have integrals to approximate or are generally into compstat / opt. transport / estimation in causal inference.
Also we have two papers around importance sampling and variational inference.
1st: adaptive IS for heavy tails
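
Since the thread mentions adaptive IS for heavy tails, here is a minimal (non-adaptive) self-normalized importance sampling sketch for a heavy-tailed target; the target, proposal, and degrees of freedom are illustrative choices, not the paper's method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Target: Student-t with 3 dof (heavy-ish tails); quantity: P(|X| < 3).
# Proposal: Cauchy, whose tails are heavier than the target's, so the
# importance weights stay bounded (a standard safe choice for heavy tails).
target = stats.t(df=3)
proposal = stats.cauchy()

n = 200_000
x = proposal.rvs(size=n, random_state=rng)
w = target.pdf(x) / proposal.pdf(x)            # bounded importance weights
est = np.sum(w * (np.abs(x) < 3)) / np.sum(w)  # self-normalized IS estimate

print(f"IS estimate: {est:.4f}")
print(f"exact:       {target.cdf(3) - target.cdf(-3):.4f}")
```
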
Emile van Krieken (@emilevankrieken)

In our ICML 2024 (ICML Conference) paper, we study neurosymbolic methods under the very common independence assumption… and find many problems!

1️⃣ Non-convexity
2️⃣ Disconnected minima
3️⃣ Unable to represent uncertainty

Making optimisation very challenging 😱

Let’s explore 👇
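
A tiny illustration of the kind of pathology the thread lists (my toy construction, not the paper's exact example): under the independence assumption, the probability that an XOR constraint holds, as a function of two independent Bernoulli parameters, is maximized only at two disconnected deterministic corners, and no factorized distribution can put mass on both solutions at once.

```python
import numpy as np

# XOR constraint over two independent Bernoulli variables A, B:
# P(A xor B) = p*(1-q) + (1-p)*q under the independence assumption.
def p_xor(p, q):
    return p * (1 - q) + (1 - p) * q

grid = np.linspace(0, 1, 101)
P, Q = np.meshgrid(grid, grid)
S = p_xor(P, Q)

# The maximizers are exactly the corners (1, 0) and (0, 1):
# two disconnected minima of the loss -log P(constraint).
mask = S >= S.max() - 1e-12
for p, q in zip(P[mask], Q[mask]):
    print(f"maximizer: p={p:.1f}, q={q:.1f}")

# Independence cannot represent 50/50 uncertainty over the two solutions:
# at p = q = 0.5 the constraint probability is only 0.5, because the
# factorized model leaks mass onto the violating states (0,0) and (1,1).
print("P(xor) at p=q=0.5:", p_xor(0.5, 0.5))
```
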
Lorenzo Loconte (@loreloc_)

We learn more expressive mixture models that can subtract probability density by squaring them.
🚨 We show squaring can reduce expressiveness. To tackle this we build sum of squares circuits 🆘
🚀 We explain why complex parameters help, and show an expressiveness hierarchy around 🆘
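
A minimal sketch of the complex-parameter version of the squaring trick (coefficients are arbitrary illustrative values, not the paper's construction): with complex weights, p(x) ∝ |c(x)|² is still non-negative, and the interference cross term can subtract mass where the components overlap.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

# Complex-weighted combination of two Gaussian components (illustrative):
# p(x) ∝ |a1*N(x; -1, 1) + a2*N(x; 1, 1)|^2 is non-negative by construction.
a1 = 1.0 + 0.5j
a2 = -0.4 + 0.8j  # complex coefficients; moduli and phases chosen arbitrarily

xs = np.linspace(-10, 10, 4001)
c = a1 * norm.pdf(xs, loc=-1, scale=1) + a2 * norm.pdf(xs, loc=1, scale=1)
p = np.abs(c) ** 2
p /= trapezoid(p, xs)  # renormalize numerically

print(f"min p(x) = {p.min():.3e}")           # >= 0
print(f"integral = {trapezoid(p, xs):.6f}")  # ~= 1.0
```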

IMS (@instmathstat)

Exciting news in the global statistics community! Grace Wahba was awarded the prestigious 2025 International Prize in Statistics for her groundbreaking work on smoothing splines, which revolutionized data analysis and machine learning. https://www.statprize.org/index.cfm
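
For readers who have not met them: a smoothing spline fits a curve by trading data fidelity against a roughness penalty on the second derivative. A minimal illustration using scipy's UnivariateSpline (the data are synthetic and the smoothing factor is an arbitrary choice):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(42)

# Noisy samples of a smooth function (synthetic data).
x = np.linspace(0, 10, 80)
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)

# UnivariateSpline balances fit against smoothness: it adds knots until the
# residual sum of squares drops below s (s here is an illustrative value).
spl = UnivariateSpline(x, y, s=len(x) * 0.3**2)

x_fine = np.linspace(0, 10, 500)
rmse = np.sqrt(np.mean((spl(x_fine) - np.sin(x_fine)) ** 2))
print(f"RMSE against the noise-free function: {rmse:.3f}")
```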