Gatsby Computational Neuroscience Unit (@gatsbyucl)'s Twitter Profile
Gatsby Computational Neuroscience Unit

@gatsbyucl

We study mathematical principles of learning, perception & action in brains & machines. Funded by Gatsby Charitable Foundation.
Also on bluesky & mastodon.

Website: https://www.ucl.ac.uk/gatsby/ · Joined: 27-09-2019 12:55:34

570 Tweets

5.5K Followers

176 Following

Kira Düsterwald (@kiradusterwald)

Looking forward to presenting Tree-WSV for fast unsupervised ground metric learning at ICLR 2025 tomorrow at 3 pm! Come find me at poster #197 🌳
Siu Lun Chau (@chau9991)

📍 I’ll be presenting our AISTATS 2025 paper “Credal Two-Sample Tests of Epistemic Uncertainty” this week in Phuket!

Joint work with the amazing Antonin Schrab, Arthur Gretton, Dino Sejdinovic, and Krikamol Muandet 🙌

paper: arxiv.org/pdf/2410.12921
video: youtu.be/Rq9qW0GZJeE
Arthur Gretton (@arthurgretton)

Density Ratio-based Proxy Causal Learning Without Density Ratios 🤔

at #AISTATS2025

An alternative bridge function for proxy causal learning with hidden confounders.

arxiv.org/abs/2503.08371

Bariscan Bozkurt, Ben Deaner, Dimitri Meunier, LY9988
Gatsby Computational Neuroscience Unit (@gatsbyucl)

📢 We have an opportunity for students to join our PhD programme in Theoretical Neuroscience and Machine Learning this September. Application deadline is 27 May 2025. Information & how to apply 👉 ucl.ac.uk/gatsby/study-a…

Arthur Gretton (@arthurgretton)

Credal Two-Sample Tests of Epistemic Uncertainty
at #AISTATS25

Compare credal sets: convex sets of probability measures, where the elements capture aleatoric uncertainty and the set itself represents epistemic uncertainty.

arxiv.org/abs/2410.12921

Siu Lun Chau, Antonin Schrab, Dino Sejdinovic, Krikamol Muandet
Arthur Gretton (@arthurgretton)

Kernel Single Proxy Control for Deterministic Confounding

at #AISTATS2025

Proxy causal learning generally requires two proxy variables - a treatment and an outcome proxy. When is it possible to use just one?

arxiv.org/abs/2308.04585

LY9988
Antonin Schrab (@antoninschrab)

Oral #AISTATS25

Robust Kernel Hypothesis Testing under Data Corruption

- Robustify any permutation test to be immune to data corruption of up to X samples
- Robust minimax optimality for MMD and HSIC

Monday 5 May
- Oral Session 7: Robust Learning
- Poster Session 3

Presented by Ilmun Kim
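
For readers new to kernel permutation tests, a minimal non-robust baseline in Python/numpy: a vanilla MMD two-sample permutation test with an RBF kernel. This is illustrative only; the paper's contribution is making such tests provably robust to corrupted samples, which this sketch does not attempt.

    import numpy as np

    def rbf_kernel(A, B, sigma=1.0):
        # Pairwise RBF kernel matrix between rows of A and rows of B.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))

    def mmd2(X, Y, sigma=1.0):
        # Biased (V-statistic) estimate of the squared MMD between samples.
        return (rbf_kernel(X, X, sigma).mean()
                + rbf_kernel(Y, Y, sigma).mean()
                - 2 * rbf_kernel(X, Y, sigma).mean())

    def mmd_permutation_test(X, Y, n_perms=200, alpha=0.05, seed=0):
        # Recompute the statistic under random relabellings of the pooled
        # sample; reject when the observed statistic is in the upper tail.
        rng = np.random.default_rng(seed)
        Z, n = np.vstack([X, Y]), len(X)
        stat = mmd2(X, Y)
        null = []
        for _ in range(n_perms):
            idx = rng.permutation(len(Z))
            null.append(mmd2(Z[idx[:n]], Z[idx[n:]]))
        p = (1 + sum(s >= stat for s in null)) / (1 + n_perms)
        return stat, p, p <= alpha
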
Clémentine Dominé 🍊 (@clementinedomi6)

Our paper "Solve Layerwise Linear Models First to Understand Neural Dynamical Phenomena" has been accepted as a position paper to ICML 2025! arxiv.org/abs/2502.21009 These models offer a tractable path to understanding complex neural dynamics—before diving into full nonlinearity.

SWC (@swc_neuro)

SWC researchers have uncovered a new way that the brain learns.

The second learning system is frequency-based and may help explain how the brain forms habits and why they are so hard to break.

Full story ⤵️ sainsburywellcome.org/web/research-n…
Stefano Sarao Mannelli (@stefsmlab)

Our paper just came out in PRX! Congrats to Nishil Patel and the rest of the team. TL;DR: We analyse neural network learning through the lens of statistical physics, revealing distinct scaling regimes with sharp transitions. 🔗 journals.aps.org/prx/abstract/1…

Dimitri Meunier (@dimitrimeunier1)

🚨 New paper accepted at SIMODS! 🚨 “Nonlinear Meta-learning Can Guarantee Faster Rates” arxiv.org/abs/2307.10870 When does meta-learning work? Spoiler: generalise to new tasks by overfitting on your training tasks! Here is why: 🧵👇

Andrew Saxe (@saxelab)

How does in-context learning emerge in attention models during gradient descent training? Sharing our new ICML Spotlight paper: Training Dynamics of In-Context Learning in Linear Attention arxiv.org/abs/2501.16265 Led by Yedi Zhang with Aaditya Singh and Peter Latham
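
For readers unfamiliar with the setup, a minimal numpy sketch of one linear self-attention layer applied to an in-context regression prompt (the token layout, residual connection and weight shapes are illustrative assumptions, not the paper's exact parametrisation):

    import numpy as np

    def linear_attention(E, Wq, Wk, Wv):
        # Linear self-attention: the softmax is dropped, so the update is
        # E + (E Wq)(E Wk)^T (E Wv) / n for a length-n token sequence E.
        n = E.shape[0]
        Q, K, V = E @ Wq, E @ Wk, E @ Wv
        return E + (Q @ K.T) @ V / n

    rng = np.random.default_rng(0)
    d = 3
    w_true = rng.normal(size=d)                 # task vector for this prompt
    X = rng.normal(size=(8, d))                 # 8 in-context examples
    y = X @ w_true
    ctx = np.hstack([X, y[:, None]])            # context tokens (x_i, y_i)
    qry = np.hstack([rng.normal(size=(1, d)), np.zeros((1, 1))])  # query (x_q, 0)
    E = np.vstack([ctx, qry])

    Wq = Wk = Wv = np.eye(d + 1)                # untrained placeholder weights
    out = linear_attention(E, Wq, Wk, Wv)
    pred = out[-1, -1]                          # prediction read off the query's y-slot

Training fits Wq, Wk, Wv by gradient descent across many such prompts; the paper studies the dynamics of that training.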

Arthur Gretton (@arthurgretton)

Composite Goodness-of-fit Tests with Kernels, now out in JMLR! jmlr.org/papers/v26/24-… Test if your distribution comes from ✨any✨ member of a parametric family. Comes in MMD and KSD flavours, and with code. Oscar Key, François-Xavier Briol, Tamara Fernandez
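
The composite-null recipe, sketched in our notation (minimum-distance estimation followed by a test whose critical value must account for the estimation step):

    % Composite null: the data distribution lies somewhere in the family
    H_0 : P \in \{P_\theta : \theta \in \Theta\}
    \hat{\theta}_n = \arg\min_{\theta \in \Theta} \mathrm{MMD}(P_\theta, \hat{P}_n),
    \qquad T_n = \mathrm{MMD}(P_{\hat{\theta}_n}, \hat{P}_n)

Reject H_0 when T_n exceeds a critical value calibrated under the fitted model (e.g. by bootstrap); the KSD flavour replaces MMD with the kernel Stein discrepancy.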

Kevin Han Huang (@kevinhanhuang1)

Missing ICML due to visa :'(, but looking forward to sharing our ICML paper (arxiv.org/abs/2502.05318) as a poster at #BayesComp, Singapore! Work on symmetrising neural nets for the Schrödinger equation in crystals, with the amazing Zhan Ni, Elif Ertekin, Peter Orbanz and Ryan Adams

Kevin Han Huang (@kevinhanhuang1)

Meanwhile, excited to be in #Lyon for #COLT2025 with a co-first-author paper (arxiv.org/abs/2502.15752), joint with the amazing team: Matthew Mallory and our advisor Morgane Austern! Keywords: Gaussian universality, dependent data, convex Gaussian min-max theorem, data augmentation!

Kevin Han Huang (@kevinhanhuang1)

Last but not least of my travel updates: courtesy of the very kind Siu Lun Chau, I'm giving a talk at 14:30 on 27 Jun at the NTU College of Computing and Data Science (CCDS) in Singapore, on data augmentation & Gaussian universality, stringing together several works from my PhD. If you're in SG/Lyon in the next few weeks, let me know!

Aaditya Singh (@aaditya6284)

Excited to share this work has been accepted as an Oral at #icml2025 -- looking forward to seeing everyone in Vancouver, and an extra thanks to my amazing collaborators for making this project so much fun to work on :)

SWC (@swc_neuro)

New research shows long-term learning is shaped by dopamine signals that act as partial reward prediction errors.

The study in mice reveals how early behavioural biases predict individual learning trajectories.

Find out more ⬇️

sainsburywellcome.org/web/blog/long-…