Nicola Branchini (@branchini_nic) 's Twitter Profile
Nicola Branchini

@branchini_nic

🇮🇹 3rd yr Stats PhD @EdinUniMaths 🏴󠁧󠁢󠁳󠁣󠁴󠁿.🤔💭 about reliable uncertainty quantification. Interested in sampling and measure transport methodologies.

ID: 3275439755

Link: http://branchini.fun/about · Joined: 17-05-2015 14:43:47

1.1K Tweets

738 Followers

2.2K Following

Itai Yanai (@itaiyanai) 's Twitter Profile Photo

Act with chivalry when working with another person. Don't cling to your own ideas & status but rather dare to give up (or at least share) control. Allow yourself to be changed by the ideas. I wish we'd included in our paper this old concept by Keith Johnstone from improv theater.

François-Xavier Briol (@fx_briol) 's Twitter Profile Photo

Just finished delivering a course on 'Robust and scalable simulation-based inference (SBI)' at Greek Stochastics. This covered an introduction to SBI, open challenges, and some recent contributions from my own group. The slides are now available here: fxbriol.github.io/pdfs/slides-SB…

ELLIS (@ellisforeurope) 's Twitter Profile Photo

🎓 Interested in a #PhD in machine learning or #AI? The ELLIS PhD Program connects top students with leading researchers across Europe. The application portal opens on Oct 1st. Curious? Join our info session on the same day. Get all the info 👉 bit.ly/45DSe75 #ELLISPhD

François Chollet (@fchollet) 's Twitter Profile Photo

The most important skill for a researcher is not technical ability. It's taste. The ability to identify interesting and tractable problems, and recognize important ideas when they show up. This can't be taught directly. It's cultivated through curiosity and broad reading.

Jingfeng Wu (@uuujingfeng) 's Twitter Profile Photo

Sharing a new paper w/ Peter Bartlett, Jason Lee, Sham Kakade, Bin Yu. People talk about implicit regularization, but how good is it? We show it's surprisingly effective: GD dominates ridge for all linear regression, w/ more cool stuff on GD vs SGD. arxiv.org/abs/2509.17251

Journal of Machine Learning Research (@jmlrorg) 's Twitter Profile Photo

'Regularized Rényi Divergence Minimization through Bregman Proximal Gradient Algorithms', by Thomas Guilmeau, Emilie Chouzenoux, Víctor Elvira. jmlr.org/papers/v26/23-… #minimizer #variational #minimizing

Andrew Curran (@andrewcurran_) 's Twitter Profile Photo

Scott Aaronson has, for the first time, put out a paper in which a key technical step in the proof of the main result came from AI. He describes his process using GPT5-Thinking. 'There's not the slightest doubt that, if a student had given it to me, I would've called it clever'

malkin1729 (@felineautomaton) 's Twitter Profile Photo

One of our three papers in "Frontiers in Probabilistic Inference" @ NeurIPS'25, along with arxiv.org/abs/2509.26364 and arxiv.org/abs/2510.01159. A pleasure to work with the brilliant tamogashev on all of them!

Hugo Larochelle (@hugo_larochelle) 's Twitter Profile Photo

We at TMLR are proud to announce that selected papers will now be eligible for an opportunity to present at the joint NeurIPS/ICML/ICLR Journal-to-Conference (J2C) Track: medium.com/@TmlrOrg/tmlr-…

GLADIA Research Lab (@gladialab) 's Twitter Profile Photo

LLMs are injective and invertible. In our new paper, we show that different prompts always map to different embeddings, and this property can be used to recover input tokens from individual embeddings in latent space. (1/6)

François Chollet (@fchollet) 's Twitter Profile Photo

To really understand a concept, you have to "invent" it yourself in some capacity. Understanding doesn't come from passive content consumption. It is always self-built. It is an active, high-agency, self-directed process of creating and debugging your own mental models.

Stefano Ermon (@stefanoermon) 's Twitter Profile Photo

Tired of chasing references across dozens of papers? This monograph distills it all: the principles, intuition, and math behind diffusion models. Thrilled to share!