Bruno Neri (@neribr)'s Twitter Profile
Bruno Neri

@neribr

Technical Leader - Artificial Intelligence and Machine Learning Enthusiast - Senior Software Engineer @altenitalia

ID: 357736899

Link: https://www.linkedin.com/in/brunoneri · Joined: 18-08-2011 20:42:56

3.3K Tweets

1.1K Followers

2.2K Following

Simone Scardapane (@s_scardapane)'s Twitter Profile Photo

For those asking for material - I will mostly be following the amazing "elements of differentiable programming" (arxiv.org/abs/2403.14606) by Mathieu Blondel Vincent Roulet with some added labs & material. 🙃

Simone Scardapane (@s_scardapane)'s Twitter Profile Photo

Twitter friends, help me share! Sapienza Università di Roma has an incoming PhD deadline approaching - contact me if you want more info and/or if the idea of a PhD with me intrigues you (not mutually exclusive). 🙃 [Disclaimer: The sadness of our stock photo is not representative of PhD life]

Petar Veličković (@petarv_93)'s Twitter Profile Photo

seen in the recent GNN+ paper: gnn arch diagrams that are remarkably transformer-style. this might help make gnns more approachable, and is appreciated. the results are cool too! it further highlights nicely how gnns/transformers have compatible underlying symmetries :)

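The "compatible underlying symmetries" point above can be made concrete in a few lines: an attention-style GNN layer is just transformer self-attention with the score matrix masked by the graph's adjacency, so on a complete graph it reduces to plain self-attention over the node set. A minimal numpy sketch (names and shapes are illustrative, not from the GNN+ paper):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gnn_attention_layer(X, A, Wq, Wk, Wv):
    """One attention-style GNN layer: each node attends only to its
    neighbours (nonzero entries of adjacency A). This is exactly
    transformer self-attention with an adjacency mask on the scores."""
    scores = (X @ Wq) @ (X @ Wk).T / np.sqrt(Wq.shape[1])
    scores = np.where(A > 0, scores, -1e9)  # mask out non-edges
    attn = softmax(scores, axis=-1)
    return attn @ (X @ Wv)

rng = np.random.default_rng(0)
n, d = 4, 8
X = rng.normal(size=(n, d))            # node features
W = [rng.normal(size=(d, d)) for _ in range(3)]

out_graph = gnn_attention_layer(X, np.eye(n), *W)       # self-loops only
out_dense = gnn_attention_layer(X, np.ones((n, n)), *W) # complete graph
```

With a complete graph the mask is a no-op and the layer is ordinary self-attention; with self-loops only, each node just transforms its own features.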
Simone Scardapane (@s_scardapane)'s Twitter Profile Photo

Happy to share I just started as associate professor in Sapienza Università di Roma! I have now reached my perfect thermodynamical equilibrium. 😄 Also, ChatGPT's idea of me is way infinitely cooler so I'll leave it here to trick people into giving me money.

Gabriele Corso (@gabricorso)'s Twitter Profile Photo

Excited to unveil Boltz-2, our new model capable not only of predicting structures but also binding affinities! Boltz-2 is the first AI model to approach the performance of FEP simulations while being more than 1000x faster! All open-sourced under MIT license! A thread… 🤗🚀

MIT Jameel Clinic for AI & Health (@aihealthmit)'s Twitter Profile Photo

Delighted to announce the release of Boltz-2, which demonstrates unprecedented accuracy in predicting structure and binding affinity! Congrats to Gabriele Corso and Jeremy Wohlwend on this breakthrough achievement! 📄Paper: bit.ly/boltz2-pdf 💻Code: github.com/jwohlwend/boltz

Francesca Grisoni (@fra_grisoni)'s Twitter Profile Photo

Great to see our work out in Angewandte Chemie! 🎉 We introduce ‘supramolecular’ language processing to predict co-crystallization — with experimental validation! 🙈 Led by the unstoppable Rebecca Birolo, w/ Rıza Özçelik; TU Eindhoven, Molecular Machine Learning; Università di Torino 💪🏻 onlinelibrary.wiley.com/doi/10.1002/an…

Simone Scardapane (@s_scardapane)'s Twitter Profile Photo

*Interpretability in Parameter Space* by Dan Braun, Lee Sharkey, et al. They look for "interpretable" components directly in the weight space of a model, by optimizing for several desiderata (faithfulness, low-rank, etc.). For now only works on toy tasks. arxiv.org/abs/2501.14926

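To give a feel for the "components in weight space" idea: the simplest faithful low-rank decomposition of a weight matrix is its SVD, where the matrix splits exactly into a sum of rank-1 components. A toy numpy sketch (the paper instead optimises components jointly for faithfulness, low rank, and minimality; this is only the flavour):

```python
import numpy as np

def lowrank_components(W, k):
    """Split weight matrix W into k rank-1 components whose sum
    reconstructs W as faithfully as possible (via truncated SVD).
    Each component s_i * u_i v_i^T is a candidate 'unit' to inspect."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return [s[i] * np.outer(U[:, i], Vt[i]) for i in range(k)]

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 6))
comps = lowrank_components(W, 6)  # full rank: exact reconstruction
recon = sum(comps)
```

Faithfulness here is just reconstruction error of the sum of components against the original weights; with fewer components than the rank, the residual measures what the decomposition fails to explain.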
Petar Veličković (@petarv_93)'s Twitter Profile Photo

the 1st draft 'g' chapter of the geometric deep learning book is live! 🚀 alice enters the magical, branchy world of graphs & gnns 🕸️ (llms are there too!) i've spent 7+ years studying, researching & talking about graphs. this text conveys what i've learnt. more in thread 💎

Simone Scardapane (@s_scardapane)'s Twitter Profile Photo

Twitter friends, here's some draft notes for my upcoming course on automatic differentiation, mostly based on the "Elements of differentiable programming" book. Let me know what you think! They also include a notebook on operator overloading. 🙃 notion.so/sscardapane/Au…

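The operator-overloading approach mentioned above can be shown in a few lines of forward-mode autodiff: carry (value, derivative) pairs through arithmetic so derivatives propagate automatically. A minimal sketch (not the course's actual notebook, just the standard dual-number construction):

```python
class Dual:
    """Forward-mode AD via operator overloading: each number carries
    its value and its derivative with respect to the input."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f at x with derivative seed 1 and read off f'(x)."""
    return f(Dual(x, 1.0)).dot
```

For example, `derivative(lambda x: x * x + 3 * x, 2.0)` computes d/dx (x² + 3x) at x = 2, which is 2·2 + 3 = 7.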
Simone Scardapane (@s_scardapane)'s Twitter Profile Photo

*Into the land of automatic differentiation* Material is out! A short PhD course for the CS PhD in Sapienza Università di Roma covering basic and advanced topics in autodiff w/ slides, (rough) Notion notes, and two notebooks including a PyTorch-like implementation. 😅 sscardapane.it/teaching/phd-a…

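The "PyTorch-like implementation" mentioned above refers to reverse-mode autodiff: record local derivatives during the forward pass, then propagate gradients backwards through the graph. A minimal sketch of the idea (names are illustrative, not the course's actual code; this naive recursion is correct but inefficient for large graphs):

```python
class Var:
    """Tiny reverse-mode AD in the PyTorch style: each Var remembers
    its parents and the local derivative along each edge."""
    def __init__(self, val, parents=()):
        self.val, self.grad, self.parents = val, 0.0, parents

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Var(self.val + other.val, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.val * other.val,
                   [(self, other.val), (other, self.val)])

    def backward(self, seed=1.0):
        """Accumulate gradients by walking the graph backwards."""
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x, y = Var(2.0), Var(3.0)
z = x * y + x          # z = xy + x
z.backward()           # dz/dx = y + 1, dz/dy = x
```

The key contrast with the forward-mode dual numbers: one backward pass yields gradients with respect to *all* inputs at once, which is why this mode underlies PyTorch and friends.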
Simone Scardapane (@s_scardapane)'s Twitter Profile Photo

*Dense Backpropagation Improves Training for Sparse MoEs* by Ashwinee Panda, Tom Goldstein, et al. They modify the top-k router of a MoE by adding a "default" activation for unselected experts in order to have a dense gradient during the backward pass. arxiv.org/abs/2504.12463

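The router modification described above can be sketched in numpy: a standard top-k gate zeroes out unselected experts (killing their gradient), while the dense variant substitutes a default activation for them. A toy sketch of the mechanism only (how the actual paper derives the defaults, e.g. from running statistics, differs; `defaults` here is just a fixed vector):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dense_topk_gate(logits, k, defaults):
    """Top-k MoE gating with 'default' activations for unselected
    experts: selected experts use their softmax gate, the rest get a
    nonzero default instead of an exact zero, so the backward pass
    sees a dense gating vector."""
    g = softmax(logits)
    topk = np.argsort(g)[-k:]       # indices of the k largest gates
    gate = defaults.copy()          # default activation everywhere...
    gate[topk] = g[topk]            # ...overwritten on selected experts
    return gate, topk

logits = np.array([2.0, 0.5, 1.0, -1.0])
defaults = np.full(4, 0.01)
gate, sel = dense_topk_gate(logits, k=2, defaults=defaults)
```

With a plain top-k gate the unselected entries would be exactly zero and contribute nothing to the router's gradient; here they carry the default value instead, which is the whole point of the paper's trick.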