Oscar Davis (@osclsd)'s Twitter Profile
Oscar Davis

@osclsd

PhD ML @UniofOxford; generative modelling; previously at MSR, EPFL, Imperial

ID: 1794032046622277632

Website: https://olsdavis.github.io/ · Joined: 24-05-2024 15:45:55

34 Tweets

250 Followers

150 Following

Alvaro Arroyo (@arroyo_alvr)

A little late, but happy to announce that our paper on Rough Transformers ⛰️ has been accepted at NeurIPS! We present a way to make Transformers for temporal data more efficient and robust to irregular sampling through path signatures! Read on! #neurips2024 (1/7)
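Below is a minimal, self-contained numpy sketch of the depth-2 path signature referred to in the announcement (my own illustration and naming, e.g. `signature_level2`, not code from the Rough Transformers paper). The level-1 term is the path's total increment and the level-2 term collects iterated integrals; since the signature depends on the trajectory rather than the time grid, it behaves well under irregular sampling.

```python
import numpy as np

def signature_level2(path):
    """Depth-2 signature of a piecewise-linear path (illustrative sketch).

    path: (T, d) array of samples. Returns the level-1 term (total increment)
    and the level-2 term S[i, j] = iterated integral of dx_i then dx_j.
    """
    increments = np.diff(path, axis=0)              # per-step increments, (T-1, d)
    level1 = increments.sum(axis=0)                 # total displacement
    # Exclusive prefix sum: increments accumulated strictly before each step.
    prefix = np.cumsum(increments, axis=0) - increments
    # Piecewise-linear iterated integrals: cross terms plus the within-step
    # contribution 0.5 * Δx_i * Δx_j.
    level2 = prefix.T @ increments + 0.5 * increments.T @ increments
    return level1, level2

# Irregularly sampled 2-D path: the signature summarises the trajectory itself,
# not the (uneven) grid it was observed on.
t = np.sort(np.random.rand(50))
path = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=1)
s1, s2 = signature_level2(path)
print(s1)           # ≈ total increment of the path
print(s2 - s2.T)    # antisymmetric part ≈ 2 × signed (Lévy) area
```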

Yoav Gelberg (@yoav_gelberg)

🍩 Topological blindspots is coming to ICLR as an oral presentation! 🍩

We prove that message-passing based topological deep learning (TDL) architectures are unable to capture basic topological invariants, including homology, orientability, planarity, and more.
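As a side note for readers unfamiliar with the invariants listed above, the snippet below is a toy, self-contained numpy illustration of the simplest one, homology (my own example with made-up helper names such as `betti`; it has nothing to do with the TDL architectures analysed in the paper). Betti numbers computed from boundary-matrix ranks distinguish a hollow triangle from a filled one.

```python
import numpy as np

# Toy illustration of a topological invariant: simplicial homology over the reals.
# Betti numbers follow from boundary-matrix ranks: b_k = dim C_k - rank d_k - rank d_{k+1}.
def betti(d1, d2):
    n_vertices, n_edges = d1.shape
    n_faces = d2.shape[1]
    r1 = np.linalg.matrix_rank(d1) if n_edges else 0
    r2 = np.linalg.matrix_rank(d2) if n_faces else 0
    return n_vertices - r1, n_edges - r1 - r2       # (b0, b1)

# Triangle on vertices 0, 1, 2 with edges (0,1), (1,2), (0,2).
d1 = np.array([[-1.,  0., -1.],
               [ 1., -1.,  0.],
               [ 0.,  1.,  1.]])                    # boundary map: edges -> vertices

hollow = np.zeros((3, 0))                           # no 2-cells: just the loop
filled = np.array([[1.], [1.], [-1.]])              # one 2-cell glued along the loop

print(betti(d1, hollow))   # (1, 1): one component, one 1-dimensional hole
print(betti(d1, filled))   # (1, 0): filling the triangle kills the hole
```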
Xingyue Huang (@hxyscott)

Knowledge Graph Foundation Models (KGFMs) are at the frontier of graph learning - but we didn’t have a principled understanding of what we can (or can’t) do with them. Now we do! 💡🚀

🧵 with Pablo Barcelo, İsmail İlkan Ceylan, Michael Bronstein, Michael Galkin, Juan Reutter, Miguel
charliebtan (@charliebtan)

New preprint! 🚨 We scale equilibrium sampling to hexapeptide (in cartesian coordinates!) with Sequential Boltzmann generators! 📈 🤯

Work with Joey Bose, Chen Lin, Leon Klein, Michael Bronstein and Alex Tong

Thread 🧵 1/11
Frank Nielsen (@frnknlsn)

NeurIPS'24 has over 4k papers! Below is my selection of 5 papers which consider information geometry:
1/ arxiv.org/abs/2405.16441
2/ arxiv.org/abs/2411.02623
3/ arxiv.org/abs/2405.14073
4/ arxiv.org/abs/2411.00680
5/ arxiv.org/abs/2405.14664

Joey Bose (@bose_joey)

🎉Personal update: I'm thrilled to announce that I'm joining Imperial College London as an Assistant Professor of Computing starting January 2026. My future lab and I will continue to work on building better Generative Models 🤖, the hardest

Xingyue Huang (@hxyscott)

🚨 Excited to announce that "How Expressive are Knowledge Graph Foundation Models?" is coming to ICML 2025! 🎉
📅 Wednesday, July 16th
🕟 4:30 PM
📍 Booth #E-3011
Come by to chat about motifs, expressiveness, and the future of graph foundation models! 🔍📊🔗

Jacob Bamberger (@jacobbamberger)

🚨 ICML 2025 Paper 🚨

"On Measuring Long-Range Interactions in Graph Neural Networks"

We formalize the long-range problem in GNNs:
💡Derive a principled range measure
🔧 Tools to assess models & benchmarks
🔬Critically assess LRGB

🧵 Thread below 👇
#ICML2025
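To make the announcement above a little more concrete, here is a hedged toy probe in numpy (my own sketch with invented helper names like `mean_aggregation`; it is not the range measure proposed in the paper): it measures how much a node's output after several rounds of mean-aggregation message passing changes when a feature is injected at increasing graph distance, one simple way to see the long-range problem.

```python
import numpy as np

# Toy probe (not the paper's range measure): influence of a distant node's
# feature on node 0 after several rounds of mean-aggregation message passing.
def mean_aggregation(adj, x, rounds):
    deg = adj.sum(axis=1, keepdims=True)
    for _ in range(rounds):
        x = adj @ x / deg               # average over neighbours (and self)
    return x

n, rounds = 16, 8
adj = np.eye(n)                          # self-loops keep each node's own signal
for i in range(n - 1):                   # path graph 0 - 1 - ... - 15
    adj[i, i + 1] = adj[i + 1, i] = 1.0

x = np.zeros((n, 1))
base = mean_aggregation(adj, x, rounds)[0, 0]
for dist in range(0, n, 3):
    xp = x.copy()
    xp[dist, 0] = 1.0                    # perturb a single node at this distance
    influence = abs(mean_aggregation(adj, xp, rounds)[0, 0] - base)
    print(f"distance {dist:2d}: influence on node 0 = {influence:.2e}")
# The influence decays rapidly with distance (and vanishes beyond the receptive
# field), which is the stylised version of the long-range problem.
```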
Joey Bose (@bose_joey)

GenBio Workshop ORAL Presentation
📜 Title: FORT: Forward-Only Regression Training of Normalizing Flows
🕐 When: Fri 18 Jul
🗺️ Where: East Exhibition Hall A
🔗 arXiv: arxiv.org/pdf/2506.01158
w/ Danyal Rehman, Oscar Davis, Jiarui Lu, Jian Tang, Michael Bronstein, Yoshua Bengio

Danyal Rehman (@danyalrehman17)

Wrapping up #ICML2025 on a high note — thrilled (and pleasantly surprised!) to win the Best Paper Award at GenBio Workshop @ ICML25 🎉

Big shoutout to the team that made this happen!

Paper: Forward-Only Regression Training of Normalizing Flows (arxiv.org/abs/2506.01158)

Mila - Institut québécois d'IA
Michael Bronstein @ICLR2025 🇸🇬 (@mmbronstein)

Apply for the AITHYRA-CeMM International PhD Program! 

15-20 fully funded PhD fellowships available in Vienna in AI/ML and Life Sciences
 
Deadline for applications: 10 September 2025

apply.cemm.at
Joey Bose (@bose_joey)

📢Interested in doing a PhD in generative models 🤖, AI4Science 🧬, Sampling 🧑‍🔬, and beyond? I am hiring PhD students at Imperial College London (Computing) for the next application cycle.
🔗See the call below: joeybose.github.io/phd-positions/
And a light expression of interest:

İsmail İlkan Ceylan (@ismaililkanc)

Very excited to share this! We introduce a new approach to knowledge graph foundation models built on probabilistic equivariance. The model is simple, expressive, and probabilistically equivariant — and it works remarkably well! Collaboration led by Jinwoo Kim and Xingyue Huang.

Oscar Davis (@osclsd)

When the IMM paper came out in March, I implemented it myself for a project, before the official source code was made available. I am releasing my version now: github.com/olsdavis/imm
It contains most/all features, and should be easy to (re-)use! Hope someone finds it helpful 🙂