Maxence Faldor @ ICLR 2025 (@maxencefaldor)'s Twitter Profile
Maxence Faldor @ ICLR 2025

@maxencefaldor

PhD student @ImperialCollege. Research intern @SakanaAILabs. 🧠 Artificial Intelligence ✨ Open-endedness 🤖 Robotics 🦎 Emergence

ID: 1414690531700101127

Link: http://maxencefaldor.github.io · Joined: 12-07-2021 20:58:56

134 Tweets

670 Followers

523 Following

Mengyue Yang ✈️ ICLR 2025 (@mengyue_yang_)'s Twitter Profile Photo

🚨 Schedule drop! 🚨

The World Models Workshop @ #ICLR2025 is coming 🔥!
Mark your calendars 🗓️ for April 28th 08:00-18:00. 🕗
sites.google.com/view/worldmode…

We’re excited to welcome a fantastic lineup of speakers and panellists from across #Robotics, #RL, #GenerativeAI, and #Causality
Maxence Faldor @ ICLR 2025 (@maxencefaldor)'s Twitter Profile Photo

I just arrived in Singapore for ICLR, where I will present:
- OMNI-EPIC poster on Fri 25 Apr 15:00-17:30
- CAX poster on Sat 26 Apr 10:00-12:30

I am super happy that CAX got accepted at ICLR 2025 as an oral presentation. 🤩 If you are interested in open-endedness/artificial

Jenny Zhang (@jennyzhangzt)'s Twitter Profile Photo

We will be presenting OMNI-EPIC this afternoon at #ICLR2025! Do drop by to say hi, ask questions, or give any feedback 🤩👂 Work done together with Maxence Faldor, Antoine Cully, Jeff Clune
Antoine Cully (@cullyantoine)'s Twitter Profile Photo

Several people from the AIRL lab and I are at ICLR'25 this week. Come to see our latest work and have a chat.

See below for the full program!
Vitalis Vosylius (@vitalisvos19)'s Twitter Profile Photo

Had a great time at #ICLR2025, especially at the Robot Learning Workshop, where our paper Instant Policy (w/ Edward Johns @ CoRL 2025) won the Best Paper Award!

It's an exciting time for in-context learning and robotics! 🦾

robot-learning.uk/instant-policy
Tim Rocktäschel (@_rockt)'s Twitter Profile Photo

Harder, Better, Faster, Stronger, Real-time! We are excited to reveal Genie 3, our most capable real-time foundational world model. Fantastic cross-team effort led by Jack Parker-Holder and Shlomi Fruchter. Below some interactive worlds and capabilities that were highlights for me

Antoine Cully (@cullyantoine)'s Twitter Profile Photo

Almost exactly 10 years after joining Imperial College London as a Postdoc, I am honoured to announce that I am now Professor in Machine Learning and Robotics! 👨‍🎓 🤖

My fantastic team found the best gift to celebrate this special occasion!
Sakana AI (@sakanaailabs)'s Twitter Profile Photo

We’re excited to introduce ShinkaEvolve: an open-source framework that evolves programs for scientific discovery with unprecedented sample-efficiency.

Blog: sakana.ai/shinka-evolve/
Code: github.com/SakanaAI/Shink…

Like AlphaEvolve and its variants, our framework leverages LLMs to

François Chollet (@fchollet)'s Twitter Profile Photo

The narrative around LLMs is that they got better purely by scaling up pretraining *compute*. In reality, they got better by scaling up pretraining *data*, while compute is only a means to the end of cramming more data into the model. Data is the fundamental bottleneck. You can't