Andrii Zadaianchuk 🇺🇦 (@zadaianchukml) 's Twitter Profile
Andrii Zadaianchuk 🇺🇦

@zadaianchukml

Postdoc @UvA_Amsterdam PhD @ETH Zürich and @MPI_IS, intern in @AmazonScience. Structured representation learning for and by autonomous agents. 🦋 @zadaianchuk

ID: 1273965675292364802

Link: https://zadaianchuk.github.io/ · Joined: 19-06-2020 13:07:56

563 Tweets

399 Followers

360 Following

Sara Magliacane (she/her) (@saramagliacane) 's Twitter Profile Photo

New PhD position at UvA AMLab on learning concepts with theoretical guarantees using #causality and #RL with me, Frans Oliehoek (TU Delft) and Herke van Hoof 💥 Deadline: 15 June werkenbij.uva.nl/en/vacancies/p…

Aishwarya Agrawal (@aagrawalaa) 's Twitter Profile Photo

We will present this work in the afternoon poster session today at #CVPR2025, poster #322, Exhibition Hall D, 4-6pm. Do stop by if you are interested in learning how to extract visual features for *specific* concepts specified by language queries!
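
The tweet doesn't include implementation details; below is a minimal sketch of one common recipe for query-specific features, using open-source CLIP to pool patch tokens weighted by their similarity to a language query. The image path, the query string, and the trick of applying `visual_projection` to patch tokens are illustrative assumptions, not the paper's method.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("kitchen.jpg").convert("RGB")  # placeholder input image
query = "a red mug"                               # language query naming the concept

inputs = processor(text=[query], images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    vision_out = model.vision_model(pixel_values=inputs["pixel_values"])
    # Patch tokens (dropping the CLS token), projected into the joint
    # image-text space; projecting patch tokens is a known hack, not an
    # officially supported CLIP pathway.
    patches = model.visual_projection(vision_out.last_hidden_state[:, 1:, :])
    text = model.get_text_features(input_ids=inputs["input_ids"],
                                   attention_mask=inputs["attention_mask"])

patches = patches / patches.norm(dim=-1, keepdim=True)
text = text / text.norm(dim=-1, keepdim=True)

# Similarity of each patch to the query; softmax gives pooling weights,
# so the pooled feature is dominated by patches matching the concept.
sim = (patches @ text.T).squeeze(-1)              # [1, num_patches]
weights = sim.softmax(dim=-1).unsqueeze(-1)       # [1, num_patches, 1]
concept_feature = (weights * patches).sum(dim=1)  # query-specific feature
```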

Yufei Wang (@yufeiwang25) 's Twitter Profile Photo

Introducing ArticuBot🤖at #RSS2025, in which we learn a single policy for manipulating diverse articulated objects across 3 robot embodiments in different labs, kitchens & lounges, achieved via large-scale simulation and hierarchical imitation learning. articubot.github.io 🧵
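
The tweet names the key ingredient, hierarchical imitation learning, without details; here is a hedged sketch of the general pattern, not ArticuBot's actual networks. A high-level policy maps observations to a sub-goal (e.g. a target end-effector pose), and an embodiment-specific low-level policy tracks it, which is one way a single high-level policy can transfer across robots. All shapes and module names are assumptions.

```python
import torch
import torch.nn as nn

class HighLevelPolicy(nn.Module):
    """Sketch: maps a scene embedding to a sub-goal, e.g. a 7-DoF
    target end-effector pose (position + quaternion)."""
    def __init__(self, obs_dim: int = 256, goal_dim: int = 7):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                 nn.Linear(256, goal_dim))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

class LowLevelPolicy(nn.Module):
    """Sketch: embodiment-specific controller that conditions on the
    sub-goal, so the same high-level policy can drive different robots."""
    def __init__(self, obs_dim: int = 256, goal_dim: int = 7, act_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + goal_dim, 256), nn.ReLU(),
                                 nn.Linear(256, act_dim))

    def forward(self, obs: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, goal], dim=-1))

# Both levels would be trained by imitation on (simulated) demonstrations.
obs = torch.randn(1, 256)
goal = HighLevelPolicy()(obs)
action = LowLevelPolicy()(obs, goal)
```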

TimDarcet (@timdarcet) 's Twitter Profile Photo

In case there is any ambiguity: DINOv2 is 100% a product of dumb hill-climbing on ImageNet-1k k-NN accuracy (and linear probing too). Overfitting an eval can be bad. But sometimes the reward signal is reliable and leads to truly good models. It's about finding a balance.
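
For reference, the k-NN probe the tweet refers to is simple: freeze the backbone, embed train and val images, and classify each val image by its nearest training neighbors in feature space. A minimal sketch with scikit-learn follows; the `.npy` files are placeholders for precomputed embeddings, and real protocols (e.g. DINO's) use temperature-weighted voting rather than plain majority vote.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Placeholder arrays: frozen-backbone features for the two splits.
train_feats = np.load("train_feats.npy")    # [N_train, D]
train_labels = np.load("train_labels.npy")  # [N_train]
val_feats = np.load("val_feats.npy")        # [N_val, D]
val_labels = np.load("val_labels.npy")      # [N_val]

# Cosine k-NN probe: the backbone is never fine-tuned, only a
# nearest-neighbor lookup in feature space, so the score directly
# reflects feature quality.
knn = KNeighborsClassifier(n_neighbors=20, metric="cosine")
knn.fit(train_feats, train_labels)
print("k-NN top-1 accuracy:", knn.score(val_feats, val_labels))
```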

Konstantin Mishchenko (@konstmish) 's Twitter Profile Photo

I believe successful neural network training represents cases of "near convexity": the optimization landscape, while technically non-convex, behaves enough like a convex problem that standard convex optimization is often applicable. At the same time, *in general* neural nets
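
The "near convexity" claim can be probed empirically. One classic check (Goodfellow et al., 2015) is to plot the loss along the straight line between the initial and trained weights: near-convex behavior shows up as a smooth, barrier-free path. A minimal sketch on a toy MLP, with random regression data standing in for a real task:

```python
import copy
import torch
import torch.nn as nn

# Toy setup: small MLP on random regression data (placeholder task).
torch.manual_seed(0)
X, y = torch.randn(512, 10), torch.randn(512, 1)
model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
loss_fn = nn.MSELoss()

theta_init = copy.deepcopy(model.state_dict())
opt = torch.optim.SGD(model.parameters(), lr=0.05)
for _ in range(500):  # train to obtain theta_final
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()
theta_final = copy.deepcopy(model.state_dict())

# Evaluate the loss along theta(a) = (1 - a) * theta_init + a * theta_final.
# A convex loss would lie below the chord; no barrier along the path is
# the "near convexity" signature the tweet alludes to.
for a in torch.linspace(0, 1, 11):
    interp = {k: (1 - a) * theta_init[k] + a * theta_final[k]
              for k in theta_init}
    model.load_state_dict(interp)
    print(f"alpha={a:.1f}  loss={loss_fn(model(X), y).item():.4f}")
```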

Andrii Zadaianchuk 🇺🇦 (@zadaianchukml) 's Twitter Profile Photo

🌍🤖 What is the best way to explore the world to learn a robust world model from high-dimensional data? 🤖🌍 SENSEI learns to explore from humans by reusing the semantic structure discovered by VLMs and exploring around the states a VLM finds most interesting. #ICML2025
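
SENSEI's exact objective is in the paper; the sketch below only illustrates the general idea of VLM-guided exploration: score states with a VLM-derived "interestingness" signal and add it as an intrinsic reward. `vlm_interestingness`, `shaped_reward`, and `beta` are hypothetical names, not the SENSEI API.

```python
def vlm_interestingness(observation) -> float:
    """Hypothetical stand-in: score how semantically interesting a VLM
    finds the (rendered) observation, e.g. via a reward model distilled
    from VLM annotations. Not the actual SENSEI objective."""
    raise NotImplementedError

def shaped_reward(extrinsic: float, observation, beta: float = 1.0) -> float:
    # Exploration bonus: steer the agent toward states the VLM scores as
    # interesting, on top of any task reward, instead of a purely
    # novelty-based bonus.
    return extrinsic + beta * vlm_interestingness(observation)
```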

Marco Bagatella (@mar_baga) 's Twitter Profile Photo

When multiple tasks need improvements, fine-tuning a generalist policy becomes tricky. How do we allocate a demonstration budget across a set of tasks of varied difficulty and familiarity? We are presenting a possible solution at ICML on Wednesday! (1/3)

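The thread's actual allocation rule is in the paper; as a naive point of reference, here is a greedy baseline that spends each demonstration on the task whose next demo is predicted to help most. The `marginal_gain` estimator (e.g. a fitted per-task scaling curve) is a placeholder assumption.

```python
import heapq
from typing import Callable, Dict, List

def allocate_demos(tasks: List[str],
                   marginal_gain: Callable[[str, int], float],
                   budget: int) -> Dict[str, int]:
    """Greedy allocation: repeatedly give one demo to the task whose
    *next* demo has the highest predicted success-rate gain.
    marginal_gain(task, n) estimates the gain from demo n+1 on `task`."""
    alloc = {t: 0 for t in tasks}
    # Max-heap over predicted gains (negated for Python's min-heap).
    heap = [(-marginal_gain(t, 0), t) for t in tasks]
    heapq.heapify(heap)
    for _ in range(budget):
        _, t = heapq.heappop(heap)
        alloc[t] += 1
        heapq.heappush(heap, (-marginal_gain(t, alloc[t]), t))
    return alloc
```

Hard but unfamiliar tasks get demos only as long as their estimated marginal gain stays competitive, which is the trade-off the thread highlights.
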
Georg Martius (@gmartius) 's Twitter Profile Photo

Sergey Levine was just presenting at the Exploration in AI workshop @ #ICML2025 and argued that exploration needs to be grounded, and that VLMs are a good source ;-) Check our paper below 👇

François Chollet (@fchollet) 's Twitter Profile Photo

Intelligence isn't a collection of skills. It's the efficiency with which you acquire and deploy new skills. It's an efficiency ratio. And that's why benchmark scores can be very misleading about the actual intelligence of AI systems.
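
Chollet formalizes this in "On the Measure of Intelligence" (2019). A loose, simplified rendering of the idea (not his exact definition): intelligence measures skill acquired per unit of priors and experience spent, which is why a high benchmark score bought with massive task-specific experience says little about intelligence.

```latex
% Simplified rendering of the efficiency-ratio idea:
\text{intelligence} \;\propto\;
  \frac{\text{skill acquired across a scope of tasks}}
       {\text{priors} + \text{experience}}
```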

Suning Huang (@suning_huang) 's Twitter Profile Photo

🚀 Excited to share our #CoRL2025 paper! See you in Korea 🇰🇷! 🎉 We present ParticleFormer, a Transformer-based 3D world model that learns from point cloud perception and captures complex dynamics across multiple objects and material types! 🌐 Project website:
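
The architectural details are on the project page; the sketch below only illustrates the general pattern of a point-cloud world model: treat each point (or particle) as a token, let self-attention mix information across objects and materials, and predict per-point displacement to the next timestep. Dimensions and names are assumptions, not the ParticleFormer architecture.

```python
import torch
import torch.nn as nn

class ParticleDynamicsModel(nn.Module):
    """Illustrative point-cloud world model: each particle is a token;
    the Transformer encoder mixes information across particles, and the
    head predicts each particle's displacement to the next timestep."""
    def __init__(self, in_dim: int = 6, d_model: int = 128,
                 nhead: int = 4, num_layers: int = 4):
        super().__init__()
        self.embed = nn.Linear(in_dim, d_model)   # xyz + per-point features
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 3)         # predicted xyz displacement

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: [batch, num_points, in_dim] -> next xyz: [batch, num_points, 3]
        tokens = self.encoder(self.embed(points))
        return points[..., :3] + self.head(tokens)

model = ParticleDynamicsModel()
cloud = torch.randn(2, 1024, 6)   # dummy batch of point clouds
next_positions = model(cloud)     # one-step rollout of the world model
```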

Efstratios Gavves (@egavves) 's Twitter Profile Photo

The next wave of AI will be cyber-physical, embodied, and empowered with robot worlds! Super psyched for our first workshop on robot world models, marrying Computer Vision, Robot Learning, and Deep Learning, with a dream team of co-organizers including Toyota Robotics.

Andrii Zadaianchuk 🇺🇦 (@zadaianchukml) 's Twitter Profile Photo

Are you working on real-to-sim, sim-to-real, learning world models, or using physics-based simulators? There are two weeks left until the submission deadline for our CoRL workshop, Learning to Simulate Robot Worlds. More details here: 🔗simulatingrobotworlds.github.io/submit.html

Jim Fan (@drjimfan) 's Twitter Profile Photo

World modeling for robotics is incredibly hard because (1) control of humanoid robots & 5-finger hands is wayyy harder than ⬆️⬅️⬇️➡️ in games (Genie 3); and (2) object interaction is much more diverse than FSD, which needs to *avoid* coming into contact. Our GR00T Dreams work was

Xiaoan (@_seanliu) 's Twitter Profile Photo

Talking to your AI glasses is silly, so we built Reality Proxy, a direct manipulation interface that lets you instantly select real-world objects and share context with AI. A step toward JARVIS.

Andrii Zadaianchuk 🇺🇦 (@zadaianchukml) 's Twitter Profile Photo

If we manage to scale this approach from inferred navigation actions to low-level robotic actions, we will get a completely next-level robotic world model.