Ivan Kapelyukh 🇺🇦 (@ivankapelyukh) 's Twitter Profile
Ivan Kapelyukh 🇺🇦

@ivankapelyukh

PhD student at Imperial College London, working on robot learning and computer vision. Intern at RAI (previously Boston Dynamics AI Institute). 🇬🇧 🇺🇦

ID: 1420668854641074176

Website: https://ivankapelyukh.com · Joined: 29-07-2021 08:54:55

75 Tweets

290 Followers

532 Following

Norman Di Palo (@normandipalo) 's Twitter Profile Photo

The future of robotics is in your hands. Literally. Excited to announce our new paper, ✨R+X✨. A person records everyday activities while wearing a camera. A robot passively learns those skills. No labels, no training. Here's how. 👇

Shikun Liu (@liu_shikun) 's Twitter Profile Photo

🎉 Excited to share Clarity, an open-source website template I designed to better present and visualise AI research! If you appreciate the minimalist design and want to use it in your own projects, please check it out here: shikun.io/projects/clari…

Norman Di Palo (@normandipalo) 's Twitter Profile Photo

Excited to introduce Diffusion Augmented Agents (DAAGs)✨. We give an agent control of a diffusion model, so it can create its own *synthetic experience*.🪄 The result is a lifelong agent that can learn new reward detectors and policies, much more efficiently. Here's how. 👇

George Papagiannis (@geopgs) 's Twitter Profile Photo

✨New #IROS2024 paper: “Adapting Skills to Novel Grasps: A Self-Supervised Approach” We leverage self-supervised data to adapt skill trajectories to novel object grasp poses. No need for depth, no prior knowledge, no CAD models, no calibration - just RGB images. Here's how👇

Shikun Liu (@liu_shikun) 's Twitter Profile Photo

Introducing MarDini 🍸 -- our latest exploration in video diffusion models from AI at Meta! MarDini brings an asymmetric design that breaks down video modelling into two sub-tasks: 1. A masked, auto-regressive, heavy-weight planning model focusing on long-range temporal…

Chris Paxton (@chris_j_paxton) 's Twitter Profile Photo

Open-source AI enabled home robot with voice chat. Tell it what to do and it will try to clean up your home for you. This is locally running Qwen2.5 on my desktop. No need to send information from your home to the cloud.

Vitalis Vosylius (@vitalisvos19) 's Twitter Profile Photo

Learning manipulation policies instantly after the demonstrations can unlock so many possibilities… 🤖 Instant Policy does just that! It efficiently tackles In-Context Imitation Learning and relies on cheap procedurally generated data! robot-learning.uk/instant-policy 1/

Ivan Kapelyukh 🇺🇦 (@ivankapelyukh) 's Twitter Profile Photo

The graphics community has such beautiful simulation demos. Are these running full physics like in robotics sims, or are there some physics shortcuts which still give realistic renders?

Riku Murai (@rmurai0610) 's Twitter Profile Photo

Introducing MASt3R-SLAM, the first real-time monocular dense SLAM with MASt3R as a foundation. Easy to use like DUSt3R/MASt3R, from an uncalibrated RGB video it recovers accurate, globally consistent poses & a dense map. With Eric Dexheimer*, Andrew Davison (*Equal Contribution)

Norman Di Palo (@normandipalo) 's Twitter Profile Photo

Imagine a robot that learns all sorts of tasks simply by watching you. This future is closer than ever: R+X was accepted at ICRA 2025. From an hour long POV video we can perform in-context imitation learning, from human to robot. No training or robot data needed.

Vitalis Vosylius (@vitalisvos19) 's Twitter Profile Photo

Instant Policy got accepted as an oral at ICLR! 🎉 Exciting times for In-Context Imitation Learning and cheap procedurally generated data! 🦾

Ivan Kapelyukh 🇺🇦 (@ivankapelyukh) 's Twitter Profile Photo

Excited to be interning at the Robotics and AI Institute (formerly Boston Dynamics AI Institute). Hoping to learn lots and work on interesting robotics research. If you're in the Cambridge/Boston area and want to meet up, let me know!

Maxence Faldor @ ICLR 2025 (@maxencefaldor) 's Twitter Profile Photo

Thrilled to share our work with Sakana AI on The AI CUDA Engineer! 👷‍♂️ A framework that autonomously designs & optimizes CUDA kernels. Like OMNI-EPIC, we maintain an innovation archive of diverse solutions to get stepping stones that can be reused for future optimizations!

Jad Abou-Chakra (@jadachakra) 's Twitter Profile Photo

1/ Real2Sim and Sim2Real are well-known paradigms in robotics🤖. Is there another paradigm where sim and the real world are more tightly interlinked? We introduce Real-IS-Sim: a framework featuring an always-in-the-loop, correctable simulator for behavior cloning policies.

Siddhant Haldar (@haldar_siddhant) 's Twitter Profile Photo

We are extending the paper submission deadline to August 15, 2025 23:59 AOE. Submit your latest works on generalizable robot priors to the #CoRL2025 #GenPriors workshop! Webpage: corl25-genpriors.github.io

Edward Johns (@ed__johns) 's Twitter Profile Photo

I'm very excited to finally announce one of the most ambitious projects we've worked on — which makes the front cover of Science Robotics today: ☀️ Learning a Thousand Tasks in a Day ⭐️ Everyday tasks — like those below — can now be learned from a single demonstration each...