Cheng Chi (@chichengcc) 's Twitter Profile
Cheng Chi

@chichengcc

🤖PhD student @Stanford and @Columbia

ID: 4720998255

Link: http://cheng-chi.github.io · Joined: 06-01-2016 03:48:43

321 Tweets

3.3K Followers

2.2K Following

Ignat Georgiev (@imgeorgiev) 's Twitter Profile Photo

Behavior Cloning (BC) has been the new hot thing in #Robotics for the past year. I finally sank my teeth into it and tried to decipher why it has worked so well for problems where RL struggles imgeorgiev.com/2025-01-31-why… Let me know if you have other interesting perspectives!

Shuran Song (@songshuran) 's Twitter Profile Photo

🚀 Meet ToddlerBot 🤖– the adorable, low-cost, open-source humanoid anyone can build, use, and repair! We’re making everything open-source & hope to see more Toddys out there!

Karl Pertsch (@karlpertsch) 's Twitter Profile Photo

We are releasing the π₀ model today -- code + weights + fine-tuning instructions, including our recent π₀-FAST model! 🎉 We hope the model will be useful to others! I am really excited about this release because it also marks a shift in how we can *evaluate* policies! Mini 🧵/

Seohong Park (@seohong_park) 's Twitter Profile Photo

Excited to introduce flow Q-learning (FQL)! Flow Q-learning is a *simple* and scalable data-driven RL method that trains an expressive policy with flow matching. Paper: arxiv.org/abs/2502.02538 Project page: seohong.me/projects/fql/ Thread ↓
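For readers unfamiliar with the objective, here is a minimal numpy sketch of the flow-matching piece that such a policy is trained with — not the full FQL algorithm, which adds Q-learning on top, and all function names here are illustrative, not from the paper's code:

```python
import numpy as np

def flow_matching_target(noise, action, t):
    """Linear-interpolation ("rectified flow") path used in flow matching.

    x_t is a point on the straight line from noise to the expert action;
    the regression target for the velocity network is the constant
    displacement (action - noise).
    """
    x_t = (1.0 - t) * noise + t * action
    v_target = action - noise
    return x_t, v_target

def euler_sample(velocity_fn, noise, steps=10):
    """Integrate the learned velocity field from t=0 (noise) to t=1 (action)."""
    x, dt = noise.copy(), 1.0 / steps
    for i in range(steps):
        x = x + dt * velocity_fn(x, i * dt)
    return x

# Toy check: with the *true* velocity field of the linear path, Euler
# integration recovers the action exactly, since the path is a straight line.
rng = np.random.default_rng(0)
noise = rng.standard_normal(4)
action = np.array([0.5, -0.2, 0.1, 0.9])
true_v = lambda x, t: action - noise  # constant along the straight path
recovered = euler_sample(true_v, noise, steps=5)
print(np.allclose(recovered, action))  # → True
```

A trained policy replaces `true_v` with a network conditioned on the observation; the appeal is that the same objective that makes flow matching expressive for BC also gives RL an expressive actor.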

Zhao-Heng Yin (@zhaohengyin) 's Twitter Profile Photo

We introduce Dexterity Gen (DexGen), a foundation controller that enables unprecedented dexterous manipulation capabilities. For the first time, it allows human teleoperation of tasks such as using a pen, screwdriver, and syringe. Developed by @berkeley_AI and @MetaAI. A Thread.

Boyuan Chen (@boyuanchen0) 's Twitter Profile Photo

Announcing Diffusion Forcing Transformer (DFoT), our new video diffusion algorithm that generates ultra-long videos of 800+ frames. DFoT enables History Guidance, a simple add-on to any existing video diffusion models for a quality boost. Website: boyuan.space/history-guidan… (1/7)
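"History Guidance" reads like a classifier-free-guidance-style combination over history conditioning; a hedged sketch of that pattern (the `denoise` signature and guidance weight `w` are my assumptions, not the paper's API):

```python
import numpy as np

def history_guided_denoise(denoise, x_t, t, history, w=1.5):
    """CFG-style extrapolation between the history-conditioned and the
    unconditional denoiser outputs (illustrative; w=1 recovers the plain
    conditional model, w>1 strengthens consistency with past frames)."""
    eps_cond = denoise(x_t, t, history)  # sees past frames
    eps_uncond = denoise(x_t, t, None)   # history dropped out
    return eps_uncond + w * (eps_cond - eps_uncond)

# Dummy denoiser whose conditional output is shifted by +1 vs. unconditional.
dummy = lambda x, t, h: x + (1.0 if h is not None else 0.0)
x = np.zeros(3)
out = history_guided_denoise(dummy, x, 0, history=np.ones(3), w=2.0)
print(out)  # → [2. 2. 2.]
```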

Huy Ha (@haqhuy) 's Twitter Profile Photo

Happy Valentine's Day! 🌹 Enjoy a special Valentine's Day themed policy (sound on!) from the AquaBot team 👬❤️🦾 Visit aquabot.cs.columbia.edu to learn more about our recent ICRA publication!

Jimmy Wu (@jimmyyhwu) 's Twitter Profile Photo

Two months ago, we introduced TidyBot++, our open-source mobile manipulator.

Today, I'm excited to share our significantly expanded docs:
• Assembly guide: tidybot2.github.io/docs
• Usage guide: tidybot2.github.io/docs/usage

Thanks to early adopters, TidyBot++ can now be fully

Jason Liu (@jasonjzliu) 's Twitter Profile Photo

Low-cost teleop systems have democratized robot data collection, but they lack any force feedback, making it challenging to teleoperate contact-rich tasks. Many robot arms provide force information — a critical yet underutilized modality in robot learning. We introduce: 1. 🦾A

Yanjie Ze (@zeyanjie) 's Twitter Profile Photo

🤖Introducing TWIST: Teleoperated Whole-Body Imitation System. We develop a humanoid teleoperation system to enable coordinated, versatile, whole-body movements, using a single neural network. This is our first step toward general-purpose robots. 🌐humanoid-teleop.github.io

Yuanhang Zhang (@yuanhang__zhang) 's Twitter Profile Photo

🦾How can humanoids unlock real strength for heavy-duty loco-manipulation? Meet FALCON🦅: Learning Force-Adaptive Humanoid Loco-Manipulation. 🌐: lecar-lab.github.io/falcon-humanoi… See the details below👇:

Tony Tao @ RSS 🤖 (@_tonytao_) 's Twitter Profile Photo

Training robots for the open world needs diverse data But collecting robot demos in the wild is hard! Presenting DexWild 🙌🏕️ Human data collection system that works in diverse environments, without robots 💪🦾 Human + Robot Cotraining pipeline that unlocks generalization 🧵👇

Siddharth Ancha (@siddancha) 's Twitter Profile Photo

Diffusion/flow policies 🤖 sample a “trajectory of trajectories” — a diffusion/flow trajectory of action trajectories. Seems wasteful? Presenting Streaming Flow Policy that simplifies and speeds up diffusion/flow policies by treating action trajectories as flow trajectories! 🌐
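A hedged reading of the idea in toy form: if the action trajectory *is* the flow trajectory, each ODE integration step can be executed immediately, instead of running a full inner denoising loop before any action comes out. All names below are illustrative, not the paper's API:

```python
import numpy as np

def stream_actions(velocity_fn, a0, horizon, dt=0.1):
    """Yield one action per Euler step of the flow ODE, so execution can
    begin before integration finishes (streaming, not batch generation)."""
    a = np.asarray(a0, dtype=float)
    for k in range(horizon):
        a = a + dt * velocity_fn(a, k * dt)
        yield a.copy()

# Toy velocity field that flows toward a goal action.
goal = np.array([1.0, -1.0])
actions = list(stream_actions(lambda a, t: goal - a, np.zeros(2), horizon=50))
print(np.linalg.norm(actions[-1] - goal) < 1e-2)  # → True
```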

Mengda Xu (@mengdaxu__) 's Twitter Profile Photo

Can we collect robot dexterous hand data directly with human hand? Introducing DexUMI: 0 teleoperation and 0 re-targeting dexterous hand data collection system → autonomously complete precise, long-horizon and contact-rich tasks Project Page: dex-umi.github.io

Shuran Song (@songshuran) 's Twitter Profile Photo

Meet the newest member of the UMI family: DexUMI! Designed for intuitive data collection — and it fixes a few things the original UMI couldn’t handle: 🖐️ Supports multi-finger dexterous hands — tested on both under- and fully-actuated types 🧂 Records tactile info — it can tell

Krishan Rana (@krshnrana) 's Twitter Profile Photo

Are Diffusion and Flow Matching the best generative modelling algorithms for behaviour cloning in robotics? ✅Multimodality ❌Fast, Single-Step Inference ❌Sample Efficient 💡 We introduce IMLE Policy, a novel behaviour cloning approach that can satisfy all the above. 🧵👇
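IMLE (Implicit Maximum Likelihood Estimation) trains a one-shot generator by pulling, for each data point, the *nearest* generated sample toward it, which covers every mode while keeping single-step inference. A minimal numpy sketch of that loss (my naming, not the paper's code):

```python
import numpy as np

def imle_loss(expert_actions, generated_samples):
    """For each expert action, squared distance to its nearest generated
    sample, averaged over the batch. In a real implementation gradients
    would flow only through the selected nearest samples."""
    # expert_actions: (N, d), generated_samples: (M, d)
    d2 = ((expert_actions[:, None, :] - generated_samples[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

# Two behaviour modes; a generator whose samples cover both gets zero loss,
# while a mode-collapsed generator is penalised for the missed mode.
experts = np.array([[0.0, 0.0], [1.0, 1.0]])
good = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.5]])
collapsed = np.array([[0.0, 0.0]])
print(imle_loss(experts, good))       # → 0.0
print(imle_loss(experts, collapsed))  # → 1.0
```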

Gokul Swamy (@g_k_swamy) 's Twitter Profile Photo

Say ahoy to 𝚂𝙰𝙸𝙻𝙾𝚁⛵: a new paradigm of *learning to search* from demonstrations, enabling test-time reasoning about how to recover from mistakes w/o any additional human feedback! 𝚂𝙰𝙸𝙻𝙾𝚁 ⛵ out-performs Diffusion Policies trained via behavioral cloning on 5-10x data!

Maximilian Du (@du_maximilian) 's Twitter Profile Photo

Normally, changing robot policy behavior means changing its weights or relying on a goal-conditioned policy. What if there was another way? Check out DynaGuide, a novel policy steering approach that works on any pretrained diffusion policy. dynaguide.github.io 🧵

Yunzhu Li (@yunzhuliyz) 's Twitter Profile Photo

I was really impressed by the UMI gripper (Cheng Chi et al.), but a key limitation is that **force-related data wasn’t captured**: humans feel haptic feedback through the mechanical springs, but the robot couldn’t leverage that info, limiting the data’s value for fine-grained