Yifan Hou (@yifanhou2)'s Twitter Profile
Yifan Hou

@yifanhou2

Postdoc at Stanford. Working on robotic manipulation.

ID: 926068920628498433

Website: https://yifan-hou.github.io/ · Joined: 02-11-2017 12:50:11

16 Tweets

236 Followers

123 Following

Yifan Hou (@yifanhou2)'s Twitter Profile Photo

Can robots learn to manipulate with both care and precision? Introducing Adaptive Compliance Policy, a framework that dynamically adjusts robot compliance both spatially and temporally for a given manipulation task, learned from human demonstrations. Full details at adaptive-compliance.github.io
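For intuition, here is a minimal sketch of what such a compliance-aware control loop can look like: a learned policy outputs both a motion target and a per-axis stiffness at every step, and a Cartesian impedance law turns them into forces. The interface and names here (`policy`, `impedance_force`) are illustrative assumptions, not the released implementation.

```python
import numpy as np

def impedance_force(x, x_dot, x_target, stiffness, damping_ratio=1.0):
    """Cartesian impedance law: f = K (x_target - x) - D x_dot.

    High stiffness tracks the target precisely; low stiffness yields
    compliant, force-limited behavior in contact.
    """
    K = np.diag(stiffness)
    # Critically damped per axis (unit-mass simplification).
    D = np.diag(2.0 * damping_ratio * np.sqrt(stiffness))
    return K @ (x_target - x) - D @ x_dot

def control_step(policy, obs, x, x_dot):
    # The policy chooses BOTH the motion target and the per-axis stiffness
    # at every step, so compliance varies over space and time.
    x_target, stiffness = policy(obs)
    return impedance_force(x, x_dot, x_target, stiffness)
```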

Yifan Hou (@yifanhou2)'s Twitter Profile Photo

Our code/data/checkpoints are available here: github.com/yifan-hou/adap… You can find a complete guide covering everything from setting up the compliance controller to data collection, training, and evaluation on your own hardware.

Jiao Sun (@sunjiao123sun_)'s Twitter Profile Photo

Mitigating racial bias from LLMs is a lot easier than removing it from humans! Can’t believe this happened at the best AI conference, NeurIPS Conference. We have ethical reviews for authors, but missed it for invited speakers? 😡

Yifan Hou (@yifanhou2)'s Twitter Profile Photo

Modern vision models are excellent at extracting useful info from cameras. More views generally lead to more capability. So what happens when we take this idea to the extreme? --- impressive full-body dexterity, even on a cheap robot.
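As a rough illustration of the many-views idea (not the actual architecture from the linked work): a single shared encoder can process an arbitrary number of camera views and pool their features, so adding cameras adds coverage without adding parameters.

```python
import torch
import torch.nn as nn

class MultiViewEncoder(nn.Module):
    """Shared per-view backbone + mean pooling: more cameras means more
    coverage with no extra parameters, since all views share weights."""

    def __init__(self, backbone: nn.Module, feat_dim: int, act_dim: int):
        super().__init__()
        self.backbone = backbone          # any image encoder, e.g. a CNN
        self.head = nn.Linear(feat_dim, act_dim)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        b, n, c, h, w = views.shape       # (batch, n_views, C, H, W)
        feats = self.backbone(views.reshape(b * n, c, h, w))
        fused = feats.reshape(b, n, -1).mean(dim=1)  # pool over views
        return self.head(fused)
```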

Yifan Hou (@yifanhou2)'s Twitter Profile Photo

mtmason.com/the-inner-robo… Many robotics researchers today hold a pessimistic view: manipulation is solved, and the only thing left to do is scale up. This is why I really like this article, which points out a few potentially very large gaps that most people have ignored.

Yifan Hou (@yifanhou2)'s Twitter Profile Photo

Adaptive Compliance Policy just won the best paper award at the ICRA Contact-Rich Manipulation workshop! Huge thanks to the team and everyone who supported us at the workshop. adaptive-compliance.github.io contact-rich.github.io

Yifan Hou (@yifanhou2)'s Twitter Profile Photo

Excited to introduce DexUMI, our new paradigm for intuitive, accurate, and generalizable data collection for dexterous hands. We make your own hand feel like the robot hand, both kinematically and visually, which is critical for transferring complex skills to robots. Details below!
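A toy sketch of the kinematic side of that matching, under the assumption that recorded human joint angles can be mapped to robot hand joints with a per-joint affine fit from paired calibration poses; all names here are hypothetical, not DexUMI's actual pipeline.

```python
import numpy as np

def fit_retarget(human_poses, robot_poses):
    """Least-squares scale/offset per joint from paired calibration poses."""
    H = np.asarray(human_poses)   # (n_samples, n_joints)
    R = np.asarray(robot_poses)   # (n_samples, n_joints)
    scale = np.empty(H.shape[1])
    offset = np.empty(H.shape[1])
    for j in range(H.shape[1]):
        A = np.stack([H[:, j], np.ones(len(H))], axis=1)
        (scale[j], offset[j]), *_ = np.linalg.lstsq(A, R[:, j], rcond=None)
    return scale, offset

def retarget(q_human, scale, offset):
    # Map a recorded human hand pose to robot hand joint targets.
    return scale * q_human + offset
```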

Yifan Hou (@yifanhou2)'s Twitter Profile Photo

Check out DexMachina, our solution to learning dexterous, long-horizon, bimanual tasks from a single human demonstration. project-dexmachina.github.io
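One generic way to exploit a single demonstration in reinforcement learning (a common recipe, not necessarily DexMachina's actual objective) is a time-aligned tracking reward:

```python
import numpy as np

def tracking_reward(robot_state, demo_traj, t, sigma=0.05):
    """Reward for staying close to the demonstration state at step t."""
    ref = demo_traj[min(t, len(demo_traj) - 1)]  # time-aligned reference
    err = np.linalg.norm(robot_state - ref)
    return np.exp(-(err / sigma) ** 2)           # 1 at the demo, decaying away
```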

Yifan Hou (@yifanhou2)'s Twitter Profile Photo

Very impressive results! Curious how much data collection effort is needed to reach this level of accuracy and dexterity. I also really like the pinch finger design, which looks like the result of a huge amount of design optimization. Looking forward to a technical report.

Maximilian Du (@du_maximilian)'s Twitter Profile Photo

Normally, changing robot policy behavior means changing its weights or relying on a goal-conditioned policy. What if there was another way? Check out DynaGuide, a novel policy steering approach that works on any pretrained diffusion policy. dynaguide.github.io 🧵
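For intuition, steering a frozen diffusion policy typically looks like classifier guidance: at each denoising step, nudge the sample along the gradient of an external objective. A minimal sketch, with `denoiser` and `goal_score` as assumed callables (not DynaGuide's actual API):

```python
import torch

def guided_denoise_step(denoiser, action, t, obs, goal_score, scale=1.0):
    # Differentiate an external goal objective w.r.t. the action sample.
    action = action.detach().requires_grad_(True)
    score = goal_score(action, obs)
    grad, = torch.autograd.grad(score.sum(), action)
    # Run the frozen pretrained policy's denoising step, then steer the
    # sample toward the goal without touching any weights.
    with torch.no_grad():
        denoised = denoiser(action, t, obs)
        return denoised + scale * grad
```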

Yifan Hou (@yifanhou2)'s Twitter Profile Photo

A commonly missed opportunity in learning manipulation from humans is paying attention only to the hand motion. Vision-in-Action additionally learns from the head & torso movement how to position the eyes for the best view during a task, so you can solve a lot of manipulation

Zhanyi S (@s_zhanyi)'s Twitter Profile Photo

How do you prevent behavior cloning policies from drifting OOD on long-horizon manipulation tasks? Check out Latent Policy Barrier (LPB), a plug-and-play test-time optimization method that keeps BC policies in-distribution with no extra demos or fine-tuning: project-latentpolicybarrier.github.io
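A minimal sketch of the test-time-optimization idea (illustrative only, not LPB's actual method): take gradient steps that pull the current latent back toward the demonstration latents before the BC policy acts.

```python
import torch

def project_in_distribution(z, demo_latents, steps=10, lr=0.1):
    """Nudge latent z toward its nearest neighbor in the demo latent set,
    so the downstream BC policy only sees in-distribution inputs."""
    z = z.detach().clone().requires_grad_(True)
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        d = torch.cdist(z.unsqueeze(0), demo_latents).min()  # dist to data
        opt.zero_grad()
        d.backward()
        opt.step()
    return z.detach()
```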