Jingyun Yang (@yjy0625)'s Twitter Profile
Jingyun Yang

@yjy0625

PhD student at Stanford

ID: 3240672667

Joined: 09-06-2015 09:18:47

117 Tweets

713 Followers

235 Following

Priya Sundaresan (@priyasun_)'s Twitter Profile Photo

what's the move? a question i ask the group chat when i’m bored... and now, my robot too. enter SPHINX: a hybrid IL agent that dynamically selects its action space (waypoints / dense actions) and input type (point clouds / wrist images) for precise, generalizable manip! 🧵 1/7
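
Reading the tweet's description, a mode-switching policy might pair two action heads with a learned gate. Below is a minimal sketch, assuming point-cloud features drive coarse waypoints and wrist-image features drive dense actions; all module names, dimensions, and the gating rule are illustrative assumptions, not the SPHINX implementation:

```python
# Illustrative sketch only (not SPHINX code): a hybrid policy that picks
# its action space per step from a predicted mode.
import torch
import torch.nn as nn

class HybridPolicy(nn.Module):
    def __init__(self, feat_dim: int = 256, dof: int = 7):
        super().__init__()
        self.mode_head = nn.Linear(feat_dim, 2)      # 0 = waypoint, 1 = dense
        self.waypoint_head = nn.Linear(feat_dim, 7)  # pose target (pos + quat)
        self.dense_head = nn.Linear(feat_dim, dof)   # per-step joint deltas

    def forward(self, pcd_feat, wrist_feat):
        mode = self.mode_head(pcd_feat).argmax(-1).item()  # batch size 1 assumed
        if mode == 0:
            # Coarse phase: point-cloud features predict a waypoint.
            return "waypoint", self.waypoint_head(pcd_feat)
        # Precision phase: wrist-image features predict dense actions.
        return "dense", self.dense_head(wrist_feat)

policy = HybridPolicy()
kind, action = policy(torch.randn(1, 256), torch.randn(1, 256))
```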

Jimmy Wu (@jimmyyhwu)'s Twitter Profile Photo

When will robots help us with our household chores? TidyBot++ brings us closer to that future. Our new open-source mobile manipulator makes it more accessible and practical to do robot learning research outside the lab, in real homes!

Zhou Xian (@zhou_xian_)'s Twitter Profile Photo

Everything you love about generative models — now powered by real physics! Announcing the Genesis project — after a 24-month large-scale research collaboration involving over 20 research labs — a generative physics engine able to generate 4D dynamical worlds powered by a physics…

Physical Intelligence (@physical_int)'s Twitter Profile Photo

There are great tokenizers for text and images, but existing action tokenizers don’t work well for dexterous, high-frequency control. We’re excited to release (and open-source) FAST, an efficient tokenizer for robot actions. With FAST, we can train dexterous generalist policies…
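
FAST compresses chunks of continuous actions before tokenizing them. As a rough illustration of the frequency-space idea, here is a toy DCT-and-quantize tokenizer; the `keep` and `scale` parameters and both function names are assumptions for illustration, not the released tokenizer:

```python
# Toy sketch of frequency-space action tokenization (not the FAST release):
# smooth high-frequency action chunks compress well under a DCT, so a few
# quantized low-frequency coefficients can stand in for many raw timesteps.
import numpy as np
from scipy.fft import dct, idct

def tokenize_actions(chunk: np.ndarray, keep: int = 8, scale: float = 10.0) -> np.ndarray:
    """Compress a (T, D) action chunk: per-dimension DCT, keep the low
    frequencies, round to integer tokens. `keep`/`scale` are illustrative."""
    coeffs = dct(chunk, axis=0, norm="ortho")
    return np.round(coeffs[:keep] * scale).astype(np.int32)

def detokenize_actions(tokens: np.ndarray, horizon: int, scale: float = 10.0) -> np.ndarray:
    """Invert the quantization and pad the spectrum back to the full horizon."""
    coeffs = np.zeros((horizon, tokens.shape[1]))
    coeffs[: tokens.shape[0]] = tokens / scale
    return idct(coeffs, axis=0, norm="ortho")

chunk = np.cumsum(np.random.randn(50, 7) * 0.01, axis=0)   # smooth 50-step, 7-DoF chunk
recon = detokenize_actions(tokenize_actions(chunk), horizon=50)
print(np.abs(chunk - recon).max())  # small reconstruction error on smooth signals
```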

Juntao Ren (@juntaoren)'s Twitter Profile Photo

Human videos contain rich data on how to complete everyday tasks. But how can robots directly learn from human videos alone without robot data? We present MT-π, an IL framework that takes in human video and predicts actions as 2D motion tracks. portal.cs.cornell.edu/motion_track_p… 🧵1/6

Haochen Shi (@haochenshi74)'s Twitter Profile Photo

Time to democratize humanoid robots! Introducing ToddlerBot, a low-cost ($6K), open-source humanoid for robotics and AI research. Watch two ToddlerBots seamlessly chain their loco-manipulation skills to collaborate in tidying up after a toy session. toddlerbot.github.io

Physical Intelligence (@physical_int)'s Twitter Profile Photo

Many of you asked for code & weights for π₀. We are happy to announce that we are releasing π₀ and pre-trained checkpoints in our new openpi repository! We tested the model on a few public robots, and we include code for you to fine-tune it yourself.

Jimmy Wu (@jimmyyhwu)'s Twitter Profile Photo

Two months ago, we introduced TidyBot++, our open-source mobile manipulator.

Today, I'm excited to share our significantly expanded docs:
• Assembly guide: tidybot2.github.io/docs
• Usage guide: tidybot2.github.io/docs/usage

Thanks to early adopters, TidyBot++ can now be fully…

Roei Herzig (@roeiherzig)'s Twitter Profile Photo

What happens when vision 🤝 robotics meet? Happy to share our new work on Pretraining Robotic Foundational Models! 🔥 ARM4R is an Autoregressive Robotic Model that leverages low-level 4D Representations learned from human video data to yield a better robotic model. Berkeley AI Research 😊

Sergey Levine (@svlevine)'s Twitter Profile Photo

We made π0 “think harder”: our new Hierarchical Interactive Robot (Hi Robot) method “thinks” through complex tasks and prompts, directing π0 to break up complex tasks into basic steps, handling human feedback, and modifying tasks on the fly.

Moo Jin Kim (@moo_jin_kim)'s Twitter Profile Photo

Introducing OFT—an Optimized Fine-Tuning recipe for VLAs! Fine-tuning OpenVLA w/ OFT, we see:
• 25-50x faster inference ⚡️
• SOTA 97.1% avg SR in LIBERO 💪
• high-freq control w/ 7B model on real bimanual robot
• outperforms π₀, RDT-1B, DiT Policy, MDT, Diffusion Policy, ACT
🧵👇
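
The inference speedup in a recipe like this comes largely from decoding a whole action chunk in one forward pass with a continuous head trained by L1 regression, rather than autoregressive discrete tokens. A minimal sketch under those assumptions; the module, dimensions, and names below are placeholders, not the OpenVLA-OFT code:

```python
# Hedged sketch of an OFT-style head (not the OpenVLA-OFT code): one pass
# emits the whole action chunk (parallel decoding) and is trained with L1.
import torch
import torch.nn as nn

class ChunkActionHead(nn.Module):
    def __init__(self, hidden: int = 512, horizon: int = 8, dof: int = 14):
        super().__init__()
        self.horizon, self.dof = horizon, dof
        self.proj = nn.Sequential(
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, horizon * dof),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # No per-token autoregression: the full (horizon, dof) chunk comes
        # out of a single forward pass, which is where the speedup lives.
        return self.proj(feat).view(-1, self.horizon, self.dof)

head = ChunkActionHead()
feat = torch.randn(4, 512)                 # stand-in for VLA backbone features
target = torch.randn(4, 8, 14)             # ground-truth action chunks
loss = nn.functional.l1_loss(head(feat), target)  # continuous L1 regression
loss.backward()
```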

Toru (@toruo_o)'s Twitter Profile Photo

Sim2Real RL for Vision-Based Dexterous Manipulation on Humanoids toruowo.github.io/recipe/ TLDR: we train a humanoid robot with two multifingered hands to perform a range of dexterous manipulation tasks with robust generalization and high performance, without human demonstration :D

Rika Antonova (@contactrika)'s Twitter Profile Photo

Join our team at Cambridge! We have fully funded PhD positions in robot learning, novel robot hardware design, and reinforcement learning. Looking for applicants with a strong background in dexterous manipulation & hardware prototyping. Interested? Please send me a message/email.

Marion Lepert (@marionlepert)'s Twitter Profile Photo

Introducing Phantom 👻: a method to train robot policies without collecting any robot data — using only human video demonstrations. Phantom turns human videos into "robot" demonstrations, making it significantly easier to scale up and diversify robotics data. 🧵1/9
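
One hedged reading of the human-to-robot conversion: track the hand in each frame, then retarget the tracked pose to a gripper pose to get pseudo-actions with no robot in the loop. The tracker stub, the fixed wrist offset, and every name below are illustrative assumptions, not the Phantom pipeline:

```python
# Illustrative sketch only (not Phantom): turn a human video into pseudo
# robot actions by retargeting tracked hand poses to gripper poses.
import numpy as np

def track_hand(frame: np.ndarray) -> np.ndarray:
    """Stand-in for an off-the-shelf hand tracker: returns [x, y, z, grip]."""
    return np.array([0.4, 0.0, 0.2, 1.0])

def retarget(hand_pose: np.ndarray, offset=np.array([0.0, 0.0, 0.05])) -> np.ndarray:
    """Map the hand position to a gripper position with a fixed wrist offset."""
    pos, grip = hand_pose[:3], hand_pose[3]
    return np.concatenate([pos + offset, [grip]])

video = [np.zeros((224, 224, 3), dtype=np.uint8) for _ in range(30)]  # dummy frames
demo = np.stack([retarget(track_hand(f)) for f in video])  # (30, 4) pseudo-actions
```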

Shuang Li (@shuangl13799063)'s Twitter Profile Photo

Video generation is powerful but too slow for real-world robotic tasks. How can we enable both video and action generation while ensuring real-time policy inference? Check out our work on the Unified Video Action Model (UVA) to find out! unified-video-action-model.github.io (1/7)
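
A minimal sketch of the decoupling that question suggests: train video and action decoders on one shared latent, then run only the lightweight action branch at deployment so policy inference stays real time. The architecture and names here are assumptions, not the UVA implementation:

```python
# Illustrative sketch only (not UVA): joint video + action training on a
# shared latent, with the heavy video branch skipped at deployment.
import torch
import torch.nn as nn

class UnifiedVideoAction(nn.Module):
    def __init__(self, obs_dim=512, latent=256, act_dim=7, frame_dim=3 * 64 * 64):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, latent)
        self.video_decoder = nn.Linear(latent, frame_dim)  # heavy: training only
        self.action_decoder = nn.Linear(latent, act_dim)   # light: always on

    def forward(self, obs: torch.Tensor, training: bool = True):
        z = self.encoder(obs)
        action = self.action_decoder(z)
        if training:
            # Joint training: video prediction shapes the shared latent.
            return action, self.video_decoder(z)
        return action  # real-time path: skip video generation entirely

model = UnifiedVideoAction()
act = model(torch.randn(1, 512), training=False)  # deployment: action only
```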

Yunfan Jiang (@yunfanjiang)'s Twitter Profile Photo

🤖 Ever wondered what robots need to truly help humans around the house? 🏡 Introducing 𝗕𝗘𝗛𝗔𝗩𝗜𝗢𝗥 𝗥𝗼𝗯𝗼𝘁 𝗦𝘂𝗶𝘁𝗲 (𝗕𝗥𝗦)—a comprehensive framework for mastering mobile whole-body manipulation across diverse household tasks! 🧹🫧 From taking out the trash to…

Jingyun Yang (@yjy0625)'s Twitter Profile Photo

🚀 We're hosting the #RSS2025 Mobile Manipulation (MoMA) Workshop! Join us to explore how mobility + dexterity unlock general-purpose robots in dynamic, human-centric spaces.
📅 Share your latest work with us by May 25
🔗 rss-moma-2025.github.io

Huy Ha (@haqhuy)'s Twitter Profile Photo

Excited to announce the 1st Workshop on Robot Hardware-Aware Intelligence @ #RSS2025 in LA! We’re bringing together interdisciplinary researchers exploring how to unify hardware design and intelligent algorithms in robotics! Full info: rss-hardware-intelligence.github.io

Robotics: Science and Systems