Geng Chen (@gengchen358)'s Twitter Profile
Geng Chen

@gengchen358

Master's student @UCSDJacobs | Prev. undergrad @sjtu1896, visitor
@Stanford and @Tsinghua_IIIS | Working on Embodied AI & Robot Learning.

ID: 1531589088200904705

Link: https://jc043.github.io/ | Joined: 31-05-2022 10:51:31

25 Tweets

211 Followers

124 Following

Yang Gao (@gao_young)

Vision-language-action models (VLAs) need to REASON, but more importantly, they need to know WHEN to reason (or not)! Thrilled to introduce OneTwoVLA, a single, unified model that combines acting (System One) ⚡ and reasoning (System Two) 🤔, and can adaptively switch between …
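A minimal sketch of what such adaptive System One / System Two switching could look like; the class, dimensions, and threshold below are illustrative assumptions, not the OneTwoVLA implementation:

```python
import torch
import torch.nn as nn

class AdaptiveVLA(nn.Module):
    """Toy policy: a switch head decides whether to take a 'reasoning' refinement
    step (System Two) before emitting an action (System One)."""
    def __init__(self, obs_dim=512, act_dim=7, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.switch_head = nn.Linear(hidden, 1)       # predicts P(reason | observation)
        self.reason_head = nn.Linear(hidden, hidden)  # System Two: refine the latent state
        self.act_head = nn.Linear(hidden, act_dim)    # System One: direct action output

    def forward(self, obs, reason_threshold=0.5):
        h = self.encoder(obs)
        p_reason = torch.sigmoid(self.switch_head(h))
        if p_reason.mean() > reason_threshold:        # System Two path: think, then act
            h = h + torch.tanh(self.reason_head(h))
        return self.act_head(h), p_reason             # otherwise act directly (System One)

policy = AdaptiveVLA()
action, p = policy(torch.randn(1, 512))
print(action.shape, float(p))
```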

Xiaolong Wang (@xiaolonw)

If you are impressed by Tesla Optimus, also check out Roger Qiu's talk on leveraging human videos for humanoid bimanual manipulation. Paper: Humanoid Policy ~ Human Policy. Link: human-as-robot.github.io

Vincent Liu (@vincentjliu)

The future of robotics isn't in the lab – it's in your hands. Can we teach robots to act in the real world without a single robot demonstration? Introducing EgoZero. Train real-world robot policies from human-first egocentric data. No robots. No teleop. Just Aria glasses and …

yisha (@yswhynot)

For years, I’ve been tuning parameters for robot designs and controllers on specific tasks. Now we can automate this at dataset scale. Introducing Co-Design of Soft Gripper with Neural Physics - a soft gripper trained in simulation to deform while handling load.
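A rough sketch of the co-design idea described in that tweet, assuming a learned, differentiable "neural physics" surrogate so design parameters and controller gains can be optimized jointly by gradient descent; the surrogate, dimensions, and parameter names are placeholders, not the paper's code:

```python
import torch
import torch.nn as nn

# Placeholder "neural physics" surrogate: in practice it would be trained on
# simulation rollouts to predict task loss from design + control parameters.
surrogate = nn.Sequential(nn.Linear(4 + 2, 64), nn.Tanh(), nn.Linear(64, 1))

design = torch.randn(4, requires_grad=True)  # e.g. finger length, thickness, stiffness, curvature
gains = torch.randn(2, requires_grad=True)   # e.g. PD controller gains
opt = torch.optim.Adam([design, gains], lr=1e-2)

for step in range(200):
    # Predicted task loss (e.g. object slip under load) for the current candidate design.
    predicted_loss = surrogate(torch.cat([design, gains]))
    opt.zero_grad()
    predicted_loss.backward()
    opt.step()

print(design.detach(), gains.detach())
```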

Yitang Li (@li_yitang)

🤖 Can a humanoid robot carry a full cup of beer without spilling while walking 🍺? Hold my beer! Introducing Hold My Beer 🍺: Learning Gentle Humanoid Locomotion and End-Effector Stabilization Control. Project: lecar-lab.github.io/SoFTA/ See more details below 👇

Mandi Zhao (@zhaomandi)

How to learn dexterous manipulation for any robot hand from a single human demonstration? Check out DexMachina, our new RL algorithm that learns long-horizon, bimanual dexterous policies for a variety of dexterous hands, articulated objects, and complex motions.

Siyuan Huang (@siyuanhuang95)

🤖 Ever dreamed of controlling a humanoid robot to perform complex, long-horizon tasks — using just a single Vision Pro? 🎉 Meet CLONE: a holistic, closed-loop, whole-body teleoperation system for long-horizon humanoid control! 🏃‍♂️🧍 CLONE enables rich and coordinated …

Zixuan Chen (@c___eric417)

🚀 Introducing GMT — a general motion tracking framework that enables high-fidelity motion tracking on humanoid robots by training a single policy from large, unstructured human motion datasets. 🤖 A step toward general humanoid controllers. Project Website:
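A minimal sketch of the kind of motion-tracking objective such a framework might use, where one policy is rewarded for matching reference poses sampled from a large, unstructured library of retargeted clips; the weights, scale, and terms below are illustrative assumptions, not the GMT reward:

```python
import numpy as np

def tracking_reward(joint_pos, ref_joint_pos, root_vel, ref_root_vel,
                    w_pose=0.6, w_vel=0.4, scale=2.0):
    """Exponential pose/velocity tracking reward, in the style of DeepMimic-like objectives."""
    pose_err = np.sum((joint_pos - ref_joint_pos) ** 2)
    vel_err = np.sum((root_vel - ref_root_vel) ** 2)
    return w_pose * np.exp(-scale * pose_err) + w_vel * np.exp(-scale * vel_err)

# Each training episode would sample a random clip and start frame from the dataset,
# so a single policy is exposed to the full diversity of motions rather than one skill.
```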

Xiaolong Wang (@xiaolonw)

This work is not about a new technique. GMT (General Motion Tracking) shows, through good engineering practice, that you can actually train a single unified whole-body control policy for all agile motions, and that it works in the real world, directly sim2real without adaptation. This is …

Generalist (@generalistai_)

Today we're excited to share a glimpse of what we're building at Generalist. As a first step towards our mission of making general-purpose robots a reality, we're pushing the frontiers of what end-to-end AI models can achieve in the real world. Here's a preview of our early …

Haoyu Xiong (@haoyu_xiong_)

Your bimanual manipulators might need a Robot Neck 🤖🦒 Introducing Vision in Action: Learning Active Perception from Human Demonstrations. ViA learns task-specific, active perceptual strategies—such as searching, tracking, and focusing—directly from human demos, enabling robust …

yisha (@yswhynot)

🚀 Heading to #RSS2025? Swing by EEB 248 on Wednesday, June 25 at 3:30 PM for a live demo of our data-driven, co-designed soft gripper 🥢 at the Robot Hardware-Aware Intelligence workshop!

Jianglong Ye (@jianglong_ye)

How to generate billion-scale manipulation demonstrations easily? Let us leverage generative models! 🤖✨ We introduce Dex1B, a framework that generates 1 BILLION diverse dexterous hand demonstrations for both grasping 🖐️ and articulation 💻 tasks using a simple C-VAE model.
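A minimal conditional VAE sketch of the generation idea: sample grasp parameters conditioned on an object encoding, so each latent draw yields a different demonstration for the same object. The dimensions and names below are assumptions, not the Dex1B architecture:

```python
import torch
import torch.nn as nn

class CVAE(nn.Module):
    def __init__(self, grasp_dim=30, cond_dim=64, latent_dim=16, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(grasp_dim + cond_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim + cond_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, grasp_dim))
        self.latent_dim = latent_dim

    def forward(self, grasp, cond):
        mu, logvar = self.enc(torch.cat([grasp, cond], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        recon = self.dec(torch.cat([z, cond], -1))
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon, kl                                          # train on recon loss + kl

    @torch.no_grad()
    def sample(self, cond, n):
        # cond: (1, cond_dim) object encoding; returns n diverse grasp candidates.
        z = torch.randn(n, self.latent_dim)
        return self.dec(torch.cat([z, cond.expand(n, -1)], -1))

model = CVAE()
obj_feat = torch.randn(1, 64)          # stand-in object encoding
grasps = model.sample(obj_feat, n=16)  # 16 grasp candidates for one object
print(grasps.shape)
```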

Haoru Xue (@haoruxue)

🚀 Introducing LeVERB, the first 𝗹𝗮𝘁𝗲𝗻𝘁 𝘄𝗵𝗼𝗹𝗲-𝗯𝗼𝗱𝘆 𝗵𝘂𝗺𝗮𝗻𝗼𝗶𝗱 𝗩𝗟𝗔 (upper- & lower-body), trained on sim data and zero-shot deployed. Addressing interactive tasks: navigation, sitting, locomotion with verbal instruction. 🧵 ember-lab-berkeley.github.io/LeVERB-Website/

Rui Yan (@hi_im_ruiyan)

🚀 Meet ACE-F — a next-gen teleop system merging human and robot precision. Foldable, portable, cross-platform — it enables 6-DoF haptic control for force-aware manipulation. 🦾 See our demo & talk at the Robot Hardware-Aware Intelligence workshop this Wednesday at Robotics: Science and Systems!

Yutong Bai (@yutongbai1002)

What would a World Model look like if we start from a real embodied agent acting in the real world? It has to have: 1) A real, physically grounded and complex action space—not just abstract control signals. 2) Diverse, real-life scenarios and activities. Or in short: It has to …