Mazeyu Ji (@jimazeyu)'s Twitter Profile
Mazeyu Ji

@jimazeyu

MS in ECE @UCSanDiego

ID: 1805525989147160576

https://jimazeyu.github.io/ · Joined 25-06-2024 08:58:45

59 Tweets

103 Followers

504 Following

yisha (@yswhynot)'s Twitter Profile Photo

For years, I’ve been tuning parameters for robot designs and controllers on specific tasks. Now we can automate this at dataset scale. Introducing Co-Design of Soft Gripper with Neural Physics - a soft gripper trained in simulation to deform while handling load.
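As a rough illustration of the co-design idea (my sketch, not the paper's method): treat both the gripper's design parameters and its controller parameters as optimization variables, and descend through a learned "neural physics" surrogate averaged over a dataset of tasks. Every name, dimension, and architecture below is an assumption for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical learned surrogate: maps (design params, control params,
# task descriptor) -> predicted grasp loss. A stand-in for the paper's
# neural physics model, not its actual architecture.
class NeuralPhysicsSurrogate(nn.Module):
    def __init__(self, n_design=8, n_control=16, n_task=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_design + n_control + n_task, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
            nn.Softplus(),  # predicted loss kept nonnegative
        )

    def forward(self, design, control, task):
        return self.net(torch.cat([design, control, task], dim=-1))

surrogate = NeuralPhysicsSurrogate()
design = torch.zeros(8, requires_grad=True)    # e.g. segment stiffnesses
control = torch.zeros(16, requires_grad=True)  # e.g. actuation gains
opt = torch.optim.Adam([design, control], lr=1e-2)

tasks = torch.randn(256, 4)  # placeholder dataset of task descriptors

for step in range(1000):
    batch = tasks[torch.randint(len(tasks), (32,))]
    # Shared design/control parameters broadcast across the task batch:
    # this is what makes it dataset-scale rather than per-task tuning.
    d = design.expand(len(batch), -1)
    c = control.expand(len(batch), -1)
    loss = surrogate(d, c, batch).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point of the sketch is the gradient path: because the surrogate is differentiable, one optimizer updates hardware and controller parameters jointly over many tasks, instead of hand-tuning them per task.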

Xialin He (@xialin_he)'s Twitter Profile Photo

There are so many tracking papers nowadays. One policy that can track all fragile motions is impressive. Check out this GMT paper.

RoboHub🤖 (@xrobohub)'s Twitter Profile Photo

Meet GMT: a new framework by the Zixuan Chen team that enables high-fidelity motion tracking on humanoid robots via a single policy trained on large, unstructured human motion datasets.

Xuxin Cheng (@xuxin_cheng)'s Twitter Profile Photo

Coordinating diverse, high-speed motions with a single control policy has been a long-standing challenge. Meet GMT—our universal tracker that keeps up with a whole spectrum of agile movements, all with a single policy.

Xiaolong Wang (@xiaolonw)'s Twitter Profile Photo

This work is not about a new technique. GMT (General Motion Tracking) shows, through good engineering practice, that you can actually train a single unified whole-body control policy for all agile motions, and that it works in the real world, directly sim2real without adaptation. This is…
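To make the "single unified policy" idea concrete, here is a minimal sketch of what such a tracker's interface could look like (my assumptions, not the GMT architecture): the policy observes proprioception plus a short window of future reference-motion frames, and outputs joint targets, with the same weights reused for every motion.

```python
import torch
import torch.nn as nn

# Assumed sizes for illustration only.
N_JOINTS = 29                 # hypothetical humanoid DoF count
PROPRIO_DIM = 3 * N_JOINTS    # e.g. joint positions, velocities, last action
REF_FRAMES = 10               # future reference frames the policy peeks at
REF_DIM = REF_FRAMES * N_JOINTS

# One network for all motions: no per-motion heads or adapters.
policy = nn.Sequential(
    nn.Linear(PROPRIO_DIM + REF_DIM, 512),
    nn.ELU(),
    nn.Linear(512, 512),
    nn.ELU(),
    nn.Linear(512, N_JOINTS),  # joint position targets
)

def act(proprio: torch.Tensor, ref_window: torch.Tensor) -> torch.Tensor:
    """One control step: same weights for every motion in the dataset."""
    obs = torch.cat([proprio, ref_window.flatten(-2)], dim=-1)
    return policy(obs)

# Example: dummy state plus a 10-frame reference snippet.
targets = act(torch.zeros(PROPRIO_DIM), torch.zeros(REF_FRAMES, N_JOINTS))
```

The conditioning on a reference window is what lets a single set of weights track many different motions: the "which motion" information lives in the observation, not in the parameters.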

Haoru Xue (@haoruxue)'s Twitter Profile Photo

Impressive work. Lots of work this year shows that good engineering can really demystify WBC (whole-body control). There is no more excuse for crappy policies. Next steps: making WBC policies more accessible, and easier to interface with vision-language models.

Mazeyu Ji (@jimazeyu)'s Twitter Profile Photo

This is awesome! Using generative models to create tons of robot manipulation data—super exciting direction from Jianglong Ye. Can’t wait to see more!

Mazeyu Ji (@jimazeyu)'s Twitter Profile Photo

So many works have shown that humanoid robots can now replicate almost any human motion. It’s time to move on to the next goal: making robots truly understand what to do. This is great work that takes a solid step forward.

Mazeyu Ji (@jimazeyu)'s Twitter Profile Photo

ACE‑F combines foldability, cross-platform control, and sensorless force feedback—pushing teleoperation to the next level. Congrats!

Ruihan Yang (@rchalyang)'s Twitter Profile Photo

How can we leverage diverse human videos to improve robot manipulation? Excited to introduce EgoVLA — a Vision-Language-Action model trained on egocentric human videos by explicitly modeling wrist & hand motion. We build a shared action space between humans and robots, enabling…
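One way to picture a shared action space between human videos and a robot (my illustration, not the EgoVLA definition): express both as a wrist/end-effector pose plus a hand-closure scalar, so human demonstrations and robot commands land in the same vector. All thresholds and dimensions below are assumptions.

```python
import numpy as np

def human_to_shared(wrist_pos, wrist_quat, finger_tips, thumb_tip):
    """Map human egocentric hand state to a shared action vector.

    wrist_pos: (3,) wrist position in the camera frame
    wrist_quat: (4,) wrist orientation quaternion (x, y, z, w)
    finger_tips: (4, 3) fingertip positions; thumb_tip: (3,)
    """
    # Hand closure: mean thumb-to-fingertip distance, normalized with an
    # assumed 12 cm fully-open span.
    spread = np.linalg.norm(finger_tips - thumb_tip, axis=-1).mean()
    closure = 1.0 - np.clip(spread / 0.12, 0.0, 1.0)
    return np.concatenate([wrist_pos, wrist_quat, [closure]])  # shape (8,)

def shared_to_robot(action):
    """Split the shared action into robot end-effector pose + gripper cmd."""
    ee_pos, ee_quat, closure = action[:3], action[3:7], action[7]
    gripper_width = (1.0 - closure) * 0.08  # assumed 8 cm max opening
    return ee_pos, ee_quat, gripper_width

# Example: a fully closed hand (all keypoints coincident) maps to closure 1.0.
action = human_to_shared(np.zeros(3), np.array([0.0, 0.0, 0.0, 1.0]),
                         np.zeros((4, 3)), np.zeros(3))
ee_pos, ee_quat, width = shared_to_robot(action)
```

Under this kind of mapping, a policy trained to predict the 8-D shared action from human video can drive the robot directly, since the robot side only needs the inverse mapping.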