Zhikai Zhang (@zhikai273)'s Twitter Profile
Zhikai Zhang

@zhikai273

1st-year Ph.D. student at IIIS, Tsinghua University

ID: 1763212898699292672

Joined: 29-02-2024 14:41:44

9 Tweets

67 Followers

88 Following

Tony Z. Zhao (@tonyzzhao)'s Twitter Profile Photo

Today, we present a step-change in robotic AI. Introducing ACT-1: A frontier robot foundation model trained on zero robot data. - Ultra long-horizon tasks - Zero-shot generalization - Advanced dexterity 🧵->

Xuanbin Peng (@xuanbin_peng)'s Twitter Profile Photo

What if a humanoid robot could choose how to interact with the environment 🤖 — soft when it needs compliance, stiff when it needs precision, and force-aware when it must push/pull? That’s exactly what our Heterogeneous Meta-Control (HMC) framework enables. Our new framework

Tairan He (@tairanhe99)'s Twitter Profile Photo

Zero teleoperation. Zero real-world data. ➔ Autonomous humanoid loco-manipulation in reality. Introducing VIRAL: Visual Sim-to-Real at Scale. We achieved 54 autonomous cycles (walk, stand, place, pick, turn) using a simple recipe: 1. RL 2. Simulation 3. GPUs Website:

Xiongyi Cai (@xiongyicai)'s Twitter Profile Photo

A large human behavior model. Introducing In-N-On, our latest findings in scaling egocentric data for humanoids. 1. Pre-training and post-training with human data 2. 1,000+ hours of in-the-wild data and 20+ hours of on-task data with accurate action labels Website:

Ziwen Zhuang (@ziwenzhuang_leo)'s Twitter Profile Photo

We believe robots need instinct, not only reasoning. Introducing Project-Instinct — a full-stack, instinct-level whole-body control toolkit for legged & humanoid robots. 🔗 project-instinct.github.io (1/3)

C Zhang (@chongzitazhang)'s Twitter Profile Photo

Releasing AME2: Agile and Generalized Legged Locomotion via Attention-Based Neural Map Encoding arxiv.org/abs/2601.08485 In this work, we discuss how to achieve a combination of generalization and agility in legged locomotion, and propose a general solution.

Haoyang Weng (@elijahgalahad)'s Twitter Profile Photo

Introducing vla-scratch: a modular, performant, and efficient stack for VLAs. github.com/EGalahad/vla-s… I started it because existing codebases were either slow or hard to extend for co-training with clean data abstractions. This repo is a ground-up attempt to address both.

Li Yi (@ericyi0124)'s Twitter Profile Photo

Introducing Click-and-Traverse: command humanoid robots to navigate cluttered indoor environments with a single click. Why are real-world, cluttered indoor scenes challenging? • Omni-spatial constraints: obstacles on the ground, lateral, and overhead levels. • Intricate

Zhikai Zhang (@zhikai273)'s Twitter Profile Photo

Towards More Intelligent Humanoid Teleoperation😮 In OpenWBT (github.com/GalaxyGeneralR…), the robot can only walk and squat down. Not much different from a wheeled robot. Now with one click, the humanoid can traverse your cluttered home like a true legged robot. Congrats!

GALBOT (@galbotrobotics)'s Twitter Profile Photo

🚀 Galbot S1 breaks industry limits with a 50kg dual-arm payload! 💡 Designed for 24/7 high-load operation & autonomous battery swapping. 🤖 Revolutionizing heavy-duty robotics in industry. #Galbot #Robotics #AI #Technology #EmbodiedAI #HumanoidRobots

Yinhuai (@nligjvjbycsed6t)'s Twitter Profile Photo

Introducing HumanX, a full-stack framework that compiles human video into generalizable, real-world interaction skills 🏀⚽️🥊📦 for humanoids, without task-specific rewards. Paper: arxiv.org/abs/2602.02473 Page: wyhuai.github.io/human-x/ #humanoid #ai #hkust #robotics #sports

Fanqi Lin (@lfqirrrrr)'s Twitter Profile Photo

Co-training is a promising way to scale Large Behavior Models (LBMs) beyond robot data, yet the data and training recipe are far from settled. 🤔 We present a large-scale empirical study leveraging 4,000h of robot/human data and 50M vision-language samples, evaluating

Yuanhang Zhang (@yuanhang__zhang)'s Twitter Profile Photo

Robust humanoid perceptive locomotion is still underexplored. Especially when different cameras see different terrains, paths get narrow, and payloads disturb balance... Introducing RPL, tackling this with one unified policy: • Challenging terrains (slopes, stairs and stepping

Shi Soul (@shi_soul)'s Twitter Profile Photo

Introducing our recent work, TextOp: Real-time Interactive Text-Driven Humanoid Robot Motion Generation and Control TL;DR * Control humanoid robots via real-time revisable text prompts * Seamless switching between multiple skills Paper: text-op.github.io/static/pdf/pap… Check the website & code

Sirui Xu (@xu_sirui)'s Twitter Profile Photo

Humanoids need autonomy + versatility + generalization to be truly useful. Loco-manipulation makes that hard. InterPrior is our step toward bridging the gap — one policy, no reference. Could be promising for immersive games 🎮 and real robots 🤖 🔗 sirui-xu.github.io/InterPrior 📜

Zi-ang Cao (@ziang_cao)'s Twitter Profile Photo

🚀 Introducing CHIP: Adaptive Compliance for Humanoid Control through Hindsight Perturbation! Current humanoids face a trade-off: they are either Agile & Stiff OR Slow & Soft. CHIP breaks this barrier. We enable on-the-fly switching between Compliant (wiping 🧼,

Ruiqian Nai (@ruiqiannai)'s Twitter Profile Photo

🤖 Can we demonstrate humanoid complex whole-body manipulation skills without a physical robot present? Introducing HuMI: A portable, robot-free interface for learning diverse humanoid manipulation tasks. 📄 arxiv.org/abs/2602.06643 🌐 …noid-manipulation-interface.github.io

Zhikai Zhang (@zhikai273)'s Twitter Profile Photo

The results are absolutely incredible!!!🤩 Whole-body teleoperation for loco-manipulation in free space is almost solved now.