Runpei Dong (@runpeidong) 's Twitter Profile
Runpei Dong

@runpeidong

CS PhD student @UofIllinois | Previously @Tsinghua_IIIS and XJTU | Interested in robot learning & computer vision

ID: 1251358690113761280

Link: https://runpeidong.web.illinois.edu/ · Joined: 18-04-2020 03:55:43

64 Tweets

356 Followers

1.1K Following

elvis (@omarsar0) 's Twitter Profile Photo

Reasoning Models Thinking Slow and Fast at Test Time

Another super cool work on improving reasoning efficiency in LLMs. They show that slow-then-fast reasoning outperforms other strategies. Here are my notes:
Junyi Chen (@sotamak1r) 's Twitter Profile Photo

Can you imagine playing various games through an AI model? Like Black Myth: Wukong.🤩 Sharing our latest work: DeepVerse, an autoregressive world model🌏 DeepVerse can imagine the entire world behind an image and enables free exploration through interaction🎮.

Runpei Dong (@runpeidong) 's Twitter Profile Photo

Motion tracking is a hard problem, especially when you want to track many motions with only a single policy. Good to see that the MoE-distilled student works so well, congrats Zixuan Chen on such exciting results!

Runpei Dong (@runpeidong) 's Twitter Profile Photo

#RSS2025 Excited to be presenting our HumanUP tomorrow at the Humanoids Session (Sunday, June 22, 2025) 📺 Spotlight talk: 4:30pm–5:30pm, Bovard Auditorium 📜Poster: 6:30pm-8:00pm, #3, Associates Park

CyberRobo (@cyberrobooo) 's Twitter Profile Photo

AGIBOT X2-N (Nezha) new video: shows the robot carrying goods blindly (without vision) over stairs and slopes📦 The robot autonomously switches between bipedal and wheeled modes while maintaining balance and stability throughout the process, a capability that will be highly valuable in dim or

Haoran Geng (@haorangeng2) 's Twitter Profile Photo

🤖 What if a humanoid robot could make a hamburger from raw ingredients—all the way to your plate? 🔥 Excited to announce ViTacFormer: our new pipeline for next-level dexterous manipulation with active vision + high-resolution touch. 🎯 For the first time ever, we demonstrate

Yana Wei (@yanawei_) 's Twitter Profile Photo

🔥 Thrilled to release our new multimodal RL work: Open Vision Reasoner! A powerful 7B model with SOTA performance on language & vision reasoning benchmarks, trained with nearly 1K steps of multimodal RL. Our journey begins with a central question: Can the cognitive behaviors

Qiyang Li (@qiyang_li) 's Twitter Profile Photo

Everyone knows action chunking is great for imitation learning. It turns out that we can extend its success to RL to better leverage prior data for improved exploration and online sample efficiency! colinqiyangli.github.io/qc/ The recipe to achieve this is incredibly simple. 🧵 1/N
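For context, action chunking means the policy predicts a short horizon of future actions in one call and executes them open-loop before re-planning. A minimal sketch of that execution pattern (all names, shapes, and the toy policy/environment are illustrative assumptions, not the linked paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

CHUNK = 8    # actions predicted per policy call
ACT_DIM = 4  # action dimensionality (illustrative)

def chunked_policy(obs: np.ndarray) -> np.ndarray:
    """Toy stand-in for a learned policy: maps one observation to a
    chunk of CHUNK future actions (shape: CHUNK x ACT_DIM)."""
    return np.tanh(rng.normal(size=(CHUNK, ACT_DIM)) + obs.mean())

def rollout(env_step, obs, horizon=32):
    """Execute with action chunking: query the policy only at chunk
    boundaries and replay the predicted actions in between."""
    actions_taken = []
    for t in range(horizon):
        if t % CHUNK == 0:          # re-plan once every CHUNK steps
            chunk = chunked_policy(obs)
        a = chunk[t % CHUNK]        # open-loop within the chunk
        obs = env_step(obs, a)
        actions_taken.append(a)
    return np.stack(actions_taken)

# Toy environment: the observation drifts with the applied action.
traj = rollout(lambda o, a: o + 0.01 * a, np.zeros(ACT_DIM))
print(traj.shape)  # (32, 4)
```

The key point is that only 4 policy queries produce 32 environment steps; in RL this temporally extended action space is what changes exploration behavior.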

Hao-Shu Fang (@haoshu_fang) 's Twitter Profile Photo

How do we unlock the full dexterity of robot hands with data, even beyond what teleoperation can achieve? DEXOP captures natural human manipulation with full-hand tactile & proprio sensing, plus direct force feedback to users, without needing a robot👉dex-op.github.io

Runpei Dong (@runpeidong) 's Twitter Profile Photo

Visual manipulation is really challenging for humanoids, and it is impressive to see such interesting results with a depth policy!

Zhen Wu (@zhenkirito123) 's Twitter Profile Photo

Humanoid motion tracking performance is greatly determined by retargeting quality! Introducing OmniRetarget🎯, generating high-quality interaction-preserving data from human motions for learning complex humanoid skills with minimal RL:
- 5 rewards,
- 4 DR

Pieter Abbeel (@pabbeel) 's Twitter Profile Photo

ResMimic: learns a whole-body loco-manipulation policy on top of a general motion tracking policy. Key ideas:
(i) pre-train a general motion tracking policy
(ii) post-train a task-specific residual policy with:
  (a) object tracking reward
  (b) contact reward
  (c) virtual object force
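The residual post-training idea in the summary above can be sketched very compactly: the final action is the frozen base policy's output plus a learned task-specific residual, trained against a task reward. Everything below (names, weight values, reward shaping) is an illustrative assumption, not ResMimic's actual implementation:

```python
import numpy as np

def residual_action(base_policy, residual_policy, obs):
    """Final action = frozen base tracking policy + learned task residual."""
    return base_policy(obs) + residual_policy(obs)

def task_reward(obj_pos, obj_target, contact_ok, w_track=1.0, w_contact=0.5):
    """Illustrative task reward: an object-tracking term (exponential of the
    negative position error) plus a bonus when desired contacts are made."""
    track = np.exp(-np.linalg.norm(obj_pos - obj_target))
    return w_track * track + w_contact * float(contact_ok)

# Toy usage: a linear "base policy" and an untrained (zero) residual.
base = lambda o: 0.1 * o
res = lambda o: np.zeros_like(o)
a = residual_action(base, res, np.ones(3))
r = task_reward(np.zeros(3), np.zeros(3), contact_ok=True)
print(a, r)  # [0.1 0.1 0.1] 1.5
```

Keeping the base policy frozen means the residual only has to learn the task-specific correction, which is typically much easier than learning whole-body control from scratch.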

Runpei Dong (@runpeidong) 's Twitter Profile Photo

Thrilled to share our work AlphaOne🔥 at EMNLP 2025. Junyu Zhang and I will be presenting this work online, so please feel free to join and talk to us!!! 📆Date: 8:00–9:00, Nov 7, Friday (Beijing Standard Time, UTC+8) 📺Session: Gather Session 4