Kyungmin Lee (@lee_kyungmin21)'s Twitter Profile

Kyungmin Lee

@lee_kyungmin21

ID: 1735497643508297730

Link: https://kyungminn.github.io/ · Joined: 15-12-2023 03:11:30

39 Tweets

56 Followers

39 Following

Claire Silver 🌸 (@clairesilver12)'s Twitter Profile Photo

Here's Claude absolutely RIPPING through building a modern mansion draft with primitives, using UnrealMCP to control Unreal Engine by himself. You prompt what you want, he does what you ask. He can be creative too. Tiny little mini tutorial thread 👇 🌸

Carlos Barreto (@carlosedubarret)'s Twitter Profile Photo

Here is an example of why I say "it depends" when someone asks what the best monocular mocap solution is. Most would say that PromptHMR is the best one today. Is it? I just gave it a try on a video and it turns out that, in my opinion: 1º 4D Humans, 2º GVHMR, 3º

Ziwen Zhuang (@ziwenzhuang_leo)'s Twitter Profile Photo

We believe robots need instinct, not only reasoning. Introducing Project-Instinct, a full-stack, instinct-level whole-body control toolkit for legged & humanoid robots. 🔗 project-instinct.github.io (1/3)

Michael Xu (@mxu_cg)'s Twitter Profile Photo

Check out this repo if you want to play with this real-time character controller (needs an RTX 4090): github.com/mshoe/PARC The dataset and models are here: huggingface.co/datasets/mxucg… I also wrote a blog post about the autophagous data augmentation method: michaelx.io/blog/firstpape…

Yinhuai (@nligjvjbycsed6t)'s Twitter Profile Photo

Introducing HumanX, a full-stack framework that compiles human video into generalizable, real-world interaction skills 🏀⚽️🥊📦 for humanoids, without task-specific rewards. Paper: arxiv.org/abs/2602.02473 Page: wyhuai.github.io/human-x/ #humanoid #ai #hkust #robotics #sports

Zi-ang Cao (@ziang_cao)'s Twitter Profile Photo

🚀 Introducing CHIP: Adaptive Compliance for Humanoid Control through Hindsight Perturbation! Current humanoids face a trade-off: they are either Agile & Stiff OR Slow & Soft. CHIP breaks this barrier. We enable on-the-fly switching between Compliant (wiping 🧼,

Yuke Zhu (@yukez)'s Twitter Profile Photo

We have seen rapid progress in humanoid control: specialist robots can reliably perform agile, acrobatic, but preset motions. Our singular focus this year: getting generalist humanoids to do real work. To progress toward this goal, we developed SONIC (nvlabs.github.io/GEAR-SONIC/),

Kinam Kim (@kinam_0252)'s Twitter Profile Photo

🚀 Excited to share that our paper EgoX 👀 has been accepted to #CVPR2026! Huge thanks to my co-first authors (taewoongkang, dohyeon), co-authors (Minho Park, junhahyung) and Prof. Jaegul Choo. See you in Denver! 🏔️ #VideoGeneration #WorldModeling #Robotics

Zhengyi "Zen" Luo (@zhengyiluo)'s Twitter Profile Photo

Whole-body intelligence doesn't need huge models; it needs to scale on the right tasks and compute. Proud to be part of the team bringing GEAR-SONIC to life. Open sourced!

Sirui Xu (@xu_sirui)'s Twitter Profile Photo

InterPrior is accepted to #CVPR2026. Meanwhile, we'd love to share our ongoing open-sourcing efforts: InterAct: github.com/wzyabcas/Inter… InterMimic: github.com/Sirui-Xu/Inter… 1. InterMimic now supports multi-GPU training, as well as IsaacLab replay and inference. 2. InterAct, our

xy-C (@xiaoyan_cong)'s Twitter Profile Photo

💡 Introducing UMO -- one unified model that unlocks motion foundation model (HY-Motion, Tencent HY) priors for 20+ tasks: editing, reaction generation, stylization, trajectory control, obstacle

Neerja Thakkar (@neerjathakkar)'s Twitter Profile Photo

What's the right representation for a world model? 3D, pixels, or something else? Excited to release our new paper "Forecasting Motion in the Wild", where we propose point tracks as tokens for generating complex non-rigid motion and behavior. From @GoogleDeepmind @Berkeley_AI

Hojoon Lee (@hojoon_ai)'s Twitter Profile Photo

We scaled off-policy RL to sim-to-real. To our knowledge, FlashSAC is the fastest and most performant RL algorithm across IsaacLab, MuJoCo Playground, and many more, all with a single set of hyperparameters. Project page: holiday-robot.github.io/FlashSAC Paper: arxiv.org/pdf/2604.04539
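For context on the tweet above: FlashSAC builds on Soft Actor-Critic (SAC), an off-policy RL algorithm. The sketch below illustrates generic SAC machinery only, not FlashSAC's actual implementation, and every variable name in it is a toy placeholder: the entropy-regularized TD target and the Polyak (soft) target-network update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batch of transitions; these arrays stand in for critic/policy outputs.
rewards = rng.normal(size=32)
q_next = rng.normal(size=32)           # Q(s', a') from the target critic
log_pi_next = rng.normal(size=32) - 2  # log pi(a'|s') from the current policy
dones = np.zeros(32)                   # episode-termination flags

alpha = 0.2   # entropy temperature
gamma = 0.99  # discount factor

# Entropy-regularized TD target: r + gamma * (Q' - alpha * log pi)
target = rewards + gamma * (1.0 - dones) * (q_next - alpha * log_pi_next)

# Polyak (soft) update of target-network weights with tau << 1,
# the standard stabilizer for bootstrapped off-policy training.
tau = 0.005
w_online = rng.normal(size=8)
w_target = rng.normal(size=8)
w_target = tau * w_online + (1.0 - tau) * w_target
```

The entropy term `alpha * log_pi_next` rewards stochastic policies, and the slow target update keeps the bootstrap target stable; both are generic SAC ingredients rather than anything specific to the FlashSAC paper.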