Zhongang Cai (@caizhongang) 's Twitter Profile
Zhongang Cai

@caizhongang

Staff Research Scientist, SenseTime Research.
MLLM-powered 3D Social Characters.
Ph.D., S-Lab/MMLab@NTU, advised by Prof Ziwei Liu and Prof Chen Change Loy.

ID: 1290226925546295296

Link: https://caizhongang.com/ | Joined: 03-08-2020 10:05:35

59 Tweets

845 Followers

183 Following

Gradio (@gradio) 's Twitter Profile Photo

📢 Gradio demo for SMPLer-X: Scaling Up Expressive Human Pose & Shape Estimation 🏆 FIRST foundation model and demo for monocular 4D motion capture. Input a video ➡️ get a video of the 3D reconstructions for each detected human. ✅ smplx files and ✅ mesh files are provided
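For anyone downloading the smplx files the demo provides, a minimal sketch of inspecting them with NumPy. This assumes the common convention of an `.npz` archive with keys such as `betas` and `body_pose`; the key names and shapes below are illustrative assumptions, not the demo's documented format:

```python
import numpy as np

# Hypothetical example: SMPL-X parameters are often exported as an .npz
# archive. The keys and shapes here are assumptions for illustration.
params = {
    "betas": np.zeros(10, dtype=np.float32),           # body shape coefficients
    "body_pose": np.zeros((21, 3), dtype=np.float32),  # per-joint axis-angle rotations
    "global_orient": np.zeros(3, dtype=np.float32),    # root orientation
}
np.savez("smplx_frame.npz", **params)

# Load the archive back and list what was captured
with np.load("smplx_frame.npz") as data:
    for key in data.files:
        print(key, data[key].shape)
```

Real exports may differ (e.g. per-frame stacking for video input), so checking `data.files` first is the safest starting point.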

camenduru (@camenduru) 's Twitter Profile Photo

💃 SMPLer-X: Scaling Up Expressive Human Pose and Shape Estimation 🕺 @gradio Jupyter Notebook 🥳 Thanks to Zhongang Cai ❤ Wanqi Yin ❤ Ailing Zeng ❤ Chen Wei ❤ Qingping Sun ❤ Yanjun Wang ❤ Hui En Pang ❤ Haiyi Mei ❤ Mingyuan Zhang ❤ Lei Zhang ❤ Chen Change Loy ❤ Lei Yang ❤

Zhongang Cai (@caizhongang) 's Twitter Profile Photo

Interested in modeling human-{scene, object} interaction with physically plausible contacts? Don't miss Shashank's talk, happening tomorrow (4pm UTC+8 and 9am UTC-8) at The AI Talks! Join the talk via: mailchi.mp/8eed735e361d/a…

Dreaming Tulpa 🥓👑 (@dreamingtulpa) 's Twitter Profile Photo

Bye bye expensive mocap equipment 👋 AiOS is yet another method that can reconstruct humans from videos and images! This one also supports hands and facial expressions. ttxskk.github.io/AiOS/

Zhongang Cai (@caizhongang) 's Twitter Profile Photo

HuMMan v1.0: 3D Vision Subset (HuMMan-Point) has just been released! ✅ RGB-D @ 30 FPS ✅ Captured with Kinect & iPhone ✅ 340 subjects & 247 motions ✅ SMPL annotations included 🔗Homepage: caizhongang.com/projects/HuMMa…
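As a side note on working with RGB-D captures like HuMMan-Point's, each depth frame can be lifted to a point cloud with standard pinhole back-projection. The sketch below uses made-up placeholder intrinsics, not HuMMan's actual Kinect or iPhone calibration:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) to an (N, 3) point cloud
    using the pinhole camera model. Generic geometry, not a
    HuMMan-specific loader."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Toy example: a flat surface 2 m from the camera, placeholder intrinsics
depth = np.full((4, 4), 2.0)
cloud = depth_to_pointcloud(depth, fx=600.0, fy=600.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (16, 3)
```

With real sensor data, the intrinsics come from the device calibration, and depth is usually stored in millimeters as uint16, so a scale conversion is needed first.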

Zhongang Cai (@caizhongang) 's Twitter Profile Photo

🚀 HuMMan-MoGen is here! 🚀 HuMMan v1.0: Motion Generation Subset features 112,112 fine-grained temporal (by stage) and spatial (by part) text annotations for 179 subjects, 320 actions, and 6264 motion sequences! 🔗Homepage: caizhongang.com/projects/HuMMa…

Zhongang Cai (@caizhongang) 's Twitter Profile Photo

🚀 Announcing GTA-Human II for expressive human pose & shape estimation! Compared to its predecessor, this latest game-playing dataset features: ✅ Multi-person scenes ✅ SMPL-X annotations (upgraded from SMPL) ✅ Point cloud data 🔗 Homepage: caizhongang.com/projects/GTA-H…

AK (@_akhaliq) 's Twitter Profile Photo

GTA-Human II project page: caizhongang.com/projects/GTA-H… built upon GTA-V for expressive human pose and shape estimation. It features multi-person scenes with SMPL-X annotations. In addition to color image sequences, 3D bounding boxes and cropped point clouds (generated from

Zhongang Cai (@caizhongang) 's Twitter Profile Photo

On the path towards making vision-based MoCap fully production-ready, we tested it with a pro animator using AiOS. The results were encouraging: up to a 50% cut in animation time! Check out more below: Homepage: ttxskk.github.io/AiOS/ Demo (SMPL-X): huggingface.co/spaces/ttxskk/…

Alan Jiang (@alan_jjp) 's Twitter Profile Photo

🔥SOLAMI: Step Out, Play Together!🔥 3D C.AI in VR powered by a social VLA model. Key features: - Near real-time immersive multimodal dialogue - Diverse 3D characters🦸🧜‍♀️🧚 - Body language, gameplay, and social interaction Project: solami-ai.github.io

Ziwei Liu (@liuziwei7) 's Twitter Profile Photo

🔥Character AI in VR Space🔥 We present #SOLAMI, a social vision-language-action (VLA) model that enables 3D autonomous characters with *speech and body language* interaction - Project: solami-ai.github.io - Paper (Hugging Face): huggingface.co/papers/2412.00… Thanks AK!

Ziwei Liu (@liuziwei7) 's Twitter Profile Photo

📢Large Motion Model Released📢 The 🔥Large Motion Model🔥 has been open-sourced, serving as a unified multimodal motion generation foundation model - Project: mingyuan-zhang.github.io/projects/LMM.h… - Code: github.com/mingyuan-zhang… - Demo (Gradio): huggingface.co/spaces/mingyua… Thanks to AK!

Zhongang Cai (@caizhongang) 's Twitter Profile Photo

🔥 WHAC is here! Code released + WHAC-A-Mole dataset that features dual motions & moving cameras. Powered by SMPLest-X—ultimate scaling to hit data saturation for the first time with 40 SMPL(-X) datasets! 🚀 🔗 wqyin.github.io/projects/WHAC/ 🔗 caizhongang.com/projects/SMPLe…

Ziwei Liu (@liuziwei7) 's Twitter Profile Photo

🔥Foundation Models for 3D/4D Motion Capture🔥 We present 📸SMPLest-X📸, the ultimate scaling law for expressive human pose and shape estimation. - Project: caizhongang.com/projects/SMPLe… - Paper: arxiv.org/pdf/2501.09782 - Code: github.com/wqyin/SMPLest-X

AK (@_akhaliq) 's Twitter Profile Photo

EgoLife: Towards Egocentric Life Assistant. Introducing EgoLife, a project to develop an egocentric life assistant that accompanies users and enhances personal efficiency through AI-powered wearable glasses

Jingkang Yang @NTU🇸🇬 (@jingkangy) 's Twitter Profile Photo

Thank you AK for sharing! Introducing our newest CVPR 2025 paper EgoLife: Towards Egocentric Life Assistant Homepage: egolife-ai.github.io Blog: egolife-ai.github.io/blog/ Code: github.com/EvolvingLMMs-L… EgoLife is a project focused on building AI-powered egocentric life

Jingkang Yang @NTU🇸🇬 (@jingkangy) 's Twitter Profile Photo

🚀 Introducing EgoLife 👓 How can AI truly assist in daily life—not just for a moment, but across an entire week? huggingface.co/papers/2503.03… We invited 💁🙇‍♀️💁‍♀️6 volunteers 🙎‍♀️🧏‍♀️🙋‍♂️ to live together for a full week, each wearing Project Aria @Meta glasses to capture

Alan Jiang (@alan_jjp) 's Twitter Profile Photo

🎉🎉Our paper SOLAMI has been accepted to CVPR 2025! We have released our training code, data generation pipeline, and VR demo code to support the community. - Project: solami-ai.github.io - Code: github.com/AlanJiang98/SO… #CVPR2025