Suning Huang (@suning_huang)'s Twitter Profile
Suning Huang

@suning_huang

PhD @Stanford | BEng @Tsinghua_Uni. Learning to teach robots to learn. Nice to meet you ;)

ID: 1750326366795661312

Link: https://suninghuang19.github.io/ | Joined: 25-01-2024 01:15:11

28 Tweets

245 Followers

298 Following

Guowei Xu (@kevin_guoweixu)'s Twitter Profile Photo

🚀 Introducing LLaVA-o1: The first visual language model capable of spontaneous, systematic reasoning, similar to GPT-o1! 🔍
🎯 Our 11B model outperforms Gemini-1.5-pro, GPT-4o-mini, and Llama-3.2-90B-Vision-Instruct!
🔑 The key is training on structured data and a novel inference…
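
A minimal sketch of what such "structured" training data could look like: a sample whose target response is decomposed into explicitly tagged reasoning stages the model learns to emit in order. The field names, tags, and content below are my illustrative assumptions, not necessarily the released dataset schema:

```python
# Hypothetical stage-tagged reasoning sample (illustrative only).
# The model is trained to emit each stage before committing to an answer.
sample = {
    "image": "example.jpg",
    "question": "How many red blocks are on the table?",
    "response": (
        "<SUMMARY>The task is to count one object type.</SUMMARY>"
        "<CAPTION>A table holds three red blocks and one blue block.</CAPTION>"
        "<REASONING>Only the red blocks count, and there are three.</REASONING>"
        "<CONCLUSION>3</CONCLUSION>"
    ),
}
```
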
Yuanchen_Ju (@ju_yuanchen)'s Twitter Profile Photo

๐ŸŒWe present DenseMatcher๏ผ ๐Ÿค–๏ธDenseMatcher enables robots to acquire generalizable skills across diverse object categories by only seeing one demo, by finding correspondences between 3D objects even with different types, shapes, and appearances.

Guanya Shi (@guanyashi)'s Twitter Profile Photo

When I was a Ph.D. student at Caltech, Ludwig Schmidt discussed the paper "Do ImageNet Classifiers Generalize to ImageNet?" in his job talk, and it left a deep impression on me that lasts to this day. Basically, they recreated an ImageNet test set and found that the SOTA models circa 2019 had…

Zhengrong Xue (@zhengrongx)'s Twitter Profile Photo

๐‘ซ๐’†๐’Ž๐’๐‘ฎ๐’†๐’ has been accepted to #RSS2025 ๐Ÿฅณ See you in LA this June ๐Ÿ™Œ

Suning Huang (@suning_huang)'s Twitter Profile Photo

Excited to share that MENTOR has been accepted to #ICML2025! See you in Vancouver this July 🤖 x.com/suning_huang/s…

Tyler Lum (@tylerlum23)'s Twitter Profile Photo

🧑🤖 Introducing Human2Sim2Robot! 💪🦾 Learn robust dexterous manipulation policies from just one human RGB-D video. Our Real→Sim→Real framework crosses the human-robot embodiment gap using RL in simulation. #Robotics #DexterousManipulation #Sim2Real 🧵1/7
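
The embodiment-gap claim suggests rewarding the simulated robot for reproducing the *object's* motion from the human video rather than imitating the human hand directly. A toy version of such an object-centric tracking reward (my sketch with made-up parameters, not the paper's code):

```python
import numpy as np

def object_tracking_reward(object_pos_sim: np.ndarray,
                           demo_object_traj: np.ndarray,
                           t: int,
                           sigma: float = 0.05) -> float:
    """RL step reward: high when the simulated object is close to where the
    object was at step t in the human RGB-D demonstration."""
    target = demo_object_traj[min(t, len(demo_object_traj) - 1)]
    dist = np.linalg.norm(object_pos_sim - target)
    return float(np.exp(-(dist / sigma) ** 2))
```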

Jingyun Yang (@yjy0625)'s Twitter Profile Photo

Introducing Mobi-π: Mobilizing Your Robot Learning Policy. Our method:
✈️ enables flexible mobile skill chaining
🪶 without requiring additional policy training data
🏠 while scaling to unseen scenes
🧵↓

Priya Sundaresan (@priyasun_)'s Twitter Profile Photo

How can we move beyond static-arm lab setups and learn robot policies in our messy homes? We introduce HoMeR, an imitation learning agent for in-the-wild mobile manipulation. 🧵1/8

Christopher Agia (@agiachris)'s Twitter Profile Photo

What makes data "good" for robot learning? We argue: it's the data that drives closed-loop policy success! Introducing CUPID 💘, a method that curates demonstrations not by "quality" or appearance, but by how they influence policy behavior, using influence functions. (1/6)
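
As a rough illustration of curating by influence rather than appearance: a first-order influence-style score measures how well a demo's training gradient aligns with the gradient of a closed-loop success objective. This is a generic sketch under my own simplifications (e.g., no Hessian term), not the CUPID implementation:

```python
import torch

def influence_scores(policy, demos, eval_batch, train_loss_fn, eval_loss_fn):
    """Score each demo by the alignment of its training gradient with the
    gradient of an evaluation (policy-success) objective.
    Higher = more helpful; low or negative scores flag demos to prune."""
    params = [p for p in policy.parameters() if p.requires_grad]
    g_eval = torch.autograd.grad(eval_loss_fn(policy, eval_batch), params)
    scores = []
    for demo in demos:
        g_demo = torch.autograd.grad(train_loss_fn(policy, demo), params)
        # First-order influence: inner product of the two gradients.
        scores.append(sum((ge * gd).sum() for ge, gd in zip(g_eval, g_demo)).item())
    return scores
```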

S. Lester Li (@sizhe_lester_li)'s Twitter Profile Photo

Now in Nature! 🚀 Our method learns a controllable 3D model of any robot from vision, enabling single-camera closed-loop control at test time! This includes previously uncontrollable, soft, and bio-inspired robots, potentially lowering the barrier to entry for automation!

Paper:
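
A generic sketch of the single-camera closed-loop control idea (my illustration of the concept in the tweet, not the paper's method): use a learned model to predict the visual/geometric outcome of candidate commands and execute the one that best approaches the goal.

```python
import numpy as np

def closed_loop_step(model, image, goal_features, candidate_commands):
    """One control step. `model(image, command)` is a hypothetical learned
    predictor returning robot features (e.g., 3D keypoints) expected after
    applying `command`; pick the command whose prediction is nearest the goal."""
    errors = [np.linalg.norm(model(image, u) - goal_features)
              for u in candidate_commands]
    return candidate_commands[int(np.argmin(errors))]
```
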
Suning Huang (@suning_huang)'s Twitter Profile Photo

Unfortunately I cannot attend the conference in person this year, but our co-author Guowei Xu will be presenting the paper and answering all your questions! 📜 Poster session: Wed 16 Jul, 11 a.m.–1:30 p.m. PDT, West Exhibition Hall B2-B3, #W-607

Stephen James (@stepjamuk)'s Twitter Profile Photo

๐—œ'๐˜ƒ๐—ฒ ๐—ต๐—ฒ๐—ฎ๐—ฟ๐—ฑ ๐˜๐—ต๐—ถ๐˜€ ๐—ฎ ๐—น๐—ผ๐˜ ๐—ฟ๐—ฒ๐—ฐ๐—ฒ๐—ป๐˜๐—น๐˜†: "๐—ช๐—ฒ ๐˜๐—ฟ๐—ฎ๐—ถ๐—ป๐—ฒ๐—ฑ ๐—ผ๐˜‚๐—ฟ ๐—ฟ๐—ผ๐—ฏ๐—ผ๐˜ ๐—ผ๐—ป ๐—ผ๐—ป๐—ฒ ๐—ผ๐—ฏ๐—ท๐—ฒ๐—ฐ๐˜ ๐—ฎ๐—ป๐—ฑ ๐—ถ๐˜ ๐—ด๐—ฒ๐—ป๐—ฒ๐—ฟ๐—ฎ๐—น๐—ถ๐˜€๐—ฒ๐—ฑ ๐˜๐—ผ ๐—ฎ ๐—ป๐—ผ๐˜ƒ๐—ฒ๐—น ๐—ผ๐—ฏ๐—ท๐—ฒ๐—ฐ๐˜ - ๐˜๐—ต๐—ฒ๐˜€๐—ฒ ๐—ป๐—ฒ๐˜„ ๐—ฉ๐—Ÿ๐—” ๐—บ๐—ผ๐—ฑ๐—ฒ๐—น๐˜€ ๐—ฎ๐—ฟ๐—ฒ ๐—ฐ๐—ฟ๐—ฎ๐˜‡๐˜†!" Let's talk about what's actually

Marion Lepert (@marionlepert)'s Twitter Profile Photo

Introducing Masquerade 🎭: We edit in-the-wild videos to look like robot demos, and find that co-training policies with this data achieves much stronger performance in new environments. ❗Note: No real robots in these videos❗ It's all 💪🏼 ➡️ 🦾 🧵1/6
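
Co-training here presumably means mixing the edited ("masqueraded") human videos with real robot demos in a single imitation objective. A minimal sketch with an assumed mixing weight (hypothetical, not the Masquerade training code):

```python
def cotraining_loss(policy, robot_batch, edited_batch, bc_loss, lam: float = 0.5):
    """Weighted behavior-cloning objective over real robot demos and
    robot-ified human video clips. `bc_loss` is any imitation loss;
    `lam` trades off the two data sources."""
    return lam * bc_loss(policy, robot_batch) + (1.0 - lam) * bc_loss(policy, edited_batch)
```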