Hao Su Lab (@haosulabucsd)'s Twitter Profile
Hao Su Lab

@haosulabucsd

Researching at the frontier of AI on topics of Computer Vision, Computer Graphics, Robotics, Embodied AI, and Reinforcement Learning @UCSanDiego @haosu_twitr

ID: 1399258411049443334

Link: https://cseweb.ucsd.edu/~haosu/ · Joined: 31-05-2021 06:56:28

142 Tweets

2.2K Followers

211 Following

Nicklas Hansen (@ncklashansen):

TD-MPC2 is accepted at #ICLR2024 as a spotlight presentation with review scores of 8/8/8/8! Since its release, we have open-sourced several new features:
- Multi-modal observations (pixels/state)
- Multi-GPU training
- A new test-time regularizer for offline RL
- 1.5x faster training speed

Ted Xiao (@xiao_ted):

Happy to share that RT-Trajectory has been accepted as a Spotlight (Top 5%) at #ICLR2024! This was my first last-author project; it was a ton of fun collaborating with a strong team led by Jiayuan Gu 🥳 Blogpost: deepmind.google/discover/blog/… Website: rt-trajectory.github.io 🧵⬇️

Hao Su (@haosu_twitr):

Excited to share that we have 5 submissions on embodied AI accepted at #ICLR2024 with two spotlights! Through these works, we improve the versatility, generalizability, effectiveness, and training efficiency of embodied AI models. Check them out in the thread below:

Hao Su (@haosu_twitr):

Check out Xinyue Wei's latest work, which presents a transformer-based model that reconstructs a high-fidelity 3D mesh from 4 (sparse) input images in less than one second! arxiv.org/abs/2404.12385

Hao Su (@haosu_twitr):

Check out DG-Mesh from Isabella Liu, which reconstructs time-consistent, high-quality dynamic meshes with flexible topology changes from monocular videos. liuisabella.com/DG-Mesh/

Stone Tao (@stone_tao):

ManiSkill sneak peek 3: lots of new robots to use! Whether it's mobile manipulation, humanoids, quadrupeds, or even tactile dexterous hands (see the Shadow Hand at the bottom with red tactile sensors), we have a ton of new domains being added to try out on GPU state/visual sim

Stone Tao (@stone_tao):

📢 ManiSkill 3 beta is out! Simulate everything everywhere all at once 🥯  

- 18K RGBD FPS on 1 GPU, 3K on Colab!  
- Diverse parallel GPU sim  
- Tons of new robots/tasks

All open-sourced: github.com/haosulab/ManiS…
Photo: MS3 Tasks w/ scenes from AI2THOR and ReplicaCAD 
🧵(1/6)

Xuanlin Li (Simon) (@xuanlinli2):

Scalable, reproducible, and reliable robotic evaluation remains an open challenge, especially in the age of generalist robot foundation models. Can *simulation* effectively predict *real-world* robot policy performance & behavior? Presenting SIMPLER!👇 simpler-env.github.io

Stone Tao (@stone_tao):

Don’t have a real robot/setup but want to evaluate policies trained on real-world datasets? Check out SIMPLER: fast, safe, and reliable evaluation of real-robot policies in sim via ManiSkill 2. The ManiSkill 3 beta will port SIMPLER over soon, so stay tuned!

Hao Su (@haosu_twitr):

#ICRA2024 Linghao Chen will present our differentiable rendering-based hand-eye calibration method, EasyHec! May 16, 13:30 @ CC-313 (oral); May 16, 16:30–18:… (poster). It produces accurate calibration results in a fully automatic manner! ootts.github.io/easyhec/ @Lingh…

Nicklas Hansen (@ncklashansen):

🥳 Excited to share: Hierarchical World Models as Visual Whole-Body Humanoid Controllers. Joint work with Jyothir S V, Vlad Sobal, Yann LeCun, Xiaolong Wang, and Hao Su. Our method, Puppeteer, learns high-dimensional humanoid policies that look natural, in an entirely data-driven way! 🧵👇(1/n)

Hao Su (@haosu_twitr):

Join us at our first workshop on 3D Foundation Models @CVPR2024, June 18 in Summit 434, starting at 8:50AM!

We have fantastic speakers to discuss the progress and prospects in 3D foundation models. 

Check out more details at 3dfm.github.io

Yuchen Zhou (@yuchen010807):

While the Segment Anything Model (SAM) greatly improves 2D segmentation annotation efficiency, is there a foundation model that works for 3D point clouds and meshes like SAM does for images? Introducing Point-SAM, a promptable 3D segmentation foundation model! 👇 point-sam.github.io

Xinyue Wei (@sarahweii):

🚀 Thrilled to announce the release of the reproduced MeshLRM demo! 🎉 Generate textured 3D meshes from one or more unposed images in seconds. Check it out: huggingface.co/spaces/sudo-ai…

Stone Tao (@stone_tao):

We now have an initial ManiSkill3 paper out on arXiv that you can cite, just in time for the ICLR submission deadline 😁 arxiv.org/abs/2410.00425

Xuanlin Li (Simon) (@xuanlinli2):

SIMPLER will be presented at #CoRL2024 at 4pm on Nov 8 (Section 4)! While I won't be in person due to visa constraints, Ted Xiao, Karl Pertsch, and Oier Mees will be presenting the paper and are happy to chat about it in person!

Xuanlin Li (Simon) (@xuanlinli2):

Learning bimanual, contact-rich robot manipulation policies that generalize over diverse objects has long been a challenge. Excited to share our work: Planning-Guided Diffusion Policy Learning for Generalizable Contact-Rich Bimanual Manipulation! glide-manip.github.io 🧵1/n

Arth Shukla @ ICLR 2025 (@arth_shukla):

📢 Introducing ManiSkill-HAB: A benchmark for low-level manipulation in home rearrangement tasks!
- GPU-accelerated simulation
- Extensive RL/IL baselines
- Vision-based, whole-body control robot dataset

All open-sourced: arth-shukla.github.io/mshab
🧵(1/5)

Adria Lopez (@alopeze99):

🤖Introducing DEMO3: our new model-based RL framework for multi-stage robotic manipulation from visual inputs and sparse rewards. 🧵🔽

📜 Paper: arxiv.org/abs/2503.01837
🌍 Project Page: adrialopezescoriza.github.io/demo3/
💻 Code: github.com/adrialopezesco…