Isabella Liu (@isabella__liu)'s Twitter Profile
Isabella Liu

@isabella__liu

CS PhD @ UC San Diego

ID: 1597877844

Link: http://liuisabella.com · Joined: 16-07-2013 08:44:02

84 Tweets

718 Followers

290 Following

Isabella Liu (@isabella__liu)'s Twitter Profile Photo

Excited to be at #ICLR2025 in person this year! Looking forward to reconnecting and making new friends.🤩

Come chat with us about Dynamic Gaussians Mesh at poster #97 tomorrow (4/26, 3–5:30pm). See you there!🥳

Website: liuisabella.com/DG-Mesh
Stone Tao (@stone_tao)'s Twitter Profile Photo

I’ll be at the robot learning workshop today and giving an oral talk at 9:15 AM on ManiSkill3 in rooms Garnet 216/217 at ICLR. Come see the crazy things you can do with fast sim + rendering, like fast visual RL and zero-shot RGB sim2real! ManiSkill was also accepted at RSS!

Hanwen Jiang (@hanwenjiang1)'s Twitter Profile Photo

Supervised learning has held 3D vision back for too long. Meet RayZer — a self-supervised 3D model trained with zero 3D labels:
❌ No supervision of camera & geometry
✅ Just RGB images
And the wild part? RayZer outperforms supervised methods (since 3D labels from COLMAP are noisy).
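
A toy PyTorch sketch of the self-supervised recipe described above: predict cameras as ray bundles from RGB alone and train with only a photometric re-rendering loss. All module names and sizes here are my illustrative assumptions, not RayZer's actual architecture:

```python
import torch
import torch.nn as nn

class TinyRayZerSketch(nn.Module):
    """Hypothetical sketch of RayZer-style self-supervision: predict
    cameras (as ray bundles) and features from RGB frames, then train
    with a photometric loss only. No COLMAP poses, no depth labels."""
    def __init__(self, dim=256):
        super().__init__()
        self.encoder = nn.Conv2d(3, dim, kernel_size=8, stride=8)  # patchify images
        self.ray_head = nn.Linear(dim, 6)                          # per-patch ray origin + direction
        self.renderer = nn.Linear(dim + 6, 3)                      # stand-in for the view decoder

    def forward(self, images):                      # images: (B, N, 3, H, W)
        B, N, C, H, W = images.shape
        feats = self.encoder(images.flatten(0, 1))  # (B*N, dim, h, w)
        tokens = feats.flatten(2).transpose(1, 2)   # (B*N, h*w, dim)
        rays = self.ray_head(tokens)                # predicted ray bundle per patch
        rgb = self.renderer(torch.cat([tokens, rays], dim=-1))
        return rgb.view(B, N, H // 8, W // 8, 3)

model = TinyRayZerSketch()
views = torch.rand(2, 4, 3, 64, 64)
pred = model(views)
target = torch.nn.functional.avg_pool2d(views.flatten(0, 1), 8)
target = target.permute(0, 2, 3, 1).reshape(pred.shape)
loss = (pred - target).pow(2).mean()   # photometric loss is the only supervision
loss.backward()
```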

Xuxin Cheng (@xuxin_cheng)'s Twitter Profile Photo

Meet 𝐀𝐌𝐎 — our universal whole‑body controller that unleashes the 𝐟𝐮𝐥𝐥 kinematic workspace of humanoid robots to the physical world. AMO is a single policy trained with RL + Hybrid Mocap & Trajectory‑Opt. Accepted to #RSS2025. Try our open models & more 👉

Xiaolong Wang (@xiaolonw)'s Twitter Profile Photo

On my way to ICRA! Our group will be presenting Mobile-TeleVision (below) and WildLMA (wildlma.github.io). Looking forward to chatting!

Yi Zhou (@papagina_yi)'s Twitter Profile Photo

🚀 Struggling with the lack of high-quality data for AI-driven human-object interaction research? We've got you covered! Introducing HUMOTO, a groundbreaking 4D dataset for human-object interaction, developed with a combination of wearable motion capture, SOTA 6D pose

Bo Ai (@boai0110)'s Twitter Profile Photo

🧠 Can a single robot policy control many, even unseen, robot bodies? We scaled training to 1000+ embodiments and found: More training bodies → better generalization to unseen ones. We call it: Embodiment Scaling Laws. A new axis for scaling. 🔗 embodiment-scaling-laws.github.io 🧵👇
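
A scaling law of this kind is typically a power law, which is a straight line in log-log space. A minimal fitting illustration with made-up numbers (not the paper's data; the variable names are mine):

```python
import numpy as np

# Hypothetical illustration of an embodiment scaling law: fit
# error(N) ≈ a * N**slope, where N is the number of distinct
# training embodiments. The eval errors below are invented.
n_bodies = np.array([10, 30, 100, 300, 1000])
unseen_err = np.array([0.52, 0.41, 0.30, 0.24, 0.18])

# Power laws are linear in log-log space: log err = log a + slope * log N.
slope, log_a = np.polyfit(np.log(n_bodies), np.log(unseen_err), 1)
print(f"fitted exponent: {slope:.3f}, prefactor: {np.exp(log_a):.3f}")
print(f"extrapolated error at N=3000: {np.exp(log_a) * 3000**slope:.3f}")
```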

yisha (@yswhynot)'s Twitter Profile Photo

For years, I’ve been tuning parameters for robot designs and controllers on specific tasks. Now we can automate this on dataset-scale. Introducing Co-Design of Soft Gripper with Neural Physics - a soft gripper trained in simulation to deform while handling load.

Chong Zeng (@iam_ncj)'s Twitter Profile Photo

What if a Transformer could render?
Not text → image.
But mesh → image — with global illumination.

No rasterizers. No ray-tracers. Just a Transformer without per-scene training.

RenderFormer does exactly that.

#SIGGRAPH2025 
🔗microsoft.github.io/renderformer
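
A toy sketch of the idea: triangles become tokens, camera rays become queries, and attention stands in for rasterization or ray tracing. All names and sizes here are my assumptions, not RenderFormer's actual design:

```python
import torch
import torch.nn as nn

class MiniMeshRenderer(nn.Module):
    """Toy sketch of a transformer that maps mesh -> image: each
    triangle is a token carrying geometry and material; camera rays
    are query tokens; cross-attention plays the role of light
    transport. Purely illustrative, not the paper's architecture."""
    def __init__(self, dim=128):
        super().__init__()
        self.tri_embed = nn.Linear(9 + 3, dim)   # 3 vertices * xyz + RGB albedo
        self.ray_embed = nn.Linear(6, dim)       # ray origin + direction
        layer = nn.TransformerDecoderLayer(dim, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.to_rgb = nn.Linear(dim, 3)

    def forward(self, triangles, rays):
        tri_tokens = self.tri_embed(triangles)          # (B, T, dim)
        ray_tokens = self.ray_embed(rays)               # (B, P, dim)
        pixels = self.decoder(ray_tokens, tri_tokens)   # rays attend to the scene
        return self.to_rgb(pixels)                      # (B, P, 3)

scene = torch.rand(1, 200, 12)      # 200 triangles with toy attributes
rays = torch.rand(1, 32 * 32, 6)    # one 32x32 image worth of rays
image = MiniMeshRenderer()(scene, rays).view(1, 32, 32, 3)
```
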
Tianyuan Zhang (@tianyuanzhang99)'s Twitter Profile Photo

Bored of linear recurrent memories (e.g., linear attention) and want a scalable, nonlinear alternative? Our new paper “Test-Time Training Done Right” proposes LaCT (Large Chunk Test-Time Training) — a highly efficient, massively scalable nonlinear memory with: 💡 Pure PyTorch
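
A minimal pure-PyTorch sketch of the large-chunk test-time-training idea as I understand it; the update rule and nonlinearity are simplified stand-ins, not the paper's LaCT code:

```python
import torch

# Fast-weight matrix W acts as a nonlinear memory. For each LARGE
# chunk of tokens: (1) take one gradient step on a reconstruction
# loss so W memorizes the chunk, then (2) read out queries with the
# updated W. Big chunks keep the inner update GPU-friendly.
def lact_step(W, keys, values, queries, lr=0.1):
    W = W.detach().requires_grad_(True)
    pred = torch.tanh(keys @ W)              # nonlinear write/read map
    loss = (pred - values).pow(2).mean()     # memorize this chunk
    (grad,) = torch.autograd.grad(loss, W)
    W = (W - lr * grad).detach()             # updated fast weights
    return torch.tanh(queries @ W), W        # read with the new memory

d = 64
W = torch.zeros(d, d)
for _ in range(4):                           # a stream of large chunks
    chunk_k, chunk_v, chunk_q = (torch.randn(2048, d) for _ in range(3))
    out, W = lact_step(W, chunk_k, chunk_v, chunk_q)
```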

Xiaolong Wang (@xiaolonw)'s Twitter Profile Photo

The code of GSPN #CVPR2025 is released! We proposed a new sqrt(N)-complexity attention mechanism that enables efficient high-resolution image generation. We can generate 8K images with a 42× speedup compared to self-attention in StableDiffusionXL! Code: github.com/NVlabs/GSPN
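
A conceptual sketch of where sqrt(N) complexity can come from in a line-scan propagation scheme: for an H×W image with N = H·W pixels, propagating row by row leaves only H ≈ sqrt(N) sequential steps instead of N as in full self-attention. The toy recurrence below is my own stand-in, not the released GSPN kernels:

```python
import torch

def row_scan(x, gate):
    """Mix each row with the previous row via a gated linear
    recurrence: only H sequential steps for an H x W image."""
    B, C, H, W = x.shape
    out = [x[:, :, 0]]
    for i in range(1, H):                    # H ≈ sqrt(N) sequential steps
        out.append(gate[:, :, i] * out[-1] + x[:, :, i])
    return torch.stack(out, dim=2)

x = torch.randn(1, 8, 64, 64)                # N = 4096 pixels, 64 steps
y = row_scan(x, torch.sigmoid(torch.randn_like(x)))
```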

Zhao Dong (@flycooler_zd)'s Twitter Profile Photo

🚀 Excited to announce our CVPR 2025 Workshop:
3D Digital Twin: Progress, Challenges, and Future Directions
🗓 June 12, 2025 · 9:00 AM–5:00 PM
📢 Incredible lineup: Richard Newcombe, Andrea Vedaldi (Visual Geometry Group, VGG), Hao (Richard) Zhang, Qianqian Wang, Dr. Xiaoshuai Zhang (Hillbot),
Xueyan Zou (@xyz2maureen)'s Twitter Profile Photo

Come join us for the June 12th AM sessions!
Location and time are shown in the image 😃
Thanks to the speakers and organizers : )
#CVPR2025
Jiteng Mu (@jitengmu)'s Twitter Profile Photo

🥳 EditAR code is released! Come check it out. 👉 Presenting EditAR at #CVPR2025! (Friday afternoon, Jun 13, 4:00pm-6:00pm, Hall D #242) Code: github.com/JitengMu/EditAR Project: jitengmu.github.io/EditAR

Anpei Chen (@anpeic)'s Twitter Profile Photo

📢 We’re presenting two posters at #CVPR2025 today! 🗓️ June 13 | 🕓 16:00–18:00 | 📍 Exhibit Hall D 🔹 Genfusion — Booth 61 🔹 Feat2GS — Booth 93 Come by to chat about generative 3D, geometry, and beyond. See you there! #CVPR25 #3Dvision #AI

Zhiyang (Frank) Dou (@frankzydou)'s Twitter Profile Photo

Check out 🌟Vid2Sim: Generalizable, Video-based Reconstruction of Appearance, Geometry & Physics for Mesh-Free Simulation #CVPR2025, from Lingjie Liu’s lab at UPenn. Congrats to Chuhao Chen! Vid2Sim aims to achieve system identification by reconstructing geometry, appearance,

yisha (@yswhynot)'s Twitter Profile Photo

🚀Heading to #RSS2025? Swing by EEB 248 on Wednesday, June 25 at 3:30 PM for a live demo of our data-driven, co-design soft gripper 🥢 at the workshop Robot Hardware-Aware Intelligence!

Jianglong Ye (@jianglong_ye)'s Twitter Profile Photo

How to generate billion-scale manipulation demonstrations easily? Let us leverage generative models! 🤖✨ We introduce Dex1B, a framework that generates 1 BILLION diverse dexterous hand demonstrations for both grasping 🖐️and articulation 💻 tasks using a simple C-VAE model.
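
A minimal conditional VAE sketch showing how a single generator can be sampled at scale per condition. Dimensions and feature semantics below are my illustrative assumptions, not Dex1B's actual model:

```python
import torch
import torch.nn as nn

class DemoCVAE(nn.Module):
    """Toy C-VAE in the spirit of a demonstration generator: the
    condition c could encode the object/scene, and x a dexterous-hand
    demonstration (e.g., joint targets). Layer sizes are illustrative."""
    def __init__(self, x_dim=24, c_dim=32, z_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + c_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 2 * z_dim))   # -> mu, logvar
        self.dec = nn.Sequential(nn.Linear(z_dim + c_dim, 128), nn.ReLU(),
                                 nn.Linear(128, x_dim))
        self.z_dim = z_dim

    def forward(self, x, c):
        mu, logvar = self.enc(torch.cat([x, c], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        recon = self.dec(torch.cat([z, c], -1))
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
        return recon, kl

    @torch.no_grad()
    def sample(self, c):
        # Scale up generation by drawing many z per condition.
        z = torch.randn(c.shape[0], self.z_dim)
        return self.dec(torch.cat([z, c], -1))

model = DemoCVAE()
x, c = torch.randn(16, 24), torch.randn(16, 32)
recon, kl = model(x, c)
loss = (recon - x).pow(2).mean() + 1e-3 * kl   # reconstruction + KL
loss.backward()
```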