Georgios Pavlakos (@geopavlakos) 's Twitter Profile
Georgios Pavlakos

@geopavlakos

Assistant Professor at UT Austin @UTCompSci | Working on Computer Vision and Machine Learning

ID: 1272527553652236288

Link: https://geopavlakos.github.io/ | Joined: 15-06-2020 13:53:22

175 Tweets

2.2K Followers

267 Following

Yao Feng (@yaofeng1995) 's Twitter Profile Photo

Thrilled to see a full room of attendees—thank you all for joining our workshop! Special thanks to our amazing speakers: Siyu Tang @VLG-ETHZ, Xavier Puig, Jingyi Yu, Michael Black and Angjoo Kanazawa. And a big thank you to all the authors for the poster presentation!

Justin Kerr (@justkerrding) 's Twitter Profile Photo

Robot See, Robot Do allows you to teach a robot articulated manipulation with just your hands and a phone! RSRD imitates from 1) an object scan and 2) a human demonstration video, reconstructing 3D motion to plan a robot trajectory. robot-see-robot-do.github.io #CoRL2024 (Oral)

Ivona Najdenkoska (@ivonajdenkoska) 's Twitter Profile Photo

Excited to share TULIP🌷, a method to extend the caption length of CLIP-like models for long-caption understanding. By design, CLIP’s positional encodings limit inputs to 77 tokens. Here’s how TULIP breaks this barrier: 🧵 📜arxiv.org/pdf/2410.10034 💻 github.com/ivonajdenkoska…
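
For readers curious about the 77-token constraint the tweet refers to, here is a minimal plain-PyTorch sketch. The positional table, the dimensions, and the interpolation step are all illustrative assumptions — one common way to stretch a learned positional embedding — and not TULIP's actual method (see the linked paper for that).

import torch
import torch.nn.functional as F

CLIP_CONTEXT = 77      # CLIP's fixed text context length
EMBED_DIM = 512        # hypothetical text embedding width
NEW_CONTEXT = 248      # hypothetical longer caption length

# Stand-in for a pretrained learned positional table, shape (77, 512).
# Tokens past position 77 have no embedding, hence the hard caption limit.
pos_embed = torch.randn(CLIP_CONTEXT, EMBED_DIM)

# One generic workaround: linearly interpolate the table along the sequence
# axis so it covers more positions (again, not necessarily what TULIP does).
stretched = F.interpolate(
    pos_embed.T.unsqueeze(0),   # (1, 512, 77)
    size=NEW_CONTEXT,
    mode="linear",
    align_corners=True,
).squeeze(0).T                  # (248, 512)

print(stretched.shape)          # torch.Size([248, 512])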

Hanwen Jiang (@hanwenjiang1) 's Twitter Profile Photo

We will present CoFie at #NeurIPS2024 tomorrow - a compact geometry-aware surface representation. CoFie disentangles the transformation of local patches and explicitly models it in SE(3), aligning local patches and reducing their complexity. Location: West Ballroom A-D #6900

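For context on "explicitly modeling the transformation of local patches in SE(3)": the NumPy sketch below estimates a per-patch rigid frame via PCA and maps points into it, so the residual patch geometry is centered and axis-aligned. The frame construction and all names here are illustrative assumptions, not CoFie's implementation.

import numpy as np

def patch_frame(points: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Estimate a rigid SE(3) frame (R, t) for a patch via PCA of its points."""
    t = points.mean(axis=0)                        # frame origin = patch centroid
    centered = points - t
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    R = vt.T                                       # columns are the principal axes
    if np.linalg.det(R) < 0:                       # keep a right-handed frame
        R[:, -1] *= -1
    return R, t

def to_local(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map world-space points into the patch frame: R^T (p - t), row-vector form."""
    return (points - t) @ R

# Toy patch: a thin, noisy disc placed somewhere in world coordinates.
rng = np.random.default_rng(0)
pts = rng.normal(size=(256, 3)) * np.array([1.0, 1.0, 0.02]) + np.array([5.0, -2.0, 3.0])

R, t = patch_frame(pts)
local = to_local(pts, R, t)
print(local.mean(axis=0))   # ~zero: the patch's pose is now carried by (R, t) alone
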
Hanwen Jiang (@hanwenjiang1) 's Twitter Profile Photo

💥 Think more real data is needed for scene reconstruction? Think again! Meet MegaSynth: scaling up feed-forward 3D scene reconstruction with synthesized scenes. In 3 days, it generates 700K scenes for training—70x larger than real data! ✨ The secret? Reconstruction is mostly

Qixing Huang (@qixing_huang) 's Twitter Profile Photo

Very happy that AtlasGaussians was accepted to ICLR 25 (openreview.net/forum?id=H2Gxi…). I did very little; the students, in particular the first author Haitao Yang (yanghtr.github.io), came up with the idea. Haitao is graduating soon. Also, my second published paper with

Georgios Pavlakos (@geopavlakos) 's Twitter Profile Photo

Atlas Gaussians will be presented as a 🎉Spotlight🎉 at ICLR 2025! 🥳 Huge congratulations to Haitao Yang (yanghtr.github.io) for this amazing work! Project Page: yanghtr.github.io/projects/atlas…

Ethan Weber (@ethanjohnweber) 's Twitter Profile Photo

I'm excited to present "Fillerbuster: Multi-View Scene Completion for Casual Captures"! This is work with my amazing collaborators Norman Müller, Yash Kant, Vasu Agrawal, Michael Zollhoefer, Angjoo Kanazawa, Christian Richardt during my internship at Meta Reality Labs. ethanweber.me/fillerbuster/

Georgios Pavlakos (@geopavlakos) 's Twitter Profile Photo

Make sure to check out Hanwen Jiang's latest work! 🚀 We introduce RayZer, a self-supervised model for novel view synthesis. We use zero 3D supervision, yet we outperform supervised methods! Some surprising and exciting results inside! 🔍🔥

Qianli Ma (@qianli_m) 's Twitter Profile Photo

The 2nd 3D HUMANS workshop is back at #CVPR2025! 📍Join us on June 12 afternoon in Nashville for a 2025 perspective on 3D human perception, reconstruction & synthesis. 🖼️ Got a CVPR paper on 3D humans? Nominate it to be featured in our poster session! 👉 tinyurl.com/3d-humans-2025

Hanwen Jiang (@hanwenjiang1) 's Twitter Profile Photo

🔍 3D is not just pixels—we care about geometry, physics, topology, and functions. But how to balance these inductive biases with scalable learning? 👀 Join us at Ind3D workshop #CVPR2025 (June 12, afternoon) for discussions on the future of 3D models! 🌐 ind3dworkshop.github.io/cvpr2025

Yi Zhou (@papagina_yi) 's Twitter Profile Photo

🚀 Struggling with the lack of high-quality data for AI-driven human-object interaction research? We've got you covered! Introducing HUMOTO, a groundbreaking 4D dataset for human-object interaction, developed with a combination of wearable motion capture, SOTA 6D pose