Michael Zollhoefer (@mzollhoefer) 's Twitter Profile
Michael Zollhoefer

@mzollhoefer

I am a Director, Research Scientist in the Codec Avatar Lab (@RealityLabs @Meta) in Pittsburgh working on fully immersive remote communication and interaction.

ID: 1355287783326310402

Link: http://zollhoefer.com/ · Joined: 29-01-2021 22:52:53

108 Tweets

3.3K Followers

236 Following

AK (@_akhaliq) 's Twitter Profile Photo

Drivable 3D Gaussian Avatars paper page: huggingface.co/papers/2311.08… present Drivable 3D Gaussian Avatars (D3GA), the first 3D controllable model for human bodies rendered with Gaussian splats. Current photorealistic drivable avatars require either accurate 3D registrations during

Jia-Bin Huang (@jbhuang0604) 's Twitter Profile Photo

Looking forward to the next seminar by Michael Zollhoefer! Michael and his team at Reality Labs Research have developed insanely cool technology on Codec Telepresence! Students at UMD Department of Computer Science UMD Center for Machine Learning UMIACS, don't miss the talk!

Michael Zollhoefer (@mzollhoefer) 's Twitter Profile Photo

The Codec Avatars Lab in Pittsburgh is looking for summer Research Scientist interns. Reach out to me if you are interested in working on novel neural reconstruction/rendering approaches, digital humans, view synthesis, generative models, or audio research. metacareers.com/jobs/336581275…

AK (@_akhaliq) 's Twitter Profile Photo

HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces paper page: huggingface.co/papers/2312.03… Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render. One reason is that they make use of volume rendering, thus requiring

Christian Richardt (@c_richardt) 's Twitter Profile Photo

HybridNeRF combines the best of NeRF with SDFs using a hybrid surface–volume representation. 🤩 State-of-the-art visual quality 🚀 40+ FPS at 2K×2K VR on a single 4090 GPU (almost 10× faster than VR-NeRF!) Project: haithemturki.com/hybrid-nerf/ Paper: arxiv.org/abs/2312.03160
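The speed argument behind HybridNeRF can be illustrated with a toy sketch (hypothetical simplified code, not the paper's implementation): NeRF-style volume rendering composites many density samples per ray, whereas a surface representation (an SDF zero crossing) needs roughly one sample, which is where the speedup comes from.

```python
import numpy as np

def composite(densities, colors, deltas):
    # Standard NeRF quadrature: alpha-composite per-sample densities along a ray.
    alphas = 1.0 - np.exp(-densities * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))  # transmittance
    weights = alphas * trans
    return (weights[:, None] * colors).sum(axis=0)

# 128 samples along one ray: an opaque red surface halfway along it.
n = 128
densities = np.zeros(n)
densities[64:] = 50.0                      # dense (opaque) region behind the surface
colors = np.tile(np.array([1.0, 0.0, 0.0]), (n, 1))
deltas = np.full(n, 1.0 / n)               # uniform step size
pixel = composite(densities, colors, deltas)
# All 128 samples were evaluated, but almost all the weight sits at the
# surface crossing — the work a surface representation does in ~1 sample.
```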

Matthias Niessner (@mattniessner) 's Twitter Profile Photo

(1/3) ๐•๐จ๐ฑ๐ž๐ฅ ๐‡๐š๐ฌ๐ก๐ข๐ง๐  received the Test-of-Time Award SIGGRAPH Asia โžก๏ธ Hong Kong! What an honor together with Michael Zollhoefer Shahram Izadi @mcstammi Voxel hashing is a sparse & efficient data structure for 3D scenes/grids! What's the core idea and why is still relevant today? โฌ‡๏ธ

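The core idea the thread alludes to can be sketched in a few lines (a hypothetical, simplified Python illustration, not the paper's GPU code): quantize 3D points to integer voxel coordinates and hash them into a sparse table, so memory is only allocated for occupied voxels near surfaces rather than for a dense grid.

```python
def voxel_hash(x: int, y: int, z: int, table_size: int) -> int:
    # Spatial hash over integer voxel coordinates; the three large primes
    # are the ones commonly used in the spatial-hashing literature.
    p1, p2, p3 = 73856093, 19349669, 83492791
    return ((x * p1) ^ (y * p2) ^ (z * p3)) % table_size

class SparseVoxelGrid:
    # A Python dict stands in for the GPU hash table that would use
    # voxel_hash() with explicit collision handling.
    def __init__(self, voxel_size: float = 0.01):
        self.voxel_size = voxel_size
        self.blocks = {}  # voxel coords -> stored data (e.g. a TSDF value)

    def _key(self, point):
        # Quantize a continuous 3D point to integer voxel coordinates.
        return tuple(int(c // self.voxel_size) for c in point)

    def insert(self, point, value):
        self.blocks[self._key(point)] = value

    def query(self, point):
        return self.blocks.get(self._key(point))

grid = SparseVoxelGrid(voxel_size=0.05)
grid.insert((0.12, 0.07, 0.33), 0.8)
```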
AK (@_akhaliq) 's Twitter Profile Photo

SpecNeRF: Gaussian Directional Encoding for Specular Reflections paper page: huggingface.co/papers/2312.13… Neural radiance fields have achieved remarkable performance in modeling the appearance of 3D scenes. However, existing approaches still struggle with the view-dependent

AK (@_akhaliq) 's Twitter Profile Photo

Meta announces URHand Universal Relightable Hands paper page: huggingface.co/papers/2401.05… model is a high-fidelity Universal prior for Relightable Hands built upon light-stage data. It generalizes to novel viewpoints, poses, identities, and illuminations, which enables quick

Christian Richardt (@c_richardt) 's Twitter Profile Photo

Nerfstudio 1.0 is out and it includes support for our Eyeful Tower dataset containing 11 high-res, room-scale HDR scenes! 📷 unprecedented density: 28,572 photos 🔍 unprecedented resolution: up to 50 MP Learn more: github.com/facebookresear…

Matthias Niessner (@mattniessner) 's Twitter Profile Photo

(1/3) Can we turn text-to-image models into photorealistic 3D generators? ViewDiff (#CVPR2024) produces realistic, multi-view consistent images of real-world 3D objects in authentic surroundings. Website lukashoel.github.io/ViewDiff Video youtu.be/SdjoCqHzMMk How does it work?

Matthias Niessner (@mattniessner) 's Twitter Profile Photo

(1/2) How to accelerate the reconstruction of 3D Gaussian Splatting? 3DGS-LM replaces the commonly used Adam optimizer with a tailored Levenberg-Marquardt (LM) solver. => We are 30% faster than 3DGS for the same quality. lukashoel.github.io/3DGS-LM/ youtu.be/tDiGuGMssg8

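For readers unfamiliar with the optimizer swap: Levenberg-Marquardt solves damped normal equations built from the Jacobian of the residuals, rather than taking first-order steps like Adam. A minimal sketch on a toy curve-fitting problem (hypothetical illustration, unrelated to the 3DGS-LM codebase):

```python
import numpy as np

# Toy least-squares problem: fit y = a * exp(b * x) to noiseless data.
def residuals(params, x, y):
    a, b = params
    return a * np.exp(b * x) - y

def jacobian(params, x):
    a, b = params
    e = np.exp(b * x)
    return np.stack([e, a * x * e], axis=1)  # d r / d a, d r / d b

def lm_fit(params, x, y, iters=50, lam=1e-3):
    for _ in range(iters):
        r = residuals(params, x, y)
        J = jacobian(params, x)
        # Damped normal equations: (J^T J + lam * I) delta = -J^T r
        delta = np.linalg.solve(J.T @ J + lam * np.eye(len(params)), -J.T @ r)
        trial = params + delta
        if np.sum(residuals(trial, x, y) ** 2) < np.sum(r ** 2):
            params, lam = trial, lam * 0.5  # accept: relax damping (Gauss-Newton-like)
        else:
            lam *= 10.0                     # reject: damp harder (gradient-descent-like)
    return params

x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x)                   # ground truth: a = 2.0, b = 1.5
fitted = lm_fit(np.array([1.0, 1.0]), x, y)
```

The adaptive damping is what makes LM robust: far from the optimum it behaves like damped gradient descent, near it like Gauss-Newton with fast convergence — the trade-off 3DGS-LM exploits against a first-order optimizer.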
Michael Zollhoefer (@mzollhoefer) 's Twitter Profile Photo

Looking for a research internship in 2025? The Social AI Research group at Meta's Codec Avatars Lab in Pittsburgh is offering topics such as neural rendering, body tracking, motion synthesis, and animation from multimodal sensor inputs. Link: metacareers.com/jobs/822652889…

Ethan Weber (@ethanjohnweber) 's Twitter Profile Photo

I'm excited to present "Fillerbuster: Multi-View Scene Completion for Casual Captures"! This is work with my amazing collaborators Norman Müller, Yash Kant, Vasu Agrawal, Michael Zollhoefer, Angjoo Kanazawa, Christian Richardt during my internship at Meta Reality Labs. ethanweber.me/fillerbuster/

AI at Meta (@aiatmeta) 's Twitter Profile Photo

🚀 New from Meta FAIR: today we're introducing Seamless Interaction, a research project dedicated to modeling interpersonal dynamics. The project features a family of audiovisual behavioral models, developed in collaboration with Meta's Codec Avatars lab + Core AI lab, that

Michael Zollhoefer (@mzollhoefer) 's Twitter Profile Photo

My research group in the Codec Avatars lab at Meta contributed by developing the technology required to display the outputs of FAIR SeamlessNext's dyadic motion models as 3D Full-body Codec Avatars.