Pratul Srinivasan (@_pratul_)'s Twitter Profile
Pratul Srinivasan

@_pratul_

Research Scientist at @GoogleAI. UC Berkeley PhD 2020 + Duke 2014. 3D computer vision + graphics (NeRF!)

ID: 359730197

Link: https://pratulsrinivasan.github.io/ · Joined: 22-08-2011 02:32:30

73 Tweets

1.1K Followers

105 Following

Jon Barron (@jon_barron)

A huge problem with modern radiance field technology (DreamFusion, NeRF, 3DGS, etc) is that the models you recover are damn near impossible for an artist (or an AI) to manually retexture. We fixed this! Excellent new paper from Pratul Srinivasan & co: pratulsrinivasan.github.io/nuvo/

Radiance Fields (@radiancefields)

🚀 Exciting news from Google Research! Their latest innovation, Nuvo, is changing the game in UV Mapping. Nuvo tackles complex geometries with neural field-based UV mapping, paving the way for more flexible and editable NeRFs and Generative outputs. 🔗neuralradiancefields.io/nuvo-revolutio…

Bilawal Sidhu (@bilawalsidhu)

NeRFs are cool, but they're HARD to edit. Turn 'em into a mesh, and the geometry and UV maps are a dumpster fire -- making simple texture editing mission impossible for your 3D artist. Well, not anymore! Google AI's latest paper, Nuvo, employs neural fields for UV mapping,
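Not Nuvo's actual formulation, but the core idea behind neural-field UV mapping can be sketched as a pair of small networks: a forward map f from surface points in R^3 to UV coordinates in [0,1]^2 and an inverse map g back to 3D, tied together by a cycle-consistency loss. The MLP sizes, initialization, and loss below are illustrative assumptions, in plain numpy:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(params, x):
    """Tiny two-layer MLP: tanh hidden layer, linear output."""
    w1, b1, w2, b2 = params
    return np.tanh(x @ w1 + b1) @ w2 + b2

def init(d_in, d_hidden, d_out):
    return (rng.normal(0, 0.5, (d_in, d_hidden)), np.zeros(d_hidden),
            rng.normal(0, 0.5, (d_hidden, d_out)), np.zeros(d_out))

# f: surface point in R^3 -> UV; a sigmoid squashes the output into [0,1]^2.
f_params = init(3, 32, 2)
# g: UV -> R^3, the "inverse" field used for the cycle check.
g_params = init(2, 32, 3)

def to_uv(points):
    return 1.0 / (1.0 + np.exp(-mlp(f_params, points)))

def to_3d(uv):
    return mlp(g_params, uv)

# Points on a unit sphere as a stand-in for samples of the recovered surface.
pts = rng.normal(size=(256, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

uv = to_uv(pts)                                 # (256, 2), all in [0, 1]
cycle_loss = np.mean((to_3d(uv) - pts) ** 2)    # enforce g(f(x)) ≈ x
```

Training would minimize `cycle_loss` (plus distortion terms) over the network weights; here we only evaluate it once to show the data flow.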

Dor Verbin (@dorverbin)

Introducing Eclipse, a method for recovering lighting and materials even from diffuse objects! The key idea is that standard "NeRF-like" data has all we need: a photographer moving around a scene to capture it causes "accidental" lighting variations. dorverbin.github.io/eclipse/ (1/3)

Jon Barron (@jon_barron)

We just finished a joint code release for CamP (camp-nerf.github.io) and Zip-NeRF (jonbarron.info/zipnerf/). As far as I know, this code is SOTA in terms of image quality (but not speed) among all the radiance field techniques out there. Have fun! github.com/jonbarron/camp…
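For context on what these radiance-field codebases share under the hood: the rendering step they all optimize is standard emission-absorption alpha compositing along each camera ray. A minimal numpy sketch of that quadrature (not the CamP/Zip-NeRF code itself):

```python
import numpy as np

def composite(sigmas, colors, deltas):
    """NeRF-style alpha compositing along one ray.

    sigmas: (N,) volume densities at samples along the ray
    colors: (N, 3) RGB at those samples
    deltas: (N,) lengths of the segments between samples
    Returns the rendered RGB and the accumulated opacity.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)          # per-segment opacity
    # Transmittance: probability the ray survives to each sample.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = alphas * trans                          # compositing weights
    return weights @ colors, weights.sum()

# A nearly opaque first sample should dominate the rendered color.
rgb, acc = composite(np.array([1e9, 1.0]),
                     np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
                     np.array([0.1, 0.1]))
```

The `weights` also drive the depth and normal estimates most of these systems report; everything else (encodings, proposal sampling, camera optimization as in CamP) feeds this same integral.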

Caltech (@caltech)

Scientists, led by a team at Caltech, used AI and telescope data to create the first 3D video of mysterious bright flares around the supermassive black hole at the center of our galaxy. caltech.edu/about/news/ai-…

Ruiqi Gao (@ruiqigao)

🌟 Create anything in 3D! 🌟 Introducing CAT3D: a new method that generates high-fidelity 3D scenes from any number of real or generated images in one minute, powered by multi-view diffusion models. w/ lovely coauthors Aleksander Holynski, Ben Poole and an amazing team!

Aleksander Holynski (@holynski_)

Videos are cool and all...but everything's more fun when it's interactive. Check out our new project, ✨CAT3D✨, that turns anything (text, image, & more) into interactive 3D scenes! Don't miss the demo!! cat3d.github.io

Philipp Henzler (@philipphenzler)

Check out CAT3D! Image(s)-to-3D in 1 minute! cat3d.github.io Given any number of real or generated images, CAT3D uses a multi-view diffusion prior to create consistent novel views. These views are used to reconstruct a 3D scene using NeRF/3DGS.
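The two-stage recipe these tweets describe can be sketched as a simple data flow: a multi-view diffusion model expands a few input images into many consistent novel views, and a reconstruction step fits a 3D representation to them. The function names and toy bodies below are placeholders for illustration, not CAT3D's API:

```python
from typing import List
import numpy as np

def sample_novel_views(images: List[np.ndarray], n_views: int) -> List[np.ndarray]:
    """Stage 1 stand-in: the real system runs a multi-view diffusion model
    conditioned on the inputs and target camera poses."""
    return [images[i % len(images)].copy() for i in range(n_views)]

def reconstruct_3d(views: List[np.ndarray]) -> np.ndarray:
    """Stage 2 stand-in: the real system optimizes a NeRF or fits 3D
    Gaussians to the generated views."""
    return np.mean(views, axis=0)  # toy "scene": the per-pixel mean

inputs = [np.full((4, 4, 3), 0.5)]                 # one real or generated image
views = sample_novel_views(inputs, n_views=8)      # densify the view coverage
scene = reconstruct_3d(views)                      # fit a consistent 3D model
```

The point of the split is that reconstruction sees a dense, mutually consistent view set, so the hard generative work happens once, up front, in stage 1.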

AK (@_akhaliq)

IllumiNeRF: 3D Relighting without Inverse Rendering

Existing methods for relightable view synthesis -- using a set of images of an object under unknown lighting to recover a 3D representation that can be rendered from novel viewpoints under a target illumination

Xiaoming Zhao (@xmzhao_)

Wondering how to easily relight an object? Inverse rendering, perhaps the first thing that comes to mind, is brittle and expensive due to differentiable Monte Carlo rendering. Check out IllumiNeRF for simple, effective 3D relighting without it! illuminerf.github.io (1/n)

Philipp Henzler (@philipphenzler)

IllumiNeRF lets you relight objects in 3D. Instead of the classical inverse rendering approach — disentangling the object geometry, materials, and lighting — we use a relighting diffusion model to relight each input image and distill the relit samples into 3D by optimizing a

Dor Verbin (@dorverbin)

IllumiNeRF enables relighting without expensive inverse rendering. We use a diffusion model trained to relight a single image, and turn its samples into a consistent 3D relit NeRF. With Xiaoming Zhao (currently on the job market!) Pratul Srinivasan Keunhong Park Ricardo Martin-Brualla Philipp Henzler
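The "turn its samples into a consistent 3D relit NeRF" step is, in spirit, fitting one shared representation to many independently relit samples that don't quite agree. In a scalar toy where each diffusion sample is a noisy RGB observation of the same 3D point, least-squares distillation reduces to the sample mean; all names and numbers below are illustrative, not IllumiNeRF's optimization:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 8 independently "relit" samples of one 3D point, each a
# noisy RGB produced by the (stand-in) single-image relighting diffusion model.
true_rgb = np.array([0.6, 0.3, 0.1])
samples = true_rgb + rng.normal(0.0, 0.05, size=(8, 3))

# Distillation as least squares: the single color c minimizing
# sum_i ||samples[i] - c||^2 is the mean of the samples (closed form here).
distilled = samples.mean(axis=0)
residual = np.mean((samples - distilled) ** 2)   # leftover inconsistency
```

The real method optimizes a full NeRF against relit images rather than one color against samples, but the averaging intuition is the same: per-sample noise cancels when many samples constrain one consistent model.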

Dor Verbin (@dorverbin)

I'm going to present our work at the oral session tomorrow (Wednesday), 9am at #CVPR2024. Come check it out and hang out at the poster session (ours is number 399) immediately after!

Benjamin Attal (@imarhombus)

(1/N) Flash Cache: Reducing Bias in Radiance Cache Based Inverse Rendering Website: benattal.github.io/flash-cache/ tl;dr our #ECCV2024 (oral ✨) paper presents a new system for inverse rendering that is more physically accurate, and therefore less biased, than existing approaches.

Dor Verbin (@dorverbin)

We’ll be presenting NeRF-Casting at SIGGRAPH Asia next week! NeRF-Casting enables photorealistic rendering of scenes with highly reflective surfaces—something that was previously impossible with models like Zip-NeRF and 3DGS. (1/6)

Alex Trevithick (@alextrevith)

🚀 Introducing SimVS: our new method that simplifies 3D capture! 🎯 3D reconstruction assumes consistency—no dynamics or lighting changes—but reality constantly breaks this assumption. ✨ SimVS takes a set of inconsistent images and makes them consistent with a chosen frame.

Stan Szymanowicz (@stanszymanowicz)

⚡️ Introducing Bolt3D ⚡️ Bolt3D generates interactive 3D scenes in less than 7 seconds on a single GPU from one or more images. It features a latent diffusion model that *directly* generates 3D Gaussians of seen and unseen regions, without any test time optimization. 🧵👇 (1/9)
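For context on the representation Bolt3D generates directly: each 3D Gaussian is a mean and covariance whose rendered contribution falls off with Mahalanobis distance from the mean. A minimal evaluator of that falloff (a generic splatting-style sketch, not Bolt3D code; the names are assumptions):

```python
import numpy as np

def gaussian_weight(x, mean, cov):
    """Unnormalized falloff of one 3D Gaussian primitive at point x.

    Equals 1 at the mean and decays as exp(-0.5 * Mahalanobis^2);
    splatting renderers scale this by the primitive's opacity and color.
    """
    d = x - mean
    return np.exp(-0.5 * d @ np.linalg.solve(cov, d))

w_center = gaussian_weight(np.zeros(3), np.zeros(3), np.eye(3))      # at mean
w_offset = gaussian_weight(np.array([1.0, 0.0, 0.0]),
                           np.zeros(3), np.eye(3))                   # 1 unit away
```

A diffusion model that emits these parameters directly, as the tweet describes, skips the per-scene optimization loop that fitting such Gaussians from images normally requires.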

Alex Trevithick (@alextrevith)

🎥 What if 3D capture could gracefully handle moving scenes and varying illumination? 🎯Come see how video models generate exactly the data you need at our poster, SimVS! 📍CVPR, June 14th (afternoon), Poster #60.