Gene Chou (@gene_ch0u) 's Twitter Profile
Gene Chou

@gene_ch0u

CS PhD student @Cornell; previously Princeton '22

ID: 1582075953169285138

http://genechou.com · Joined 17-10-2022 18:28:12

19 Tweets

126 Followers

147 Following

Felix Heide (@_felixheide_) 's Twitter Profile Photo

Generalizable SDFs! We can make zero-shot inference work on 100+ unseen classes. Looking forward to presenting this fun work with @Gene_Ch0u at NeurIPS Conference 2022. Paper and Code: light.princeton.edu/gensdf/
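
As a rough illustration of the zero-shot inference recipe such generalizable SDF methods imply (query the predicted signed distance field on a grid, then extract the zero level set as a mesh), here is a minimal sketch. The `predict_sdf` placeholder, a unit sphere here, stands in for the class-agnostic network; this is not GenSDF's code:

```python
# Minimal sketch (not GenSDF's code): sample the predicted SDF on a grid and
# run marching cubes at the zero level set. predict_sdf is a placeholder
# (a unit sphere) standing in for a network conditioned on an unseen object.
import numpy as np
from skimage import measure  # scikit-image

def predict_sdf(points):
    # Placeholder for model(points | conditioning): signed distance to a unit sphere.
    return np.linalg.norm(points, axis=-1) - 1.0

n = 64
lin = np.linspace(-1.5, 1.5, n)
grid = np.stack(np.meshgrid(lin, lin, lin, indexing="ij"), axis=-1)  # (n, n, n, 3)
sdf = predict_sdf(grid.reshape(-1, 3)).reshape(n, n, n)

# Extract the surface where the signed distance crosses zero.
verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)
print(verts.shape, faces.shape)
```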

Gordon Wetzstein (@gordonwetzstein) 's Twitter Profile Photo

Diffusion Models offer transformative capabilities for visual computing. In a new report, we overview the mathematical fundamentals and survey the quickly growing field of diffusion models for 2D, 3D, video, and motion generation and editing. arxiv.org/abs/2310.07204
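
One of the mathematical fundamentals such a report covers is the closed-form forward noising process and the epsilon-prediction training loss. The toy sketch below illustrates that recipe in numpy, with `denoiser` as a placeholder for a learned network; it is not code from the report:

```python
# Toy numpy sketch (not code from the report): the closed-form forward noising
# q(x_t | x_0) and the epsilon-prediction loss at the heart of DDPM-style models.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)     # linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)   # cumulative product \bar{alpha}_t

def q_sample(x0, t, eps):
    # x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

def denoiser(x_t, t):
    # Placeholder for the learned network eps_theta(x_t, t).
    return np.zeros_like(x_t)

x0 = np.random.randn(8, 3, 32, 32)     # toy batch of "images"
t = np.random.randint(0, T)
eps = np.random.randn(*x0.shape)
loss = np.mean((denoiser(q_sample(x0, t, eps), t) - eps) ** 2)
print(loss)
```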

Kyle Sargent (@kylesargentai) 's Twitter Profile Photo

I’m really excited to finally share our new paper “ZeroNVS: Zero-shot 360-degree View Synthesis from a Single Real Image.” The paper, webpage, and code are all released! 📖arxiv.org/abs/2310.17994 🌐kylesargent.github.io/zeronvs/ 🖥️github.com/kylesargent/Ze… 🧵is below.

Matthias Niessner (@mattniessner) 's Twitter Profile Photo

(1/2) Check out 𝐌𝐞𝐬𝐡𝐆𝐏𝐓! MeshGPT generates triangle meshes by autoregressively sampling from a transformer model that produces tokens from a learned geometric vocabulary. As a result, we obtain clean and compact meshes :) nihalsid.github.io/mesh-gpt/ youtu.be/UV90O1_69_o
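
A toy sketch of the idea described above, autoregressively sampling token ids from a learned geometric vocabulary and decoding each id back to a triangle via a codebook lookup. The random codebook and `sample_next_token` placeholder are assumptions for illustration, not MeshGPT's code:

```python
# Toy sketch (not MeshGPT's code): autoregressively sample token ids from a
# learned geometric vocabulary, then decode each id to a triangle through a
# codebook lookup. The random codebook and sampler are placeholders.
import numpy as np

rng = np.random.default_rng(0)
vocab_size = 1024
codebook = rng.normal(size=(vocab_size, 9))   # toy: one code -> 9 coords = 1 triangle

def sample_next_token(history):
    # Placeholder for transformer(history) -> logits -> sampled id.
    return int(rng.integers(vocab_size))

tokens = []
for _ in range(50):
    tokens.append(sample_next_token(tokens))

triangles = codebook[tokens].reshape(-1, 3, 3)  # (num_faces, 3 vertices, xyz)
print(triangles.shape)
```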

NAVER LABS Europe (@naverlabseurope) 's Twitter Profile Photo

Check out 📢DUSt3R📢 - a new 3D reconstruction model that works with no prior info on camera calibration or viewpoint poses! Outperforms SoA monocular & multiview depth estimation & relative pose estimation. Paper, demo, videos (& soon code!) available at dust3r.europe.naverlabs.com

Jerome Revaud (@jeromerevaud) 's Twitter Profile Photo

Another convenient usage of DUSt3R (kind of anecdotal): while looking for a rental apartment on Airbnb for my vacations, I noticed I had difficulty grasping the layout and space of the apartment based on photos alone. Solution: put all the photos in DUSt3R and voilà :)
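
For readers curious what "put all the photos in DUSt3R" looks like in practice, here is a minimal sketch assuming the Python API later published in the DUSt3R repository; the import paths, checkpoint name, and photo files follow that README style and should be treated as assumptions rather than guarantees:

```python
# Minimal sketch assuming the API in the released DUSt3R README; the checkpoint
# name and photo paths are placeholders. No intrinsics or poses are needed.
import torch
from dust3r.inference import inference
from dust3r.model import AsymmetricCroCo3DStereo
from dust3r.utils.image import load_images
from dust3r.image_pairs import make_pairs
from dust3r.cloud_opt import global_aligner, GlobalAlignerMode

device = "cuda" if torch.cuda.is_available() else "cpu"
model = AsymmetricCroCo3DStereo.from_pretrained(
    "naver/DUSt3R_ViTLarge_BaseDecoder_512_dpt").to(device)

images = load_images(["room1.jpg", "room2.jpg", "room3.jpg"], size=512)
pairs = make_pairs(images, scene_graph="complete", prefilter=None, symmetrize=True)
output = inference(pairs, model, device, batch_size=1)

# Fuse the pairwise pointmaps into one globally aligned scene.
scene = global_aligner(output, device=device, mode=GlobalAlignerMode.PointCloudOptimizer)
scene.compute_global_alignment(init="mst", niter=300, schedule="cosine", lr=0.01)
pts3d = scene.get_pts3d()      # per-image 3D points: the apartment layout
poses = scene.get_im_poses()   # recovered camera poses
```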

Eric Ming Chen (@ericmchen1) 's Twitter Profile Photo

Aleksander Holynski Nice results! Do you have more videos to share of results before the NeRF step? They seem really 3D consistent already. We also tried conditioning a model on rays in a past paper! While our images were realistic, we just couldn't get them to be consistent 😃 ray-cond.github.io
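
For context, "conditioning a model on rays" is usually implemented by handing the generator a per-pixel map of ray directions derived from camera intrinsics and pose. The toy sketch below shows one common way to build such a map; it is an illustration, not the ray-cond paper's code:

```python
# Toy sketch: build a per-pixel ray-direction map from intrinsics K and a
# camera-to-world pose, usable as extra conditioning channels for a generator.
import torch

H, W, f = 32, 32, 40.0
K = torch.tensor([[f, 0, W / 2], [0, f, H / 2], [0, 0, 1.0]])
c2w = torch.eye(4)                                # camera-to-world pose (identity here)

ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                        torch.arange(W, dtype=torch.float32), indexing="ij")
pix = torch.stack([xs + 0.5, ys + 0.5, torch.ones_like(xs)], dim=-1)  # (H, W, 3)
dirs_cam = pix @ torch.linalg.inv(K).T            # back-project pixels to camera rays
dirs_world = dirs_cam @ c2w[:3, :3].T             # rotate rays into world space
dirs_world = dirs_world / dirs_world.norm(dim=-1, keepdim=True)

ray_map = dirs_world.permute(2, 0, 1)             # (3, H, W) conditioning channels
print(ray_map.shape)
```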

Haian Jin (@haian_jin) 's Twitter Profile Photo

Thanks AK for sharing our work! Neural Gaffer is an end-to-end 2D relighting diffusion model that accurately relights any object in a single image under various lighting conditions. Moreover, by combining with other generative methods, our model enables many downstream 2D …

Haian Jin (@haian_jin) 's Twitter Profile Photo

Check out our recent work “Neural Gaffer: Relighting Any Object via Diffusion” 📷🌈, an end-to-end 2D relighting diffusion model that accurately relights any object in a single image under various lighting conditions. 🧵1/N: Website: neural-gaffer.github.io
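
A toy sketch of the conditioning interface an image-conditioned relighting diffusion model of this kind implies: the denoiser sees the noisy target alongside the input photo and an encoding of the target lighting as extra channels. The module below is a stand-in for illustration, not Neural Gaffer's architecture:

```python
# Toy stand-in (not Neural Gaffer's code): a denoiser that conditions on the
# input photo and a target environment-map encoding via channel concatenation.
import torch
import torch.nn as nn

class ToyRelightDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        # 3 (noisy target) + 3 (input image) + 3 (lighting encoding) channels in.
        self.net = nn.Conv2d(9, 3, kernel_size=3, padding=1)

    def forward(self, noisy_target, input_image, env_map):
        return self.net(torch.cat([noisy_target, input_image, env_map], dim=1))

model = ToyRelightDenoiser()
noisy = torch.randn(1, 3, 64, 64)
photo = torch.rand(1, 3, 64, 64)
env = torch.rand(1, 3, 64, 64)       # target lighting, e.g. a rotated HDR panorama
pred_noise = model(noisy, photo, env)
print(pred_noise.shape)              # torch.Size([1, 3, 64, 64])
```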

Jianyuan Wang (@jianyuan_wang) 's Twitter Profile Photo

(1/6) We’ve just released a HF 🤗 demo for our VGGSfM, the first differentiable Structure from Motion (SfM) pipeline that outperforms traditional algorithms across various benchmarks! Try it yourself! ⬇️ (huggingface.co/spaces/faceboo…)
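
To unpack what "differentiable SfM" buys you: every stage admits gradients, so quantities such as points and poses can be refined directly against reprojection error. The toy sketch below illustrates that principle with a bare pinhole camera in PyTorch; it is not VGGSfM's pipeline:

```python
# Toy illustration (not VGGSfM): with a fully differentiable pipeline, 3D points
# and camera parameters can be refined by gradient descent on reprojection error.
import torch

torch.manual_seed(0)
pts3d = torch.randn(20, 3, requires_grad=True)    # 3D points to refine
t = torch.zeros(3, requires_grad=True)            # camera translation to refine
obs2d = torch.rand(20, 2) * 100                   # observed 2D keypoints (toy data)
f = 500.0                                         # fixed focal length

opt = torch.optim.Adam([pts3d, t], lr=1e-2)
for _ in range(200):
    cam = pts3d + t                               # world -> camera (rotation omitted)
    z = cam[:, 2:3].clamp(min=1e-3)               # keep points in front of the camera
    proj = f * cam[:, :2] / z                     # pinhole projection
    loss = ((proj - obs2d) ** 2).mean()           # reprojection error
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```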

Gemmechu Hassena (@gemmechuhassena) 's Twitter Profile Photo

Excited to share our work, ObjectCarver! Given multiview images and click points on one image, ObjectCarver decomposes scenes into separate objects, providing high-quality 3D surfaces while handling occlusion and close-contact objects. (1/6) website: objectcarver.github.io