Zan Gojcic (@zgojcic)'s Twitter Profile
Zan Gojcic

@zgojcic

Research manager at @NVIDIAAI working on neural reconstruction and data-driven simulation.

ID: 1845056570

Joined: 09-09-2013 12:27:09

1.1K Tweets

2.2K Followers

619 Following

Two Minute Papers (@twominutepapers)'s Twitter Profile Photo

NVIDIA’s AI watched 150,000 videos… and learned to relight scenes incredibly well! No game engine. No 3D software. And it has an amazing cat demo. 🐱💡
Hold on to your papers! Full video: youtube.com/watch?v=yRk6vG…
Florian Hahlbohm (@fhahlbohm)'s Twitter Profile Photo

Thought I'd share this WebGL viewer that uses a combination of ray tracing and depth testing to render 3D (or 2D) Gaussians. github.com/fhahlbohm/dept… Runs smoothly (>120 Hz) on an M1 MacBook Pro at 1080p. Quality is decent. Gaussians are truncated at 2σ. No higher-degree SH support.
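
For a sense of what evaluating a Gaussian along a ray with a 2σ cutoff involves, here is a minimal NumPy sketch of my own (not the viewer's WebGL code): the peak response of a 3D Gaussian along a ray has a closed form, and contributions whose Mahalanobis distance exceeds 2 are simply discarded.

```python
import numpy as np

def gaussian_along_ray(o, d, mu, cov, cutoff_sigma=2.0):
    """Peak contribution of a 3D Gaussian (mean mu, covariance cov) along the
    ray x(t) = o + t*d. Returns (t_peak, weight); weight is 0 beyond the cutoff."""
    A = np.linalg.inv(cov)                 # precision matrix
    dA = d @ A
    t_peak = dA @ (mu - o) / (dA @ d)      # minimizes the Mahalanobis distance along the ray
    x = o + t_peak * d
    m2 = (x - mu) @ A @ (x - mu)           # squared Mahalanobis distance at the peak
    if m2 > cutoff_sigma ** 2:             # truncate at 2 sigma
        return t_peak, 0.0
    return t_peak, float(np.exp(-0.5 * m2))

# Example: unit Gaussian at the origin, ray passing 0.5 units away.
o = np.array([0.0, 0.5, -3.0])
d = np.array([0.0, 0.0, 1.0])
print(gaussian_along_ray(o, d, mu=np.zeros(3), cov=np.eye(3)))
```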

Zan Gojcic (@zgojcic)'s Twitter Profile Photo

Time to throw away the Plücker raymaps - a really elegant formulation of a camera-aware, RoPE-like embedding for multiview ViTs! Great work by Ruilong Li, Junchen Liu, and the team!

Bernhard Jaeger (@bern_jaeger)'s Twitter Profile Photo

At ICLR 2024, we proposed GTA to show that relative positional encodings are better in 3D Vision Transformers. GTA had the shortcoming that it only used extrinsics. Li et al. have now fixed this, incorporating intrinsics as well. An essential advance for 3D Vision Transformers!
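
As a rough illustration of the idea (a minimal sketch, not GTA's or Li et al.'s exact formulation): build a relative projective transform from each pair of views' intrinsics and extrinsics, and apply it to the key features RoPE-style, so the attention logit depends only on the relative camera geometry rather than on an absolute world frame.

```python
import numpy as np

def K_to_4x4(K):
    """Embed 3x3 intrinsics into a 4x4 homogeneous matrix."""
    K4 = np.eye(4)
    K4[:3, :3] = K
    return K4

def relative_camera(K_i, E_i, K_j, E_j):
    """Relative transform between views i and j. P_v = K4_v @ E_v maps world
    points into view v's homogeneous pixel frame, so P_i @ inv(P_j) cancels the
    shared world frame and keeps only relative pose and intrinsics."""
    P_i = K_to_4x4(K_i) @ E_i
    P_j = K_to_4x4(K_j) @ E_j
    return P_i @ np.linalg.inv(P_j)

def camera_aware_logit(q, k, M_rel):
    """RoPE-like: split the channels into 4-vectors and transform the key's
    groups by the relative camera matrix before the dot product."""
    q4 = q.reshape(-1, 4)
    k4 = k.reshape(-1, 4) @ M_rel.T
    return float((q4 * k4).sum())

# Toy example: identical intrinsics, view j translated 1 unit along x.
K = np.eye(3)
E_i, E_j = np.eye(4), np.eye(4)
E_j[0, 3] = 1.0
rng = np.random.default_rng(0)
q, k = rng.standard_normal(64), rng.standard_normal(64)
print(camera_aware_logit(q, k, relative_camera(K, E_i, K, E_j)))
```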

Zan Gojcic (@zgojcic)'s Twitter Profile Photo

Had a great time chatting with Sophia and the BuzzRobot community about our recent work, DiffusionRenderer, and the exciting research my team is doing at NVIDIA AI! DiffusionRenderer project page: research.nvidia.com/labs/toronto-a…

Zan Gojcic (@zgojcic)'s Twitter Profile Photo

I strongly agree with this concern — the policy prohibiting external links, images, and videos in the rebuttal puts vision papers at a clear disadvantage. Reviewers will tend to simply reject submissions, noting that the requested visualizations were not provided. NeurIPS Conference

MrNeRF (@janusch_patas)'s Twitter Profile Photo

GSCache: Real-Time Radiance Caching for Volume Path Tracing using 3D Gaussian Splatting

Contributions:
• We introduce a novel radiance cache optimized for volume rendering that caches path-space radiance using multiple levels of Gaussian splats.

• The cache works in real time
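
For intuition, here is a toy sketch of what a multi-level Gaussian radiance cache lookup could look like (isotropic splats and a simple weighted blend; this is my own illustration, not GSCache's actual data structure or query):

```python
import numpy as np

class GaussianRadianceCache:
    """Toy multi-level cache: each level stores isotropic Gaussian splats with a
    cached RGB radiance; a query blends the Gaussian-weighted estimates."""

    def __init__(self, levels):
        # levels: list of (means (N,3), sigmas (N,), radiances (N,3)), coarse to fine
        self.levels = levels

    def query(self, x):
        total_w, total_rgb = 0.0, np.zeros(3)
        for means, sigmas, radiances in self.levels:
            d2 = ((means - x) ** 2).sum(axis=1)
            w = np.exp(-0.5 * d2 / sigmas ** 2)    # Gaussian weights per splat
            total_w += w.sum()
            total_rgb += w @ radiances
        return total_rgb / max(total_w, 1e-8)      # weighted-average cached radiance

# Toy usage: one coarse and one fine level of random splats.
rng = np.random.default_rng(0)
coarse = (rng.uniform(-1, 1, (32, 3)), np.full(32, 0.5), rng.uniform(0, 1, (32, 3)))
fine = (rng.uniform(-1, 1, (256, 3)), np.full(256, 0.1), rng.uniform(0, 1, (256, 3)))
print(GaussianRadianceCache([coarse, fine]).query(np.zeros(3)))
```
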
Jiahui Huang (@huangjh_hjh)'s Twitter Profile Photo

[1/N] 🎥 We've made available a powerful spatial AI tool named ViPE: Video Pose Engine, to recover camera motion, intrinsics, and dense metric depth from casual videos! Running at 3–5 FPS, ViPE handles cinematic shots, dashcams, and even 360° panoramas. 🔗 research.nvidia.com/labs/toronto-a…
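
One way to consume such per-frame outputs is to fuse them into a world-space point cloud; the sketch below assumes each frame gives a metric depth map, 3x3 intrinsics, and a 4x4 camera-to-world pose (my assumption about the format, not ViPE's actual file layout):

```python
import numpy as np

def backproject_frame(depth, K, T_cam2world):
    """Lift a metric depth map into a world-space point cloud.
    depth: (H, W) metric depth, K: 3x3 intrinsics, T_cam2world: 4x4 pose."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3).astype(np.float64)
    rays = pix @ np.linalg.inv(K).T            # camera-space rays with z = 1
    pts_cam = rays * depth.reshape(-1, 1)      # scale by metric depth
    pts_h = np.hstack([pts_cam, np.ones((pts_cam.shape[0], 1))])
    return (pts_h @ T_cam2world.T)[:, :3]      # world-space XYZ

# Toy example: constant 2 m depth, 640x480 pinhole camera, identity pose.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
pts = backproject_frame(np.full((480, 640), 2.0), K, np.eye(4))
print(pts.shape)  # (307200, 3)
```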

Pavlo Molchanov (@pavlomolchanov)'s Twitter Profile Photo

📢New efficient Hybrid-SLM from NVIDIA-Nemotron-Nano-v2-9B:
❗️6x faster than Qwen3-8B because of Hybrid (Mamba2+Attention) design.

We tried something new: pretrain & align a 12B reasoning model → compress to 9B. 

First real stab at reasoning-model compression.

Key takeaways
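
A rough way to see where the hybrid speedup comes from: at decode time every attention layer keeps a KV cache that grows with context length, whereas a Mamba2-style SSM layer carries a fixed-size state. The sketch below compares per-layer memory footprints under made-up dimensions (none of these numbers are Nemotron's actual configuration):

```python
def attn_kv_cache_bytes(seq_len, n_kv_heads, head_dim, dtype_bytes=2):
    """KV cache for one attention layer: keys + values for every past token."""
    return 2 * seq_len * n_kv_heads * head_dim * dtype_bytes

def ssm_state_bytes(d_inner, state_dim, dtype_bytes=2):
    """Recurrent state for one SSM layer: constant, independent of context length."""
    return d_inner * state_dim * dtype_bytes

# Hypothetical dimensions, purely for illustration.
for L in (4_096, 32_768, 131_072):
    kv = attn_kv_cache_bytes(L, n_kv_heads=8, head_dim=128)
    ssm = ssm_state_bytes(d_inner=8_192, state_dim=128)
    print(f"context {L:>7}: attn KV ≈ {kv / 2**20:7.1f} MiB, SSM state ≈ {ssm / 2**20:.1f} MiB")
```
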
Zan Gojcic (@zgojcic)'s Twitter Profile Photo

Great to see 3DGUT now integrated into LichtFeld Studio — the fastest 3DGS/GUT codebase out there. Open source keeps pushing boundaries. Huge congrats to everyone involved! 🎉