Jiawei Ren (@jiawei6_ren)'s Twitter Profile
Jiawei Ren

@jiawei6_ren

Research Scientist @NVIDIA. PhD student at @MMLabNTU.

ID: 1444119166017966084

Link: https://jiawei-ren.github.io/ · Joined: 02-10-2021 01:57:45

85 Tweets

1.1K Followers

758 Following

Zan Gojcic (@zgojcic)'s Twitter Profile Photo

Reconstruct and explore monocular dynamic videos in real time! Interaction with your favorite video content is now possible without specialized capture equipment! Great work led by Hanxue Liang, Jiawei Ren, and Ashkan Mirzaei!

Radiance Fields (@radiancefields)'s Twitter Profile Photo

New research from NVIDIA AI Developer. Generate per-frame 3D "bullet-time" scenes from regular video in just 150ms on a single GPU. 🔗research.nvidia.com/labs/toronto-a…

Jiahui Huang (@huangjh_hjh)'s Twitter Profile Photo

📢Please check out our newest work on feed-forward reconstruction of dynamic monocular videos! With our bullet-time formulation, we achieve great flexibility and state-of-the-art performance!

William Lamkin (@williamlamkin)'s Twitter Profile Photo

BTimer: Feed-Forward Bullet-Time Reconstruction of Dynamic Scenes from Monocular Videos website: research.nvidia.com/labs/toronto-a… pdf: research.nvidia.com/labs/toronto-a…

Ashkan Mirzaei (@ashmrz10)'s Twitter Profile Photo

🚀 Tired of waiting for your Gaussian-based scenes to fit dynamic inputs? ⏳ Wait no more! Check out our new paper and discover an instant, feed-forward approach! 🎯✨

Hanxue Liang (@hx_liang95)'s Twitter Profile Photo

🚀Excited to introduce #BTimer: Real-Time Dynamic Scene Reconstruction from Monocular Videos! Struggling with novel view synthesis on dynamic scenes? Meet BTimer (BulletTimer) — the 1st motion-aware feed-forward model for real-time scene reconstruction at any desired time. ✅

MrNeRF (@janusch_patas)'s Twitter Profile Photo

Feed-Forward Bullet-Time Reconstruction of Dynamic Scenes from Monocular Videos TL;DR: We present the first feed-forward reconstruction model for dynamic scenes using a bullet-time formulation; BTimer reconstructs a bullet-time scene within 150ms while reaching state-of-the-art performance.
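
The formulation implied by these posts is easy to sketch: a feed-forward network takes context frames tagged with their timestamps plus one "bullet" timestamp, and emits a static set of 3D Gaussians that freezes the scene at that instant. Below is a minimal, purely illustrative sketch of that interface; the class, layer sizes, toy fusion, and 14-channel Gaussian layout are hypothetical stand-ins, not BTimer's actual architecture or API.

```python
# Hypothetical sketch of the bullet-time formulation: context frames
# (with timestamps) + one "bullet" timestamp in, one static set of 3D
# Gaussians out. Names and sizes are illustrative, not BTimer's API.
import torch
import torch.nn as nn

class BulletTimeModel(nn.Module):
    def __init__(self, dim=256, n_gaussians=16384):
        super().__init__()
        self.encoder = nn.Linear(3 * 64 * 64, dim)    # stand-in image encoder
        self.time_embed = nn.Linear(1, dim)           # timestamp embedding
        self.head = nn.Linear(dim, n_gaussians * 14)  # xyz, rot(4), scale(3), opacity, rgb(3)
        self.n_gaussians = n_gaussians

    def forward(self, frames, frame_times, bullet_time):
        # frames: (N, 3, 64, 64); frame_times: (N,); bullet_time: scalar tensor
        tok = self.encoder(frames.flatten(1))               # (N, dim)
        tok = tok + self.time_embed(frame_times[:, None])   # tag each frame with its time
        query = self.time_embed(bullet_time.view(1, 1))     # the instant to freeze
        fused = (tok + query).mean(0)                       # toy fusion; the real model attends
        return self.head(fused).view(self.n_gaussians, 14)  # one static Gaussian set

model = BulletTimeModel()
frames = torch.rand(8, 3, 64, 64)                     # context frames from the video
times = torch.linspace(0.0, 1.0, 8)                   # their timestamps
gaussians = model(frames, times, torch.tensor(0.37))  # freeze the scene at t = 0.37
print(gaussians.shape)                                # torch.Size([16384, 14])
```

One nice property of this interface: because the bullet timestamp is just another input, each queried instant costs a single forward pass, which is what makes the 150ms-per-scene figure above meaningful.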

Jiawei Ren (@jiawei6_ren)'s Twitter Profile Photo

🔥L4GM code and model weights are finally released!🔥 Try it and turn your video into a 3D animation in just seconds! Code: github.com/nv-tlabs/L4GM-… Models: huggingface.co/jiawei011/L4GM

Qi Wu (@wilson_over)'s Twitter Profile Photo

Say goodbye to perfect pinhole assumptions! Excited to introduce 3DGUT—a Gaussian Splatting formulation that unlocks support for distorted cameras, including time-dependent effects like rolling shutter, while maintaining the benefits of rasterization, rendering at >250 FPS. 🧵
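
The "UT" in 3DGUT is the Unscented Transform. As a rough intuition for why it helps with distorted cameras: instead of linearizing the projection around each Gaussian's mean (as EWA-style splatting does), push a small set of sigma points through the exact nonlinear camera model and re-fit a 2D mean and covariance from them. A minimal NumPy sketch of that idea, using a toy equidistant fisheye rather than the paper's camera models:

```python
# Unscented-transform sketch: sample sigma points from a 3D Gaussian,
# project each through the exact nonlinear camera, and re-fit a 2D
# Gaussian. Toy equidistant fisheye used purely for illustration.
import numpy as np

def project_fisheye(p, f=500.0):
    # equidistant fisheye: image radius r = f * theta (angle off the axis)
    x, y, z = p
    theta = np.arctan2(np.hypot(x, y), z)
    phi = np.arctan2(y, x)
    return np.array([f * theta * np.cos(phi), f * theta * np.sin(phi)])

def unscented_project(mu, cov, kappa=1.0):
    n = mu.size
    L = np.linalg.cholesky((n + kappa) * cov)           # scaled covariance square root
    pts = [mu] + [mu + L[:, i] for i in range(n)] + [mu - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))     # standard sigma-point weights
    w[0] = kappa / (n + kappa)
    proj = np.array([project_fisheye(p) for p in pts])  # exact nonlinear projection
    mean2d = w @ proj
    d = proj - mean2d
    return mean2d, (w[:, None] * d).T @ d               # weighted 2D mean and covariance

mu = np.array([0.3, -0.2, 2.0])      # Gaussian center in the camera frame
cov = np.diag([0.02, 0.02, 0.05])    # its 3D covariance
mean2d, cov2d = unscented_project(mu, cov)
print(mean2d, cov2d, sep="\n")
```

Since only point projections are needed (no Jacobians), time-dependent effects like rolling shutter can, in principle, be folded directly into the per-point projection.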

Jay Z. Wu (@jayzhangjiewu)'s Twitter Profile Photo

Excited to share our #CVPR2025 paper: Difix3D+! Difix3D+ reimagines 3D reconstruction with single-step diffusion, distilling 2D generative priors for realistic novel view synthesis from large viewpoint shifts. 📄Paper: arxiv.org/abs/2503.01774 🌐Website: research.nvidia.com/labs/toronto-a…
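
As a rough intuition for the "single-step" part: render a novel view from the 3D reconstruction, artifacts and all, then apply a single denoising step of an image diffusion model to clean it up, rather than running a full multi-step sampling loop. The sketch below uses an untrained stand-in network and a naive one-step update purely for illustration; it is not Difix3D+'s model, conditioning, or training recipe.

```python
# One-step "fixer" sketch: treat an artifact-laden render as a partially
# noised sample and jump to a clean image in a single update. The
# denoiser is an untrained stand-in, not Difix3D+'s network.
import torch
import torch.nn as nn

denoiser = nn.Sequential(              # stand-in for a pretrained U-Net
    nn.Conv2d(3, 32, 3, padding=1), nn.SiLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)

def fix_render(render, t=0.5):
    noise_pred = denoiser(render)      # predict the "artifact noise"
    return render - t * noise_pred     # single Euler-style step toward x0

render = torch.rand(1, 3, 128, 128)    # novel view with reconstruction artifacts
clean = fix_render(render)
print(clean.shape)                     # torch.Size([1, 3, 128, 128])
```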

Xuanchi Ren (@xuanchi13)'s Twitter Profile Photo

🚀Excited to introduce GEN3C #CVPR2025, a generative video model with an explicit 3D cache for precise camera control. 🎥It applies to multiple use cases, including single-view and sparse-view NVS🖼️ and challenging settings like monocular dynamic NVS and driving simulation🚗.
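
One way to picture an "explicit 3D cache" for camera control: lift a source frame into a point cloud using its depth and intrinsics, rigidly move that cache into the target camera, and z-buffer splat it to get a partial conditioning image that the video generator completes. The sketch below is a generic version of that idea; the helper names and the toy flat-depth example are illustrative, not GEN3C's implementation.

```python
# Generic 3D-cache sketch: unproject depth to points, reproject into the
# target view with a z-buffer, and use the partial render as conditioning.
import numpy as np

def unproject(depth, K):
    # depth: (H, W); K: 3x3 intrinsics -> (H*W, 3) points in the camera frame
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3)
    return (np.linalg.inv(K) @ pix.T).T * depth.reshape(-1, 1)

def splat(points, colors, K, T, H, W):
    # T: 4x4 source-to-target transform; nearest point wins each pixel
    p = (T[:3, :3] @ points.T).T + T[:3, 3]
    uv = (K @ p.T).T
    z = p[:, 2]
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)
    img, zbuf = np.zeros((H, W, 3)), np.full((H, W), np.inf)
    ok = (z > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    for (x, y), d, c in zip(uv[ok], z[ok], colors[ok]):
        if d < zbuf[y, x]:
            zbuf[y, x], img[y, x] = d, c
    return img  # partial render: the generator inpaints what the cache misses

H, W = 48, 64
K = np.array([[60.0, 0, W / 2], [0, 60.0, H / 2], [0, 0, 1.0]])
depth = np.full((H, W), 2.0)         # toy scene: flat wall 2 m away
colors = np.random.rand(H * W, 3)    # per-pixel colors from the source frame
T = np.eye(4); T[0, 3] = 0.1         # target camera shifted 10 cm to the right
cond = splat(unproject(depth, K), colors, K, T, H, W)
print(cond.shape)                    # (48, 64, 3)
```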

Zan Gojcic (@zgojcic)'s Twitter Profile Photo

📣 We have released the source code of 3DGRT and 3DGUT in a common 3DGRUT repo under the Apache license! Go try it out and play with our playground app!

Zan Gojcic (@zgojcic)'s Twitter Profile Photo

📢Our team at NVIDIA AI is again looking for Research Scientists and Engineers to help us push the boundaries of Neural Reconstruction and Generation in AV and Robotics simulation! Check out our latest work at zgojcic.github.io, and if interested, reach out directly!

Huan Ling (@huanling6)'s Twitter Profile Photo

We are excited to share Cosmos-Drive-Dreams 🚀 A bold new synthetic data generation (SDG) pipeline powered by world foundation models—designed to synthesize rich, challenging driving scenarios at scale. Models, code, dataset, and toolkit are released. Website:

Zan Gojcic (@zgojcic)'s Twitter Profile Photo

📢📢We have a last-minute internship opening on my team at NVIDIA AI for this summer. If you are interested and have experience with large feedforward reconstruction models or post-training image/video diffusion models, please get in touch!

Ruilong Li (@ruilong_li)'s Twitter Profile Photo

For everyone interested in precise 📷camera control 📷 in transformers [e.g., video / world model etc] Stop settling for Plücker raymaps -- use camera-aware relative PE in your attention layers, like RoPE (for LLMs) but for cameras! Paper & code: liruilong.cn/prope/

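
The trick is the same one RoPE plays with token positions: transform the queries and keys by each token's own camera pose so the poses cancel inside the dot product, leaving an attention logit that depends only on the relative pose P_i^{-1} P_j. A minimal PyTorch sketch of that idea, applying blockwise 4x4 pose transforms to q/k; this is illustrative, not the paper's exact parameterization.

```python
# Camera-relative attention sketch: q_i <- P_i^{-T} q_i and k_j <- P_j k_j
# (blockwise over groups of 4 channels), so the logit equals
# q_i^T P_i^{-1} P_j k_j and depends only on the relative camera pose.
import torch

def apply_pose(x, P):
    # x: (N, D) with D % 4 == 0; P: (N, 4, 4) per-token transform
    N, D = x.shape
    blocks = x.view(N, D // 4, 4)
    return torch.einsum('nij,nbj->nbi', P, blocks).reshape(N, D)

def camera_relative_attention(q, k, v, poses):
    # poses: (N, 4, 4) camera-to-world matrix per token
    q = apply_pose(q, torch.linalg.inv(poses).transpose(-1, -2))
    k = apply_pose(k, poses)
    att = torch.softmax(q @ k.T / q.shape[-1] ** 0.5, dim=-1)
    return att @ v

N, D = 6, 64
q, k, v = torch.randn(3, N, D).unbind(0)
poses = torch.eye(4).repeat(N, 1, 1)  # identity cameras: reduces to vanilla attention
out = camera_relative_attention(q, k, v, poses)
print(out.shape)                      # torch.Size([6, 64])
```

Unlike a Plücker raymap concatenated at the input, the encoding stays relative: re-expressing all cameras in a different world frame (P -> G P) leaves P_i^{-1} P_j, and hence the attention, unchanged.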