Hanxue Liang's (@hx_liang95) Twitter Profile
Hanxue Liang

@hx_liang95

PhD student at @Cambridge_Uni, looking for full-time positions; please DM if you know of any openings.
3D/4D Reconstruction/Generation, On-device MoE, Point Cloud

ID: 1442676225881739268

Website: https://hanxuel.github.io/ · Joined: 28-09-2021 02:23:35

25 Tweets

61 Followers

130 Following

Andrea Tagliasacchi 🇨🇦 (@taiyasaki)'s Twitter Profile Photo

📢 As promised, a motion was submitted to PAMI TC for consideration, and you will most likely vote on it at CVPR 2024. We worked in a bipartisan way with Torsten Sattler and many others, and reached a suitable compromise.

Jiawei Ren (@jiawei6_ren)'s Twitter Profile Photo

We present #L4GM, the first 4D Large Reconstruction Model that produces animated objects from a single-view video input -- in a single feed-forward pass that takes only **seconds**! research.nvidia.com/labs/toronto-a… 1/

Zhiwen (Aaron) Fan (@wayneinr)'s Twitter Profile Photo

🚀 Our NeurIPS '24 work, Large Spatial Model (LSM), is here! LSM performs semantic 3D reconstruction in just 0.1s, processing unposed data via feed-forward 3D reconstruction. 👉It leverages large-scale 3D datasets with minimal annotations, defining a 3D latent space. We are

Zan Gojcic (@zgojcic)'s Twitter Profile Photo

Reconstruct and explore monocular dynamic videos in real time! Interaction with your favorite video content is now possible without specialized capture equipment! Great work led by Hanxue Liang, Jiawei Ren, and Ashkan Mirzaei!

Radiance Fields (@radiancefields)'s Twitter Profile Photo

New research from NVIDIA AI Developer. Generate per-frame 3D "bullet-time" scenes from regular video in just 150ms on a single GPU. 🔗research.nvidia.com/labs/toronto-a…

Jiahui Huang (@huangjh_hjh)'s Twitter Profile Photo

📢Please check out our newest work on feed-forward reconstruction of dynamic monocular videos! With our bullet-time formulation, we reach great flexibility and state-of-the-art performance!

William Lamkin (@williamlamkin)'s Twitter Profile Photo

BTimer: Feed-Forward Bullet-Time Reconstruction of Dynamic Scenes from Monocular Videos website: research.nvidia.com/labs/toronto-a… pdf: research.nvidia.com/labs/toronto-a…

Hanxue Liang (@hx_liang95)'s Twitter Profile Photo

🚀 Excited to introduce #BTimer: Real-Time Dynamic Scene Reconstruction from Monocular Videos! Struggling with novel view synthesis on dynamic scenes? Meet BTimer (BulletTimer), the first motion-aware feed-forward model for real-time scene reconstruction at any desired time. ✅

MrNeRF (@janusch_patas)'s Twitter Profile Photo

Feed-Forward Bullet-Time Reconstruction of Dynamic Scenes from Monocular Videos TL;DR: We present the first feed-forward reconstruction model for dynamic scenes using a bullet-time formulation; BTimer reconstructs a bullet-time scene within 150ms while reaching

DeepSeek (@deepseek_ai)'s Twitter Profile Photo

🚀 DeepSeek-R1 is here! ⚡ Performance on par with OpenAI-o1 📖 Fully open-source model & technical report 🏆 MIT licensed: Distill & commercialize freely! 🌐 Website & API are live now! Try DeepThink at chat.deepseek.com today! 🐋 1/n
