KeBingxin (@kbingxin)'s Twitter Profile
KeBingxin

@kbingxin

CV+ML+3D/RS PhD student @ Photogrammetry and Remote Sensing, ETH Zurich.

ID: 1239120315210326016

Website: http://www.kebingxin.com · Joined: 15-03-2020 09:24:50

41 Tweets

202 Followers

97 Following

Anton Obukhov (@antonobukhov1)'s Twitter Profile Photo

Come by our #CVPR2024 presentations next week! 💐 Marigold in Orals 3A on 3D from single view, Thu 20 Jun, 9:00-9:15 am. Also, drop by Poster Session 3 for more tangible matters 🗿 ⚙️ Point2CAD in Poster Session 1 on Wed 19 Jun, 10:30 am-12:00 pm. 🎭 DGInStyle in Workshop on Synthetic

Anton Obukhov (@antonobukhov1)'s Twitter Profile Photo

Unveiling BetterDepth: a plug-and-play diffusion-based refiner for zero-shot monocular depth estimation, compatible with many established depth prediction models. 📕 Paper: huggingface.co/papers/2407.17… 🧩 Other: TBA. Fantastic collaboration between ETH Zurich and Disney

Nando Metzger (@nandometzger)'s Twitter Profile Photo

Spice up your favorite SOTA monodepth network with a diffusion model! We introduce *BetterDepth*, a plug-and-play refiner for zero-shot monodepth estimation. Paper: huggingface.co/papers/2407.17…

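The plug-and-play pattern in the BetterDepth announcements above is easiest to picture in code: a frozen base monodepth model makes a coarse prediction, and a diffusion refiner, conditioned on the image and that coarse depth, denoises a detail-preserving estimate on top of it. A minimal sketch assuming hypothetical `base_model` and `refiner.denoise_step` interfaces, not the released BetterDepth code:

```python
import torch

def refine_depth(image, base_model, refiner, num_steps=10):
    """image: (B, 3, H, W) -> refined depth (B, 1, H, W). Illustrative only."""
    with torch.no_grad():
        coarse = base_model(image)           # any off-the-shelf monodepth network
    x = torch.randn_like(coarse)             # refiner starts from noise
    for t in reversed(range(num_steps)):     # short denoising schedule
        x = refiner.denoise_step(x, t, cond=(image, coarse))  # assumed refiner API
    return x                                 # refined depth, details added to the coarse prediction
```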
Karim Abou Zeid (@kacodes)'s Twitter Profile Photo

Check out our work on fine-tuning image-conditional diffusion models for depth and normal estimation. Widely used diffusion models can be improved with single-step inference and task-specific fine-tuning, achieving better accuracy while being 200x faster! ⚡ 🧵(1/6)

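The speedup this thread refers to comes from replacing the multi-step denoising schedule with a single network call at a fixed timestep, followed by task-specific end-to-end fine-tuning. A rough single-step inference sketch, with `vae` and `unet` as assumed placeholder modules rather than the authors' code:

```python
import torch

@torch.no_grad()
def predict_depth_single_step(vae, unet, image, t_final=999):
    z_img = vae.encode(image)                      # image latent (assumed encoder)
    z_init = torch.zeros_like(z_img)               # deterministic start, no sampled noise
    t = torch.full((image.shape[0],), t_final)     # one fixed timestep instead of a schedule
    z_depth = unet(torch.cat([z_img, z_init], dim=1), t)  # single forward pass
    return vae.decode(z_depth)                     # decode the latent to a depth map
```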
Gradio (@gradio)'s Twitter Profile Photo

🔥 RollingDepth: a new state-of-the-art depth estimator for videos in the wild! Accurately estimating depth from videos using AI is now possible. No flickering, no temporal inconsistency 💪

Anton Obukhov (@antonobukhov1)'s Twitter Profile Photo

Introducing 🛹 RollingDepth 🛹 — a universal monocular depth estimator for arbitrarily long videos! Our paper, “Video Depth without Video Models,” delivers exactly that, setting new standards in temporal consistency. Check out more details in the thread 🧵
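One way to picture how a single-image depth model can serve arbitrarily long videos: run it over short, overlapping frame snippets and stitch the snippets together by fitting a scale and shift on their overlaps. The sketch below is a simplified illustration with an assumed `snippet_model`, not the released RollingDepth pipeline:

```python
import torch

def rolling_depth(frames, snippet_model, window=3, stride=1):
    """frames: list of image tensors -> list of fused depth maps (illustrative)."""
    preds = {}                                      # frame index -> list of predictions
    for s in range(0, len(frames) - window + 1, stride):
        depths = snippet_model(frames[s:s + window])  # (window, 1, H, W), relative depth
        ref = preds.get(s)                          # already-covered first frame of this snippet
        if ref:
            a, b = fit_scale_shift(depths[0], ref[0])  # align snippet to previous estimate
            depths = depths * a + b
        for i, d in enumerate(depths):
            preds.setdefault(s + i, []).append(d)
    return [torch.stack(v).mean(0) for _, v in sorted(preds.items())]  # fuse overlaps

def fit_scale_shift(pred, target):
    # closed-form 1D least squares: minimize ||a * pred + b - target||^2
    p, t = pred.flatten(), target.flatten()
    a = torch.cov(torch.stack([p, t]))[0, 1] / p.var()
    b = t.mean() - a * p.mean()
    return a, b
```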

Anton Obukhov (@antonobukhov1)'s Twitter Profile Photo

Introducing ⇆ Marigold-DC — our training-free zero-shot approach to monocular Depth Completion with guided diffusion! If you have ever wondered how else a long denoising diffusion schedule can be useful, we have an answer for you! Details 🧵
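"Training-free" here means the pretrained depth diffusion model stays frozen while each step of the long denoising schedule is nudged toward the sparse depth measurements. A generic gradient-guidance loop in that spirit, with `model.predict_clean` and `model.denoise_step` as assumed placeholders rather than the actual Marigold-DC code:

```python
import torch

def depth_completion(image, sparse_depth, mask, model, steps=50, guidance_lr=0.1):
    """sparse_depth: (B, 1, H, W) with valid pixels marked in boolean mask. Illustrative only."""
    x = torch.randn_like(sparse_depth)                       # noisy depth estimate
    for t in reversed(range(steps)):
        x = x.detach().requires_grad_(True)
        x0_hat = model.predict_clean(x, t, cond=image)       # model's clean-depth estimate
        loss = ((x0_hat - sparse_depth)[mask] ** 2).mean()   # fit only at measured pixels
        grad = torch.autograd.grad(loss, x)[0]
        x = model.denoise_step(x.detach(), t, cond=image)    # ordinary diffusion update
        x = x - guidance_lr * grad                           # guidance toward the measurements
    return x
```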

Anton Obukhov (@antonobukhov1)'s Twitter Profile Photo

Team: Bingxin Ke (@kbingxin), Kevin Qu, Tianfu Wang, Nando Metzger (@nandometzger), Shengyu Huang, Bo Li, Anton Obukhov (@antonobukhov1), Konrad Schindler. We thank Hugging Face for their sustained support. Original announcement of Marigold Depth

Jiahui Huang (@huangjh_hjh)'s Twitter Profile Photo

[1/N] 🎥 We've made available a powerful spatial AI tool named ViPE: Video Pose Engine, to recover camera motion, intrinsics, and dense metric depth from casual videos! Running at 3–5 FPS, ViPE handles cinematic shots, dashcams, and even 360° panoramas. 🔗 research.nvidia.com/labs/toronto-a…
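What makes the intrinsics-plus-metric-depth output directly useful: with the pinhole model, every pixel back-projects to a metric 3D point as X = d * K^-1 [u, v, 1]^T. A small NumPy sketch of that back-projection (not the ViPE API):

```python
import numpy as np

def backproject(depth, K):
    """depth: (H, W) metric depth; K: (3, 3) camera intrinsics -> (H*W, 3) points."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))                    # pixel coordinates
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)   # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                                   # unit-depth camera rays
    return rays * depth.reshape(-1, 1)                                # scale rays by metric depth
```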

Xuanchi Ren (@xuanchi13)'s Twitter Profile Photo

Running out of multi-view data for 3D reconstruction and generation? 🤠 We show how a camera-conditioned video model can be turned into a generative 3D (and dynamic!) Gaussian Splatting model—trained entirely through self-distillation, no real-world data needed. 🚀 Code &