Zhengqi Li (@zhengqi_li)'s Twitter Profile
Zhengqi Li

@zhengqi_li

Research Scientist @GoogleDeepMind. Previously Ph.D. @cornell_tech.

ID: 981313037754920966

Website: https://zhengqili.github.io/ · Joined: 03-04-2018 23:30:34

43 Tweets

1.1K Followers

208 Following

Aleksander Holynski (@holynski_)

Qianqian Wang's 🎉Best Student Paper🎉 is being presented at #ICCV2023 tomorrow (Friday)! ▶️"Tracking Everything Everywhere All At Once"◀️ w/ Yen-Yu Chang, Ruojin Cai, Zhengqi Li, Bharath Hariharan, Noah Snavely. Friday afternoon oral & poster! Come say hi! omnimotion.github.io

Aleksander Holynski (@holynski_)

We posted an updated version of Generative Image Dynamics to arXiv. The biggest change is to better contextualize our method with respect to prior work in image-space motion analysis, especially the great work of Abe Davis: arxiv.org/abs/2309.07906
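
Context for readers: the Davis-style image-space motion analysis referenced here represents small scene motion in the frequency domain, with a few complex Fourier coefficients per pixel; a displacement field for any time t is then synthesized by summing the corresponding sinusoids. A minimal sketch of that synthesis step, with illustrative shapes and names rather than the paper's actual code:

```python
import numpy as np

def synthesize_displacement(spectra, freqs, t):
    """Synthesize a per-pixel 2D displacement field at time t from complex
    motion spectra: D(t) = sum_k Re(S_k * exp(2*pi*i*f_k*t)).
    spectra: (K, H, W, 2) complex coefficients, freqs: (K,) in Hz."""
    phases = np.exp(2j * np.pi * freqs * t)                # (K,) complex
    return np.einsum('khwc,k->hwc', spectra, phases).real  # (H, W, 2)
```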

Yuxi Xiao (@yuxixiaohenry)

🚀 Excited to share our breakthrough paper "SpatialTracker: 3D Space Tracking for 2D Pixels", selected as a highlight paper at #CVPR2024! We lift dense pixel tracking into 3D space 👇 For more details, welcome to check out: henry123-boy.github.io/SpaTracker/
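
The lifting step boils down to standard pinhole unprojection: a tracked pixel (u, v) with estimated depth d maps to the camera-space point d · K⁻¹ [u, v, 1]ᵀ, after which tracking can proceed in 3D. A minimal sketch of that step (illustrative names, not SpatialTracker's actual code):

```python
import numpy as np

def lift_tracks_to_3d(tracks_uv, depths, K):
    """Unproject 2D pixel tracks to 3D camera-space points.
    tracks_uv: (N, 2) pixel coordinates, depths: (N,), K: (3, 3) intrinsics."""
    ones = np.ones((tracks_uv.shape[0], 1))
    homog = np.concatenate([tracks_uv, ones], axis=1).T  # (3, N) homogeneous
    rays = np.linalg.inv(K) @ homog                      # (3, N) camera rays
    return (rays * depths).T                             # (N, 3) depth-scaled
```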

Google AI (@googleai)

Congratulations to Zhengqi Li, Richard Tucker, Noah Snavely, and Aleksander Holynski. Their paper “Generative Image Dynamics” received the #CVPR2024 Best Paper Award. Read the paper: arxiv.org/pdf/2309.07906

Boyang Deng (@boyang_deng)

Thought about generating realistic 3D urban neighbourhoods from maps, dawn to dusk, rain or shine? Putting heavy snow on the streets of Barcelona? Or making Paris look like NYC? We built a Streetscapes system that does all these. See boyangdeng.com/streetscapes. (Showreel w/ 🔊 ↓)

MrNeRF (@janusch_patas)

Shape of Motion: 4D Reconstruction from a Single Video arxiv.org/abs/2407.13764 Project: shape-of-motion.github.io Code: github.com/vye16/shape-of… Great #3DGS-based reconstruction from monocular videos! Method ⬇️

Qianqian Wang (@qianqianwang5)

We present Shape of Motion, a system for 4D reconstruction from a casual video. It jointly reconstructs temporally persistent geometry and tracks long-range 3D motion. For more details, check out our webpage shape-of-motion.github.io and code github.com/vye16/shape-of…!
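
A core idea behind the long-range 3D motion tracking here is a low-dimensional motion representation: a compact set of shared rigid (SE(3)) motion bases, with each scene point moving by a weighted blend of them, linear-blend-skinning style. A rough sketch of that blend under those assumptions (shapes and names are illustrative):

```python
import numpy as np

def blend_rigid_bases(points, weights, R, t):
    """Move points by a weighted blend of B shared rigid motion bases
    (linear-blend-skinning style).
    points: (N, 3), weights: (N, B) summing to 1 per point,
    R: (B, 3, 3) basis rotations, t: (B, 3) basis translations."""
    moved = np.einsum('bij,nj->nbi', R, points) + t[None]  # (N, B, 3)
    return np.einsum('nb,nbi->ni', weights, moved)         # (N, 3)
```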

Baráth Dániel (@majti89)

🚀 Ready to take 3D reconstruction to the next level? Whether you're working on NeRF or 3DGS, our new method, GLOMAP, is here to impress! 🌟 It's faster and more accurate than COLMAP on several datasets. 🌐 Website: lpanaf.github.io/eccv24_glomap/ w/ Marc Pollefeys, Linfei Pan, J. Schönberger

Aleksander Holynski (@holynski_)

Check out our new paper that turns (text, sparse images, videos) => (dynamic 3D scenes)! I can't get over how cool the interactive demo is. Try it out for yourself on the project page: cat-4d.github.io

Jack Parker-Holder (@jparkerholder)

Introducing 🧞Genie 2 🧞 - our most capable large-scale foundation world model, which can generate a diverse array of consistent worlds, playable for up to a minute. We believe Genie 2 could unlock the next wave of capabilities for embodied agents 🧠.

Linyi Jin (@jin_linyi)

Introducing 👀Stereo4D👀 A method for mining 4D from internet stereo videos. It enables large-scale, high-quality, dynamic, *metric* 3D reconstructions, with camera poses and long-term 3D motion trajectories. We used Stereo4D to make a dataset of over 100k real-world 4D scenes.
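
The *metric* part comes from stereo geometry: with a known camera baseline B and focal length f, disparity d converts to absolute depth via Z = f · B / d, leaving no scale ambiguity. A toy version of that conversion (not the Stereo4D pipeline; names are illustrative):

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert stereo disparity (pixels) to metric depth (meters): Z = f*B/d.
    The known physical baseline is what fixes absolute scale."""
    d = np.clip(disparity_px, 1e-6, None)  # guard against division by zero
    return focal_px * baseline_m / d
```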

Google DeepMind (@googledeepmind)

Today, we’re announcing Veo 2: our state-of-the-art video generation model, which produces realistic, high-quality clips from text or image prompts. 🎥 We’re also releasing an improved version of our text-to-image model, Imagen 3, available to use in ImageFX.

Angjoo Kanazawa (@akanazawa)

Exciting news! MegaSAM code is out🔥 & the updated Shape of Motion results with MegaSAM are really impressive! A year ago I didn't think we could make any progress on these videos: shape-of-motion.github.io/results.html Huge congrats to everyone involved and the community 🎉

Zhengqi Li (@zhengqi_li)

Check out our new work, Self-Forcing! By addressing the training/inference mismatch, Self-Forcing enables real-time streaming video generation on a single GPU while achieving competitive or superior performance compared to SOTA video models that run significantly slower.
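
The training/inference mismatch in question is exposure bias: autoregressive video models are usually trained on ground-truth past frames (teacher forcing) but must condition on their own outputs at inference. The sketch below only illustrates the self-conditioned rollout pattern that closes this gap, with a hypothetical next-frame `model`; it is not the actual Self-Forcing training loop:

```python
import torch

def self_conditioned_rollout(model, first_frame, num_frames):
    """Generate frames conditioned on the model's OWN outputs (as at
    inference time) instead of ground-truth frames, so training sees
    the same distribution the model faces when deployed.
    first_frame: (B, C, H, W); model maps (B, T, C, H, W) -> (B, C, H, W)."""
    frames = [first_frame]
    for _ in range(num_frames - 1):
        context = torch.stack(frames, dim=1)  # (B, T, C, H, W) so far
        frames.append(model(context))         # hypothetical next-frame call
    return torch.stack(frames, dim=1)          # full rollout
```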

Linyi Jin (@jin_linyi)

Heading to Nashville for CVPR. Looking forward to discussing Stereo4D stereo4d.github.io and MegaSaM mega-sam.github.io. Feel free to reach out if you want to chat or connect! #CVPR2025

Aleksander Holynski (@holynski_)

This Saturday at CVPR, don't miss Oral Session 3A. Vision all-stars Qianqian Wang, Linyi Jin, Zhengqi Li are presenting MegaSaM, CUT3R, and Stereo4D. The posters are right after, and the whole crew will be there. It'll be fun. Drop by.