Michael Niemeyer (@mi_niemeyer)'s Twitter Profile
Michael Niemeyer

@mi_niemeyer

ML/AI Research Scientist at Google.

ID: 1134512764486070274

Link: https://m-niemeyer.github.io/ · Joined: 31-05-2019 17:31:36

289 Tweets

2.2K Followers

126 Following

Boyuan Chen (@boyuanchen0)'s Twitter Profile Photo

Announcing Diffusion Forcing Transformer (DFoT), our new video diffusion algorithm that generates ultra-long videos of 800+ frames. DFoT enables History Guidance, a simple add-on to any existing video diffusion models for a quality boost. Website: boyuan.space/history-guidan… (1/7)
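The tweet doesn't spell out how History Guidance works; DFoT's actual formulation is in the paper. As a rough illustration, the general recipe for guidance over a conditioning signal is classifier-free-guidance-style extrapolation, here applied to the history frames. A minimal sketch, assuming a hypothetical `denoiser(x, t, history)` video diffusion model:

```python
import torch

def history_guided_denoise(denoiser, noisy_frames, history, t, w=2.0):
    # CFG-style guidance over history conditioning (illustrative sketch;
    # `denoiser` is a hypothetical model, not the DFoT API).
    eps_cond = denoiser(noisy_frames, t, history=history)  # with history
    eps_uncond = denoiser(noisy_frames, t, history=None)   # history dropped
    # Extrapolate toward the history-conditioned prediction; w > 1
    # strengthens consistency with previously generated frames.
    return eps_uncond + w * (eps_cond - eps_uncond)

# Toy usage with a stand-in denoiser:
denoiser = lambda x, t, history=None: 0.1 * x + (0.0 if history is None else history.mean())
frames = torch.randn(1, 8, 3, 16, 16)  # (batch, frames, channels, H, W)
past = torch.randn(1, 4, 3, 16, 16)    # 4 previously generated frames
eps = history_guided_denoise(denoiser, frames, past, t=10)
```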

Jon Barron (@jon_barron)'s Twitter Profile Photo

I just pushed a new paper to arXiv. I realized that a lot of my previous work on robust losses and nerf-y things was dancing around something simpler: a slight tweak to the classic Box-Cox power transform that makes it much more useful and stable. It's this f(x, λ) here:
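The tweaked f(x, λ) is shown in an image attached to the tweet, which the text alone doesn't reproduce. For reference, the classic Box-Cox power transform it builds on is:

```latex
f(x, \lambda) =
\begin{cases}
  \dfrac{x^{\lambda} - 1}{\lambda} & \lambda \neq 0 \\[4pt]
  \log(x) & \lambda = 0
\end{cases}
```

The first branch limits to log(x) as λ → 0, so the transform is continuous in λ, but the classic form is only defined for x > 0 and is numerically delicate near λ = 0; instabilities of this kind are plausibly what the paper's tweak addresses, though the actual modification isn't in the tweet text.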

Angjoo Kanazawa (@akanazawa)'s Twitter Profile Photo

Exciting news! MegaSAM code is out🔥 & the updated Shape of Motion results with MegaSAM are really impressive! A year ago I didn't think we could make any progress on these videos: shape-of-motion.github.io/results.html Huge congrats to everyone involved and the community 🎉

Inception Labs (@inceptionailabs)'s Twitter Profile Photo

We are excited to introduce Mercury, the first commercial-grade diffusion large language model (dLLM)! dLLMs push the frontier of intelligence and speed with parallel, coarse-to-fine text generation.
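Mercury's internals aren't described beyond "parallel, coarse-to-fine." For intuition, here is a minimal sketch of the generic masked-diffusion decoding loop that dLLMs are built on: every position starts masked, and each step commits the most confident predictions in parallel (all names hypothetical, not Mercury's actual algorithm):

```python
import torch

def dllm_generate(model, seq_len, mask_id, steps=8):
    # Generic masked-diffusion text decoding (illustrative sketch).
    # `model(tokens)` is a hypothetical network returning logits of
    # shape (seq_len, vocab_size).
    tokens = torch.full((seq_len,), mask_id, dtype=torch.long)
    for step in range(steps):
        still_masked = tokens == mask_id
        if not still_masked.any():
            break
        probs = model(tokens).softmax(-1)
        conf, pred = probs.max(-1)                    # per-position confidence
        conf = conf.masked_fill(~still_masked, -1.0)  # only masked slots compete
        # Coarse-to-fine: commit a growing number of tokens per step,
        # in parallel, instead of one token at a time.
        k = max(1, int(still_masked.sum()) // (steps - step))
        idx = conf.topk(k).indices
        tokens[idx] = pred[idx]
    return tokens

# Toy usage with a random stand-in "model":
vocab, mask_id = 100, 99
model = lambda toks: torch.randn(toks.shape[0], vocab)
out = dllm_generate(model, seq_len=16, mask_id=mask_id)
```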

Luma AI (@lumalabsai)'s Twitter Profile Photo

Today, we release Inductive Moment Matching (IMM): a new pre-training paradigm breaking the algorithmic ceiling of diffusion models. Higher sample quality. 10x more efficient. Single-stage, single network, stable training. Read more: lumalabs.ai/news/imm

Matthias Niessner (@mattniessner)'s Twitter Profile Photo

📢📢Want to build 𝟑𝐃 𝐅𝐨𝐮𝐧𝐝𝐚𝐭𝐢𝐨𝐧 𝐌𝐨𝐝𝐞𝐥𝐬? 📢📢 ➡️We're looking for Diffusion/3D/ML/Infra engineers and scientists in Munich & London. Get in touch for details! #3D #GenAI #spatialintelligence #foundationmodels

Jensen (Jinghao) Zhou (@jensenzhoujh)'s Twitter Profile Photo

Hi there, 🎉 We are thrilled to introduce Stable Virtual Camera, a generalist diffusion model designed to address the exciting challenge of Novel View Synthesis (NVS). With just one or a few images, it allows you to create a smooth trajectory video from any viewpoint you desire.

Philipp Henzler (@philipphenzler)'s Twitter Profile Photo

From image(s) to 3D scenes in SECONDS! Bolt3D ⚡️ uses a latent diffusion transformer to generate both image and geometry latents from which we can directly decode 3D Gaussians - no optimization needed.
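The key claim here is that Gaussians are decoded directly from generated latents rather than optimized per scene. As a rough structural sketch of that decode step, using the usual 3DGS parametrization (`decoder` is a hypothetical network; Bolt3D's actual decoder layout isn't in the tweet):

```python
import torch
import torch.nn.functional as F

def latents_to_gaussians(geometry_latents, decoder):
    # Illustrative decode step: latent grid -> per-point 3D Gaussian
    # attributes, with the standard activations used for Gaussian splats.
    feats = decoder(geometry_latents)            # (N, 14) raw attributes
    means = feats[:, 0:3]                        # 3D centers
    scales = feats[:, 3:6].exp()                 # positive axis scales
    quats = F.normalize(feats[:, 6:10], dim=-1)  # unit quaternion rotations
    opacity = feats[:, 10:11].sigmoid()          # in (0, 1)
    colors = feats[:, 11:14].sigmoid()           # RGB
    return means, scales, quats, opacity, colors
```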

MrNeRF (@janusch_patas)'s Twitter Profile Photo

SplatVoxel: History-Aware Novel View Streaming without Temporal Training. Contributions: • We propose a hybrid Splat-Voxel feed-forward reconstruction framework that leverages historical information to enable novel view streaming, without relying on multi-view video datasets for…

Michael Niemeyer (@mi_niemeyer)'s Twitter Profile Photo

On my way back from 3DV in Singapore. What a blast! Thanks to all the organizers of this year's International Conference on 3D Vision as well as all the speakers and presenters, I had such a fantastic time!

Shubham Tulsiani (@shubhtuls)'s Twitter Profile Photo

Excited to share this dataset with registered aerial and ground images with dense geometry and correspondence supervision. Please see Khiem’s thread for some cool applications this enables!

Sherwin Bahmani (@sherwinbahmani)'s Twitter Profile Photo

📢Excited to be at #ICLR2025 for our paper: VD3D: Taming Large Video Diffusion Transformers for 3D Camera Control Poster: Thu 3-5:30 PM (#134) Website: snap-research.github.io/vd3d/ Code: github.com/snap-research/… Also check out our #CVPR2025 follow-up AC3D: snap-research.github.io/ac3d/

Songyou Peng (@songyoupeng)'s Twitter Profile Photo

📢 Unposed few-view 3D reconstruction has never been this easy, with SOTA pose estimation as a byproduct! Check out our #ICLR2025 ORAL paper (top 1.8%): NoPoSplat! Catch the amazing Botao Ye at: Oral: Thu 4:18 pm, Poster: Thu 10 am (#204). Website: noposplat.github.io

Google DeepMind (@googledeepmind)'s Twitter Profile Photo

Video, meet audio. 🎥🤝🔊 With Veo 3, our new state-of-the-art generative video model, you can add soundtracks to clips you make. Create talking characters, include sound effects, and more while developing videos in a range of cinematic styles. 🧵

SpAItial AI (@spaitial_ai)'s Twitter Profile Photo

🚀🚀🚀Announcing our $13M funding round to build the next generation of AI: 𝐒𝐩𝐚𝐭𝐢𝐚𝐥 𝐅𝐨𝐮𝐧𝐝𝐚𝐭𝐢𝐨𝐧 𝐌𝐨𝐝𝐞𝐥𝐬 that can generate entire 3D environments anchored in space & time. 🚀🚀🚀 Interested? Join our world-class team: 🌍 spaitial.ai #GenAI #3DAI

Michael Niemeyer (@mi_niemeyer)'s Twitter Profile Photo

Rendering large-scale scenes even on mobile! Make sure to check out LODGE, the internship project of rising computer-vision star Jonas. It was such a blast having you with us! 🎉

MrNeRF (@janusch_patas)'s Twitter Profile Photo

Is Google taking initial steps to enhance Street View? For some reason, Street View seems stuck in technology that feels outdated. I wonder if we'll see such improvements on the product side. Also, note how much better it performs in all aspects compared to Zip-NeRF in their…

Ben Mildenhall (@benmildenhall)'s Twitter Profile Photo

At World Labs, we built a new Gaussian splatting web renderer with all the bells and whistles we needed to make splats a first-class citizen of the incredible Three.js ecosystem. Today, we're open sourcing Forge under the MIT license.

Haofei Xu (@haofeixu)'s Twitter Profile Photo

Excited to present our #CVPR2025 paper DepthSplat next week! DepthSplat is a feed-forward model that achieves high-quality Gaussian reconstruction and view synthesis in just 0.6 seconds. Looking forward to great conversations at the conference!