Jonathan Lorraine (@jonlorraine9) 's Twitter Profile
Jonathan Lorraine

@jonlorraine9

Research scientist @NVIDIA | PhD in machine learning @UofT. Previously @Google / @MetaAI. Opinions are my own. 🤖 💻 ☕️

ID: 926859515986923520

Website: https://www.jonlorraine.com/ · Joined 04-11-2017 17:11:43

227 Tweets

6.6K Followers

6.6K Following

Gavriel State (@gavrielstate) 's Twitter Profile Photo

I'm super excited for the future of Real2Sim2Real workflows on top of our new VoMP pipeline. There's still work ahead to integrate it into Isaac Lab through solvers in Newton and get simulation running fast enough for RL, but this is an important start. Congrats, Rishit Dagli!

AshutoshShrivastava (@ai_for_success) 's Twitter Profile Photo

Turns out you can fine-tune open models like LLaMA and Gemma to generate 3D models. Expanded on NVIDIA's LLaMA Mesh research to train LLaMA 3.1 and Gemma 3 with large context windows and a diverse 3D dataset. Result? LLMs generating detailed furniture with actual design intent,
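
The key trick behind this line of work is representing meshes as plain text, so the language model simply learns to emit OBJ-style vertex and face lines. A minimal sketch of that serialization (the prompt template below is illustrative, not the actual training format):

```python
# Serialize a mesh as OBJ text so an LLM can be fine-tuned to emit geometry
# token-by-token (illustrative sketch; not the exact LLaMA Mesh format).

def mesh_to_obj(vertices, faces, decimals=2):
    """Serialize a mesh as OBJ text; low precision keeps sequences short."""
    lines = [f"v {x:.{decimals}f} {y:.{decimals}f} {z:.{decimals}f}"
             for x, y, z in vertices]
    # OBJ face indices are 1-based
    lines += ["f " + " ".join(str(i + 1) for i in face) for face in faces]
    return "\n".join(lines)

# A unit tetrahedron as a toy training target
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

example = {
    "prompt": "Create a 3D model of a simple tetrahedron.",
    "completion": mesh_to_obj(verts, faces),
}
print(example["completion"].splitlines()[0])  # prints "v 0.00 0.00 0.00"
```

Fine-tuning then amounts to standard next-token prediction on such prompt/completion pairs; a larger context window matters because detailed meshes serialize to long sequences.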

Or Litany (@orlitany) 's Twitter Profile Photo

Video motion and view control just became easy! Check out our new plug-and-play approach led by my brilliant students and collaborators Assaf Singer, Noam Rotstein, Amir Mann, and Ron Kimmel (Technion Israel). 🌐 Project page: time-to-move.github.io

bioRxiv Neuroscience (@biorxiv_neursci) 's Twitter Profile Photo

From Tasks to Topology: Dorsal and Ventral Streams Emerge in Optimized Neural Networks biorxiv.org/content/10.110… #biorxiv_neursci

Xuanchi Ren (@xuanchi13) 's Twitter Profile Photo

✨ ChronoEdit Paint Brush LoRA: Bring your scribbles to life!

Come try this fun demo hosted by AK and see how our new LoRA turns rough sketches into polished edits 👇

huggingface.co/spaces/akhaliq…
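
For context, a LoRA like this trains only a low-rank update on top of frozen pretrained weights. A minimal numpy sketch of the algebra (shapes and hyperparameters are illustrative, not ChronoEdit's actual architecture):

```python
import numpy as np

# LoRA in one line of algebra: keep the pretrained weight W frozen and learn
# a low-rank update, W_eff = W + (alpha / r) * B @ A, with A: (r, d_in),
# B: (d_out, r), and r << min(d_in, d_out).

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 4, 8.0

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                # trainable, zero init

def lora_forward(x):
    # B starts at zero, so at initialization the adapter is an exact no-op.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
print(np.allclose(lora_forward(x), W @ x))  # prints True at initialization
```

The zero init on B is the standard trick: training starts from the unmodified base model, and only the small A/B matrices (plus a scale alpha/r) are ever updated.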
Ruilong Li (@ruilong_li) 's Twitter Profile Photo

If you're interested in research at NVIDIA, you're also welcome to DM me for a chat! More about our team: research.nvidia.com/labs/sil/

Xindi Wu (@cindy_x_wu) 's Twitter Profile Photo

I’m at #NeurIPS2025 from 12.2–12.7! I’ve recently been working on data-centric video generation and VLMs/VLAs (MOTIVE, COMPACT, ICONS, etc.), and I’m generally interested in building more scalable and capable multimodal systems. DMs open for a coffee chat! 😃 Excited to meet old and new

Or Litany (@orlitany) 's Twitter Profile Photo

Crafting 3D assets just got a massive upgrade! Why guess with text when you can guide with geometry? 🎨 Extremely proud of this work led by the brilliant Elisabetta Fedele and Francis Engelmann spacecontrol3d.github.io

Nick Sharp (@nmwsharp) 's Twitter Profile Photo

In engineering and art, geometry is often represented not as meshes or points, but as domain-specific structured *grammars*. In this work led by Milin Kodnongbua and Jack Zhang, we investigated how to optimize these grammars ML-style with SGD. 4 simple rules make a huge difference!
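
Optimizing a grammar's continuous parameters "ML-style" can be sketched with a toy two-parameter procedural rule fit by gradient descent (finite differences stand in for differentiating through the grammar; this is not the paper's actual method or rules):

```python
import numpy as np

# Toy "grammar parameters optimized with SGD": a two-parameter procedural
# rule (box width w, height h) fit to a target area and aspect ratio by
# gradient descent. Finite differences stand in for autodiff here.

def loss(p):
    w, h = p
    area, aspect = w * h, w / h
    return (area - 6.0) ** 2 + (aspect - 1.5) ** 2  # target: a 3 x 2 box

def fd_grad(p, eps=1e-5):
    """Central-difference gradient of the loss."""
    g = np.zeros_like(p)
    for i in range(len(p)):
        d = np.zeros_like(p)
        d[i] = eps
        g[i] = (loss(p + d) - loss(p - d)) / (2 * eps)
    return g

p = np.array([1.0, 1.0])          # initial grammar parameters
for _ in range(2000):
    p -= 0.01 * fd_grad(p)        # plain gradient descent
print(p.round(2))                 # converges near (3.0, 2.0)
```

The real work differentiates through the grammar's structure itself, which is where the "4 simple rules" come in; this sketch only shows the outer optimization loop.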

Or Litany (@orlitany) 's Twitter Profile Photo

🚗📡Radar is the unsung hero of AV perception: widespread in cars, yet overlooked in simulation. Introducing RadarGen: Realistic radar synthesis from cameras using diffusion. Massive kudos to my fantastic team at Technion Israel and NVIDIA AI radargen.github.io

Xindi Wu (@cindy_x_wu) 's Twitter Profile Photo

New #NVIDIA paper: We introduce Motive, a motion-centric, gradient-based data attribution method that traces which training videos help or hurt video generation. By isolating temporal dynamics from static appearance, Motive identifies which training videos shape motion in video

Kwang Moo Yi (@kwangmoo_yi) 's Twitter Profile Photo

Wu et al., "Motion Attribution for Video Generation". From which data do video models learn different types of motion? Finding this via backtracked gradients enables data curation and fine-tuning of models toward "better" motion.
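
Gradient-based attribution of this kind can be sketched as a TracIn-style influence score: each training example is scored by the dot product of its loss gradient with a query example's loss gradient (illustrative toy on linear regression; not Motive's actual method):

```python
import numpy as np

# Score each training example by how aligned its loss gradient is with the
# gradient of a query example's loss. A positive score means a gradient step
# on that example would also reduce the query loss ("helpful"); a negative
# score means it would increase it ("harmful").

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))            # 8 training inputs, 3 features
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                         # training targets
w = np.zeros(3)                        # current model weights

def grad(x, target):
    """Gradient of the squared error 0.5 * (x @ w - target)**2 w.r.t. w."""
    return (x @ w - target) * x

x_q = rng.normal(size=3)               # query example we care about
y_q = x_q @ w_true

scores = np.array([grad(X[i], y[i]) @ grad(x_q, y_q) for i in range(len(X))])
ranking = np.argsort(-scores)          # most helpful training examples first
print(scores.round(2))
```

Motive's twist, per the abstract above, is doing this for video models while isolating motion (temporal dynamics) from static appearance; this sketch only shows the gradient dot-product backbone.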

naveen manwani (@naveenmanwani17) 's Twitter Profile Photo

🚨 Paper Alert 🚨
➡️ Paper Title: Motion Attribution for Video Generation
🌟 Few pointers from the paper
🎯 Despite the rapid progress of video generation models, the role of data in influencing motion is poorly understood.
🎯 Authors of this paper presented “Motive (MOTIon

Zan Gojcic (@zgojcic) 's Twitter Profile Photo

📢 We introduce PPISP, a physically plausible camera module for Gaussian Splatting that reduces floaters under varying appearance and predicts camera corrections for novel views such that they closely match what a real camera would output. #CV #3DGS

Arash Vahdat (@arashvahdat) 's Twitter Profile Photo

🚀 Diffusion too slow? Fix it in a few steps.

📢 Introducing NVIDIA FastGen — a plug-and-play research library for turning slow diffusion models into high-quality few-step generators.

⚡ What’s inside:
• Consistency & MeanFlow (CM, sCM, TCM, MeanFlow)
• Distribution Matching
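
The core idea behind these few-step methods: a consistency function jumps from any point on the diffusion ODE trajectory straight to its endpoint, replacing many solver steps with one. A toy 1-D case where the probability-flow ODE has a closed form (illustrative only; not FastGen's API):

```python
import numpy as np

# Toy probability-flow ODE for 1-D Gaussian data x0 ~ N(0, S0^2) under a
# variance-exploding diffusion: x_t has marginal N(0, S0^2 + t^2). The exact
# score is -x / (S0^2 + t^2), so the PF-ODE dx/dt = t * x / (S0^2 + t^2) is
# solvable in closed form -- exactly what a "consistency function" learns:
# jump from (x_T, T) straight to t = 0 in a single step.

S0 = 1.0  # data std (assumed for this toy)

def ode_velocity(x, t):
    return t * x / (S0**2 + t**2)

def teacher_sample(x_T, T, n_steps):
    """Many-step Euler integration of the PF-ODE from t=T down to t=0."""
    x, ts = x_T, np.linspace(T, 0.0, n_steps + 1)
    for t0, t1 in zip(ts[:-1], ts[1:]):
        x = x + (t1 - t0) * ode_velocity(x, t0)
    return x

def consistency_fn(x_t, t):
    """Closed-form one-step map to t=0 (what distillation would learn)."""
    return x_t * S0 / np.sqrt(S0**2 + t**2)

T = 8.0
x_T = np.array([3.0, -1.5, 0.7])
slow = teacher_sample(x_T, T, n_steps=2000)  # 2000 Euler steps
fast = consistency_fn(x_T, T)                # a single step
print(np.max(np.abs(slow - fast)))           # small discretization gap
```

In practice the consistency function has no closed form, so it is distilled from the pretrained teacher; distribution-matching methods instead match the few-step generator's output distribution directly.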
NVIDIA AI Developer (@nvidiaaidev) 's Twitter Profile Photo

How much research time would you save with one distillation playground? 💡

FastGen unifies trajectory and distribution based methods so you can benchmark, ablate, and share few-step diffusion recipes across teams.

🔗 Read: nvda.ws/3LARhFy
🔗 Codebase here:
NVIDIA AI Developer (@nvidiaaidev) 's Twitter Profile Photo

Most 3DGS segmentation tools either pre‑train per scene or lock errors into a feature field you can’t undo. ArtisanGS instead turns a few 2D masks into editable 3D object selections via Cutie tracking + black‑box splat aggregation, then lets you iteratively correct mistakes
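
The mask-to-splat aggregation step can be sketched as a simple voting scheme: project each splat into every view, check it against that view's 2D mask, and take a majority vote (a toy with orthographic projections; not ArtisanGS's actual aggregation):

```python
import numpy as np

# Toy "black-box" 2D-mask -> 3D-splat aggregation: project each Gaussian
# splat's center into every view, check whether it lands inside that view's
# object mask, and label the splat by majority vote. The projections here
# are simple orthographic drops of one coordinate -- illustrative only.

rng = np.random.default_rng(0)
splats = rng.uniform(0, 10, size=(200, 3))   # splat centers in 3D

# Ground-truth object: an axis-aligned box; each "view" masks its footprint
in_box = np.all((splats > 3) & (splats < 7), axis=1)

def mask_vote(p, drop_axis):
    """1 if the projection (dropping one axis) lies inside the object's
    2D footprint in that view, else 0."""
    uv = np.delete(p, drop_axis)
    return int(np.all((uv > 3) & (uv < 7)))

votes = np.array([[mask_vote(p, ax) for ax in range(3)] for p in splats])
selected = votes.sum(axis=1) >= 2            # majority over 3 views
print((selected == in_box).mean())           # prints 1.0 for this toy
```

The voting makes per-view mask errors recoverable: flipping one view's vote only changes a splat's label if the other views don't outvote it, which is what lets a user iteratively correct mistakes instead of baking them into a feature field.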