Kfir Aberman (@abermankfir)'s Twitter Profile
Kfir Aberman

@abermankfir

Research Scientist @Snap | previous: Research Scientist @Google | Personalized Generative AI | DreamBooth

ID: 1161673028272607233

Website: http://kfiraberman.github.io · Joined: 14-08-2019 16:16:47

240 Tweets

1.1K Followers

249 Following

Kfir Aberman (@abermankfir):

🎉 Today at Decart we're announcing Mirage, the first generative video model that runs in real time, to infinity. Mirage is a Live Stream Diffusion (LSD) model, a breakthrough that transforms any video into anything you can imagine, as it plays. 🎮 Try it: mirage.decart.ai

Daniel (@dandevai):

This is absolutely crazy. I just made a game in 30 seconds with Polyworld AI, then used Decart's new MirageLSD diffusion model to transform it into mind-bending immersive worlds *IN REAL TIME*. This technology is unbelievable.

Kfir Aberman (@abermankfir):

I love Vancouver, and even more when SIGGRAPH is here! 🇨🇦 I'll be around all week with Decart 🔜 TwitchCon! Come check out our latest interactive video technology, Mirage, in the exhibition hall, and ping me if you'd like to chat!

Rana Hanocka (@ranahanocka):

We’ve been building something we’re 𝑟𝑒𝑎𝑙𝑙𝑦 excited about – LL3M: LLM-powered agents that turn text into editable 3D assets. LL3M models shapes as interpretable Blender code, making geometry, appearance, and style easy to modify. 🔗 threedle.github.io/ll3m 1/
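For a concrete sense of what "shapes as interpretable Blender code" means, here is a hand-written illustrative snippet in Blender's Python API (bpy) in the spirit of what such an agent might emit for "a simple table"; it is not actual LL3M output, and all dimensions are arbitrary:

```python
# Illustrative only: hand-written bpy code in the spirit of LL3M's
# "shapes as editable Blender code" idea; not output from the paper.
# Run inside Blender's Python environment.
import bpy

# Tabletop: a flattened cube; the scale values are easy to tweak by hand.
bpy.ops.mesh.primitive_cube_add(location=(0.0, 0.0, 1.0))
top = bpy.context.active_object
top.scale = (1.0, 0.6, 0.05)

# Four legs: cylinders at the corners; the geometry stays human-readable,
# so changing the style (e.g., leg radius) is a one-line edit.
for x in (-0.9, 0.9):
    for y in (-0.5, 0.5):
        bpy.ops.mesh.primitive_cylinder_add(
            radius=0.05, depth=1.0, location=(x, y, 0.5)
        )
```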

Kfir Aberman (@abermankfir):

To truly achieve diversity in generation, instances should be produced jointly, with the process maintaining a global perspective over the evolving outputs and strategically intervening to foster diversity ✨

Gaurav Parmar (@gauravtparmar):

We added 🤗 demos for our group inference on FLUX.1 Schnell and FLUX.1 Kontext. Thanks to apolinario 🌐 for helping set this up so quickly! FLUX Schnell: huggingface.co/spaces/gparmar… FLUX Kontext: huggingface.co/spaces/gparmar… GitHub: github.com/GaParmar/group…
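As a rough, post-hoc sketch of the joint-generation idea from the earlier tweet (keep a global view over the whole batch and intervene to break up near-duplicates), here is a minimal Python example built on the diffusers FluxPipeline. It is not the method from the linked repo, which intervenes during sampling; the CLIP checkpoint and the 0.92 threshold are arbitrary assumptions:

```python
# Simplified stand-in for group inference: generate a batch jointly,
# take a global view via pairwise CLIP similarity, and resample
# near-duplicates. Not the linked repo's actual method.
import torch
from diffusers import FluxPipeline
from transformers import CLIPModel, CLIPProcessor

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to("cuda")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(images):
    inputs = proc(images=images, return_tensors="pt").to("cuda")
    with torch.no_grad():
        feats = clip.get_image_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

prompt = "a teapot"
images = pipe(prompt, num_images_per_prompt=8, num_inference_steps=4).images

# Global view: pairwise cosine similarity across the whole batch.
feats = embed(images)
sims = feats @ feats.T

# Intervention: resample any image too similar to an earlier one
# (a single pass for brevity; 0.92 is an arbitrary threshold).
for i in range(1, len(images)):
    if (sims[i, :i] > 0.92).any():
        gen = torch.Generator("cuda").manual_seed(1000 + i)
        images[i] = pipe(prompt, num_inference_steps=4, generator=gen).images[0]
```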

Kfir Aberman (@abermankfir):

We’re excited to launch Oasis 2.0! ✨ Instead of generating new worlds, transform any game world into any style, in real time 🎮

Kfir Aberman (@abermankfir):

Today at Decart 🔜 TwitchCon we're announcing Lucy Edit 🚀 The first open-weight model for text-guided video editing. Edit any scene with a simple prompt - swap attributes, change backgrounds and insert objects - while keeping identity & motion intact. Can't wait to see how researchers &

Linoy Tsaban🎗️ (@linoy_tsaban):

Decart 🔜 TwitchCon open-sourced LucyEdit > a video editing model based on Wan2.2 14B 🔥 > the incredible part is that it performs video-to-video manipulations at text-to-image speed! 💨 Exciting times ahead ✨

Decart (@decartai):

DecartXR: AI meets XR "welcome to the Oasis" 🥽 An open-source app that lets you transform your world in real time, just by speaking. A new way to create, build, and imagine — live. See below how to build with this tech (and a demo on Meta Quest or web) 👇🏻

Rishubh Parihar (@rishubhparihar):

“Make it red.” “No! More red!” “Ughh… slightly less red.” “Perfect!” ♥️ 🎚️Kontinuous Kontext adds slider-based control over edit strength to instruction-based image editing, enabling smooth, continuous transformations!
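As a hedged sketch of how a continuous edit-strength slider could be wired up (purely illustrative; `edit_model`, `encode_instruction`, and `generate` are hypothetical placeholders, not the Kontinuous Kontext API), one plausible realization interpolates between "no edit" and "full edit" conditioning:

```python
# Hypothetical sketch: continuous edit strength via interpolation in
# conditioning space. One plausible way to realize a slider, not the
# actual Kontinuous Kontext implementation.
import torch

def slider_edit(edit_model, image, instruction: str, strength: float):
    """Apply `instruction` to `image` with strength in [0, 1]."""
    assert 0.0 <= strength <= 1.0
    e_null = edit_model.encode_instruction("")           # identity edit
    e_full = edit_model.encode_instruction(instruction)  # full-strength edit
    e = torch.lerp(e_null, e_full, strength)             # linear interpolation
    return edit_model.generate(image, conditioning=e)

# "Make it red" at 60% strength, then nudged down to 50%:
# out = slider_edit(model, img, "make it red", 0.6)
# out = slider_edit(model, img, "make it red", 0.5)
```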

Shelly Golan (@shelly_golan1):

T2I models excel at realism, but true creativity means generating what doesn't exist yet. How do you prompt for something you can't describe? 🎨 We introduce VLM-Guided Adaptive Negative Prompting, an inference-time method that promotes creative image generation. 1/6

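From the description alone, one way such an adaptive loop could work is: periodically decode a preview, ask the VLM which familiar concept the draft is converging on, and add that concept to the negative prompt for the remaining steps. The sketch below is a guess at that loop with hypothetical placeholder interfaces (`t2i`, `vlm`), not the paper's code:

```python
# Hedged sketch of VLM-guided adaptive negative prompting; `t2i` and
# `vlm` are hypothetical interfaces, not from the paper.
def creative_generate(t2i, vlm, prompt: str, steps: int = 50, every: int = 10):
    negatives = []  # grows adaptively as sampling proceeds
    state = t2i.init(prompt)
    for t in range(steps):
        state = t2i.step(state, negative_prompt=", ".join(negatives))
        if t % every == every - 1:
            preview = t2i.decode_preview(state)
            # Ask the VLM which common concept the draft is collapsing onto,
            # then steer away from it on subsequent steps.
            concept = vlm.describe(preview, question="What common object is this?")
            negatives.append(concept)
    return t2i.decode(state)
```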
Kfir Aberman (@abermankfir):

Can’t wait to join this incredible lineup of speakers at the Personalization in Generative AI Workshop! See you all at #ICCV2025 next week 🙌✨

Gordon Guocheng Qian (@guocheng_qian):

🎉🎉🎉 Thrilled to announce that our paper ComposeMe is accepted to SIGGRAPH Asia 2025. ComposeMe is a human-centric generative model that enables disentangled control over multiple visual attributes — such as identity, hair, and garment — across multiple subjects.
