Kfir Aberman (@abermankfir)'s Twitter Profile
Kfir Aberman

@abermankfir

Research Scientist @Snap | previous: Research Scientist @Google | Personalized Generative AI | DreamBooth

ID: 1161673028272607233

Link: http://kfiraberman.github.io | Joined: 14-08-2019 16:16:47

240 Tweets

1.1K Followers

249 Following

Kfir Aberman (@abermankfir)'s Twitter Profile Photo

🎉 Today at Decart we're announcing Mirage, the first generative video model that runs in real time, to infinity. Mirage is a Live Stream Diffusion (LSD) model, a breakthrough that transforms any video into anything you can imagine as it plays. 🎮 Try it: mirage.decart.ai

Daniel (@dandevai)'s Twitter Profile Photo

This is absolutely crazy. I just made a game in 30 seconds with Polyworld AI, then used Decart's new MirageLSD diffusion model to transform it into mind-bending immersive worlds *IN REAL TIME*. This technology is unbelievable.

Kfir Aberman (@abermankfir)'s Twitter Profile Photo

I love Vancouver, and even more when SIGGRAPH is here! 🇨🇦 I'll be around all week with Decart 🔜 TwitchCon! Come check out our latest interactive video technology, Mirage, in the exhibition hall, and ping me if you'd like to chat!

Rana Hanocka (@ranahanocka)'s Twitter Profile Photo

We've been building something we're really excited about: LL3M, LLM-powered agents that turn text into editable 3D assets. LL3M models shapes as interpretable Blender code, making geometry, appearance, and style easy to modify. 🔗 threedle.github.io/ll3m 1/
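
To make "shapes as interpretable Blender code" concrete, here is a minimal, hypothetical sketch of the kind of Blender Python script such an agent might emit. The object names, dimensions, and material values are illustrative assumptions, not LL3M output, and the script is meant to run inside Blender's bundled Python.

    # Hypothetical "editable 3D asset as Blender code" (not LL3M output):
    # a simple mug built from primitives, with named constants that are easy to tweak.
    import bpy

    BODY_RADIUS = 0.4     # editing these constants edits the asset
    BODY_HEIGHT = 0.9
    HANDLE_RADIUS = 0.25

    # Mug body: a cylinder standing on the ground plane
    bpy.ops.mesh.primitive_cylinder_add(
        radius=BODY_RADIUS, depth=BODY_HEIGHT, location=(0, 0, BODY_HEIGHT / 2)
    )
    body = bpy.context.active_object
    body.name = "mug_body"

    # Handle: a torus attached to the side of the body
    bpy.ops.mesh.primitive_torus_add(
        major_radius=HANDLE_RADIUS, minor_radius=0.05,
        location=(BODY_RADIUS + 0.6 * HANDLE_RADIUS, 0, BODY_HEIGHT / 2),
        rotation=(1.5708, 0, 0),
    )
    handle = bpy.context.active_object
    handle.name = "mug_handle"

    # Appearance: one shared material, restyled by changing a single RGBA value
    mat = bpy.data.materials.new(name="mug_glaze")
    mat.diffuse_color = (0.1, 0.4, 0.8, 1.0)
    for obj in (body, handle):
        obj.data.materials.append(mat)

Because geometry and appearance live in plain, named code, an edit like "make the mug taller and green" reduces to changing BODY_HEIGHT and the RGBA tuple, which is the editability the tweet describes.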

Kfir Aberman (@abermankfir)'s Twitter Profile Photo

To truly achieve diversity in generation, instances should be produced jointly, with the process maintaining a global perspective over the evolving outputs and strategically intervening to foster diversity ✨

Gaurav Parmar (@gauravtparmar)'s Twitter Profile Photo

We added 🤗 demos for our group inference on FLUX.1 Schnell and FLUX.1 Kontext. Thanks apolinario 🌍 for helping set this up so quickly! FLUX Schnell: huggingface.co/spaces/gparmar… FLUX Kontext: huggingface.co/spaces/gparmar… GitHub: github.com/GaParmar/group…
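
For context, here is a minimal sketch of sampling a batch of candidates from the base FLUX.1 Schnell model with the diffusers FluxPipeline. This is the standard independent-sampling baseline rather than the group inference method demoed above (see the linked repo for that); the prompt, seeds, and hardware assumptions are illustrative.

    # Baseline: independently sampled candidates from FLUX.1 Schnell via diffusers.
    # Assumes a recent diffusers release with Flux support and a CUDA GPU with enough memory.
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
    )
    pipe.to("cuda")

    prompt = "a ceramic mug on a wooden desk, soft morning light"  # arbitrary example prompt
    images = [
        pipe(
            prompt,
            num_inference_steps=4,   # Schnell is distilled for few-step sampling
            guidance_scale=0.0,      # Schnell is typically run without classifier-free guidance
            generator=torch.Generator("cuda").manual_seed(seed),
        ).images[0]
        for seed in range(4)         # four independent draws; diversity is left to chance
    ]
    for i, img in enumerate(images):
        img.save(f"candidate_{i}.png")

Group inference, as the note above suggests, would instead treat these draws jointly and intervene across the batch so the candidates cover the output space rather than collapsing onto similar modes.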

Kfir Aberman (@abermankfir)'s Twitter Profile Photo

We're excited to launch Oasis 2.0! ✨ Instead of generating new worlds, transform any game world into any style, in real time 🎮

Decart (@decartai)'s Twitter Profile Photo

Speed kills quality? Only if you suck. Meet Lucy-14B, the fastest large image-to-video (I2V) model you've ever seen. Available now on fal.

Kfir Aberman (@abermankfir)'s Twitter Profile Photo

Today at Decart 🔜 TwitchCon we're announcing Lucy Edit 🚀, the first open-weight model for text-guided video editing. Edit any scene with a simple prompt - swap attributes, change backgrounds, and insert objects - while keeping identity & motion intact. Can't wait to see how researchers &

Linoy Tsaban 🎗️ (@linoy_tsaban)'s Twitter Profile Photo

Decart 🔜 TwitchCon open-sourced LucyEdit > a video editing model based on Wan2.2 14B 🔥 > the incredible part is that it performs video-to-video manipulations at text-to-image speed! 💨 Exciting times ahead ✨

Decart (@decartai)'s Twitter Profile Photo

DecartXR: AI meets XR. "Welcome to the Oasis" 🥽 An open-source app that lets you transform your world in real time, just by speaking. A new way to create, build, and imagine, live. See below how to build with this tech (and a demo on Meta Quest or web) 👇🏻

Rishubh Parihar (@rishubhparihar)'s Twitter Profile Photo

"Make it red." "No! More red!" "Ughh… slightly less red." "Perfect!" ♥️ 🎚️ Kontinuous Kontext adds slider-based control over edit strength to instruction-based image editing, enabling smooth, continuous transformations!

Shelly Golan (@shelly_golan1)'s Twitter Profile Photo

T2I models excel at realism, but true creativity means generating what doesn't exist yet. How do you prompt for something you can't describe? 🎨 We introduce VLM-Guided Adaptive Negative Prompting: an inference-time method that promotes creative image generation. 1/6
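
As a rough illustration of the general idea only (not the paper's actual algorithm), the sketch below shows one way an inference-time loop could use a VLM to adaptively grow a negative prompt: periodically preview the partially denoised image, ask the VLM which familiar concept it resembles, and add that concept to the negative prompt so sampling is pushed away from the typical and toward the novel. All helpers here are hypothetical placeholders.

    # Illustrative sketch of VLM-guided adaptive negative prompting (assumed scheme, not the paper's code).
    # The diffusion sampler and the VLM are stand-in placeholder functions.

    def denoise_step(latent, step, prompt, negative_prompt):
        """Placeholder: one denoising step of a text-to-image sampler."""
        return latent  # a real sampler would update the latent here

    def decode_preview(latent):
        """Placeholder: decode the current latent into a cheap preview image."""
        return latent

    def describe_with_vlm(image):
        """Placeholder: ask a vision-language model what familiar concept the image resembles."""
        return "a typical-looking concept"

    def creative_sample(prompt, num_steps=50, vlm_every=10):
        latent = None  # placeholder for the initial noise
        negative_terms = []
        for step in range(num_steps):
            latent = denoise_step(latent, step, prompt, ", ".join(negative_terms))
            # Every few steps, let the VLM name the familiar concept that is emerging
            # and steer away from it by appending it to the negative prompt.
            if step > 0 and step % vlm_every == 0:
                concept = describe_with_vlm(decode_preview(latent))
                if concept not in negative_terms:
                    negative_terms.append(concept)
        return latent, negative_terms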

Kfir Aberman (@abermankfir)'s Twitter Profile Photo

Can't wait to join this incredible lineup of speakers at the Personalization in Generative AI Workshop! See you all at #ICCV2025 next week 🙌✨

Gordon Guocheng Qian (@guocheng_qian)'s Twitter Profile Photo

🎉🎉🎉 Thrilled to announce that our paper ComposeMe has been accepted to SIGGRAPH Asia 2025. ComposeMe is a human-centric generative model that enables disentangled control over multiple visual attributes, such as identity, hair, and garment, across multiple subjects.
