Rinon Gal (@rinongal)'s Twitter Profile

Rinon Gal

@rinongal

ID: 1339700905600544769

Joined: 17-12-2020 22:36:16

208 Tweets

1.1K Followers

96 Following

Rinon Gal (@rinongal):

We've released the code for our #SIGGRAPHAsia2024 TurboEdit paper, where we edit images in 3 steps using SDXL-Turbo 🚀 turboedit-paper.github.io If you're a fan of the name, you can also check out the concurrent TurboEdit by Adobe (betterze.github.io/TurboEdit/) or even upload your own!

Yoad Tewel (@yoadtewel):

🚀 Excited to release the code and demo for ConsiStory, our #SIGGRAPH2024 paper! No fine-tuning needed — just fast, subject-consistent image generation! Check it out here 👇 Code: github.com/NVlabs/consist… Demo: build.nvidia.com/nvidia/consist…

Yael Vinker🎗 (@yvinker):

Excited to introduce SketchAgent!👩‍🎨 We leverage the prior of pretrained multimodal LLMs for language-driven, sequential sketch generation and human-agent collaborative sketching! ✨ Try our fun interface here: github.com/yael-vinker/Sk…

Or Patashnik (@opatashnik):

Ever wondered how a SINGLE token represents all subject regions in personalization? Many methods use this token in cross-attention, meaning all semantic parts share the same single attention value. We present Nested Attention, a mechanism that generates localized attention values

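The contrast described above can be sketched in a few lines of NumPy. This is a toy illustration of the idea, not the paper's actual Nested Attention implementation: the dimensions, the `subj_feats` expansion, and the inner softmax are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8            # embedding dimension (toy value)
n_queries = 16   # spatial locations in the image feature map
n_subj = 4       # hypothetical internal subject features

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

Q = rng.normal(size=(n_queries, d))        # image queries
subj_token = rng.normal(size=(d,))         # the single personalization token

# Vanilla cross-attention: every spatial query receives the SAME value
# vector from the subject token; only its scalar attention weight varies.
V_shared = rng.normal(size=(d,))           # the token's one value vector
weights = softmax(Q @ subj_token)          # one weight per query location
out_vanilla = np.outer(weights, V_shared)  # every row is a scaled copy of V_shared

# Localized values (sketch): the token expands into several subject
# features, and each query runs a second, "nested" attention over them,
# producing a query-dependent value vector instead of one shared value.
subj_feats = rng.normal(size=(n_subj, d))
inner = softmax(Q @ subj_feats.T)          # per-query mix over subject parts
V_local = inner @ subj_feats               # a different value vector per query
out_nested = weights[:, None] * V_local
```

In the vanilla case all output rows are parallel (a rank-1 matrix), which is exactly the "all semantic parts share the same single attention value" limitation; the nested variant breaks that tie.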
Joy Hsu (@joycjhsu):

Excited to bring back the 2nd Workshop on Visual Concepts at #CVPR2025, this time with a call for papers! We welcome submissions on the following topics. See our website for more info: sites.google.com/stanford.edu/w… Join us & a fantastic lineup of speakers in Tennessee!

Hila Chefer (@hila_chefer):

VideoJAM is our new framework for improved motion generation from AI at Meta. We show that video generators struggle with motion because the training objective favors appearance over dynamics. VideoJAM directly addresses this **without any extra data or scaling** 👇🧵

Rotem Shalev-Arkushin (@rotemsh3):

Excited to introduce our new work: ImageRAG 🖼️✨ rotem-shalev.github.io/ImageRAG We enhance off-the-shelf generative models with Retrieval-Augmented Generation (RAG) for unknown concept generation, using a VLM-based approach that’s easy to integrate with new & existing models! [1/3]

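The retrieval step behind an approach like ImageRAG can be sketched with plain embedding math: find the reference images most similar to a query embedding, then hand them to the generator alongside the prompt. Everything here is a hypothetical stand-in (the database, captions, and 64-dim embeddings are made up); the real system uses a VLM and an off-the-shelf generative model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical database of image embeddings (e.g. CLIP-like) with captions,
# L2-normalized so a dot product is cosine similarity.
db_embeddings = rng.normal(size=(100, 64))
db_embeddings /= np.linalg.norm(db_embeddings, axis=1, keepdims=True)
db_captions = [f"image_{i}" for i in range(100)]

def retrieve(query_emb, k=3):
    """Return the k most similar database entries by cosine similarity."""
    q = query_emb / np.linalg.norm(query_emb)
    sims = db_embeddings @ q
    top = np.argsort(-sims)[:k]
    return [(db_captions[i], float(sims[i])) for i in top]

# Embed the unknown concept, retrieve references, and (in the full system)
# condition the generative model on them together with the text prompt.
query = rng.normal(size=(64,))
refs = retrieve(query, k=3)
```

The point of the retrieval-augmented setup is that the generator itself stays frozen; only the conditioning inputs change, which is why it is easy to integrate with new and existing models.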
Linoy Tsaban🎗️ (@linoy_tsaban):

🔔just landed: IP Composer🎨
semantically mix & match visual concepts from images

❌ text prompts can't always capture visual nuances
❌ visual input based methods often need training / don't allow fine grained control over *which* concepts to extract from our input images

So👇

Daniel Garibi (@danielgaribi):

Excited to share that "TokenVerse: Versatile Multi-concept Personalization in Token Modulation Space" got accepted to SIGGRAPH 2025! It tackles disentangling complex visual concepts from as little as a single image and re-composing concepts across multiple images into a coherent

Yoad Tewel (@yoadtewel):

I'm going to present Add-it at #ICLR2025 tomorrow (Thursday) @ 3pm - poster #163! Project page: research.nvidia.com/labs/par/addit/ If you're around this week, feel free to DM me - happy to chat! Details below ⬇️🧵

Sara Dorfman (@sara__dorfman):

Excited to share that "IP-Composer: Semantic Composition of Visual Concepts" got accepted to #SIGGRAPH2025!🥳 We show how to combine visual concepts from multiple input images by projecting them into CLIP subspaces - no training, just neat embedding math✨ Really enjoyed working
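The "neat embedding math" can be sketched as follows: estimate a concept's subspace from the SVD of embeddings that vary only in that concept, then swap the base image's component inside that subspace for the concept image's component. This is a toy NumPy sketch under assumed shapes (32-dim embeddings, a top-5 subspace), not the paper's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 32  # toy embedding dimension (CLIP embeddings are larger)

# Hypothetical: embeddings of many images that differ only in one concept
# (say, "pattern") span that concept's directions. SVD of the centered set
# gives an orthonormal basis for the concept subspace.
pattern_set = rng.normal(size=(50, d))
centered = pattern_set - pattern_set.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
B = Vt[:5]                       # top-5 directions: the "pattern" subspace

def compose(base_emb, concept_emb, basis):
    """Replace base's component inside the concept subspace with concept's."""
    P = basis.T @ basis          # orthogonal projector onto the subspace
    return base_emb - base_emb @ P + concept_emb @ P

base = rng.normal(size=(d,))     # e.g. embedding of the structure image
concept = rng.normal(size=(d,))  # e.g. embedding of the pattern image
mixed = compose(base, concept, B)
```

Because `P` is an orthogonal projector, `mixed` agrees with `concept` inside the subspace and with `base` everywhere orthogonal to it, so no training is needed: composition is a single projection per concept.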

Pinar Yanardag (@pinguar):

ICCV decisions are out — if your paper didn’t make it, don’t worry! Submit your work to the P13N Workshop instead! Let’s push the frontier of personalized generative AI together!💡 #ICCV2025 #P13NWorkshop #Personalization More info: p13n-workshop.github.io

UriG (@uri_gadot):

Tired of manual #ComfyUI workflow design? While recent methods predict them, our new paper, FlowRL, introduces a Reinforcement Learning framework that learns to generate complex, novel workflows for you! paper [arxiv.org/abs/2505.21478]
