Sigal Raab (@sigal_raab)'s Twitter Profile
Sigal Raab

@sigal_raab

ID: 783555116590923777

Joined: 05-10-2016 06:31:12

32 Tweets

159 Followers

87 Following

Elica Le Bon الیکا‌ ل بن (@elicalebon)'s Twitter Profile Photo

I’m so disgusted by what I just witnessed. In Amsterdam, Israelis & Jews leaving a soccer match were beaten unconscious by mobs, thrown in the river, and forced to say “free Palestine.” This is the direct result of normalizing antisemitism post Oct. 7, where the most flagrant

Guy Tevet (@guytvt)'s Twitter Profile Photo

🚀 Meet DiP: our newest text-to-motion diffusion model!
✨ Ultra-fast generation
♾️ Creates endless, dynamic motions
🔄 Seamlessly switch prompts on the fly
Best of all, it's now available in the MDM codebase: github.com/GuyTevet/motio… [1/3]

HuMoGen - CVPR Workshop 2025 (@humogen11384)'s Twitter Profile Photo

We invite you to submit your Motion Generation papers to the HuMoGen #CVPR2025 workshop! The deadline is March 12. More details at humogen.github.io

Jonathan Fischoff (@jfischoff)'s Twitter Profile Photo

“Tight Inversion” uses an IP-Adapter during DDIM inversion to preserve the original image better when editing.

arxiv.org/abs/2502.20376
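
The tweet only hints at the mechanism, so here is a minimal, hedged sketch of plain DDIM inversion in which the noise predictor also receives an image embedding, standing in for the IP-Adapter conditioning described above. This is not the paper's code: `eps_model`, `text_embed`, and `image_embed` are hypothetical placeholders.

```python
# Hedged sketch of DDIM inversion with an extra image-conditioning input.
# `eps_model` stands in for a real UNet with IP-Adapter layers; the embeddings
# are placeholders, not the paper's actual interfaces.
import torch

def ddim_invert(x0, eps_model, alphas_cumprod, text_embed, image_embed):
    """Map a clean latent x0 back to a noise latent by running DDIM in reverse."""
    x = x0
    for t in range(len(alphas_cumprod) - 1):
        a_t, a_next = alphas_cumprod[t], alphas_cumprod[t + 1]
        # Noise prediction conditioned on the text prompt AND the image embedding
        # (the latter plays the role of the IP-Adapter signal in the tweet).
        eps = eps_model(x, t, text_embed, image_embed)
        # Current estimate of the clean latent implied by this noise prediction.
        x0_pred = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        # Deterministic DDIM update, stepped toward higher noise levels.
        x = a_next.sqrt() * x0_pred + (1 - a_next).sqrt() * eps
    return x  # approximate noise latent that regenerates x0 under the same conditioning


if __name__ == "__main__":
    # Toy run with a dummy predictor, just to show the call pattern.
    def dummy_eps(x, t, text_embed, image_embed):
        return torch.zeros_like(x)  # a real UNet with IP-Adapter layers goes here

    alphas_cumprod = torch.linspace(0.999, 0.01, 50)  # decreasing alpha-bar schedule
    latent = torch.randn(1, 4, 64, 64)
    inverted = ddim_invert(latent, dummy_eps, alphas_cumprod, None, None)
    print(inverted.shape)
```

Conditioning the predictor on the image itself during this loop is what the tweet describes as preserving the original image better when editing.
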
Daniel Cohen-Or (@danielcohenor1)'s Twitter Profile Photo

Vectorization into a neat SVG!🎨✨ 
Instead of generating a messy SVG (left), we produce a structured, compact representation (right) - enhancing usability for editing and modification. Accepted to #CVPR2025 !
HuMoGen - CVPR Workshop 2025 (@humogen11384)'s Twitter Profile Photo

📢 Deadline extended! 📢 You now have an extra week to submit! New deadline: March 19. Want to submit? Find all the details here: humogen.github.io 🚀

Elad Richardson (@eladrichardson)'s Twitter Profile Photo

Ever stared at a set of shapes and thought: 'These could be something… but what?'

Designed for visual ideation, PiT takes a set of concepts and interprets them as parts within a target domain, assembling them together while also sampling missing parts.

eladrich.github.io/PiT/
Linoy Tsaban🎗️ (@linoy_tsaban)'s Twitter Profile Photo

🔔just landed: IP Composer🎨
semantically mix & match visual concepts from images

❌ text prompts can't always capture visual nuances
❌ visual input based methods often need training / don't allow fine grained control over *which* concepts to extract from our input images

So👇
Daniel Garibi (@danielgaribi)'s Twitter Profile Photo

Excited to share that "TokenVerse: Versatile Multi-concept Personalization in Token Modulation Space" got accepted to SIGGRAPH 2025! It tackles disentangling complex visual concepts from as little as a single image and re-composing concepts across multiple images into a coherent

Sara Dorfman (@sara__dorfman)'s Twitter Profile Photo

Excited to share that "IP-Composer: Semantic Composition of Visual Concepts" got accepted to #SIGGRAPH2025!🥳 We show how to combine visual concepts from multiple input images by projecting them into CLIP subspaces - no training, just neat embedding math✨ Really enjoyed working
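
Since the tweet describes the method as "just neat embedding math" over CLIP subspaces, here is a hedged sketch of that arithmetic, not the authors' implementation: a concept subspace is fit with an SVD over embeddings that vary in one concept, and the reference embedding's component in that subspace is swapped for the concept image's. All tensors, shapes, and the SVD-based subspace construction are assumptions; real CLIP embeddings would replace the random placeholders.

```python
# Hedged sketch of composing visual concepts by projecting CLIP embeddings onto
# concept subspaces. Random tensors stand in for real CLIP image embeddings.
import torch

def concept_subspace(concept_embeds, rank=8):
    """Fit a low-rank subspace (via SVD) to embeddings that vary only in one concept."""
    centered = concept_embeds - concept_embeds.mean(dim=0, keepdim=True)
    # Rows of Vh span the directions along which the concept varies.
    _, _, Vh = torch.linalg.svd(centered, full_matrices=False)
    basis = Vh[:rank]                 # (rank, d) orthonormal rows
    return basis.T @ basis            # (d, d) projection matrix onto the subspace

def compose(ref_embed, concept_embed, P):
    """Replace the concept-subspace component of the reference with the concept image's."""
    return ref_embed - ref_embed @ P + concept_embed @ P

if __name__ == "__main__":
    d = 768                                   # CLIP embedding width (assumption)
    torch.manual_seed(0)
    concept_variations = torch.randn(32, d)   # embeddings of inputs varying one concept
    P = concept_subspace(concept_variations, rank=8)
    ref = torch.randn(d)                      # embedding of the image to keep
    src = torch.randn(d)                      # embedding of the image carrying the concept
    composed = compose(ref, src, P)
    print(composed.shape)
```

The composed embedding would then condition an image-prompted generator, which is one way to read "no training, just neat embedding math."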

Omer Dahary (@omerdahary)'s Twitter Profile Photo

Excited to share that our new work, Be Decisive, has been accepted to SIGGRAPH!
We improve multi-subject generation by extracting a layout directly from noise, resulting in more diverse and accurate compositions.
Website: omer11a.github.io/be-decisive/
Paper: arxiv.org/abs/2505.21488