Rotem Shalev-Arkushin (@rotemsh3)'s Twitter Profile
Rotem Shalev-Arkushin

@rotemsh3

CS PhD student @ Tel-Aviv University

ID: 1027503903351484417

Joined: 09-08-2018 10:36:35

12 Tweets

25 Followers

72 Following

Sigal Raab (@sigal_raab)'s Twitter Profile Photo

πŸ””πŸ””Thrilled to share #MoMo [SIGGRAPH Asia ➑️ Hong Kong 2024 πŸ₯³πŸŽ‰]: Exploring the attention space of #MotionDiffusionModels. Our training-free method enables cool applications like this motion transfer πŸ’πŸ’. monkeyseedocg.github.io

Guy Tevet (@guytvt)'s Twitter Profile Photo

πŸš€ Meet DiP: our newest text-to-motion diffusion model! ✨ Ultra-fast generation ♾️ Creates endless, dynamic motions πŸ”„ Seamlessly switch prompts on the fly Best of all, it's now available in the MDM codebase: github.com/GuyTevet/motio… [1/3]

Aharon Azulay (@aharonazulay)'s Twitter Profile Photo

How well do LLMs memorize obscure details from scientific papers? I created a benchmark for that! Full code, dataset, and data-creation method included. tl;dr: GPT-4.5 is a major jump in scientific-fact memorization. Thread below 👇

Sigal Raab (@sigal_raab)'s Twitter Profile Photo

πŸ””Excited to announce that #AnyTop has been accepted to #SIGGRAPH2025!πŸ₯³ βœ… A diffusion model that generates motion for arbitrary skeletons βœ… Using only a skeletal structure as input βœ… Learns semantic correspondences across diverse skeletons 🌐 Project: anytop2025.github.io/Anytop-page

Sara Dorfman (@sara__dorfman)'s Twitter Profile Photo

Excited to share that "IP-Composer: Semantic Composition of Visual Concepts" got accepted to #SIGGRAPH2025!πŸ₯³ We show how to combine visual concepts from multiple input images by projecting them into CLIP subspaces - no training, just neat embedding math✨ Really enjoyed working
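The tweet gives only the idea — project image embeddings into CLIP subspaces and recombine them, with no training involved. As a toy illustration of that kind of embedding math, here is a sketch with random vectors standing in for real CLIP embeddings; the function names, the projector recipe, and the composition rule are my assumptions, not IP-Composer's actual code.

```python
import numpy as np

def subspace_projector(basis: np.ndarray) -> np.ndarray:
    """Orthogonal projector onto the span of the basis columns."""
    return basis @ np.linalg.pinv(basis.T @ basis) @ basis.T

def compose(e_base: np.ndarray, e_ref: np.ndarray, P: np.ndarray) -> np.ndarray:
    """Swap e_base's component inside the concept subspace for e_ref's,
    leaving everything outside the subspace untouched."""
    return e_base - P @ e_base + P @ e_ref

# toy 8-D "CLIP" embeddings and a 2-D concept subspace
rng = np.random.default_rng(0)
P = subspace_projector(rng.standard_normal((8, 2)))
e_base, e_ref = rng.standard_normal(8), rng.standard_normal(8)
e_comp = compose(e_base, e_ref, P)
```

The appeal of this kind of construction is exactly what the tweet highlights: it is pure linear algebra at inference time, so no model weights are touched.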

Omer Dahary (@omerdahary)'s Twitter Profile Photo

Excited to share that our new work, Be Decisive, has been accepted to SIGGRAPH! We improve multi-subject generation by extracting a layout directly from noise, resulting in more diverse and accurate compositions. Website: omer11a.github.io/be-decisive/ Paper: arxiv.org/abs/2505.21488

Elad Richardson (@eladrichardson)'s Twitter Profile Photo

Really impressive results for human-object interaction. They use a two-phase process where they optimize the diffusion noise, instead of the motion itself, to get to sub-centimeter precision while staying on manifold 🧠 HOIDiNi - hoidini.github.io

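To make "optimize the diffusion noise, not the motion" concrete, here is a minimal sketch under strong simplifications: the frozen generator is just a fixed linear map, and plain gradient descent runs on the latent z rather than on the output x. All names are hypothetical, and the real HOIDiNi pipeline is a two-phase diffusion-noise optimization, not this toy — the sketch only shows why optimizing z keeps the result on the generator's output manifold.

```python
import numpy as np

# Toy stand-in for noise-space optimization: instead of editing the
# generated motion x directly, optimize the latent noise z that a frozen
# generator maps to x. Here the "generator" is a fixed linear map D, so
# every reachable x lies in D's column space (the "manifold").
rng = np.random.default_rng(1)
D = rng.standard_normal((16, 4))       # frozen generator: x = D @ z
target = rng.standard_normal(16)       # e.g. desired contact positions

z = rng.standard_normal(4)             # the noise being optimized
lr = 0.01
for _ in range(1000):
    x = D @ z                          # generate "motion" from noise
    grad_z = 2.0 * D.T @ (x - target)  # gradient of ||x - target||^2 w.r.t. z
    z -= lr * grad_z

x_final = D @ z                        # best on-manifold fit to the target
```

Directly optimizing x could match the target exactly but leave the column space; optimizing z cannot, which is the on-manifold guarantee the tweet refers to.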
Guy Tevet (@guytvt)'s Twitter Profile Photo

1/ Can we teach a motion model to "dance like a chicken"? Or better: can LoRA help motion diffusion models learn expressive, editable styles without forgetting how to move? Led by Haim Sawdayee and Chuan Guo, we explore this in our latest work. 🎥 haimsaw.github.io/LoRA-MDM/ 🧵👇
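LoRA itself is a standard technique: freeze the base weight W and learn a low-rank correction B @ A, so the base prior ("how to move") survives while the small adapter absorbs the style. A minimal numpy sketch of one such layer — class name, shapes, and initialization scale are illustrative, not taken from LoRA-MDM:

```python
import numpy as np

class LoRALinear:
    """Frozen weight W plus a trainable low-rank update alpha * B @ A.
    B is zero-initialized, so the layer initially reproduces the base
    model exactly; only the small A/B factors would be trained."""

    def __init__(self, W: np.ndarray, r: int = 4, alpha: float = 1.0, seed: int = 0):
        rng = np.random.default_rng(seed)
        d_out, d_in = W.shape
        self.W = W                                    # frozen base weight
        self.A = 0.01 * rng.standard_normal((r, d_in))
        self.B = np.zeros((d_out, r))                 # zero init: no-op at start
        self.alpha = alpha

    def __call__(self, x: np.ndarray) -> np.ndarray:
        return self.W @ x + self.alpha * (self.B @ (self.A @ x))
```

Because the update has rank r ≪ min(d_out, d_in), each style costs only r * (d_in + d_out) extra parameters and can be swapped or merged without touching W — which is what makes styles "editable" without catastrophic forgetting.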

Roi Bar-On (@roibar_on)'s Twitter Profile Photo

1/9 Excited to share EditP23! 🎨 Finally, a single tool for ALL your 3D editing needs: βœ… Pose & Geometry Changes βœ… Object Additions βœ… Global Style Transformations βœ… Local Modifications All driven by one simple 2D image edit. It's mask-free ✨ and works in seconds ⚑️. 🧡

Shelly Golan (@shelly_golan1)'s Twitter Profile Photo

T2I models excel at realism, but true creativity means generating what doesn't exist yet. How do you prompt for something you can't describe? 🎨 We introduce VLM-Guided Adaptive Negative Prompting: an inference-time method that promotes creative image generation. 1/6

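The thread names the method but not its mechanics. In common diffusion samplers, a negative prompt enters through classifier-free guidance; the sketch below shows that arithmetic on toy vectors, plus a hypothetical "adaptive" step in which a VLM's label for the partial image is appended to the negative list. Everything here — function names, guidance scales, the update rule — is my guess at how such a scheme could look, not the authors' implementation.

```python
import numpy as np

def guided_noise(eps_uncond, eps_cond, eps_neg, s_pos=7.5, s_neg=3.0):
    """Classifier-free guidance with an extra negative-prompt term:
    move toward the positive prompt and away from the negative one."""
    return (eps_uncond
            + s_pos * (eps_cond - eps_uncond)
            - s_neg * (eps_neg - eps_uncond))

def update_negatives(negatives, vlm_label):
    """Adaptive step: if a VLM says the partial image already resembles a
    known concept, add that concept to the negative list so later
    denoising steps are pushed away from it."""
    if vlm_label not in negatives:
        negatives.append(vlm_label)
    return negatives
```

The "adaptive" part is the interesting bit: because the negative list grows during sampling, the model is steered away from whatever familiar concept it is currently converging to, rather than from a fixed list chosen up front.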