Royi Rassin (@royirassin) 's Twitter Profile
Royi Rassin

@royirassin

PhD candidate @biunlp researching multimodality. Intern @GoogleAI

ID: 984846057409572869

Link: https://royi-rassin.netlify.app/ · Joined: 13-04-2018 17:29:32

361 Tweets

334 Followers

274 Following

Yonatan Bitton (@yonatanbitton) 's Twitter Profile Photo

Exciting to see Gemini 2.0 and Gemini-2.0-thinking taking on the Visual Riddles challenge! The leaderboard is heating up, with open-ended auto-rating accuracy currently around the mid-50s. Lots of room for improvement across all models!

Hila Chefer (@hila_chefer) 's Twitter Profile Photo

VideoJAM is our new framework for improved motion generation from AI at Meta. We show that video generators struggle with motion because the training objective favors appearance over dynamics. VideoJAM directly addresses this **without any extra data or scaling** 👇🧵

Vered Shwartz (@veredshwartz) 's Twitter Profile Photo

I'm excited to announce that my nonfiction book, "Lost in Automatic Translation: Navigating Life in English in the Age of Language Technologies", will be published this summer by Cambridge University Press. I can't wait to share it with you! 📖🤖 cambridge.org/core/books/los…

AK (@_akhaliq) 's Twitter Profile Photo

Top 3 papers submitted today on Hugging Face

1. SynthDetoxM: Modern LLMs are Few-Shot Parallel Detoxification Data Annotators

2. Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling

3. Exploring the Limit of Outcome Reward for Learning Mathematical
Shauli Ravfogel (@ravfogel) 's Twitter Profile Photo

Our paper "A Practical Method for Generating String Counterfactuals" has been accepted to the Findings of NAACL 2025! A joint work with Matan Avitan, (((ل()(ل() 'yoav))))👾 and Ryan Cotterell. We propose "Counterfactual Lens", a technique to explain interventions in natural language. (1/6)

Lital Binyamin (@litalby) 's Twitter Profile Photo

🎉 I'm happy to share that our paper, Make It Count, has been accepted to #CVPR2025! A huge thanks to my amazing collaborators - Yoad Tewel, Hilit Segev, Eran Hirsch, Royi Rassin, and Gal Chechik! 🔗 Paper page: make-it-count-paper.github.io Excited to share our key findings!

Royi Rassin (@royirassin) 's Twitter Profile Photo

Diffusion models are the current go-to for image generation, but they often fail miserably in generating an accurate count of objects. Our new #CVPR paper proposes a method on top of such models to *enforce* the correct number.

Hritik Bansal (@hbxnov) 's Twitter Profile Photo

Video generative models hold the promise of being general-purpose simulators of the physical world 🤖 How far are we from this goal❓

📢Excited to announce VideoPhy-2, the next edition in the series to test the physical likeness of the generated videos for real-world actions. 🧵
Eran Hirsch (@hirscheran) 's Twitter Profile Photo

🚨 Introducing LAQuer, accepted to #ACL2025 (main conf)!

LAQuer provides more granular attribution for LLM generations: users can just highlight any output fact (top), and get attribution for that input snippet (bottom). This reduces the amount of text the user has to read by 2
Luca Ambrogioni (@lucaamb) 's Twitter Profile Photo

1/2) It's finally out on arXiv: Feedback guidance of generative diffusion models!

We derived an adaptive guidance method from first principles that regulates the amount of guidance based on the model's current state.

Complex prompts are highly guided while simple ones are almost free
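The core idea - let the guidance scale respond to the sampler's current state instead of staying fixed - can be sketched in a few lines. This is a toy illustration, not the paper's derivation: the feedback rule here (scaling guidance by the size of the conditional/unconditional gap) is an assumption chosen for simplicity, and `adaptive_cfg`, `w_min`, and `w_max` are hypothetical names.

```python
import numpy as np

def adaptive_cfg(eps_cond, eps_uncond, w_min=1.0, w_max=7.5):
    """Classifier-free guidance with a state-dependent scale.

    Toy feedback rule (an assumption, not the paper's criterion):
    the larger the gap between conditional and unconditional
    predictions, the more guidance is applied, up to w_max.
    """
    gap = np.linalg.norm(eps_cond - eps_uncond)
    alpha = gap / (1.0 + gap)          # squash gap into [0, 1)
    w = w_min + (w_max - w_min) * alpha
    # standard CFG combination, with the per-step adaptive weight
    return eps_uncond + w * (eps_cond - eps_uncond), w
```

With this rule, "easy" states where the two predictions already agree (gap near zero) get the minimum weight, matching the intuition that simple prompts are "almost free".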
(((ل()(ل() 'yoav))))👾 (@yoavgo) 's Twitter Profile Photo

IRGC, no matter what you do, please do not attack the compute cluster at Bar-Ilan university. It is priceless and impossible to replace. We will be devastated if it will be destroyed. (it also has tons of super sensitive and irreplaceable military stuffs!!!!)

Aviya Maimon (@aviyamaimon) 's Twitter Profile Photo

🚨 New paper alert! 🚨 We propose an IQ Test for LLMs — a new way to evaluate models that goes beyond benchmarks and uncovers their core skills. Think: 🧠🤖 psychometrics for LLMs. 👇 (1/6)

Ron Mokady (@mokadyron) 's Twitter Profile Photo

Open-sourcing a model is not enough—it has to be accessible to be useful
That’s why I’m excited our first model is natively supported in diffusers (my favorite repo 🤗)
(Link below)

A small step, but important prep for what’s next 😃
Guy Dar (@guy_dar1) 's Twitter Profile Photo

A project based on the celebrated elegant vec2vec ("Harnessing the Universal Geometry")!

one can get competitive results *at a fraction* of the compute, and samples, *using linear methods*!

I use a cocktail of combinatorial matching algorithms, linear algebra, and some tricks >>
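For a sense of what "linear methods" can buy here: once candidate pairs are matched, the alignment between two embedding spaces has a closed-form orthogonal solution (the Procrustes problem, solved with one SVD). A minimal sketch, assuming synthetic data - the combinatorial matching step, which is the hard unsupervised part, is not shown:

```python
import numpy as np

def procrustes_align(X, Y):
    """Solve min_R ||X R - Y||_F over orthogonal R via one SVD.

    This is only the closed-form linear-alignment step; finding
    the matching between unpaired samples is a separate problem.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 16))                   # "model A" embeddings
Q, _ = np.linalg.qr(rng.standard_normal((16, 16)))   # hidden rotation
Y = X @ Q                                            # "model B" embeddings
R = procrustes_align(X, Y)                           # recovers Q
```

Because the solution is a single SVD rather than iterative adversarial training, this step costs a tiny fraction of the compute of a learned mapping.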
Guy Dar (@guy_dar1) 's Twitter Profile Photo

NEW PAPER
Scaling up vec2vec! 
vec2vec holds amazing promise for unsupervised alignment of embedding models without matched pairs of data.

But it is very costly! 
What can we do?!?! 

In this paper, I investigate a simple method to learn this alignment in just 10 minutes!!
Shauli Ravfogel (@ravfogel) 's Twitter Profile Photo

New NeurIPS paper! 🐣 Why do LMs represent concepts linearly? We focus on LMs' tendency to linearly separate true and false assertions, and provide a complete analysis of the truth circuit in a toy model. A joint work with Gilad Yehudai, Tal Linzen, Joan Bruna and Alberto Bietti.
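The phenomenon being studied - a single direction separating true from false assertions - is typically probed with a linear classifier on hidden states. A minimal sketch on synthetic data, assuming (purely for illustration) that truth shifts activations along one hidden direction; this is not the paper's toy model or circuit analysis:

```python
import numpy as np

def train_linear_probe(H, y, lr=0.1, steps=500):
    """Logistic-regression probe: does one direction separate
    'true' from 'false' representations? Pure-numpy GD."""
    w, b = np.zeros(H.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(H @ w + b)))   # sigmoid
        g = p - y                                 # dL/dlogits
        w -= lr * H.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# synthetic "assertion" representations: truth shifts activations
# along one hidden direction (an illustrative assumption)
rng = np.random.default_rng(1)
d, n = 32, 400
truth_dir = rng.standard_normal(d)
truth_dir /= np.linalg.norm(truth_dir)
y = rng.integers(0, 2, n)                         # 1 = true, 0 = false
H = rng.standard_normal((n, d)) + np.outer(2 * y - 1, truth_dir) * 2.0
w, b = train_linear_probe(H, y)
acc = ((H @ w + b > 0).astype(int) == y).mean()
```

When such a probe reaches high accuracy, the true/false distinction is linearly decodable - the empirical observation the paper's toy-model analysis sets out to explain.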
