Ron Eliav (@ron_eliav)'s Twitter Profile
Ron Eliav

@ron_eliav

Ph.D. student in NLP at @biunlp

ID: 1403103663690862592

Joined: 10-06-2021 21:36:09

18 Tweets

38 Followers

98 Following

Ayal Klein (@kleinay2):


Excited to share our work on QASem Parsing, a text-to-text modeling of QA-based semantics! 🤩

Preprint: arxiv.org/abs/2205.11413
Code: github.com/kleinay/QASem 

Eran Hirsch @EliavRon Valentina Pyatkin Avi Caciularu Ido Dagan
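
The repo above has the real interface; as a rough illustration of what "text-to-text modeling of QA-based semantics" means, the sketch below feeds a sentence with a marked predicate to a generic seq2seq model and decodes QA pairs. The checkpoint, predicate markers, and output linearization are illustrative assumptions, not QASem's actual format or API.

```python
# Minimal text-to-text QA-semantics sketch. "t5-small" is a stand-in
# checkpoint; QASem trains dedicated seq2seq models (see the repo).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Input: a sentence with a hypothetical <p>...</p> predicate marker.
# Output: QA pairs that encode the predicate's semantic arguments.
sentence = "parse: The court decided <p> to reject </p> the appeal ."
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# After fine-tuning, one would expect output along the lines of:
#   "who decided something? the court | what was rejected? the appeal"
```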
BIU NLP (@biunlp):


We are excited to share these papers by BIU NLP and collaborators accepted to #EMNLP2022! Come visit us next week at EMNLP 2022 for more information, or see the thread below.

cc (((λ()(λ() 'yoav))))👾 Reut Tsarfaty Ido Dagan
Ron Eliav (@ron_eliav):


🧠 New #ICLR2025 paper: "Explain Yourself, Briefly!"
We introduce Sufficient Subset Training (SST), a self-supervised method enabling neural networks to generate concise, faithful explanations as part of their predictions.
📄 Read more: arxiv.org/abs/2502.03391
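
As a hedged reading of the tweet, the core training signal is that the model's prediction on a small, self-selected subset of the input should match its prediction on the full input, so the subset doubles as a concise explanation. The architecture and loss weights below are illustrative assumptions, not the exact recipe from arxiv.org/abs/2502.03391.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SSTNet(nn.Module):
    """Toy network that emits a prediction plus a soft input subset."""
    def __init__(self, d_in=20, d_hid=64, n_cls=2):
        super().__init__()
        self.selector = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU(),
                                      nn.Linear(d_hid, d_in))
        self.classifier = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU(),
                                        nn.Linear(d_hid, n_cls))

    def forward(self, x):
        mask = torch.sigmoid(self.selector(x))          # soft subset in [0, 1]
        return self.classifier(x), self.classifier(x * mask), mask

model = SSTNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))

for _ in range(100):
    full_logits, subset_logits, mask = model(x)
    loss = (F.cross_entropy(full_logits, y)      # predict correctly
            + F.cross_entropy(subset_logits, y)  # subset alone suffices
            + 0.1 * mask.mean())                 # keep the subset small
    opt.zero_grad(); loss.backward(); opt.step()
```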
Eran Hirsch (@hirscheran):


🚨 Introducing LAQuer, accepted to #ACL2025 (main conf)!

LAQuer provides more granular attribution for LLM generations: users can just highlight any output fact (top) and get the supporting input snippet as attribution (bottom). This reduces the amount of text the user has to read by 2…
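
As a toy stand-in for the highlight-to-snippet interaction described here (not LAQuer's actual method), a nearest-neighbor lookup over source sentences already captures the interface: the user highlights an output fact, and the system returns the source sentence most likely to support it.

```python
# Illustrative baseline: attribute a highlighted output fact to the most
# similar source sentence by embedding similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

source_sentences = [
    "The company reported revenue of $3.2B in Q4.",
    "Its cloud division grew 40% year over year.",
    "The CEO announced a share buyback program.",
]
highlighted_fact = "Cloud revenue grew 40%."  # the span the user highlights

src_emb = model.encode(source_sentences, convert_to_tensor=True)
fact_emb = model.encode(highlighted_fact, convert_to_tensor=True)
best = util.cos_sim(fact_emb, src_emb).argmax().item()
print("Attributed to:", source_sentences[best])
```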
Elias Stengel-Eskin (on the faculty job market) (@eliaseskin):

🚨 CLATTER treats entailment as a reasoning process, guiding models to follow concrete steps (decomposition, attribution/entailment, and aggregation). CLATTER improves hallucination detection via NLI, with gains on ClaimVerify, LFQA, and TofuEval, especially on long-reasoning…

Eran Hirsch (@hirscheran):

🚨 New preprint! We propose a reasoning process for hallucination detection:
1️⃣ Decompose the output
2️⃣ Generate fine-grained attribution (if possible), and accordingly make local entailment decisions
3️⃣ Aggregate all to a final decision
We also introduce metrics to evaluate…
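
The two tweets above name the same three-step recipe, so one sketch covers both: decompose the output into atomic claims, make a local entailment decision per claim, and aggregate. The hardcoded claim list stands in for step 1 (in practice a model would produce it), and the any-claim-fails aggregation rule is an illustrative assumption, not the paper's exact procedure.

```python
from transformers import pipeline

# Off-the-shelf NLI model for the local entailment decisions (step 2).
nli = pipeline("text-classification", model="roberta-large-mnli")

source = ("The museum reopened in March after renovations "
          "and attendance doubled over the summer.")
claims = [                                   # stand-in for step 1
    "The museum reopened in March.",
    "Attendance doubled over the summer.",
    "Ticket prices were reduced.",           # unsupported: should be flagged
]

results = nli([{"text": source, "text_pair": c} for c in claims])
local = [r["label"] for r in results]

# Step 3: aggregate local decisions into one verdict.
hallucinated = any(lab != "ENTAILMENT" for lab in local)
print(list(zip(claims, local)), "=> hallucination detected:", hallucinated)
```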

Ori Ernst (@oriern1):


🧵 New paper at Findings of #ACL2025!
Not all documents are processed equally well. Some consistently yield poor results across many models.
But why? And can we predict that in advance?
Work with Steven Koniaev and Jackie Cheung, Mila - Institut québécois d'IA, McGill NLP
#NLProc
(1/n)
Arie Cattan (@ariecattan):


🚨 RAG is a popular approach, but what happens when the retrieved sources provide conflicting information? 🤔

We're excited to introduce our paper:
“DRAGged into CONFLICTS: Detecting and Addressing Conflicting Sources in Search-Augmented LLMs” 🚀

A thread 🧵👇
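
Before the paper's own method, a hedged baseline helps frame the task: run pairwise NLI over the retrieved passages and flag contradicting pairs before generation. This is only an illustration of conflict detection, not the approach proposed in the paper.

```python
from itertools import combinations
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

retrieved = [
    "The bridge was completed in 1932.",
    "Construction of the bridge finished in 1932.",
    "The bridge opened to traffic in 1957.",
]

# Flag every pair of retrieved sources that contradict each other.
conflicts = []
for (i, a), (j, b) in combinations(enumerate(retrieved), 2):
    if nli([{"text": a, "text_pair": b}])[0]["label"] == "CONTRADICTION":
        conflicts.append((i, j))

print("Conflicting source pairs:", conflicts)
```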
Shahaf Bassan (@shahaf_bassan):

🚨 New #ICML2025 paper! "Explaining, Fast and Slow": we generate explanations for neural networks efficiently and with provable guarantees, by pruning to a much smaller model and gradually expanding it so that the guarantees carry over.
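
A hedged sketch of the prune-then-expand strategy the tweet describes, on a toy linear model: find a candidate sufficient-subset explanation cheaply on a heavily pruned model, then restore weights in stages, extending the subset whenever the certificate breaks, so the final explanation is verified on the full model. The certification test below is an illustrative stand-in for the paper's formal guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=20)                          # toy linear classifier
x = np.sign(w) * rng.uniform(0.2, 1.0, size=20)  # features in [-1, 1]; w @ x > 0

def certified(subset, weights):
    # The positive prediction is guaranteed if, with the subset's features
    # fixed, the score stays positive however the rest vary within [-1, 1].
    free = [i for i in range(len(weights)) if i not in subset]
    worst = (sum(weights[i] * x[i] for i in subset)
             - sum(abs(weights[i]) for i in free))
    return worst > 0

def greedy_explanation(weights, start):
    subset = list(start)
    for i in sorted(range(len(weights)), key=lambda j: -abs(weights[j] * x[j])):
        if certified(subset, weights):
            break
        if i not in subset:
            subset.append(i)
    return subset

subset = []
for keep in (5, 10, 20):                 # prune hard, then gradually expand
    top = np.argsort(-np.abs(w))[:keep]
    pruned = np.zeros_like(w)
    pruned[top] = w[top]
    subset = greedy_explanation(pruned, subset)

print("certified subset:", sorted(subset),
      "| holds on full model:", certified(subset, w))
```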

Ori Malca (@orimalca):


🎉 I am excited to present our new paper!

Our paper improves personalization of text-to-image models by adding one special cleaning step on top of existing personalized models.
With just a single gradient update (~4 seconds on an NVIDIA H100 GPU) and a single image of the…
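
The tweet cuts off mid-sentence, but the stated recipe (one gradient update on a single reference image, applied on top of an already-personalized model) is concrete enough to sketch. The model and loss below are placeholders; this illustrates the "one step, one image" shape of the procedure, not the paper's objective or architecture.

```python
import torch
import torch.nn as nn

# Stand-in for a personalized generator; the real model would be a
# text-to-image network.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 3, 3, padding=1))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

reference = torch.rand(1, 3, 64, 64)  # the single image of the subject

# One gradient update: nudge the model toward reproducing the reference.
loss = nn.functional.mse_loss(model(reference), reference)
opt.zero_grad()
loss.backward()
opt.step()
print(f"cleaning step applied, loss={loss.item():.4f}")
```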