Sara Rosenthal (@seirasto)'s Twitter Profile
Sara Rosenthal

@seirasto

NLP Research Scientist at IBM.

ID: 16284064

Joined: 14-09-2008 15:54:09

109 Tweets

90 Followers

67 Following

Yotam Perlitz 👾 (@yotamperlitz)'s Twitter Profile Photo

✨ Developed a new benchmark or dataset for language models? ✨ Want the community to trust and adopt it? 🤔 So, demonstrate its validity by comparing it to established benchmarks! BenchBench makes it easy. Check it out: 👉 huggingface.co/spaces/ibm/ben…
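
In the spirit of that pitch, here is a minimal sketch of benchmark agreement testing, the idea BenchBench automates (this is not the BenchBench API): rank the same models on the new benchmark and on an established one, then check how strongly the two rankings correlate. The model names and scores below are made up.

```python
# Minimal sketch of benchmark agreement testing (not the BenchBench API):
# compare how the same models rank on a new benchmark vs. an established one.
from scipy.stats import kendalltau

# Hypothetical scores for the same five models (higher is better).
new_benchmark = {"model-a": 71.2, "model-b": 65.4, "model-c": 80.1, "model-d": 58.9, "model-e": 74.3}
established   = {"model-a": 68.0, "model-b": 61.5, "model-c": 77.8, "model-d": 55.2, "model-e": 73.9}

models = sorted(new_benchmark)
tau, p_value = kendalltau(
    [new_benchmark[m] for m in models],
    [established[m] for m in models],
)
print(f"Kendall tau = {tau:.2f} (p = {p_value:.3f})")  # high tau -> the two benchmarks rank models similarly
```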

Yikang Shen (@yikang_shen)'s Twitter Profile Photo

Granite 3.0 is our latest update to the IBM foundation models. The 8B and 2B models outperform strong competitors of similar size. The 1B and 3B MoE models use only 400M and 800M active parameters, targeting on-device use cases. Our technical report provides all the details you

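For context, a minimal sketch of trying one of these models locally with Hugging Face transformers; the checkpoint id below is an assumption (check the ibm-granite organization on the Hub for the exact names), and any size from the family can be swapped in.

```python
# Minimal sketch: running a Granite 3.0 instruct model with Hugging Face transformers.
# The checkpoint id is an assumption; pick whichever size fits your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3.0-2b-instruct"  # assumed hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "In one sentence, what is a mixture-of-experts model?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
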
Sara Rosenthal (@seirasto)'s Twitter Profile Photo

Heading to Miami to present ClapNQ: Cohesive Long-form Answers from Passages in Natural Questions for RAG systems (accepted at TACL) at EMNLP! Who else is going to be there? #EMNLP2024 #RAG #Miami arxiv.org/abs/2404.02103

Sara Rosenthal (@seirasto)'s Twitter Profile Photo

ClapNQ oral presentation today, Wednesday Nov 13 at 10:30 in Monroe. If you are attending EMNLP I hope to see you there! #EMNLP2024 #RAG GitHub: github.com/primeqa/clapnq Paper: arxiv.org/abs/2404.02103 Avi Sil Radu Florian S Roukos
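
For anyone who wants to look at the data before (or after) the talk, a minimal sketch of loading it with the datasets library; the Hub id and field layout below are assumptions, and the GitHub repo above has the authoritative download instructions.

```python
# Minimal sketch: inspecting ClapNQ with Hugging Face datasets.
# The hub id "PrimeQA/clapnq" is an assumption; see github.com/primeqa/clapnq
# for the authoritative data files and download instructions.
from datasets import load_dataset

clapnq = load_dataset("PrimeQA/clapnq")   # assumed hub id
print(clapnq)                             # available splits and example counts
first_split = list(clapnq.keys())[0]
print(clapnq[first_split][0])             # one question/passages/long-form-answer record (field names may differ)
```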

Sara Rosenthal (@seirasto)'s Twitter Profile Photo

Anyone else feel like Google Scholar is missing citations lately? I have a recent paper that has 8 citations on Semantic Scholar and only 3 on Google Scholar… and I have two papers that are cited in one paper, but only one has the citation 🤔

SemEval (@semevalworkshop)'s Twitter Profile Photo

SemEval 2025 will be held at ACL 2025 in Vienna! 2025.aclweb.org/program/worksh… The evaluation phase begins this week on January 10th. It's not too late to join! Check out our exciting tasks and participate! semeval.github.io/SemEval2025/ta… Sara Rosenthal Aiala Rosá Furman @marcos_zampieri debanjan

ACL 2025 (@aclmeeting)'s Twitter Profile Photo

📢 Have you been wondering what workshops are brewing in the *ACL venues in 2025? The list we've been waiting for is here. Feel free to tag or repost with the organisers. Below are the ACL 2025 workshops: #ACL2025NLP #NLProc #workshop 🧵

ACL 2025 (@aclmeeting)'s Twitter Profile Photo

(1) The International Conference on Spoken Language Translation (IWSLT 2025) (2) ClimateNLP: 2nd Workshop on Natural Language Processing meets Climate Change (3) BioNLP 2025 and Shared Tasks (BioNLP-ST 2025) (4) SemEval-2025 #NLProc #ACL2025NLP

Sara Rosenthal (@seirasto)'s Twitter Profile Photo

🌟Want to know more about our MTRAG benchmark? Check out the IBM blog highlighting our work! research.ibm.com/blog/conversat… IBM Research

Sara Rosenthal (@seirasto)'s Twitter Profile Photo

Excited about this collab! Come check out FeeL and help advance multilingual generation in your language! huggingface.co/spaces/feel-fl…

Sara Rosenthal (@seirasto)'s Twitter Profile Photo

Working on RAG? Come check out our InspectorRAGet DEMO presented by Siva Sankalp May 2 (Friday), 11-12:30 at Demo Session 8 in Hall 3! Looking forward to attending ACL in a few months! #NAACL2025 NAACL HLT 2025 paper: arxiv.org/abs/2404.17347 github: github.com/IBM/InspectorR…

Keshav Ramji ✈️ ICLR'25 (@keshavramji)'s Twitter Profile Photo

Excited to share our new paper on language model self-improvement! Paper: arxiv.org/abs/2505.16927 We introduce Self-Taught Principle Learning (STaPLe), a new approach for LMs to generate their own constitutions on-policy, by learning the principles that are most effective to
