Sahithya Ravi (@sahithya_ravi)'s Twitter Profile
Sahithya Ravi

@sahithya_ravi

PhDing @UBC_CS | @VectorInst | @UBC_NLP | Fall'24 @MetaAI (FAIR) | Summer'24 @MSFTResearch.

ID: 3871636214

Joined: 12-10-2015 17:08:31

128 Tweets

351 Followers

560 Following

EunJeong Hwang (@_eunjeong_hwang)'s Twitter Profile Photo

It was fun to present our latest work accepted at ACL Findings: A Graph per Persona: Reasoning about Subjective Natural Language Descriptions! We introduce a graph-based approach to capture implicit & explicit meanings of subjective knowledge aclanthology.org/2024.findings-… 1/4

EunJeong Hwang (@_eunjeong_hwang)'s Twitter Profile Photo

Check out our new dataset: SUMIE: A Synthetic Benchmark for Incremental Entity Summarization! Our dataset contains 200 entities, each paired with 7 detailed paragraphs, along with gold-standard attribute and value labels. arxiv.org/abs/2406.05079 1/4

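As a concrete, purely illustrative picture of the setup described above, the sketch below shows what one SUMIE-style record (an entity, its paragraphs, and gold attribute/value labels) and an incremental summarization loop might look like. The field names (entity, paragraphs, gold_attributes) and the update_summary placeholder are assumptions made for this example, not the dataset's actual schema or the paper's method.

```python
# Hypothetical sketch of a SUMIE-style record: one entity, 7 paragraphs,
# and gold attribute/value labels. Field names are illustrative only.
record = {
    "entity": "Example Corp",
    "paragraphs": [f"Paragraph {i} about Example Corp ..." for i in range(1, 8)],
    "gold_attributes": {"founded": "1999", "headquarters": "Vancouver"},
}

def update_summary(summary: dict, paragraph: str) -> dict:
    """Placeholder for a model call that would extract attribute/value pairs
    from `paragraph` and merge them into the running entity summary."""
    return summary  # a real system would prompt an LLM here

# Incremental entity summarization: process paragraphs one at a time and
# compare the evolving summary against the gold attribute/value labels.
summary: dict = {}
for paragraph in record["paragraphs"]:
    summary = update_summary(summary, paragraph)

matched = {k: v for k, v in record["gold_attributes"].items() if summary.get(k) == v}
print(f"Recovered {len(matched)}/{len(record['gold_attributes'])} gold attributes")
```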
Xindi Wu (@cindy_x_wu)'s Twitter Profile Photo

How good is the compositional generation capability of current Text-to-Image models? arxiv.org/abs/2408.14339

Introducing ConceptMix, our new benchmark that evaluates how well models can generate images that accurately combine multiple visual concepts, pushing beyond simple,
Vidhisha Balachandran (@vidhisha_b)'s Twitter Profile Photo

Some personal news: I joined MSR AI Frontiers a few months ago and am very excited to share my first work with this amazing team: Eureka, an open-source framework for evaluating and understanding large foundation models! 🌟 Read our full report: arxiv.org/abs/2409.10566

Vered Shwartz (@veredshwartz)'s Twitter Profile Photo

Thank you for featuring our work! This work is led by Maksym Taranukhin 🇨🇦🇺🇦, who is currently a Vector intern, and we look forward to improving this chatbot and putting it to use! 😃

UBC NLP Group (@ubc_nlp)'s Twitter Profile Photo

📢 Check out the accepted EMNLP 2024 papers from the UBC NLP group! We look forward to presenting them in Miami! 🌴 #EMNLP2024 #NLProc

Sneha Gathani (@snehagathani)'s Twitter Profile Photo

Excited to be presenting the ✨Groot system at #IEEEVIS2024 today! Groot enables users to edit, configure, and customize automated data insights to tailor them for their needs. Link: ieeevis.org/year/2024/prog…

Shramay Palta (@paltashramay)'s Twitter Profile Photo

📜Paper Alert!! 📜📷 #EMNLP2024 #NLProc

Check out our work, which will soon appear at EMNLP 2024 Findings! Work done with Sarah Wiegreffe (on the faculty job market!), Peter Rankel, Nishant Balepur, Marine Carpuat, and Rachel Rudinger at the UMD CLIP Lab.

Paper: arxiv.org/abs/2410.10854
Details in 🧵 (1/n)
Sumanth (@sumanthd17)'s Twitter Profile Photo

🚨 Paper Alert!! 🚨

#LLMs are evolving fast, but how do we evaluate their performance accurately across multiple languages? 🌍 Introducing CIA: Cross-lingual Auto Evaluation—a comprehensive framework designed to evaluate multilingual LLMs with HERCULE, a specialized evaluation
Juan Pino (@juanmiguelpino)'s Twitter Profile Photo

We just released new models and data, in particular Spirit LM, a new speech/text language model. Blog: ai.meta.com/blog/fair-news… Paper: arxiv.org/abs/2402.05755

Ananya Bhattacharjee (@ananyabha)'s Twitter Profile Photo

I'm on the job market this year! Looking for tenure-track academic and industry researcher positions, especially in human-AI interaction and digital interventions. Please share! I'm a PhD candidate in Computer Science at the University of Toronto, where I design interactive,

Shramay Palta (@paltashramay)'s Twitter Profile Photo

I will be in Miami next week, attending EMNLP 2024 to present this work. Looking forward to catching up with old friends and meeting new ones! Please DM me if you want to grab a coffee and chat! ☕️ #EMNLP2024 #NLProc

Nikita Moghe (@nikita_moghe)'s Twitter Profile Photo

I am not at #EMNLP2024, but please find: 1. Vilém Zouhar at EMNLP'24 presenting "Pitfalls and Outlooks in using COMET" at WMT, and 2. the thread on NL Standing Instructions (which was going to appear at CUSTOMNLP4U). #NLProc

EunJeong Hwang (@_eunjeong_hwang)'s Twitter Profile Photo

At #EMNLP I'll be presenting one paper on Nov 12th. We show that JSON significantly improves retrieval of relevant contexts from long documents under token limits. We introduce Chain-of-Key, leveraging LLM reasoning with structured representations. Feel free to stop by and say hi!

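For readers curious what "structured representation" could look like in practice, here is a minimal sketch of the general idea mentioned above: holding long-document context as JSON and selecting entries under a token budget. The JSON layout, the select_sections helper, and the whitespace-token budgeting are illustrative assumptions, not the actual Chain-of-Key implementation.

```python
import json

# Illustrative only: a long document held as a JSON-style mapping from
# section keys to text, rather than as one flat string.
document = {
    "introduction": "The report introduces the 2024 product line ...",
    "pricing": "The base model costs $499 and ships in Q3 ...",
    "warranty": "All devices carry a two-year limited warranty ...",
}

def select_sections(doc: dict, query_terms: list[str], token_budget: int) -> dict:
    """Keep sections whose key or text mentions a query term, stopping once a
    rough whitespace-token budget is exhausted."""
    selected, used = {}, 0
    for key, text in doc.items():
        if any(term.lower() in (key + " " + text).lower() for term in query_terms):
            cost = len(text.split())
            if used + cost > token_budget:
                break
            selected[key] = text
            used += cost
    return selected

# Only the matching section(s) would be passed to the LLM as context.
context = select_sections(document, ["pricing", "cost"], token_budget=200)
print(json.dumps(context, indent=2))
```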
P Shravan Nayak (@pshravannayak)'s Twitter Profile Photo

Excited to be at #EMNLP2024! 🎉 Join my talk on CulturalVQA, a benchmark testing Vision Language Models’ grasp of cultural understanding. Let’s see if VLMs truly capture global perspectives—chat after! 🗓️ Nov 12 (Tue), 4:15-4:30 PM 📍 Flagler Paper: arxiv.org/abs/2407.10920

Vered Shwartz (@veredshwartz)'s Twitter Profile Photo

Mehar Bhatia @ EMNLP’24 will present our benchmark for evaluating VLMs on their multicultural understanding (co-authors Sahithya Ravi @ EMNLP 2024 and EunJeong Hwang are also here). Today, Nov 12, poster session C (4 pm) @ Riverfront Hall. aclanthology.org/2024.emnlp-mai… 2/5

Mehar Bhatia (@bhatia_mehar)'s Twitter Profile Photo

I am at EMNLP'24 🏖️! Come check out our poster on the GlobalRG benchmark 🌍 for evaluating the multicultural understanding of VLMs. We release two challenging tasks. Test your VLMs on our benchmark 📊 Page: globalrg.github.io ⏲️ 4 pm, Poster Session C 📌 Riverfront Hall