Sepideh Mamooler (@smamooler) 's Twitter Profile
Sepideh Mamooler

@smamooler

PhD Candidate at @ICepfl 👩🏻‍💻 Working on multi-modal AI reasoning models in scientific domains | ex DeepMind Intern smamooler.github.io

ID: 1365380853740765184

Joined: 26-02-2021 19:19:02

109 Tweets

66 Followers

671 Following

Sepideh Mamooler (@smamooler) 's Twitter Profile Photo

Check out VinaBench, our new #CVPR2025 paper. We introduce a benchmark for faithful and consistent visual narratives. Paper: arxiv.org/abs/2503.20871 Project Page: silin159.github.io/Vina-Bench/

Sepideh Mamooler (@smamooler) 's Twitter Profile Photo

Couldn't attend NAACL HLT 2025 as I didn't get my visa in time 🤷‍♀️ My colleague Mete will present PICLe on my behalf. Feel free to reach out if you want to chat more! paper: arxiv.org/abs/2412.11923 code: github.com/sMamooler/PICLe data: huggingface.co/datasets/smamo…

Akhil Arora (@akhilarora.bsky.social) (@aroraakhilcs) 's Twitter Profile Photo

Thrilled to announce that our work “Fleet of Agents” has been accepted at the ICML Conference. On average, FoA boosts quality by ~5% while reducing costs to ~40% of SOTA baselines. Blog post after the NeurIPS deadline ;) Until then: Paper: arxiv.org/abs/2405.06691… Code: github.com/au-clan/FoA

Negar Foroutan ✈️ ICLR Singapore (@negarforoutan) 's Twitter Profile Photo

Got ideas for making LLMs more inclusive and culturally aware? 🌍✨ Submit to #MELT Workshop 2025! We’re all about multilingual, multicultural, and equitable LLMs; come share your work! 🧠💬 📅 Oct 10, 2025 | Co-located with Conference on Language Modeling 🔗 melt-workshop.github.io

Dan Vahdat (@danvahdat) 's Twitter Profile Photo

I've talked with some of the Iranians I know, but I still don't understand why so many of them think we can't change anything. When did we decide that we should just watch and let others build our future? The truth is that the world was built by people like you and me. We can reach anything that is possible

Dan Vahdat (@danvahdat) 's Twitter Profile Photo

To all my fellow countrymen living abroad, Please don’t lose faith. Stay strong. We are a people of peace… descendants of a peaceful nation: Iran. I know the situation is painful and complex. But don’t lose heart. As you wake each day, work harder. Strive for greater success.

Badr AlKhamissi (@bkhmsi) 's Twitter Profile Photo

🚨New Preprint!! Thrilled to share with you our latest work: “Mixture of Cognitive Reasoners”, a modular transformer architecture inspired by the brain’s functional networks: language, logic, social reasoning, and world knowledge. 1/ 🧵👇

Dan Vahdat (@danvahdat) 's Twitter Profile Photo

Anyone who has a skill, whether inside Iran or abroad, and thinks they can help, please sign up. We will be in touch with you. docs.google.com/forms/d/1PJAbc…

Damien Teney (@damienteney) 's Twitter Profile Photo

Both are equally important, and the execution takes much more time and skill. Good-looking ideas and theories are a dime a dozen (in AI at least), so they're usually worthless until you can confront them with reality through empirical experiments.

Silin Gao (@silin_gao) 's Twitter Profile Photo

NEW PAPER ALERT: Recent studies have shown that LLMs often lack robustness to distribution shifts in their reasoning. Our paper proposes a new method, AbstRaL, to augment LLMs’ reasoning robustness, by promoting their abstract thinking with granular reinforcement learning.

Sepideh Mamooler (@smamooler) 's Twitter Profile Photo

Excited to attend #ACL2025 in Vienna next week! I’ll be presenting our #NAACL2025 paper, PICLe, at the first workshop on Large Language Models and Structure Modeling (XLLM) on Friday. Come by our poster if you’re into NER and ICL with pseudo-annotation. arxiv.org/abs/2412.11923
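
For readers unfamiliar with the phrase, "ICL with pseudo-annotation" generally means prompting an LLM with demonstrations whose entity labels were themselves produced by a model rather than by human annotators. The sketch below is a generic illustration of that idea, not the actual PICLe pipeline; the demonstration sentences, entity types, and helper names are hypothetical.

```python
# Minimal sketch of in-context NER with pseudo-annotated demonstrations.
# Generic illustration only -- NOT the PICLe method; demos and labels are made up,
# and the labels are assumed to come from a prior zero-shot LLM pass, not humans.

def format_demo(sentence: str, entities: dict) -> str:
    """Render one (pseudo-)annotated example as a prompt block."""
    tagged = ", ".join(f"{text} -> {label}" for text, label in entities.items())
    return f"Sentence: {sentence}\nEntities: {tagged}"

# Pseudo-annotations: entity labels produced automatically, not gold annotations.
pseudo_demos = [
    ("Aspirin reduces the risk of myocardial infarction.",
     {"Aspirin": "CHEMICAL", "myocardial infarction": "DISEASE"}),
    ("Metformin is prescribed for type 2 diabetes.",
     {"Metformin": "CHEMICAL", "type 2 diabetes": "DISEASE"}),
]

def build_prompt(test_sentence: str) -> str:
    """Concatenate pseudo-labeled demonstrations with the unlabeled test sentence."""
    blocks = [format_demo(s, e) for s, e in pseudo_demos]
    blocks.append(f"Sentence: {test_sentence}\nEntities:")
    return "\n\n".join(blocks)

if __name__ == "__main__":
    # The resulting prompt would be sent to an LLM for completion;
    # here we only print it to show the in-context learning format.
    print(build_prompt("Ibuprofen can trigger asthma attacks in sensitive patients."))
```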

Sepideh Mamooler (@smamooler) 's Twitter Profile Photo

Also: not the usual #ACL topic, but if you're working on genetic AI models, I’d love to connect! I'm exploring the intersection of behavioral genomics and multi-modal AI for behavior understanding.

Negar Foroutan ✈️ ICLR Singapore (@negarforoutan) 's Twitter Profile Photo

🚀 Excited to be at #ACL2025NLP in Vienna! ACL 2025 #ACL2025 Let’s connect and chat about multilingual LLMs & long-context inference! 👋 I’ll be presenting:

Nikita Saxena (she/her) (@nikitasaxena02) 's Twitter Profile Photo

Heading to Conference on Language Modeling in Montreal? So is WiML! 🎉 We are organizing our first ever event at #CoLM2025 and we want you to choose the format! What excites you the most? Have a different idea? Let us know in the replies! 👇 RT to spread the word! ⏩

Antoine Bosselut (@abosselut) 's Twitter Profile Photo

The EPFL NLP lab is looking to hire a postdoctoral researcher on the topic of designing, training, and evaluating multilingual #LLMs. (link below) Come join our dynamic group in beautiful Lausanne!

Sara Beery (@sarameghanbeery) 's Twitter Profile Photo

The submission portal is now OPEN to take part in this interesting NeurIPS 2025 data curation competition!! This is the first open-data, open-models, open-source competition for data curation in vision-language reasoning -- learn more 👇 dcvlr-neurips.github.io

Ahmad Beirami @ ICLR 2025 (@abeirami) 's Twitter Profile Photo

The leaderboards lack the needed granularity! When we say model A is better than model B at text, that is too coarse. Even when we say A > B in instruction following, that is still too coarse. I am apparently a tail user as I often find that my ranking of the models for my