Mohsen Fayyaz (@mohsen_fayyaz)'s Twitter Profile
Mohsen Fayyaz

@mohsen_fayyaz

PhD Student @ UCLA

#NLProc #MachineLearning

ID: 984716101211828224

Link: http://mohsenfayyaz.github.io | Joined: 13-04-2018 08:53:08

17 Tweets

172 Followers

385 Following

TeIAS-Data Science (@datateias)'s Twitter Profile Photo

Student interns at TeIAS, M. Fayyaz and E. Aghazadeh, had a paper accepted at EMNLP 2021 (BlackboxNLP), with H. Mohebbi, A. Modarressi, and M. T. Pilehvar. The paper reports an in-depth analysis of the distribution of encoded knowledge across layers in BERToid representations.

Mohsen Fayyaz (@mohsen_fayyaz)'s Twitter Profile Photo

#EMNLP2021 Presenting "Not All Models Localize Linguistic Knowledge in the Same Place"

BlackboxNLP poster session 3 (Gather.town) Nov 11
14:45 Punta Cana (UTC-4)

📝Paper: aclanthology.org/2021.blackboxn…

with Ehsan Aghazadeh Ali Modarressi Hosein Mohebbi Taher Pilehvar

#BlackboxNLP
Mohsen Fayyaz (@mohsen_fayyaz)'s Twitter Profile Photo

#ACL2022 Excited to share our latest work on "Metaphors in Pre-Trained Language Models" at ACL 2022.

📝Paper: aclanthology.org/2022.acl-long.…
🎬Video: youtube.com/watch?v=UKWFZS…

with @aghazadeh_ehsan Yadollah Yaghoobzadeh
#acl2022nlp
Ali Modarressi @ ICLR2025 (@amodarressi)'s Twitter Profile Photo

🎉I'm delighted to announce that our (w/ Mohsen Fayyaz, Ehsan Aghazadeh, Yadollah Yaghoobzadeh & Taher Pilehvar) paper "DecompX: Explaining Transformers Decisions by Propagating Token Decomposition" has been accepted to the #ACL2023 🥳🥳 Preprint coming soon📄⏳

Ali Modarressi @ ICLR2025 (@amodarressi)'s Twitter Profile Photo

Check out our (w/ Mohsen Fayyaz, Ehsan Aghazadeh, Yadollah Yaghoobzadeh & Taher Pilehvar) #ACL2023 paper "DecompX: Explaining Transformers Decisions by Propagating Token Decomposition" 📽️ Video, 💻 Code, Demo & 📄 Paper: github.com/mohsenfayyaz/D… arxiv.org/abs/2306.02873 (🧵1/4)

Pan Lu (@lupantech)'s Twitter Profile Photo

Spent a fantastic weekend at Lake Arrowhead with the uclanlp group! ❄️🏔️⬆️ Enjoyed scenic drives, delicious meals, engaging conversations, and brainstorming sessions. Truly inspiring! 🚗🥘😋💬 🖼️🧠💡
Wenbo Hu@ICLR🇸🇬 (@gordonhu608)'s Twitter Profile Photo

🚀Introducing MRAG-Bench: How do Large Vision-Language Models utilize vision-centric multimodal knowledge? 🤔Previous multimodal knowledge QA benchmarks can mainly be solved by retrieving text knowledge.💥We focus on scenarios where retrieving knowledge from image corpus is more
Wenbo Hu@ICLR🇸🇬 (@gordonhu608)'s Twitter Profile Photo

Excited to share MRAG-Bench is accepted at #ICLR2025 🇸🇬. The image corpus is a rich source of information, and extracting knowledge from it can often be more advantageous than from a text corpus. We study how MLLMs can utilize vision-centric multimodal knowledge. More in our

Rohan Paul (@rohanpaul_ai)'s Twitter Profile Photo

Dense retrieval models in Retrieval Augmented Generation systems often prioritize superficial document features, overlooking actual answer relevance.

This inefficiency arises from biases in retrievers.

This paper addresses this by using controlled experiments based on Re-DocRED
Violet Peng (@violetnpeng)'s Twitter Profile Photo

Mohsen Fayyaz's recent work identified several critical issues with dense retrievers favoring spurious correlations over actual knowledge, which makes RAG particularly vulnerable to adversarial examples. Check out more details 👇