Dr. Pedro Rodriguez @par@sigmoid.social (@entilzhapr) 's Twitter Profile
Dr. Pedro Rodriguez @[email protected]

@entilzhapr

Researcher @MetaAI FAIR
CS PhD: UMD 🐢, @clipumd
UGrad: Berkeley CS 🐻
Natural Language Processing - QA+Retrieval LMs+Eval
He/Him 🏳️‍🌈

ID: 255401464

Website: https://www.pedro.ai · Joined: 21-02-2011 08:38:59

1.1K Tweets

707 Followers

584 Following

Adina Williams (@adinamwilliams) 's Twitter Profile Photo

✨ A leaderboard & competition for 🧑‍💻 adversarial QA: Jordan Boyd-Graber, Yoo Yeon Sung (UMD CLIP)
✨ Creating adversarial examples for retrieval systems: Danqi Chen, Zexuan Zhong, A. Wettig (Alexander Stewart)
✨ Towards massively multilingual visually grounded reasoning data: Desmond Elliott (UCPH)

Dr. Pedro Rodriguez @par@sigmoid.social (@entilzhapr) 's Twitter Profile Photo

Small nit on SoftConf for eaclmeeting, it doesn't seem like you can (1) make a placeholder submission to prepare it early (yes, early, how novel!), and (2) then add additional authors, i.e., those whose softconf username I didn't have on hand when making the placeholder.

Dr. Pedro Rodriguez @par@sigmoid.social (@entilzhapr) 's Twitter Profile Photo

Why is the author response period so short (5 days) for *ACL (vs 2 wks)? I liked another conf w/ a response period of ~3-4 weeks, giving time to draft a response/update the paper/dialog. Also, the shortness makes it hard to accommodate un-moveable events (e.g., EMNLP, which I'm not at in any case). #nlproc

Dr. Pedro Rodriguez @par@sigmoid.social (@entilzhapr) 's Twitter Profile Photo

I've been doing video editing for my violin performances (pedro.ai/violin) and started w/ DaVinci Resolve (blackmagicdesign.com/products/davin…). It's a huge usability/power upgrade, and I strongly recommend it over iMovie/Premiere for #nlp #nlproc conference videos.

Dr. Pedro Rodriguez @par@sigmoid.social (@entilzhapr) 's Twitter Profile Photo

I'm working on polishing bib entries for camera-ready. Hats off to the ACL Anthology: updating ACL papers is significantly easier than papers from other areas. Search is good, bib entries are accurate and have DOIs, and papers have easily accessed main pages (not just PDFs). ❤️ ACL Anth.

Jordan Boyd-Graber (@boydgraber) 's Twitter Profile Photo

Whoo hoo! Go Buffs! #acl2023nlp Martha Palmer receives the ACL Lifetime Achievement Award. Well deserved. Very proud to have Martha as a friend and mentor.

Dr. Pedro Rodriguez @par@sigmoid.social (@entilzhapr) 's Twitter Profile Photo

It was great going to ACL and seeing many old friends! With my less-than-two-month-old camera and photography skills, I took photos of the award plenaries that you can see here! Congrats to all the winners! #ACL2023NLP adobe.ly/3pJ2GYJ

Srini Iyer (@sriniiyer88) 's Twitter Profile Photo

New paper! How to train LLMs to effectively answer questions on new documents? Introducing *pre-instruction-tuning* - instruction-tuning *before* continued pre-training — significantly more effective than traditional instruction-tuning after PT. arxiv.org/abs/2402.12847

AI at Meta (@aiatmeta) 's Twitter Profile Photo

Newly published work from FAIR, Chameleon: Mixed-Modal Early-Fusion Foundation Models. This research presents a family of early-fusion token-based mixed-modal models capable of understanding & generating images & text in any arbitrary sequence. Paper ➡️ go.fb.me/7rb19n

Armen Aghajanyan (@armenagha) 's Twitter Profile Photo

I’m excited to announce our latest paper, introducing a family of early-fusion, token-in/token-out (gpt4o….) models capable of interleaved text and image understanding and generation. arxiv.org/abs/2405.09818

Chunting Zhou (@violet_zct) 's Twitter Profile Photo

🚀 Excited to introduce Chameleon, our work in mixed-modality early-fusion foundation models from last year! 🦎 Capable of understanding and generating text and images in any sequence. Check out our paper to learn more about its SOTA performance and versatile capabilities!

AI at Meta (@aiatmeta) 's Twitter Profile Photo

Today is a good day for open science. As part of our continued commitment to the growth and development of an open ecosystem, today at Meta FAIR we’re announcing four new publicly available AI models and additional research artifacts to inspire innovation in the community and