Scott Yih
@scottyih
Research Scientist at Facebook AI Research (FAIR)
ID: 52565429
http://scottyih.org
Joined: 30-06-2009 23:53:20
149 Tweets
1.1K Followers
787 Following
Excited to release our work from last year showcasing a stable training recipe for fully token-based multi-modal early-fusion auto-regressive models! arxiv.org/abs/2405.09818 Huge shout out to Armen Aghajanyan, Ramakanth, Luke Zettlemoyer, Gargi Ghosh, and other co-authors. (1/n)
Super excited to introduce HippoRAG, the method I enjoyed developing the most in 2024. It's led by my amazing student Bernal Jiménez, joint with Yiheng Shu, Yu Gu, and Michi Yasunaga. Bernal's thread gives a good technical account, so I'll just share some personal thoughts.
🚀💡We're hiring interns for 2025 at FAIR @ AI at Meta Work on cutting-edge projects: social reasoning, alignment, interaction, multi-agent communication & more with text/multimodal LLMs. Apply now! 🔗metacareers.com/jobs/119904986…
🔍 How do we teach an LLM to 𝘮𝘢𝘴𝘵𝘦𝘳 a body of knowledge? In new work with AI at Meta, we propose Active Reading 📙: a way for models to teach themselves new things by self-studying their training data. Results: * 𝟔𝟔% on SimpleQA w/ an 8B model by studying Wikipedia