
ML Alignment & Theory Scholars
@matsprogram
MATS empowers researchers to advance AI safety
ID: 1721734429041971200
https://matsprogram.org 07-11-2023 03:44:16
48 Tweets
1.1K Followers
129 Following

MATS Research Winter 2024-25 mentors include researchers from Anthropic, Google DeepMind, @AISafetyInst, redwoodresearch.org, CNAS, Center for Human-Compatible AI, Algorithmic Alignment Group, FAR.AI, Center for AI Safety, @apolloaisafety, Krueger AI Safety Lab, MIRI, and more! Apply by Oct 6. matsprogram.org/mentors

MATS Winter 2024-25 applications close Oct 6! Come and kick-start your AI safety research career. Mentors include Owain Evans, Buck Shlegeris, Evan Hubinger, Cas (Stephen Casper), and more! matsprogram.org

I’m taking applications for collaborators via ML Alignment & Theory Scholars! It’s a great way for new or experienced researchers outside AI safety research labs to work with me/others in these groups: Neel Nanda, Evan Hubinger, mrinank 🍂, Nina, Fabien Roger, Rylan Schaeffer, ...🧵

MATS Research is holding application office hours on Fri Sep 27, at 11 am and 6 pm PT. We will discuss how to apply to MATS (due Oct 6!) and answer your Qs. Register here: airtable.com/appnMboxg76F1Q…

MATS Research Alumni Impact Analysis published! 78% of alumni are still working on AI alignment/control, and 7% are working on AI capabilities. 68% have published alignment research. lesswrong.com/posts/jeBkx6ag…

ML Alignment & Theory Scholars Summer 2025 applications close Apr 18! Come help advance the fields of AI alignment, security, and governance with mentors including Neel Nanda, Ethan Perez, Owain Evans, Evan Hubinger, Buck Shlegeris, Dawn Song, David Krueger, Richard Ngo, and more!

Incredible work by 3x MATS Research alumni and a great example of applied Mech Interp beating black-box baselines and making significant progress on critical real-world problems: