
Daniel Filan
@dfrsrchtwts
Want to usher in an era of human-friendly superintelligence, don't know how.
Podcast: axrp.net
Apply to MATS: matsprogram.org/apply
ID: 1276310243123720192
26-06-2020 00:24:21
931 Tweets
1.1K Followers
159 Following

Katja Grace 🔍
@katjagrace
Thinking about whether AI will destroy the world at aiimpacts.org. DM or email for media requests. Feedback: admonymous.co/googolplex
Anders Sandberg
@anderssandberg
Academic jack-of-all-trades.
Jack Clark
@jackclarksf
@AnthropicAI, ONEAI OECD, co-chair @indexingai, writer @ importai.net Past: @openai, @business @theregister. Neural nets, distributed systems, weird futures
Evan Hubinger
@evanhub
Head of Alignment Stress-Testing @AnthropicAI. Opinions my own. Previously: MIRI, OpenAI, Google, Yelp, Ripple. (he/him/his)
Catherine Olsson
@catherineols
Hanging out with Claude, improving its behavior, and building tools to support that @AnthropicAI 😁 prev: @open_phil @googlebrain @openai (@microcovid)
Lewis Hammond
@lrhammond
Research Director @coop_ai / DPhil Candidate @CompSciOxford / Affiliate @GovAI_ / Fellow @TheWilsonCenter
Sam Bowman
@sleepinyourhat
AI alignment + LLMs at Anthropic. On leave from NYU. Views not employers'. No relation to @s8mb. I think you should join @givingwhatwecan.
jessicat
@jessi_cata
Preserving episodic memory and logic through ontology shifts. Integrating cyborg layers. Learning the place of knowing in the all. Disjuncting on unknowns.
Benjamin Hilton
@benjamin_hilton
Semi-informed about economics, physics and governments. views my own
David Krueger
@davidskrueger
AI professor. Deep Learning, AI alignment, ethics, policy, & safety. Formerly Cambridge, Mila, Oxford, DeepMind, ElementAI, UK AISI. AI is a really big deal.
Jacob Steinhardt
@jacobsteinhardt
Assistant Professor of Statistics and EECS, UC Berkeley // Co-founder and CEO, @TransluceAI
nostalgebraist
@nostalgebraist

Shreyas Kapur
@shreyaskapur
PhD student @berkeley_ai. Prev. undergrad @MIT, intern @Waymo @GoogleDeepMind
Miles Brundage
@miles_brundage
Independent AI policy researcher, wife guy in training, fan of cute animals and sci-fi. I have a Substack.
Robert Long
@rgblong
executive director of @eleosai
Joe Carlsmith
@jkcarlsmith
Philosophy, futurism, AI. Opinions my own.
Haydn Belfield
@haydnbelfield
Research Scientist (Frontier Planning) at @GoogleDeepMind. Research Affiliate @Cambridge_Uni @CSERCambridge & @LeverhulmeCFI. All views my own.
Liam Carroll
@lemmykc
Mathemusician that constantly finds himself on mountains, working on Developmental Interpretability
Leon Lang
@lang__leon
PhD student at the intersection of information theory and deep learning. Two master's degrees in maths and AI. Interested in AI existential safety
Daniel Tan
@danielchtan97
AI safety researcher | MATS 7.0, Owain Evans | PhD candidate, UCL | A*STAR scholar
Séb Krier
@sebkrier
🪼 policy dev & strategy @GoogleDeepMind | rekkid junkie, dimensional glider, deep ArXiv dweller, interstellar fugitive, uncertain | 🛸
Tyler Tracy
@tylertracy321
AI Control @ Redwood Research | views my own | MATS 6.0 | paperclip minimizer
Matthew Wearden
@justabitwearden
MATS Extension Lead
Cameron Holmes
@cameronholmes92
AI Alignment Research Manager @MATSprogram Market participant, EA. Parenting like Dr Louise Banks
Rory Greig
@rorygreig1
Research Engineer at Google DeepMind, interested in AI Alignment and Complexity Science.
Sunishchal Dev
@sunishchaldev

Ethan Caballero is busy
@ethancaballero
ML @Mila_Quebec ; previously @GoogleDeepMind
Shujaat Mirza
@shujaatmirzaa
Research Manager @MATSprogram | PhD in AI from NYU | Previously @SpotifyResearch @NYU_Courant @CCS_NYUAD
Usman Anwar
@usmananwar391
Deep Learning & AI Safety @Cambridge_uni
Aidan Homewood
@adnhw
AI risk is a policy choice
Julian
@mealreplacer
thinking about how to make AI go well @open_phil
Amanda Askell
@amandaaskell
Philosopher & ethicist trying to make AI be good @AnthropicAI. Personal account. All opinions come from my training data.
Matthew Barnett
@matthewjbar
I share things. Married to @natalia__coelho
Jan Leike
@janleike
ML Researcher @AnthropicAI. Previously OpenAI & DeepMind. Optimizing for a post-AGI future where humanity flourishes. Opinions aren't my employer's.
Joshua Achiam
@jachiam0
Human. Head of Mission Alignment at @openai. Main author of spinningup.openai.com
Owain Evans
@owainevans_uk
Runs an AI Safety research group in Berkeley (Truthful AI) + Affiliate at UC Berkeley. Past: Oxford Uni, TruthfulQA, Reversal Curse. Prefer email to DM.
Ethan Perez
@ethanjperez
Large language model safety
Tamay Besiroglu
@tamaybes
Recently started @MechanizeWork, and previously @EpochAIResearch
Leo Gao
@nabla_theta
working on AGI alignment. prev: GPT-Neo, the Pile, LM evals, RL overoptimization, scaling SAEs to GPT-4. EleutherAI cofounder.
Google DeepMind
@googledeepmind
We’re a team of scientists, engineers, ethicists and more, committed to solving intelligence, to advance science and benefit humanity.
Jan Brauner
@janmbrauner
Technical staff member at EU AI Office, Previously: RAND, ML PhD at Oxford (@OATML_Oxford), and, once upon a time, medical doctor.
george
@georgeyw_
existential crisis enthusiast, research lead @ Timaeus
alex lawsen
@lxrjl
AI Grantmaking @ Open Philanthropy Previously 80,000 Hours, teaching, forecasting, poker. Views my 🐒's
Lewis Ho
@_lewisho
Research Scientist at Google DeepMind
Caleb Withers
@calebwithersdc
AI & natsec @CNASdc @CNAStech. @GeorgetownCSS alum. Views my own.
Jason Hausenloy
@jasonhausenloy
AI Policy. Prev: @CHAI_Berkeley, @ConjectureAI, @IMDAsg.
Tomek Korbak
@tomekkorbak
senior research scientist @AISecurityInst | previously @AnthropicAI @nyuniversity @SussexUni
James Chua (ICLR Singapore!)
@jameschua_sg
Alignment Researcher at Truthful AI (Owain Evans' Org) Views my own.
Max Nadeau
@maxnadeau_
Advancing AI honesty, control, safety at @open_phil. Prev Harvard AISST (haist.ai), Harvard '23.
Govind Pimpale
@govindpimpale
Student
Clément Dumas (at ICLR)
@butanium_
MSc at @ENS_ParisSaclay prev research intern at DLAB @EPFL MATS Winter 2025 Scholar w/ Neel Nanda AI safety research / improv theater
Sharan
@_maiush
everyone on here is a bot except me and you
Juan Gil
@heartbulbous
I post about AI safety, rationalist-adjacent self-improvement, and nonsense. Leave me anonymous feedback here: bit.ly/juanfeedback
Eli Lifland
@eli_lifland
Writing AI scenarios @AI_Futures_. Also @aidigest_, @SamotsvetyF. Prev @oughtinc
Jacob Pfau
@jacob_pfau
Alignment at UKAISI and PhD student at NYU
Agus 🔎 🔸
@austinc3301
maximizing the benefits while minimizing the risks
Daniel Kokotajlo
@dkokotajlo

ML Alignment & Theory Scholars
@matsprogram
MATS empowers researchers to advance AI safety
Dwarkesh Patel
@dwarkesh_sp
Host of Dwarkesh Podcast youtube.com/DwarkeshPatel spoti.fi/3MFtqBR apple.co/3ujLQkZ
Redwood Research
@redwood_ai
Pioneering threat mitigation and assessment for AI agents.
James Campbell
@jam3scampbell
compute optimal everything | ML PhD at CMU
Adam Karvonen
@a_karvonen
ML Researcher, mostly focused on interpretability. I prefer email to DM.
AI Safety Papers
@safe_paper
Sharing the latest in AI safety and interpretability research.
Daniel Filan 🔎
@freed_dfilan
This is my personal / non-professional account. My professional account is @dfrsrchtwts.
Michael Dennis
@michaeld1729
Open-Endedness RS @GoogleDeepMind. Building for an unspecifiable world | Unsupervised Environment Design, Game&Decision Theory, RL, AIS. prev @CHAI_Berkeley
Apollo Research
@apolloaievals
We are an AI evals research organisation
Henry is cleaning up my knowledge base 🔄
@sleight_henry
AI Safety Research Manager @ Constellation, Anthropic, prev-MATS Working out how to Do Big Good, but sanely! Apologetically myself | He/Him, 26
Timaeus
@timaeusresearch
Timaeus is an AI Safety Research Organisation working on Singular Learning Theory and Developmental Interpretability.
Fabien Roger
@fabiendroger
AI Safety Researcher