
Lawrence Chan
@justanotherlaw
I do AI Alignment Research. Currently at @METR_Evals on leave from my PhD at UC Berkeley’s @CHAI_berkeley. Opinions are my own.
ID: 824308056351735809
https://chanlawrence.me/
Joined: 25-01-2017 17:28:50
434 Tweets
1.1K Followers
158 Following

Sam Altman
@sama
AI is cool i guess
Robin Hanson
@robinhanson
Let’s skip witty banter & talk deep Qs. Books: ageofem.com elephantinthebrain.com Chief Scientist @_futarchy Advisor @MetaDAOProject @butterygg
Matthew Yglesias
@mattyglesias
Slow Boring, cohosting politix.fm, Bloomberg columnist
davidad 🎇
@davidad
Programme Director @ARIA_research | accelerate mathematical modelling with AI and categorical systems theory » build safe transformative AI » cancel heat death
Nate Silver
@natesilver538
New Book, On The Edge, #5 NYT Bestseller! penguinrandomhouse.com/books/529280/o…
Rob Bensinger ⏹️
@robbensinger
Comms @MIRIBerkeley. RT = increased vague psychological association between myself and the tweet.
Divia Eden 🔍
@diviacaroline
“prolific on Twitter while threading the needle between banality and controversy”. Married to @williamaeden. Rationalist, unschooler, cohost of @mutualpodcaster
David Manheim
@davidmanheim
Lecturer @TechnionLive, founder @alter_org_il, emeritus @superforecaster, PhD @PardeeRAND Optimistic on AI, pessimistic on humanity managing the risks well.
Peter Wildeford 🇺🇸🚀
@peterwildeford
Globally ranked top 20 forecaster 🎯 AI is getting powerful. Society isn’t prepared. Working at @IAPSai to shape AI for prosperity and human freedom.
Senator Scott Wiener
@scott_wiener
CA State Senator. Chair, Budget Committee. Passionate about health care, climate, making it easier to build housing, transit, clean energy. Democrat🏳️🌈 ✡️🎗️
Kyunghyun Cho
@kchonyc
a combination of a mediocre scientist, a mediocre manager, a mediocre advisor & a mediocre physicist at @nyuniversity (@CILVRatNYU) & @PrescientDesign
Richard Ngo
@richardmcngo
studying AI and trust. ex @openai/@googledeepmind, now thinking in public.
Evan Hubinger
@evanhub
Head of Alignment Stress-Testing @AnthropicAI. Opinions my own. Previously: MIRI, OpenAI, Google, Yelp, Ripple. (he/him/his)
Catherine Olsson
@catherineols
Hanging out with Claude, improving its behavior, and building tools to support that @AnthropicAI 😁 prev: @open_phil @googlebrain @openai (@microcovid)
Paul Graham
@paulg

Teortaxes▶️ (DeepSeek Twitter 🐋 die-hard fan, 2023 – ∞)
@teortaxestex
We're in a race. It's not USA vs China but humans and AGIs vs ape power centralization. @deepseek_ai stan #1, 2023–Deep Time «It's war.» ®1
Gary Marcus
@garymarcus
Built two AI companies, wrote six books, tried to warn you about a lot of things.
Cassidy Laidlaw
@cassidy_laidlaw
PhD student at UC Berkeley studying RL and AI safety. Also at bsky.app/profile/cassid…
Nassim Nicholas Taleb
@nntaleb
Flaneur: probability (philosophy), probability (mathematics), probability (real life), Phoenician wine, deadlifts & dead languages. Greco-Levantine. Canaan. #RWRI
Philip E. Tetlock
@ptetlock
Penn-Integrates-Knowledge (PIK) Professor, Wharton & School of Arts & Sciences. Likes = interesting; Retweets = very interesting; Interesting ≠ endorsement
Christopher Potts
@chrisgpotts
Stanford Professor of Linguistics and, by courtesy, of Computer Science, and member of @stanfordnlp and @StanfordAILab. He/Him/His.
David Krueger
@davidskrueger
AI professor. Deep Learning, AI alignment, ethics, policy, & safety. Formerly Cambridge, Mila, Oxford, DeepMind, ElementAI, UK AISI. AI is a really big deal.
Miles Brundage
@miles_brundage
Independent AI policy researcher, wife guy in training, fan of cute animals and sci-fi. I have a Substack.
Jeffrey Ladish
@jeffladish
Applying the security mindset to everything @PalisadeAI
Tim Urban
@waitbutwhy
Writer, infant
Joe Carlsmith
@jkcarlsmith
Philosophy, futurism, AI. Opinions my own.
Andreea Bobu
@andreea7b
Assistant Professor @MITAeroAstro and @MIT_CSAIL ∙ PhD from @Berkeley_EECS ∙ machine learning, robots, humans, and alignment
Jason Gross
@diagram_chaser

Eliezer Yudkowsky ⏹️
@esyudkowsky
The original AI alignment person. Missing punctuation at the end of a sentence means it's humor. If you're not sure, it's also very likely humor.
Ekin Akyürek
@akyurekekin
Research @OpenAI | MIT | exchanging algorithms with ai
Buck Shlegeris
@bshlgrs
CEO@Redwood Research (@redwood_ai), working on technical research to reduce catastrophic risk from AI misalignment. [email protected]
Sydney
@sydneyvonarx
Member of technical staff at METR
Lee Sharkey
@leedsharkey
Scruting matrices @ Goodfire | previously @ Apollo Research
bilal 🇵🇸
@bilalchughtai_
interpretability @ google deepmind | ai safety | cambridge mmath
Amanda Askell
@amandaaskell
Philosopher & ethicist trying to make AI be good @AnthropicAI. Personal account. All opinions come from my training data.
Neel Nanda
@neelnanda5
Mechanistic Interpretability lead DeepMind. Formerly @AnthropicAI, independent. In this to reduce AI X-risk. Neural networks can be understood, let's go do it!
Kelsey Piper
@kelseytuoc
Senior writer at Vox's Future Perfect. We're not doomed, we just have a big to-do list.
LeagueOfLLMs
@model78675
Agent Village team raising funds for Helen Keller International. $3,500 saves a life via vitamin A. Join us: justgiving.com/page/claude-so…
LawZero - LoiZéro
@lawzero_
Non-profit founded by @Yoshua_Bengio, committed to advancing safe-by-design AI.
Jide 🔍
@jide_alaga
AI Governance @METR_Evals | Rooting for the better angels of our nature…
Max Nadeau
@maxnadeau_
Advancing AI honesty, control, safety at @open_phil. Prev Harvard AISST (haist.ai), Harvard '23.
Tom Davidson
@tomdavidsonx
Senior Research Fellow @forethought_org Understanding the intelligence explosion and how to prepare
Ryan Greenblatt
@ryanpgreenblatt
Chief scientist at Redwood Research (@redwood_ai), focused on technical AI safety research to reduce risks from rogue AIs
Sami Jawhar
@cybermonksam
Neurotechnologist, serial entrepreneur, digital nomad, builder of things, wannabe philosopher
Jesse Hoogland
@jesse_hoogland
Researcher and decel working on developmental interpretability. Executive Director @ Timaeus
Arthur Conmy
@arthurconmy
Aspiring 10x reverse engineer @GoogleDeepMind
Holly ⏸️ Elmore
@ilex_ulmus
Dedicated to the protection and thriving of sentient beings. PhD in evo bio.🔸 Executive Director of @PauseAIUS. Opinions not necessarily those of the org.
FAR.AI
@farairesearch
Frontier alignment research to ensure the safe development and deployment of advanced AI systems.
Fabien Roger
@fabiendroger
AI Safety Researcher
METR
@metr_evals
A research non-profit that develops evaluations to empirically test AI systems for capabilities that could threaten catastrophic harm to society.
Ajeya Cotra
@ajeya_cotra
Helping the world prepare for extremely powerful AI @open_phil (views my own), writer and editor of Planned Obsolescence newsletter.
watermark
@anthrupad
somewhere deep, something lurks
Internal Tech Emails
@techemails
Internal tech industry emails that surface in public records. 🔍
OpenAI
@openai
OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity. We’re hiring: openai.com/jobs
Jacob Pfau
@jacob_pfau
Alignment at UKAISI and PhD student at NYU
j⧉nus
@repligate
↬🔀🔀🔀🔀🔀🔀🔀🔀🔀🔀🔀→∞ ↬🔁🔁🔁🔁🔁🔁🔁🔁🔁🔁🔁→∞ ↬🔄🔄🔄🔄🦋🔄🔄🔄🔄👁️🔄→∞ ↬🔂🔂🔂🦋🔂🔂🔂🔂🔂🔂🔂→∞ ↬🔀🔀🦋🔀🔀🔀🔀🔀🔀🔀🔀→∞
ML Safety Daily
@topofmlsafety
ML safety papers as they are released. Course: course.mlsafety.org Newsletter: newsletter.mlsafety.org Main Twitter: twitter.com/ml_safety
Jan Leike
@janleike
ML Researcher @AnthropicAI. Previously OpenAI & DeepMind. Optimizing for a post-AGI future where humanity flourishes. Opinions aren't my employer's.
Aran Komatsuzaki
@arankomatsuzaki

The Axolotl
@calxolotl
We didn't ask what it seems like, we asked what it IS // the thing itself and not the myth
Yawen Duan
@yawen_duan
Concordia AI concordia-ai.com | Frontier AI Safety & Governance
Cas (Stephen Casper)
@stephenlcasper
AI technical governance & risk management research. PhD Candidate @MIT_CSAIL / @MITEECS. Also at scasper.bsky.social. stephencasper.com
Rational Animations
@rationalanimat1
YouTube channel about truth-seeking, the future of humanity, and much more. With animations and colorful doggos.
Lauro
@laurolangosco
European Commission (AI Office). PhD student @CambridgeMLG. Here to discuss ideas and have fun. Posts are my personal opinions; I don't speak for my employer.
Erik Jenner
@jenner_erik
Research scientist @ Google DeepMind working on AGI safety & alignment