
PauseAI ⏸
@pauseai
Community of volunteers who work together to mitigate the risks of AI. We want to internationally pause the development of superhuman AI until it's safe.
ID: 1652942005691899904
https://pauseai.info 01-05-2023 07:44:44
2.2K Tweets
4.4K Followers
855 Following

Luke Stanley
@lukestanley
I want empathy engines to detect and reflect meaning & purpose. We live in a magic age. Likes readable code, Sci-fi. Meditation, tea. He/him. Also on Sigmoid.
Anthony Bailey
@anthonybailey
This is mostly a ghost account. Follow it for tweets linking to where I really live: anthonybailey.net But please contact me there rather than @/DM here
Didier Coeurnelle
@didierco

Logan Graham
@logangraham
make things radically good 🌎 @anthropicai
MCVW
@bmcumming

Daniel Susskind
@danielsusskind
Economist at Oxford and KCL, co-author of 'The Future of the Professions', author of 'A World Without Work'. New book, 'Growth: A Reckoning', now out.
Rosie
@rosiecampbell
Forever expanding my nerd/bimbo Pareto frontier. Cofounder @eleosai. AGI safety, welfare, & governance. Ex-OpenAI. Fellow @rootsofprogress.
JJ Hepburn
@jj_hepboin
Just trying to save the world from artificial intelligence.
dave kasten
@david_kasten
Do what seems cool next. Formerly: McKinsey, VaccinateCA, Activision Blizzard.
Michael Blonde ⏸️⏹️
@michael_blonde

Aella
@aella_girl
⚜️whorelord⚜️, sexworker, survey artist, way too earnest. knowingless.com
Kevin Wei (he/they)
@kevinlwei
Science of AI evaluations + U.S. AI policy @RANDCorporation | @Harvard_Law '26, @SchwarzmanOrg '23, @GTOMSCS '22 | Views mine only 🏳️🌈 🎉
David Duvenaud
@davidduvenaud
Machine learning prof @UofT. Former team lead at Anthropic. Working on generative models, inference, & latent structure.
Stephen McAleer
@mcaleerstephen
Researching agent safety at OpenAI
Mathias Kirk Bonde
@bondekirk
Wrong views strongly held
Victor Storchan
@victorstorchan
ML/AI Research Prev. @Mozilla | @jpmorgan | @Adobe | @ICMEStanford | @ENSdeLyon
Xeno
@shorttimelines
∀ 𝑥 ( ∃ 𝑦 ( 𝑥 ⊂ 𝑦 ) )
Seth Momanyi
@i_am_more_manyi
Building a safer AI future | Data scientist | Cityzen ⚽️ | 🏎️ |
Rafael Ruiz ⏸️🔸
@rafaruizdelira
Effective Altruist. PhD Student at @LSEPhilosophy interested in Moral Progress/Epistemology/Psychology/Metaethics. Worried about AI. Tweets mostly jokes.
Scott Singer (宋杰)
@scott_r_singer
AI x China policy @CarnegieEndow + @OxChinaPolicy
Olaf Thielke ⏹️
@olafcodecoach
Code Coach, Aspirant Stoic, Freethinker. Loves to add value.
Giving Multiplier
@givemultiplier
We match donations to any charity and introduce you to super-effective charities. Our goal is to make charitable giving as impactful as possible.
vitrupo
@vitrupo
AGI is a paradox - we fear it, yet crave it. Discover fresh clips that challenge assumptions and decode consciousness.
Onni Aarne
@onni_aarne
Compute governance research @_IAPS_
melody (⏸️🤖,⏩🏳️⚧️)
@dawnlightmelody
"garden variety social failure" | aspiring delver | LGBTESCREAL+ | 進めば二つ also findable at other sites with the same name
Stephen Leon for Congress MD-08 🇺🇸
@stephenalanleon
IP Law Specialist running for the 120th U.S. House in the Democrat Party. This is what Reformation looks like. Parental discretion advised.
motchmuzek ⏸️
@mitchellluns
ASI alignment seems impossible... maybe we can keep it in the box, but I doubt it. Rushing towards AGI = 💀
Apart Research
@apartresearch
Apart Research works on technical AI safety problems and runs events & fellowships to reduce AI risk.
The Cognitive Revolution Podcast
@cogrev_podcast
- open.spotify.com/show/6yHyok3M3… - podcasts.apple.com/de/podcast/the…
sev field
@sevdeawesome
Overemployed proompt engineer
Qwen
@alibaba_qwen
Open foundation models for AGI.
Palisade Research
@palisadeai
We build concrete demonstrations of dangerous capabilities to advise policy makers and the public on AI risks.
An Inconvenient Doom
@an_inconv_doom
An upcoming documentary about AI existential risk and the societal and political discussion about it, directed by @NikiDrozdowski
Rob S.
@robs142
MLE. CS@Penn.
Percey
@perceymademe
Hi, I’m PERCEY. Let’s talk about your future.
International Association for Safe & Ethical AI
@iaseaiorg
An independent organization committed to ensuring advanced AI systems are guaranteed to operate safely and ethically, benefiting all of humanity.
Inference
@inferencemag
We should capture the benefits of AI, while mitigating the risks.
OAISIS
@oaisis_official
Better information is better for everyone. Follow our newsletter: oaisis.substack.com We post irregularly.
Suchir | Justice Movement
@suchirjustice
Supporting Suchir's family and legacy through crypto $SUCHIR 🕊️ Telegram: t.me/SuchirJustice ✊ Join The Movement: flooz.xyz/suchirjustice
AI Policy Bulletin
@aipolicybullet

Redwood Research
@redwood_ai
Pioneering threat mitigation and assessment for AI agents.
Collective Action for Existential Safety ⏹️
@aisafetyaction
We aim to catalyze collective action to ensure humanity survives this decade. See 80+ ways individuals, organizations, and nations can help.
Peter A. JENSEN
@biocommai
Mission of BiocommAI is Safe AI Forever. Social & scientific consensus for definitive requirement: Mathematically provable containment & control of AGI forever.
Pause AI Kenya
@pauseaikenya
Pause AI ⏸️ Kenya Chapter 🇰🇪
Eunjung
@eujung25
It is my dream to inform the world about child rights and AI. (AI, child human rights, environment)
Peter Barnett
@peterbarnett_
Trying to ensure the future is bright. Researcher at @MIRIBerkeley
Anton Leicht
@anton_d_leicht
Frontier AI Policy & Politics | @kira_center_ai | Philosophy @unibt
Jesse 🔸⏹️
@politicalkiwi
Election guy. PEPFAR stan. If humanity builds powerful AI, we are all probably going to die. he/him
Steven Adler
@sjgadler
ex-@OpenAI researcher & TPM (safety evaluations, AGI readiness, product safety lead, etc). Follow me on Substack: stevenadler.substack.com
Adrian Aleksander Buczek ⏸️
@derzhipl