Dr. Peter Slattery (@peterslattery1)'s Twitter Profile
Dr. Peter Slattery

@peterslattery1

Lead at the AI Risk Repository | Researcher @MITFutureTech

ID: 1113170874

Link: https://www.pslattery.com/ | Joined: 23-01-2013 02:07:29

706 Tweets

699 Followers

922 Following

Dr. Peter Slattery (@peterslattery1) 's Twitter Profile Photo

This is a cool opportunity: Future Impact Group (FIG) is now accepting applications for their upcoming twelve-week research fellowship. Interested individuals are encouraged to apply and share this opportunity within their networks. 

FIG runs a part-time, remote-first research
Dr. Peter Slattery (@peterslattery1) 's Twitter Profile Photo

An accessible overview of some of the drivers, risks, bottlenecks, and ethical dilemmas associated with AI progress. The first few chapters are particularly useful for understanding why many experts are deeply uncertain, and concerned, about AI. Even if you don't find the

Dr. Peter Slattery (@peterslattery1) 's Twitter Profile Photo

The Office of the UN Secretary-General’s Envoy on Technology (United Nations Envoy on Technology) Advisory Body on AI recently released this very comprehensive and engaging report.

The report is full of interesting content - much of which is in the appendices! In particular, the analysis of expert
Dr. Peter Slattery (@peterslattery1) 's Twitter Profile Photo

What are expert insiders saying about risks from AI? Helen Toner, William Saunders, David Evan Harris (linkedin.com/in/davidevanharris), and Margaret Mitchell recently presented at the hearing on Oversight of AI: Insiders’ Perspectives at the Senate Judiciary

Dr. Peter Slattery (@peterslattery1) 's Twitter Profile Photo

The Dutch Data Protection Authority recently released an AI & Algorithmic Risks Report (in English). It covers:
➡ Information provision in democracy threatened by AI systems
➡ Challenges in democratic control of AI systems
➡ Profiling and selecting AI systems
➡ Policies and

Dr. Peter Slattery (@peterslattery1) 's Twitter Profile Photo

❓ What are the risks from AI? Framework #6: “A framework for ethical AI at the United Nations” by Lambert Hogenhout. 

This paper systematically lists 13 issues from AI: 

1️⃣ Incompetence
AI systems may fail at their tasks, leading to serious consequences such as accidents or
Dr. Peter Slattery (@peterslattery1) 's Twitter Profile Photo

Last week, I travelled down to New York to speak about the MIT AI Risk Repository at a High-Level meeting on “Shaping the Future of the AI – Cyber Nexus”

I had the pleasure of meeting and learning from several experts.

Thanks to Pablo Rice and the Paris Peace Forum for the
Dr. Peter Slattery (@peterslattery1) 's Twitter Profile Photo

Another striking example of how AI, while overhyped in some capacities, is improving very rapidly. Nature recently interviewed several researchers about the impact of OpenAI's o1 model, which they called: 'The first large language model to beat PhD-level scholars on the hardest
Dr. Peter Slattery (@peterslattery1) 's Twitter Profile Photo

Important: The National Institute of Standards and Technology (NIST) "released a RFI seeking insight from stakeholders regarding the responsible development and use of chemical and biological (chem-bio) AI models. Input from a broad range of experts in this field will help us

Dr. Peter Slattery (@peterslattery1) 's Twitter Profile Photo

What research are AI companies doing into safe AI development? What research might they do in the future? A new Institute for AI Policy and Strategy (IAPS) paper by Oscar Delaney, Oliver Guest, and Zoe Williams addresses this.

Dr. Peter Slattery (@peterslattery1) 's Twitter Profile Photo

What are some important current issues in AI Safety? I enjoyed this related discussion between David Krueger, Victoria Krakovna, Robert Trager, and Gillian K. Hadfield, moderated by Adam Gleave. I particularly liked this

Dr. Peter Slattery (@peterslattery1) 's Twitter Profile Photo

How quickly are we adopting generative AI? Faster than PCs or the internet:
"Figure 4 shows that so far, generative AI has been adopted at a faster pace than PCs or the internet... We find an adoption rate of 28 percent in year two for generative AI, compared with a 25 percent
Dr. Peter Slattery (@peterslattery1) 's Twitter Profile Photo

Anthropic CEO Dario Amodei: "I think and talk a lot about the risks of powerful AI. The company I’m the CEO of, Anthropic, does a lot of research on how to reduce these risks. Because of this, people sometimes draw the conclusion that I’m a pessimist or “doomer” who thinks AI