Peter Cihon (@pcihon)'s Twitter Profile
Peter Cihon

@pcihon

senior advisor @ U.S. AI safety institute | personal | 🚀

ID: 3419461990

Website: http://linkedin.com/in/pcihon · Joined: 13 Aug 2015, 03:59:55

632 Tweets

1.1K Followers

694 Following

NTIA (@ntiagov)'s Twitter Profile Photo

#NEWS: Our AI Open Model Weights report is here! NTIA recommends embracing openness while the government builds capacity to monitor for emerging risks. Learn more here: ntia.gov/press-release/…

Oege de Moor (@oegerikus)'s Twitter Profile Photo

At the RSA conference in May, I asked every CISO whether they’d use an AI web pentester if it matched a skilled human. They said: “Ha, ha! That’d be amazing! See you in five years!” It’s here now.

Sayash Kapoor (@sayashk)'s Twitter Profile Photo

Agents are an active research area. But to be useful in the real world, they must be accurate, reliable, and cheap.

Join our workshop on August 29 to learn from the creators of LangChain, DSPy, SWE-Bench, lm-eval-harness, Reflexion, SPADE and more.

RSVP: sites.google.com/princeton.edu/…

Deb Raji (@rajiinio)'s Twitter Profile Photo

This reveals so much about how little we meaningfully discuss data choices in computer science education. Data are at the locus of pretty much every tech policy issue - labor, bias, environmental, copyright, privacy, security, toxicity, safety, etc. It is literally politics!

Arvind Narayanan (@random_walker)'s Twitter Profile Photo

I think about these environmental regulation horror stories a lot in the context of tech regulation. There's a wave of regulation happening now with impact assessment requirements for AI / algorithmic systems. What caused NEPA compliance to become so burdensome and weaponized?

Peter Cihon (@pcihon)'s Twitter Profile Photo

Initial members of the International Network of AI Safety Institutes are: Australia, Canada, the EU, France, Japan, Kenya, Korea, Singapore, UK, and US

Tsarathustra (@tsarnick)'s Twitter Profile Photo

Joe Biden tells the UN that we will see more technological change in the next 2-10 years than we have seen in the last 50, that AI will change our ways of life, work, and war, and that urgent efforts are therefore needed on AI safety

Sayash Kapoor (@sayashk)'s Twitter Profile Photo

How can we enable independent safety and security research on AI?

Join our October 28 virtual workshop to learn how technical, legal, and policy experts conduct independent evaluation.

- RSVP to receive zoom link: bit.ly/3p-ai-evals
- More details: sites.google.com/view/thirdpart…

Kevin Klyman (@kevin_klyman)'s Twitter Profile Photo

The US AI Safety Institute is hiring! Looking for experts in designing/implementing evaluations for the capabilities/safety/security of advanced AI systems + research engineers with experience in cyber, bio, or adversarial ML. The application closes tonight: usajobs.gov/search/results…

Peter Cihon (@pcihon)'s Twitter Profile Photo

Grateful today to be cancer free. It was surprisingly reassuring to have systems like Claude to help me interpret medical reports along the way. Happy Thanksgiving!

Peter Cihon (@pcihon)'s Twitter Profile Photo

I’ll be at NeurIPS Thursday through Sunday. I’m presenting a poster on agent measurement at the SoLaR workshop on Saturday. Please reach out if you want to chat!

Ben Edelman (@edelmanben)'s Twitter Profile Photo

1/ Excited to share a new blog post from the U.S. AI Safety Institute!

AI agents are becoming increasingly capable. But they are vulnerable to prompt injections in external content – an agent may be given task A, but then be “hijacked” and perform malicious task B instead.

Dean W. Ball (@deanwball)'s Twitter Profile Photo

How should the federal government prioritize its AI R&D investments over the next 3-5 years?

If you have thoughts, we want to hear them! New 30-day comment period just opened for feedback on the National AI R&D Strategic Plan.

Vatican News (@vaticannews)'s Twitter Profile Photo

Pope Leo XIV explains his choice of name: "... I chose to take the name Leo XIV. There are different reasons for this, but mainly because Pope Leo XIII in his historic Encyclical Rerum Novarum addressed the social question in the context of the first great industrial revolution."

U.S. Commerce Dept. (@commercegov)'s Twitter Profile Photo

RELEASE: Statement from U.S. Secretary of Commerce Howard Lutnick on Transforming the U.S. AI Safety Institute into the Pro-Innovation, Pro-Science U.S. Center for AI Standards and Innovation

commerce.gov/news/press-rel…