
Victoria Krakovna
@vkrakovna
Research scientist in AI alignment at Google DeepMind. Co-founder of Future of Life Institute @flixrisk. Views are my own and do not represent GDM or FLI.
ID: 2541954109
http://vkrakovna.wordpress.com
Joined: 02-06-2014 18:12:22
1.1K Tweets
9.9K Followers
457 Following




So excited and so very humbled to be stepping in to head AI Safety and Alignment at Google DeepMind. Lots of work ahead, both for present-day issues and for extreme risks in anticipation of capabilities advancing.


I’m super excited to release our 100+ page collaborative agenda - led by Usman Anwar - on “Foundational Challenges In Assuring Alignment and Safety of LLMs” alongside 35+ co-authors from NLP, ML, and AI Safety communities! Some highlights below...


Big new paper on the Ethics of Advanced AI Assistants, led by Iason Gabriel, Arianna Manzini, and Geoff Keeling in collaboration with many authors! A broad study encompassing many aspects of AI ethics and safety. Was an honour to write the chapter on Safety, thanks to my co-authors 1/5


We are looking for an AGI Safety Manager to support Google DeepMind's AGI Safety Council: please encourage excellent people to apply! This role will work closely with my team, Scalable Alignment and Safety, and Responsible Development and Innovation. boards.greenhouse.io/deepmind/jobs/…


Announcing the first Mechanistic Interpretability workshop, held at ICML 2024! We have a fantastic speaker line-up (Chris Olah, Jacob Steinhardt, David Bau, and Asma Ghandeharioun), $1,750 in best paper prizes, and a lot of recent progress to discuss! Paper deadline: May 29 (either 4 or 8 pages).


As we push the boundaries of AI, it's critical that we stay ahead of potential risks. I'm thrilled to announce Google DeepMind's Frontier Safety Framework - our approach to analyzing and mitigating future risks posed by advanced AI models. 1/N deepmind.google/discover/blog/…


Are you excited about Chris Olah-style mechanistic interpretability research? I'm looking to mentor scholars via MATS - apply by Aug 30! I'm impressed by the work from past scholars, and love mentoring promising talent. You don't need to be in a big lab to do good mech interp work!

