
Sam Winter-Levy
@samwinterlevy
Fellow @CarnegieEndow, Technology + International Affairs. Previously poli sci PhD @ Princeton, @ForeignAffairs, @TheEconomist.
ID: 372488514
http://www.samwinterlevy.com 12-09-2011 21:30:02
1.1K Tweets
984 Followers
1.1K Following

In Foreign Affairs, Nikita Lalwani and I write about the idea that winning the AI race will give one state unchallenged global dominance. To do so, we argue, it would have to undercut nuclear deterrence—no small feat.


“Even if intelligence is a powerful asset, it isn’t magic, and states seeking to use AI to disarm their adversaries will confront real physical, practical, and institutional limits,” write Sam Winter-Levy and Nikita Lalwani. foreignaffairs.com/united-states/…

“So long as systems of nuclear deterrence remain in place, the economic and military advantages produced by AI will not allow states to fully impose their political preferences on one another,” argue Sam Winter-Levy and Nikita Lalwani. foreignaffairs.com/united-states/…

“Even if it does not challenge nuclear deterrence, AI may encourage mistrust and dangerous actions among nuclear-armed states,” write Sam Winter-Levy and Nikita Lalwani. foreignaffairs.com/united-states/…

A really important article. Congratulations to the authors. Beyond AI, nuclear tripolarity (among the US, China, and Russia) is another catalyst. For recent academic research (including by Ankit Panda), see: amazon.com.au/Artificial-int… amazon.com.au/New-Nuclear-Ag… amazon.com.au/Deterrence-Und…

“It remains possible that countries will develop significantly more powerful AI systems that could threaten methods of nuclear deterrence in ways that cannot yet be anticipated,” write Sam Winter-Levy and Nikita Lalwani. foreignaffairs.com/united-states/…


"The data reveal that national security officials’ intuitions are overwhelmingly overconfident... when study participants estimated that statements had a ninety percent chance of being true, those statements were true just fifty-eight percent of the time" bpb-us-e1.wpmucdn.com/sites.dartmout…