
Existential Risk Observatory ⏸
@xrobservatory
Reducing AI x-risk by informing the public. We propose a Conditional AI Safety Treaty: time.com/7171432/condit…
ID: 1371763785762013189
https://www.existentialriskobservatory.org/ 16-03-2021 10:02:35
1.1K Tweets
1.1K Followers
656 Following

If the US government would pause frontier AI development in the face of existential risk, but doesn't do so because of China, our Conditional AI Safety Treaty is the perfect solution. Read more about it in the South China Morning Post: scmp.com/opinion/china-… Or in TIME: x.com/XRobservatory/…


"Bad news. Recent studies in the last few months show that these most advanced AIs have tendencies for deception, cheating, and maybe the worst: self-preservation behavior." — Yoshua Bengio "they would have an incentive to get rid of us."

What happens if we build smarter-than-human AIs whose goals are not aligned with ours? "Poof." "We are blindly driving into a fog." — Yoshua Bengio

As humans, we think it's immensely brave that Yoshua Bengio and others keep making the case for trying everything we can to keep humanity in control.
