Existential Risk Observatory ⏸ (@xrobservatory)'s Twitter Profile
Existential Risk Observatory ⏸

@xrobservatory

Reducing AI x-risk by informing the public. We propose a Conditional AI Safety Treaty: time.com/7171432/condit…

ID: 1371763785762013189

Website: https://www.existentialriskobservatory.org/ · Joined: 16-03-2021 10:02:35

1.1K Tweets

1.1K Followers

656 Following

SCMP Opinion (@scmp_opinion):

Otto Barten writes that the US-China trade talks should pave the way for an AI safety treaty. AI could become too powerful for human beings to control. The US and #China must lead the way in ensuring safe, responsible AI development. scmp.com/opinion/china-…

Existential Risk Observatory ⏸ (@xrobservatory):

Everyone always said AI was going to be spiky: great at some things humans are bad at, and bad at other things humans are good at. Pointing to the latter is not an argument. What we should do instead is raise awareness and implement timeline-independent regulation!

Existential Risk Observatory ⏸ (@xrobservatory):

Usually, communicating progress cautiously is good: the worst case is that you fail to deliver. With existential risk, however, the worst case is that your invention works better than expected. Risk management therefore dictates communicating the worst case: maximum progress.

ControlAI (@ai_ctrl):

AIs improving AIs leads to artificial superintelligence. Superintelligence would be more capable than every single human put together, and could easily become more powerful too. Nobody knows how to control superintelligence, so this could end terribly if we don't prevent it.

Existential Risk Observatory ⏸ (@xrobservatory):

If the US government would be willing to pause frontier AI development in the face of an existential risk, but doesn't because of China, our Conditional AI Safety Treaty is the perfect solution. Read more on it in the South China Morning Post: scmp.com/opinion/china-… Or in TIME: x.com/XRobservatory/…

ControlAI (@ai_ctrl):

"Bad news. Recent studies in the last few months show that these most advanced AIs have tendencies for deception, cheating, and maybe the worst: self-preservation behavior." — Yoshua Bengio "they would have an incentive to get rid of us."

Existential Risk Observatory ⏸ (@xrobservatory):

Those working towards AGI should realize that: 1) AGI may not be controllable, possibly leading to nothing short of human extinction. 2) Even if it can be controlled, unilateral AGI will be used to subvert and suppress the vast majority of people.

Existential Risk Observatory ⏸ (@xrobservatory):

Hacking, alongside domains such as persuasion, agency, and weapons construction, is seen as a key capability an AI would need to master to be able to cause human extinction. These results are therefore worrying.

Existential Risk Observatory ⏸ (@xrobservatory):

There is some absurdity in worrying about job loss but not about human extinction, even though the timelines and probabilities may be roughly comparable. Still, prominent voices in the societal debate seem increasingly to be thinking about AGI, which is on balance hopeful.

ControlAI (@ai_ctrl):

What happens if we build smarter-than-human AIs with goals not aligned to ours? "Poof." "We are blindly driving into a fog." — Yoshua Bengio

Existential Risk Observatory ⏸ (@xrobservatory):

If an AI can persuade humans to bypass security, there are few things it cannot do. We should try hard to make sure this will never happen.