Otto Barten⏸ (@bartenotto)'s Twitter Profile
Otto Barten⏸

@bartenotto

Existential Risk Observatory founder, effective regulationist. Read our latest piece in TIME: time.com/6978790/how-to…

ID: 1240746561191260160

Link: http://existentialriskobservatory.org · Joined: 19-03-2020 21:06:57

1.1K Tweets

424 Followers

434 Following

Otto Barten⏸ (@bartenotto):

A major problem for offense defense balance is that we're going to hold defense to pretty high standards (understandably!), while offense faces no constraints.

Otto Barten⏸ (@bartenotto):

I'm not sure I'm actually personally concerned about this LLM behaviour. I'm concerned that AGI, with a good enough world model, will realize that humans might switch it off and will therefore rationally use any opportunity to take over power. Having said that, amazing comms!

Otto Barten⏸ (@bartenotto):

Last Tuesday, I attended the first academic AI Safety conference I've ever been to, at KU Leuven. Neither I nor the organizer knew of any other academic AI Safety conferences. Is this an unlikely gap in the AI Safety landscape? kuleuven.be/ethics-kuleuve…

Otto Barten⏸ (@bartenotto):

"Give European settlers a stake in the future, then they'll respect our property rights" was not a winning strategy in 1492.

Existential Risk Observatory ⏸ (@xrobservatory):

Our Otto Barten⏸ joined a panel with Alexander von Janovski (TÜV AI lab) and MEP Axel Voss MdEP. Good to be able to discuss the need to regulate AGI, and particularly the Conditional AI Safety Treaty! x.com/XRobservatory/…

Otto Barten⏸ (@bartenotto):

If you prompt a model S times, from the same distribution, and it never does anything stupid, can you guarantee a safety level of 1:S?
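A quick statistical sanity check on that question: observing zero failures in S i.i.d. trials does not certify a failure rate of 1/S. The exact one-sided binomial bound (the "rule of three") only supports a failure rate of roughly 3/S at 95% confidence. A minimal sketch (the function name `upper_bound_failure_rate` is illustrative, not from any particular library):

```python
def upper_bound_failure_rate(trials: int, confidence: float = 0.95) -> float:
    """Exact one-sided upper confidence bound on the per-prompt failure
    probability p, after observing zero failures in `trials` independent,
    identically distributed prompts. Solves (1 - p)**trials = 1 - confidence."""
    alpha = 1.0 - confidence
    return 1.0 - alpha ** (1.0 / trials)

# After 1,000 clean prompts, the 95% bound on p is ~0.003 (about 3/S),
# roughly three times weaker than the naive 1/S = 0.001.
print(round(upper_bound_failure_rate(1000), 4))
```

So even under the strong assumption that deployment prompts come from the same distribution as the test prompts, S clean trials buy you a guarantee closer to 1:(S/3) than 1:S.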

Otto Barten⏸ (@bartenotto):

I think embodied AGI would be pretty safe. It can't self-replicate and can't travel at the speed of light. Mostly local sensors and buttons to press. Any dangers are obvious and will be regulated.

Otto Barten⏸ (@bartenotto):

Crazy that WIRED worries about digital rights but is happy to quote a pro-human extinction campaigner at length without asking any critical questions. wired.com/story/ai-risk-…