Tom (@tomdaavid)'s Twitter Profile
Tom

@tomdaavid

🇫🇷 PRISM Eval, Cofounder and Dir Governance & Standardization | GPAI Policy Lab, President

ID: 946337425

Joined: 13-11-2012 19:01:35

556 Tweets

147 Followers

222 Following

Tom (@tomdaavid):

Waiting for AI to demonstrate dangerous advanced capabilities to take it seriously is like waiting for a dam to burst before moving to higher ground when you already know the cracks are there.

Tom (@tomdaavid):

If you have money, invest it now in building strong AI governance and international coordination. Without it, your wealth might not matter much in the not-so-distant future.

Tom (@tomdaavid):

One of the most frustrating things in AI governance discussions is when decision-makers fake understanding instead of asking questions. It’s not about intelligence (some are smart) but breaking that cycle of surface-level perspectives could unlock so much progress.

Tom (@tomdaavid):

Thinking an ASI will be aligned by default because today’s models seem fine is like predicting a tsunami’s behavior by watching waves in a bath: the basic principles are the same, but the scale fundamentally changes the phenomenon.

Tom (@tomdaavid):

Assuming we’ll control future technologies because we’ve controlled past ones is flawed: it generalizes from successes while ignoring that, if control had been lost entirely, we wouldn’t be here to observe it. It’s like declaring yourself unsinkable because you’ve never hit an iceberg.

Tom (@tomdaavid):

If you think "there’s no alternative but to race to build AGI as soon as possible, because rivals will do it if we don’t," you’re probably confusing inevitability with a lack of imagination.

Tom (@tomdaavid):

A few months ago, a colleague told me that "AI posed no systemic risks." Now it’s obvious to anyone who thinks for 30 seconds. Funny how people who don’t want to see a problem will make up stories and actually believe them. (See assets.publishing.service.gov.uk/media/679a0c48…)

METR (@metr_evals):

When will AI systems be able to carry out long projects independently? In new research, we find a kind of “Moore’s Law for AI agents”: the length of tasks that AIs can do is doubling about every 7 months.

Tom (@tomdaavid):

Ursula von der Leyen: "When the current budget was negotiated, we thought AI would only approach human reasoning around 2050. Now we expect this to happen already next year." ec.europa.eu/commission/pre…

Tom (@tomdaavid):

In his first official address to cardinals, the Pope warned of the dangers of AI to “human dignity, justice and labor.” politico.eu/article/pope-l…

Tom (@tomdaavid):

The US Select Committee on the CCP realizes the stakes related to AGI and the lack of control over these models. youtube.com/live/GDNrUZBZD…

Palisade Research (@palisadeai):

📟 We show OpenAI o3 can autonomously breach a simulated corporate network. Our agent broke into three connected machines, moving deeper into the network until it reached the most protected server and extracted sensitive system data.

Steven Adler (@sjgadler):

“If you’re going to work on export controls, make sure your boss is prepared to have your back,” one staffer told me. For months, I’ve heard about widespread fear among think tank researchers who publish work against NVIDIA’s interests. Here’s what I’ve learned:🧵
