Tim Bakker🔸 (@timbbakker)'s Twitter Profile
Tim Bakker🔸

@timbbakker

Senior ML researcher at Qualcomm. Previously PhD ML with Max Welling at AMLab, UvA. AI safety, effective altruism, and everything Bayesian. Sings a lot.

ID: 1326525380082208768

Website: http://tbbakker.nl · Joined: 11-11-2020 14:01:21

146 Tweets

752 Followers

429 Following

Max Welling (@wellingmax)'s Twitter Profile Photo

Well, Yoshua Bengio, Geoffrey Hinton and Stuart Russell actively support the bill. Hardly an outlandishly fringe AI doomsday cult, I would say: computing.co.uk/news/4344580/b….

Tim Bakker🔸 (@timbbakker)'s Twitter Profile Photo

🚨With my PhD coming to an end, it's time for a life update! I've recently joined Qualcomm AI Research as a Senior Machine Learning Researcher. Looking forward to working with this great team!🧠✨

Andrew Critch (🤖🩺🚀) (@andrewcritchphd)'s Twitter Profile Photo

Using "speculative" as a pejorative is part of an anti-epistemic pattern that suppresses reasoning under uncertainty. If you disagree with someone's reasoning, just point out the flaw, or the premise you disagree with. If someone disparages an argument as "speculative", you

Marius Hobbhahn (@mariushobbhahn)'s Twitter Profile Photo

Oh man :( We tried really hard to neither over- nor underclaim the results in our communication, but, predictably, some people drastically overclaimed them, and then based on that, others concluded that there was nothing to be seen here (see examples in thread). So, let me try

Existential Risk Observatory ⏸ (@xrobservatory)'s Twitter Profile Photo

Today, our Otto Barten⏸ and Tim Bakker🔸 are publishing a new op-ed in the Dutch NRC Handelsblad, in which we argue that we should prepare for AGI. What are the risks that we see, and what do we propose to mitigate them? nrc.nl/nieuws/2025/02…

Cas (Stephen Casper) (@stephenlcasper)'s Twitter Profile Photo

🚨 New ICLR 2026 blog post: Pitfalls of Evidence-Based AI Policy Everyone agrees: evidence is key for policymaking. But that doesn't mean we should postpone AI regulation. Instead of "Evidence-Based AI Policy," we need "Evidence-Seeking AI Policy." arxiv.org/abs/2502.09618…

Dylan Hadfield-Menell (@dhadfieldmenell)'s Twitter Profile Photo

If you pretend that xrisk from ASI misalignment is some novel, incredibly complex failure mode (instead of a simple extrapolation of established theories of incentive design), you blind people to the evidence for, and predictive power of, the theories that motivate the risk.

Yoshua Bengio (@yoshua_bengio)'s Twitter Profile Photo

I recommend reading this paper ⬇️ It makes an evidence-based case that, without substantial effort to prevent it, AGIs trained like today’s top models could learn misaligned goals, hide them from their developers, and pursue them by seeking power.

Aella (@aella_girl)'s Twitter Profile Photo

Thread of photos from families in each quartile of income in the world: first photo is from the poorest 25%, last photo is richest 25%. Based on these photos, which income bracket are you in? First up: Toilets

Elizabeth Barnes (@bethmaybarnes)'s Twitter Profile Photo

Benchmarks saturate quickly, but don't translate well to real-world impact. *Something* is going up very fast, but it's not clear what it means. Thus the wide range of expert opinion, from “superintelligence in a few years” to “we’ve already hit a wall”. Our results shed some light:

Captain Pleasure, Andrés Gómez Emilsson (@algekalipso)'s Twitter Profile Photo

Nick, I respect you greatly but feel compelled to push back here as I did a year ago. Your scale and reasoning fundamentally misrepresent reality: When you evaluate "funding happy farm animals" at 10/10, you're ignoring crucial factors that make this position dubious. First,

Neel Nanda (@neelnanda5)'s Twitter Profile Photo

I feel like a distressing amount of AI policy discourse stems from some people saying "without strong evidence for how risky powerful AI is, assume it's 100% safe and sit tight and assess", while I say "both are plausible, let's take actions that are reasonable in both worlds"

Leon Lang (@lang__leon)'s Twitter Profile Photo

One sad fact about the current state of discourse is that whenever someone tries their hardest to do the impossible and predict the future, they're stigmatized by parts of the mainstream research community as doing "crackpottery". This causes reasonable people to not engage.

Michael Nielsen (@michael_nielsen)'s Twitter Profile Photo

New essay exploring why experts so strongly disagree about existential risk from ASI, and why focusing on alignment as a primary goal may be a fundamental mistake

Christian A. Naesseth @ ICLR, AABI 🇸🇬 (@canaesseth)'s Twitter Profile Photo

Come check out our TMLR-to-ICLR poster this afternoon "E-Valuating Classifier Two-Sample Tests". Time: 15-17:30 Where: Hall 3 + Hall 2B #437 Teodora Pandeva UvA AMLab #ICLR2025 #ML #Stats openreview.net/forum?id=dwFRo…

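For context on the poster's topic: a classifier two-sample test (C2ST) trains a classifier to tell two samples apart, and rejects the null hypothesis that they come from the same distribution when held-out accuracy is significantly above chance. The sketch below shows only the basic accuracy-based variant, not the e-value construction the poster develops; it assumes numpy, scipy, and scikit-learn are available, and the function name c2st_p_value is illustrative.

    # Minimal sketch of an accuracy-based classifier two-sample test (C2ST).
    # NOTE: illustrative only; the poster's method builds e-values rather
    # than the binomial p-value used here.
    import numpy as np
    from scipy.stats import binom
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def c2st_p_value(x, y, seed=0):
        """Test H0: samples x and y share a distribution.

        Under H0 the classifier cannot beat chance on held-out data, so each
        correct held-out prediction behaves approximately like a fair coin
        flip; return the one-sided binomial p-value for the correct count.
        """
        data = np.vstack([x, y])
        labels = np.concatenate([np.zeros(len(x)), np.ones(len(y))])
        X_tr, X_te, y_tr, y_te = train_test_split(
            data, labels, test_size=0.5, stratify=labels, random_state=seed)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        n_correct = int((clf.predict(X_te) == y_te).sum())
        return binom.sf(n_correct - 1, len(y_te), 0.5)  # P(X >= n_correct)

    rng = np.random.default_rng(0)
    p_same = c2st_p_value(rng.normal(0, 1, (500, 5)), rng.normal(0, 1, (500, 5)))
    p_diff = c2st_p_value(rng.normal(0, 1, (500, 5)), rng.normal(0.5, 1, (500, 5)))
    print(f"p (same distribution): {p_same:.3f}, p (mean-shifted): {p_diff:.3g}")

On identically distributed samples the p-value should be roughly uniform, while a modest mean shift drives it toward zero, which is the signature the test relies on.
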
Toby Ord (@tobyordoxford)'s Twitter Profile Photo

I'm increasingly concerned about the scenario of humans being gradually disempowered by AI, which could lead towards tyranny (if some small number of humans remain in charge) or even to humanity losing control of its future, all without a shot being fired. 1/2