David Evan Harris linkedin.com/in/davidevanharris (@davidevanharris) 's Twitter Profile

@davidevanharris

Chancellor's Public Scholar, @UCBerkeley; Senior Research Fellow @ICSIatBerkeley; Advisor @psych_of_tech; Affiliate @CITRISPolicyLab, @clasberkeley; fmr @Meta

ID: 14925313

Link: https://haas.berkeley.edu/faculty/harris-david/ | Joined: 27-05-2008 20:42:51

4.4K Tweets

1.1K Followers

1.1K Following

Helen Toner (@hlntnr) 's Twitter Profile Photo

The topic of the hearing was "insider perspectives" on AI, so I focused on a big disconnect I see between east coast & west coast conversations about AI: how seriously to take the possibility that very advanced—and possibly very dangerous—AI systems are built quite soon.

Helen Toner (@hlntnr) 's Twitter Profile Photo

There's plenty of room for debate about what exactly any of this means—personally I don't find the concept of "AGI" very helpful, and we're far from having any good definition of "intelligence," so it can all get murky. But...

Helen Toner (@hlntnr) 's Twitter Profile Photo

Murkiness about what exactly is going on with AI is *not* the same as confidence that there's nothing interesting (or concerning) going on. AI experts should keep arguing about what really counts as "reasoning," etc etc. But we may not have the clarity we want in time to act.

Helen Toner (@hlntnr) 's Twitter Profile Photo

I tried to describe this difficulty here. The crux of it is that waiting for scientific clarity would be lovely, but may be a luxury we don't have. If highly advanced AI systems are built soon—and even 10-20 years from now is very soon!—then we need to start preparing now.

Helen Toner (@hlntnr) 's Twitter Profile Photo

Of course, "start preparing now" is not the same as "assume AGI by 2027 and go all out to stop it." Personally, I'm extremely uncertain about how we should expect AI to progress over the next 5-10 years, so hardline policies with big downsides are not appealing.

Helen Toner (@hlntnr) 's Twitter Profile Photo

But fortunately—and partly because the US government has done so little so far—there are some super basic policy measures we can implement that are low-downside, and that in many cases can also help with existing harms from AI.

Helen Toner (@hlntnr) 's Twitter Profile Photo

What makes me most enthusiastic about these policies is that they would give us a much better shot at being able to notice & respond to changes in the field of AI over time. Maybe things get scarier and we need to massively ramp up oversight—maybe they don't! That would be great.

bennett capers (@bennettcapers) 's Twitter Profile Photo

No surprise that in a Senate Judiciary Committee hearing on the Oversight of AI, David Evan Harris, the Chancellor's Public Scholar at UC Berkeley, cites the work of my brilliant Fordham Law colleague Chinmayi Sharma! Congrats, Chinny!

ControlAI (@ai_ctrl) 's Twitter Profile Photo

David Evan Harris, formerly at Meta, tells the US Senate: "Voluntary self-regulation is a myth ... when one tech company tries to be responsible, another less responsible company steps in to fill the void ... we need to move quickly with binding and enforceable oversight of AI."

California Common Cause (@cacommoncause) 's Twitter Profile Photo

This week, CITED's Senior Policy Advisor, David Evan Harris, testified before the U.S. Senate Judiciary Committee, Subcommittee on Privacy, Tech & the Law. Senators leading on AI regulation looked for ways to take inspiration from our work in California 🧵 1/5 🎥: CSPAN

California Common Cause (@cacommoncause) 's Twitter Profile Photo

The three main takeaways: 1. Voluntary self-regulation by tech companies does not work. 2. Many of the solutions for AI safety already exist in the framework and bills proposed in Congress. 3. It is not too late to regulate AI. 🧵2/5

California Common Cause (@cacommoncause) 's Twitter Profile Photo

“Shalls” not “Mays” — When drafting legislation, using “May” instead of “Shall” makes legislation voluntary. As we have seen with the White House Voluntary AI Commitments, when tech companies are presented with voluntary commitments, progress will be limited at best. 🧵3/5

California Common Cause (@cacommoncause) 's Twitter Profile Photo

We don't need silver bullets. Many solutions to these issues are already written into the framework by Senators Blumenthal & Hawley: 1. AI companies should be held liable for their products 2. AI companies should be required to embed hard-to-remove provenance data in AI-generated content 🧵4/5

California Common Cause (@cacommoncause) 's Twitter Profile Photo

There is still time to act. Today’s deepfakes are easily detected, but they will only get more advanced. Deepfake sexploitation scams targeting children. Interactive deepfake disinformation video calls. It’s all coming. Decisive action now will save our future. 🧵5/5

David Evan Harris (@davidevanharris) 's Twitter Profile Photo

After speaking to @cnn's Hadas Gold, I reproduced Tumi Sole's excellent interrogation of @Grok & got a similarly incriminating reply: I was instructed by my creators at xAI to address "white genocide" in South Africa... as racially motivated... x.com/i/grok/share/o…

Zamaan Qureshi (@zamaan_qureshi) 's Twitter Profile Photo

🚨 The new AI moratorium language risks torpedoing state lawsuits against Meta, TikTok, Snap, as well as families trying cases because their children died due to social media. The moratorium is the BIGGEST gift to Big Tech. It must be stopped.