Yonathan Arbel (@profarbel)'s Twitter Profile
Yonathan Arbel

@profarbel

Let's Build Safe AI.
Law Prof @ Alabama
Contracts, Defamation, Legal NLP, and AI Safety

ID: 2609911050

Link: http://battleoftheforms.com/ | Joined: 07-07-2014 15:19:58

3.3K Tweets

1.1K Followers

1.1K Following

Yonathan Arbel (@profarbel)'s Twitter Profile Photo

Can AI create new math? If so, where are all the new discoveries? I can hardly adjudicate the merit of the claims in the thread, but even under this more minimalist view, it appears that there is a zone of (a) new math discoveries + (b) that aren't that hard to solve once the

Paul Weitzel (@ithinkiagree)'s Twitter Profile Photo

Taking 10% of Intel has four facets worth considering separately because “stock” is really a bundle of rights. First, the economic rights. This gives the gov a right to share in the profits (and some payout in the rare event they liquidate while still solvent). This doesn’t

Caprice Roberts (@capricelroberts)'s Twitter Profile Photo

Calling all law faculty candidates hoping to teach law starting next fall: more ways to get your application seen by all law hiring + appointments committees & find your ideal law school. #SEALS2025

Peter N. Salib (@petersalib)'s Twitter Profile Photo

Enjoying the new 80,000 Hours interview w/ Kyle Fish about AI welfare. One important research question he and Luisa discuss is mechanisms for making credible commitments to advanced AI systems. Simon Goldstein and I have been thinking a lot about this! We argue that ...

Yonathan Arbel (@profarbel)'s Twitter Profile Photo

The NYTimes review of If Anyone Builds It is not only shallow and lazy; it also plays the game of positioning its author and reader as mature adults explaining to their overexcited 9-year-old that no, dear, aliens aren't real. It replaces arguments and curiosity with status

Jesús Fernández-Villaverde (@jesusferna7026)'s Twitter Profile Photo

A couple of days ago, I posted on the double descent phenomenon to alert economists about its importance. To illustrate it, I used the following example: 1️⃣ You want to find the curve that “best” approximates an unknown function generating 12 observations. 2️⃣ I know the target

Matthew Yglesias (@mattyglesias)'s Twitter Profile Photo

I want to recommend the new book “If Anyone Builds It, Everyone Dies” by Eliezer Yudkowsky and Nate Soares. The line currently being offered by the leading-edge AI companies — that they are 12-24 months away from unleashing superintelligent AI that will be able to massively outperform

Yonathan Arbel (@profarbel)'s Twitter Profile Photo

I won't write a full book review of If Anyone Builds It, Everyone Dies, but I would highly recommend reading it, especially to curious minds who want to understand why serious people are worried. It's really hard to communicate AI risk ideas and the authors found clever, even

Yonathan Arbel (@profarbel)'s Twitter Profile Photo

I honestly believe that some people engage in this implicit argument: "I personally find EY annoying/smug/fedora-wearing, therefore OAI/xAI/GDM are responsible actors who will never cut corners on a socially transformative tech"

Yonathan Arbel (@profarbel)'s Twitter Profile Photo

This Monday at HarvardLaw: Law & Large AI Risk: FAQ on AI Safety Law. Together with Peter N. Salib, a talk on this emerging area of law. Special thanks to CLAIR, the Center for Law & AI Risk, and the Harvard Artificial Intelligence Law and Policy Association
