mark williams (@markjwill) 's Twitter Profile
mark williams

@markjwill

Co-Director Vanderbilt AI Law Lab

ID: 53155572

Joined: 02-07-2009 19:06:21

119 Tweets

153 Followers

1.1K Following

wordgrammer (@wordgrammer) 's Twitter Profile Photo

You’re worried about o3’s performance on Arc-AGI and you can’t even put two plates up? My brother in Christ you are worried about benching the wrong weights

Ethan Mollick (@emollick) 's Twitter Profile Photo

Banning AI use in a class is fine, especially for subjects where humans need to develop a skill that overlaps with AI. It took a few years to figure out how to use calculators in math education (and they are still banned when learning math). Just remember AI detectors don’t work.

Ethan Mollick (@emollick) 's Twitter Profile Photo

Question for AI policy folks: Grok 3 is the first model that passes the EU’s 10^25 FLOP limit for systemic risk, and it has few safety guardrails. Looks like it still will be released in Europe, is that because of the delay before full implementation? Or have the rules changed?

Zach Posner (@zposner) 's Twitter Profile Photo

Legaltech used to lag 5 years behind fintech, proptech, and edtech in software adoption. But not anymore. 🚀 For years, software was about workflow automation and calculations—industries like finance and real estate thrived on it. But law? Law is logic, structure, and rules—the

Ethan Mollick (@emollick) 's Twitter Profile Photo

The thing about AI policy is that even if you think the labs are 99% likely to be wrong in their belief that AGI is achievable soon, the implications are big enough that you want serious government effort going into thinking through contingencies. Tail risks are a government job.

Ethan Mollick (@emollick) 's Twitter Profile Photo

I warned about the Homework Apocalypse in 2023. It happened as predicted. There is a world where AI & traditional education get along very well (mixes of active in-class learning, AI-assisted assignments & tutors, blue books), but it needs to be built. oneusefulthing.org/p/the-homework…

Kevin Roose (@kevinroose) 's Twitter Profile Photo

I'm sympathetic to the professors quoted in this, but at a certain point if your students can cheat their way through your class with AI, you probably need to redesign your class. nymag.com/intelligencer/…

Ethan Mollick (@emollick) 's Twitter Profile Photo

This is a prime example of how AI chatbots are harder to use than they first seem. 4o & other models hallucinate wrong but completely plausible-seeming citations. Deep Research does citations well, but needs to be activated. None of this is documented or explained by the models.

Kevin Frazier (@kevintfrazier) 's Twitter Profile Photo

AI bills can’t be assessed on language alone. Every analysis must answer the following: who will enforce it? What relevant expertise do they have? And, critically, what resources are available? via ⁦Bloomberg Law⁩ / ⁦Titus Wu

John B. Holbein (@johnholbein1) 's Twitter Profile Photo

If PhD programs are, truly, going to prepare students for non-academic jobs, they should, at minimum, stop sending signals—overtly or covertly—that non-academic jobs are inferior.

Ethan Mollick (@emollick) 's Twitter Profile Photo

This study is being massively misinterpreted. College students who wrote an essay with LLM help engaged less with the essay & thus were less engaged when a small subset (a total of 9 people) was asked to do similar work weeks later. LLMs do not rot your brain. Being lazy & not learning does.

Ethan Mollick (@emollick) 's Twitter Profile Photo

Many firms built around the limitations & cost assumptions of GPT-3.5 class models, and are now stuck with complex solutions that are more expensive & worse than a reasoner without any scaffolding. You need to build solutions with an eye towards riding the cost/performance curve.

Ethan Mollick (@emollick) 's Twitter Profile Photo

So every major model is already exceeding or will soon exceed the EU's systemic risk FLOP limit when it comes into effect next year.

Ethan Mollick (@emollick) 's Twitter Profile Photo

The problem is not just the proliferation of devices that let you record people without their knowledge, but the fact that multimodal LLMs let you use recordings in ways that neither law nor society anticipated. Everyone has an easy way to mine hours of footage. No forgetting.
