Professor of Technology and Regulation, founder of @OxGovTech, University of Oxford, [email protected]
ID: 823519451756855296
https://www.oii.ox.ac.uk/people/sandra-wachter/ 23-01-2017 13:15:12
5.5K Tweets
17.17K Followers
327 Following

Still time to apply to work with me, Brent Mittelstadt @bmittelstadt.bsky.social & Chris Russell, Oxford Internet Institute

"Large language models do not distinguish between fact and fiction. … They are not, strictly speaking, designed to tell the truth," Sandra Wachter of the Oxford Internet Institute told The Washington Post. AI is more persuasive than a human in a debate, study finds washingtonpost.com/technology/202…


LLMs do not distinguish between fact & fiction. They are not designed to tell the truth, but to persuade. Yet they are deployed in sectors where truth matters, e.g. education, science, health, media, law & finance. My interview with The Washington Post tinyurl.com/2e7253ja

For more on this, see my work with Chris Russell and Brent Mittelstadt @bmittelstadt.bsky.social: "Do large language models have a legal duty to tell the truth?" royalsocietypublishing.org/doi/full/10.10… Oxford Internet Institute

LLMs don't "distinguish between fact and fiction." "They are not... designed to tell the truth. Yet they are implemented in many sectors where truth and detail matter, such as education, science, health, the media, law, and finance," Sandra Wachter washingtonpost.com/technology/202…

With a politician or salesperson we understand their motivation. But chatbots have no intentionality & are optimised for plausibility & engagement, not truthfulness. They will invent facts for no purpose. Thanks to John Thornhill of the Financial Times for featuring our paper tinyurl.com/ycxyskba

Full paper is here: Do large language models have a legal duty to tell the truth? tinyurl.com/3kzs777b Brent Mittelstadt @bmittelstadt.bsky.social Chris Russell Oxford Internet Institute The Institute for Ethics in AI Faculty of Law Berkman Klein Center for Internet & Society HIIG Berlin Algorithmic Justice League AlgorithmWatch OxfordSocialSciences Reuters Institute Governance of Emerging Technologies



The Financial Times's John Thornhill covers OII research by Sandra Wachter, Brent Mittelstadt @bmittelstadt.bsky.social and Chris Russell on the propensity of LLMs to generate "careless speech". Read the full piece here: ft.com/content/55c08f…

"Chatbots have no intentionality and are optimised for plausibility and engagement, not truthfulness. They will invent facts for no purpose. They can pollute the knowledge base of humanity in unfathomable ways." feat. Sandra Wachter ft.com/content/55c08f…

Are you interested in the governance of emerging tech? Come & work with me, Brent Mittelstadt @bmittelstadt.bsky.social & Chris Russell. We are hiring 3 Post Docs. Law: tinyurl.com/4rbhcndp Ethics: tinyurl.com/yc2e2km4 Computer Science/AI/ML: tinyurl.com/yr5bvnn5 Application deadline is June 15, 2025.



“That doesn’t mean we shouldn’t use it, but it is to say that there are trade-offs and we need to decide if convenience is worth the loss of privacy,” Sandra Wachter, Oxford Internet Institute, commenting on the risks of using AI to learn a second language in her interview with New Scientist.

Last call: Are you interested in the governance of emerging tech? Come & work with me, Brent Mittelstadt @bmittelstadt.bsky.social & Chris Russell. We are hiring 3 Post Docs. Law: tinyurl.com/4rbhcndp Ethics: tinyurl.com/yc2e2km4 CS/AI/ML: tinyurl.com/yr5bvnn5 Application deadline is June 15, 2025.

Interesting opinion piece by Heather Stewart in The Guardian, citing Sandra Wachter of the Oxford Internet Institute on flawed #LLMs and the proliferation of invented content. bit.ly/442g8XH