Sandra Wachter -@swachter.bsky.social (@sandrawachter5)'s Twitter Profile
Sandra Wachter -@swachter.bsky.social

@sandrawachter5

Professor of Technology and Regulation, founder of @OxGovTech, University of Oxford, [email protected]

ID: 823519451756855296

Link: https://www.oii.ox.ac.uk/people/sandra-wachter/ · Joined: 23-01-2017 13:15:12

5.5K Tweets

17.17K Followers

327 Following

Oxford Internet Institute (@oiioxford):

"Large language models do not distinguish between fact and fiction. … They are not, strictly speaking, designed to tell the truth," Sandra Wachter [email protected] Oxford Internet Institute told The Washington Post. AI is more persuasive than a human in a debate, study finds washingtonpost.com/technology/202…

Sandra Wachter -@swachter.bsky.social (@sandrawachter5):

LLMs do not distinguish between fact & fiction. They are not designed to tell the truth, but to persuade. Yet they are implemented in sectors where truth matters, e.g. education, science, health, media, law, & finance. My interview with The Washington Post: tinyurl.com/2e7253ja

Dorothea Baur (Dr.) (@dorotheabaur):

LLMs don't "distinguish between fact and fiction." "They are not... designed to tell the truth. Yet they are implemented in many sectors where truth and detail matter, such as education, science, health, the media, law, and finance” Sandra Wachter [email protected] washingtonpost.com/technology/202…

Sandra Wachter -@swachter.bsky.social (@sandrawachter5):

With a politician or salesperson we understand their motivation. But chatbots have no intentionality & are optimised for plausibility & engagement, not truthfulness. They will invent facts for no purpose. Thanks to John Thornhill of the Financial Times for featuring our paper: tinyurl.com/ycxyskba

@funkygiesing.bsky.social (@funkygiesing):

"#LLMs inevitably produce #carelessspeech due to their design. Matters of fact are decided not by appeal to ground truth but predominantly on the frequency of statements in training data" 1

Dorothea Baur (Dr.) (@dorotheabaur):

"chatbots have no intentionality and are optimised for plausibility and engagement, not truthfulness. They will invent facts for no purpose. They can pollute the knowledge base of humanity in unfathomable ways". feat. Sandra Wachter [email protected] ft.com/content/55c08f…

Sandra Wachter -@swachter.bsky.social (@sandrawachter5):

Are you interested in the governance of emergent tech? Come & work with me, Brent Mittelstadt (@bmittelstadt.bsky.social), and Chris Russell. We are hiring 3 postdocs. Law: tinyurl.com/4rbhcndp Ethics: tinyurl.com/yc2e2km4 Computer Science/AI/ML: tinyurl.com/yr5bvnn5 Application deadline: June 15, 2025.

Oxford Internet Institute (@oiioxford):

“That doesn’t mean we shouldn’t use it, but it is to say that there are trade-offs and we need to decide if convenience is worth the loss of privacy,” said Sandra Wachter of the Oxford Internet Institute, commenting on the risks of using AI to learn a second language in her interview with New Scientist.

Sandra Wachter -@swachter.bsky.social (@sandrawachter5):

Last call: Are you interested in the governance of emergent tech? Come & work with me, Brent Mittelstadt (@bmittelstadt.bsky.social), and Chris Russell. We are hiring 3 postdocs. Law: tinyurl.com/4rbhcndp Ethics: tinyurl.com/yc2e2km4 CS/AI/ML: tinyurl.com/yr5bvnn5 Application deadline: June 15, 2025.