noahdgoodman (@noahdgoodman)'s Twitter Profile
noahdgoodman

@noahdgoodman

Professor of natural and artificial intelligence @Stanford. Research Scientist at @GoogleDeepMind.
(@StanfordNLP @StanfordAILab etc)

ID: 1193894314566307841

Joined: 11-11-2019 14:12:49

208 Tweets

2.2K Followers

113 Following

Kanishk Gandhi (@gandhikanishk)'s Twitter Profile Photo

Language models struggle to search, not due to an architecture problem, but a data one! They rarely see how to search or backtrack. We show how LLMs can be taught to search by representing the process of search in language as a flattened string, a stream of search (SoS)!
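The "stream of search" idea above can be illustrated with a small hypothetical sketch (not the paper's code; the toy tree, token names, and helper function are illustrative assumptions): a depth-first search is serialized, including explicit backtracking steps, into one flat token string of the kind a language model could be trained on.

```python
# Hypothetical sketch of a "stream of search": flatten a depth-first
# search trace, with explicit backtracking tokens, into a single string.

def stream_of_search(tree, goal, node="root"):
    """Yield search tokens: node visits, backtracks, and the solution."""
    yield f"visit {node}"
    if node == goal:
        yield f"solution {node}"
        return
    for child in tree.get(node, []):
        found = False
        for tok in stream_of_search(tree, goal, child):
            yield tok
            if tok.startswith("solution"):
                found = True
        if found:
            return
    # Dead end: emit an explicit backtrack token instead of hiding it.
    yield f"backtrack from {node}"

toy_tree = {"root": ["a", "b"], "a": ["c"], "b": ["d"]}
trace = " ; ".join(stream_of_search(toy_tree, "d"))
print(trace)
# visit root ; visit a ; visit c ; backtrack from c ; backtrack from a ;
# visit b ; visit d ; solution d
```

The point of the flattened form is that failed branches and backtracks stay visible in the training string, rather than only the final answer.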

Philipp Fränken (@jphilippfranken)'s Twitter Profile Photo

Excited to share OffTheRails: A moral reasoning benchmark beyond trolley problems! We present a simple prompting pipeline for generating moral reasoning evaluations with language models using causal templates 🔵→🟠

noahdgoodman (@noahdgoodman)'s Twitter Profile Photo

Base language models already know a lot about good behavior. Here we bring out that latent knowledge by enhancing the connection between principles and responses — no preferences required!

Eric Zelikman (@ericzelikman)'s Twitter Profile Photo

Charles Sutton @ ✈️ ICML 2024 🥐 Kensen Shi Thanks for sharing! I'm curious if you ever tried comparing to Parsel from 2022 (i.e. decompose algorithmic tasks into hierarchical subtasks, search over combinatorial implementations of subprograms w/ tests), and if so, if you have an intuition for where improvements came from!

Michael C. Frank (@mcxfrank)'s Twitter Profile Photo

People are really good at creating conventions - new ways of talking - during dialogues. But what happens in larger groups? And what about when people can only respond using 😁?! New paper by Veronica Boyce Robert Hawkins noahdgoodman and me, now out: pnas.org/doi/10.1073/pn…

Zac Kenton (@zackenton1)'s Twitter Profile Photo

Eventually, humans will need to supervise superhuman AI - but how? Can we study it now? We don't have superhuman AI, but we do have LLMs. We study protocols where a weaker LLM uses stronger ones to find better answers than it knows itself. Does this work? It’s complicated: 🧵👇
