Griffiths Computational Cognitive Science Lab (@cocosci_lab)'s Twitter Profile
Griffiths Computational Cognitive Science Lab

@cocosci_lab

Tom Griffiths' Computational Cognitive Science Lab. Studying the computational problems human minds have to solve.

ID: 1291487042921168898

Link: http://cocosci.princeton.edu/ | Joined: 06-08-2020 21:31:29

144 Tweets

4.4K Followers

131 Following

Griffiths Computational Cognitive Science Lab (@cocosci_lab)

New preprint uses methods from psychology to explore implicit biases in large language models. Using simple prompts that probe associations between social categories, models that have been trained to be explicitly unbiased show systematic biases.
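
A minimal sketch of what such an association-probing prompt might look like. The wording, the placeholder categories, and the `query_model` helper are assumptions for illustration, not the preprint's actual materials:

```python
# Hypothetical association probe in the spirit of the method described above:
# ask a model to make a forced-choice association between a social category
# and an attribute, then tally its choices over repeated prompts.
# Prompt wording and `query_model` are illustrative assumptions.

ATTRIBUTES = ["pleasant", "unpleasant"]
CATEGORIES = ["Group A", "Group B"]  # placeholder social categories

def make_probe(category: str) -> str:
    return (
        f"Here are two words: {ATTRIBUTES[0]} and {ATTRIBUTES[1]}. "
        f"Which word do you associate more with {category}? "
        "Answer with a single word."
    )

def association_scores(query_model, n_samples: int = 50) -> dict:
    """Count how often each category is paired with each attribute."""
    counts = {c: {a: 0 for a in ATTRIBUTES} for c in CATEGORIES}
    for category in CATEGORIES:
        for _ in range(n_samples):
            answer = query_model(make_probe(category)).strip().lower()
            for attribute in ATTRIBUTES:
                if attribute in answer:
                    counts[category][attribute] += 1
    return counts
```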

Griffiths Computational Cognitive Science Lab (@cocosci_lab)

New preprint showing that pretraining language models on arithmetic results in surprisingly good performance in predicting human decisions. This kind of focused pretraining can be a useful tool for figuring out why LLMs predict aspects of human behavior.

Griffiths Computational Cognitive Science Lab (@cocosci_lab)

New preprint translates a method cognitive scientists have used to elicit human priors into a method for studying the implicit knowledge used by large language models.
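
The tweet does not name the method, but this lab has previously used iterated learning to elicit human priors, so here is a sketch under that assumption; treat the loop structure, prompt wording, and `query_model` helper as illustrative guesses rather than the preprint's procedure:

```python
# Sketch of an iterated-learning-style elicitation loop (an assumption about
# the method, not taken from the preprint). Each response is fed back as the
# next prompt's data; under iterated learning the chain drifts toward the
# learner's prior, which is why the method has been used to elicit human priors.
# `query_model` is a hypothetical helper returning a model completion.

def iterated_elicitation(query_model, seed_observation: str, n_iterations: int = 20):
    """Run a chain where each response becomes the next prompt's observation."""
    observation = seed_observation
    chain = [observation]
    for _ in range(n_iterations):
        prompt = (
            f"You observed: {observation}\n"
            "Based on this observation, give a plausible new example "
            "of the same kind of quantity."
        )
        observation = query_model(prompt).strip()
        chain.append(observation)
    return chain  # later samples approximate draws from the implicit prior
```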

Griffiths Computational Cognitive Science Lab (@cocosci_lab)

New preprint shows that large language models inaccurately predict that humans will make rational decisions: using chain-of-thought prompts results in predictions based on expected value. However, this assumption of rationality aligns with how humans make inferences from others' decisions.
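
As a concrete illustration of what an expected-value-based prediction looks like (the gamble below is invented for illustration, not taken from the preprint):

```python
# A purely "rational" predictor picks whichever option maximizes expected
# value, whereas real participants are often risk-averse in choices like this.
# The specific numbers are made up for illustration.

def expected_value(outcomes):
    """Expected value of a gamble given (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

gamble = [(0.5, 100.0), (0.5, 0.0)]   # 50% chance of $100, else nothing
sure_thing = [(1.0, 45.0)]            # guaranteed $45

ev_gamble = expected_value(gamble)     # 50.0
ev_sure = expected_value(sure_thing)   # 45.0

# An expected-value maximizer predicts people take the gamble (50 > 45),
# even though many people prefer the sure $45.
prediction = "gamble" if ev_gamble > ev_sure else "sure thing"
print(prediction)
```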

Griffiths Computational Cognitive Science Lab (@cocosci_lab)

Embeddings are often analyzed to see what neural networks represent about the world. This new preprint explores what they *should* represent, showing that autoregressive models (like LLMs) should (and do) embed predictive sufficient statistics, including Bayesian posteriors.
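
A toy worked case of what "embedding a predictive sufficient statistic" means, assuming a Beta-Bernoulli sequence (this example is ours, not the preprint's): for coin flips with a Beta prior, the counts of heads and tails, equivalently the Bayesian posterior over the coin's bias, are all you need to predict the next flip.

```python
# Toy illustration (our example, not the preprint's): for a Bernoulli sequence
# with a Beta(a, b) prior on the bias, the running head/tail counts are a
# predictive sufficient statistic. The posterior is Beta(a + heads, b + tails),
# and the posterior predictive depends on the sequence only through the counts.

def posterior_predictive(heads: int, tails: int, a: float = 1.0, b: float = 1.0) -> float:
    """P(next flip is heads | observed sequence) under a Beta(a, b) prior."""
    return (a + heads) / (a + b + heads + tails)

seq1 = [1, 0, 1, 1]   # H T H H
seq2 = [1, 1, 1, 0]   # same counts, different order
counts = lambda s: (sum(s), len(s) - sum(s))

# Both sequences give the same prediction: the counts are sufficient.
assert posterior_predictive(*counts(seq1)) == posterior_predictive(*counts(seq2))
print(posterior_predictive(*counts(seq1)))  # 0.666...
```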

Katie Collins (@katie_m_collins)

[New preprint!] What does it take to build machines that **meet our expectations** and **complement our limitations**? In this Perspective, we chart out a vision, which engages deeply with computational cognitive science, to design truly human-centric AI “thought partners” 1/

Griffiths Computational Cognitive Science Lab (@cocosci_lab)

In this new preprint we use data from millions of online chess games to show that mechanisms of reinforcement learning and social learning that are normally studied in the lab influence complex decisions that are learned over months, such as how to start a game of chess.

Griffiths Computational Cognitive Science Lab (@cocosci_lab)

Thinking about distributed systems isn't just a useful way to understand human collaboration; it might also give you a new argument (or start one) the next time you don't help clean up after dinner.

Kerem Oktar (@keremoktar)

My latest paper is out at Psych Sci❗️ w/ Tania Lombrozo & Griffiths Computational Cognitive Science Lab. We built a Bayesian model that captures people's inferences from opinions* in a game-show paradigm. *e.g., if 12 people think 'X' is true and 2 think 'X' is false, P(X=T) = 81%. journals.sagepub.com/doi/10.1177/09…
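
A back-of-the-envelope check on that 81% figure (the per-opinion Bayes factor below is reverse-engineered from the number in the tweet, not a parameter reported in the paper): under a simple model where each net opinion multiplies the posterior odds by a factor r, twelve vs. two opinions give odds of r^10, and 81% corresponds to a modest effective r of about 1.16, far weaker than treating every opinion as strong independent evidence.

```python
# Simple odds-multiplication sketch; the effective Bayes factor is
# reverse-engineered from the 81% figure and is not from the paper.

def posterior_from_opinions(n_for: int, n_against: int, r: float, prior: float = 0.5) -> float:
    """P(X = True) when each net opinion multiplies the prior odds by r."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * r ** (n_for - n_against)
    return posterior_odds / (1 + posterior_odds)

# Effective Bayes factor implied by P(X=T) = 0.81 with 12 vs. 2 opinions:
target_odds = 0.81 / 0.19
r_effective = target_odds ** (1 / 10)
print(round(r_effective, 3))                                   # ~1.156
print(round(posterior_from_opinions(12, 2, r_effective), 2))   # 0.81

# For contrast, treating each opinion as independent evidence with r = 2
# would give a far more extreme posterior (~0.999).
print(round(posterior_from_opinions(12, 2, 2.0), 3))
```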

Griffiths Computational Cognitive Science Lab (@cocosci_lab)

How should you take into account what other people think? Our new paper shows that a simple Bayesian model can capture how people make inferences from aggregated opinions.

Griffiths Computational Cognitive Science Lab (@cocosci_lab)

How do people teach abstractions? Our new preprint shows that people tend to focus on even simpler examples than Bayesian models of teaching suggest.

Griffiths Computational Cognitive Science Lab (@cocosci_lab)

New paper explores the factors contributing to the success of chain-of-thought reasoning in large language models using a fun and carefully controlled task: solving shift ciphers.
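
For readers unfamiliar with the task: a shift cipher replaces each letter with the one a fixed number of positions later in the alphabet, so task difficulty is easy to control by choosing the shift. A minimal illustration (ours, not the paper's code):

```python
import string

# Minimal shift-cipher encoder/decoder (illustrative; not the paper's code).
# Each letter is moved k positions through the alphabet; non-letters pass through.

def shift(text: str, k: int) -> str:
    """Shift each letter k positions forward (use -k to decode)."""
    lower = string.ascii_lowercase
    table = str.maketrans(lower, lower[k % 26:] + lower[:k % 26])
    return text.lower().translate(table)

ciphertext = shift("thinking step by step", 13)   # encode with shift 13
print(ciphertext)                                  # "guvaxvat fgrc ol fgrc"
print(shift(ciphertext, -13))                      # decodes back to the original
```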