Jane Wang
@janexwang
Staff research scientist at DeepMind. AI and neuro. Former physicist, current human.
ID:22385548
http://janexwang.com 01-03-2009 17:55:32
1.6K Tweets
35.4K Followers
429 Following
Interesting discussion with Ian Osband about what it means for a large model to be 'grounded'. I had no idea my take was controversial! Language is very useful for abstraction, but a world made only of language, and constrained only by human subjectivity, can't be that grounded, can it?
Ever wondered how your LLM splits numbers into tokens? And how that might affect performance? Check out this cool project I did with DJ Strouse: Tokenization counts: the impact of tokenization on arithmetic in frontier LLMs.
Read on 🔎⏬
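To get an intuition for the kind of effect the paper studies, here is a toy sketch (not the paper's code; `chunk_digits` is a hypothetical helper) of how grouping digits right-to-left versus left-to-right changes the chunks a number is split into:

```python
# Toy illustration of digit chunking direction in tokenization.
# chunk_digits is a hypothetical helper, not from the paper.

def chunk_digits(number: str, size: int = 3, right_to_left: bool = True):
    """Split a digit string into fixed-size chunks, mimicking how a
    tokenizer might group digits into multi-digit tokens."""
    if right_to_left:
        # Group from the right, as in comma-separated numbers: 1,234,567
        rev = number[::-1]
        chunks = [rev[i:i + size][::-1] for i in range(0, len(rev), size)]
        return chunks[::-1]
    # Group from the left: 123 456 7
    return [number[i:i + size] for i in range(0, len(number), size)]

print(chunk_digits("1234567", right_to_left=True))   # ['1', '234', '567']
print(chunk_digits("1234567", right_to_left=False))  # ['123', '456', '7']
```

The same digit string yields different chunk boundaries depending on direction, so digit positions line up differently across examples, which is one plausible way tokenization can interact with arithmetic performance.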
Sadly I am not going to #NeurIPS2023, but if you are, you should definitely check out Julian Coda-Forno's paper on Meta-in-context Learning in LLMs!
neurips.cc/virtual/2023/p…
on Thursday at 5:15 pm
x.com/cpilab/status/…
Can we teach AIs to be good partners to humans and each other?
Emotions and playful courtship seem to be key to successful partnerships in humans and animals. We built an evolutionary model to try to understand why. With Edgar Duenez-Guzman & Joel Z Leibo
pnas.org/doi/10.1073/pn…