POURCEL Guillaume (@guillaumeap) 's Twitter Profile
POURCEL Guillaume

@guillaumeap

PhD student @univgroningen, intern @InriaScool, @FlowersINRIA. CogSci, AI. Inspired by brains (make RNNs behave like autograd) and behavior (open-ended goals)

ID: 942422904276488195

Website: https://guillaumepourcel.github.io/ · Joined: 17-12-2017 15:55:04

248 Tweets

116 Followers

796 Following

Khurram Javed (@khurramjaved_96) 's Twitter Profile Photo

The correct parallel to Transformers is not RNNs. It is agents parameterized as RNNs that can choose to look back. Comparable to a human rereading a previous paragraph/equation/code to get the right context. A transformer does this naively by looking back at everything.

Andrej Karpathy (@karpathy) 's Twitter Profile Photo

The (true) story of development and inspiration behind the "attention" operator, the one in "Attention is All you Need" that introduced the Transformer. From personal email correspondence with the author 🇺🇦 Dzmitry Bahdanau @ NeurIPS ~2 years ago, published here and now (with permission) following

Chris Olah (@ch402) 's Twitter Profile Photo

So if you’re an academic considering industry research roles, I’d offer the following questions and frame: (1) Would you enjoy working in the team science / focused bet model? (2) What bets would you be excited to be a part of? I’ll talk through these below.

Jason Wei (@_jasonwei) 's Twitter Profile Photo

An underrated but occasionally make-or-break skill in AI research (that didn’t really exist ten years ago) is the ability to find a dataset that actually exercises a new method you are working on. Back in the day when the bottleneck in AI was learning, many methods were

Shane Legg (@shanelegg) 's Twitter Profile Photo

I've been saying this within DeepMind for at least 10 years, with the additional clarification that it's about cognitive problems that regular people can do. By this criterion we're not there yet, but I think we might get there in the coming years.

Michael Dennis (@michaeld1729) 's Twitter Profile Photo

Joel Lehman So when I first worked on unsupervised environment design, I was hoping to mitigate KU. There's a section in that paper dealing with the connection to "decisions under ignorance" (KU under another name). arxiv.org/abs/2012.02096 the open-ended complexity surprised me!

noahdgoodman (@noahdgoodman) 's Twitter Profile Photo

Congrats to OAI on producing a reasoning model! Their opaque tweets demonstrate that they’ve (independently) found some of the core ideas that we did on our way to STaR.

Andrej Karpathy (@karpathy) 's Twitter Profile Photo

For friends of open source: imo the highest leverage thing you can do is help construct a high diversity of RL environments that help elicit LLM cognitive strategies. To build a gym of sorts. This is a highly parallelizable task, which favors a large community of collaborators.
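A minimal sketch of what one entry in such a community "gym" contract might look like (the `TextEnv` interface and the `ArithmeticEnv` example are illustrative placeholders, not an existing library's API): each environment poses a task as text, scores a text action, and signals termination, so many contributors can add environments in parallel behind one shared interface.

```python
class TextEnv:
    """Minimal text-environment contract (illustrative sketch)."""

    def reset(self) -> str:
        raise NotImplementedError   # return the initial observation/prompt

    def step(self, action: str) -> tuple[str, float, bool]:
        raise NotImplementedError   # return (observation, reward, done)


class ArithmeticEnv(TextEnv):
    """Toy example environment: a one-shot mental-arithmetic question."""

    def __init__(self, a: int, b: int):
        self.a, self.b = a, b

    def reset(self) -> str:
        return f"What is {self.a} + {self.b}?"

    def step(self, action: str) -> tuple[str, float, bool]:
        # Reward 1.0 for the correct sum, 0.0 otherwise; episode ends.
        reward = 1.0 if action.strip() == str(self.a + self.b) else 0.0
        return "done", reward, True
```

The value of a fixed `reset`/`step` contract is that any policy, LLM-backed or not, can be evaluated against every contributed environment without per-environment glue code.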

prof-g (@robertghrist) 's Twitter Profile Photo

so, i tried out OpenAI's new Deep Research w/o3-mini to do a literature search on network sheaves (something i'm the expert at), and it told me things i did not know about (with accurate links). literature search for theses/papers is now practically automated.

Cédric (@cedcolas) 's Twitter Profile Photo

learning progress is slow to compute empirically, but we can learn to predict it! the agent can then organize its curriculum towards goals it hasn't tried yet but where it expects strong progress
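The idea in the tweet, estimate learning progress (LP) empirically where the agent has practiced and predict it where it has not, can be sketched roughly as follows. Everything here is a simplification of mine, not the paper's method: LP is approximated as the gap between fast and slow moving averages of success, and the "learned predictor" is stood in for by a nearest-neighbour lookup over goal features.

```python
import math

class LPPredictorCurriculum:
    """Illustrative sketch: empirical LP on practiced goals, generalized
    to untried goals via nearest-neighbour over goal feature vectors."""

    def __init__(self, goal_features, fast=0.3, slow=0.05):
        self.features = goal_features          # goal -> feature vector
        self.fast_avg, self.slow_avg = {}, {}
        self.fast, self.slow = fast, slow

    def update(self, goal, success):
        # Update fast and slow moving averages with the outcome (0 or 1).
        f = self.fast_avg.get(goal, 0.0)
        s = self.slow_avg.get(goal, 0.0)
        self.fast_avg[goal] = f + self.fast * (success - f)
        self.slow_avg[goal] = s + self.slow * (success - s)

    def empirical_lp(self, goal):
        # Gap between fast and slow success averages ~ recent progress.
        return abs(self.fast_avg[goal] - self.slow_avg[goal])

    def predicted_lp(self, goal):
        # Practiced goals use their empirical LP; untried goals borrow the
        # LP of the nearest practiced goal in feature space.
        if goal in self.fast_avg:
            return self.empirical_lp(goal)
        if not self.fast_avg:
            return 0.0
        nearest = min(self.fast_avg, key=lambda g: math.dist(
            self.features[g], self.features[goal]))
        return self.empirical_lp(nearest)

    def sample(self):
        # Greedy over predicted LP; a real agent would add exploration.
        return max(self.features, key=self.predicted_lp)
```

With this shape, the curriculum can direct practice toward goals the agent has never attempted but that sit near goals where it is currently improving.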

Artificial Analysis (@artificialanlys) 's Twitter Profile Photo

DeepSeek takes the lead: DeepSeek V3-0324 is now the highest scoring non-reasoning model. This is the first time an open weights model is the leading non-reasoning model, a milestone for open source. DeepSeek V3-0324 has jumped forward 7 points in Artificial Analysis

Richard Sutton (@richardssutton) 's Twitter Profile Photo

I’ve changed so little. From my 1978 Bachelor’s thesis: “The adult human mind is very complex, but the question remains open whether the learning processes that constructed it in interaction with the environment are similarly complex. Much evidence and many people’s intuitions

Thomas Wolf (@thom_wolf) 's Twitter Profile Photo

we've seen nothing yet! hosted a 9-13 yo vibe-coding event w. Robert Keus 👨🏼‍💻 this w-e (h/t Anton Osika – eu/acc Lovable Build) takeaway? AI is unleashing a generation of wildly creative builders beyond anything I'd have imagined and they grow up *knowing* they can build anything!

Pourcel Julien (@pourceljulien) 's Twitter Profile Photo

Introducing SOAR 🚀, a self-improving framework for prog synth that alternates between search and learning (accepted to #ICML!) It brings LLMs from just a few percent on ARC-AGI-1 up to 52% We’re releasing the finetuned LLMs, a dataset of 5M generated programs and the code. 🧵

David Pfau (@pfau) 's Twitter Profile Photo

You know you've got a big deal on your hands when you overwrite an existing acronym in your field en.m.wikipedia.org/wiki/Soar_(cog…

ARC Prize (@arcprize) 's Twitter Profile Photo

Self-Improving Language Models for Evolutionary Program Synthesis: A Case Study on ARC-AGI by Pourcel Julien @ICML, Cédric and Pierre-Yves Oudeyer. Another example of ARC-AGI as a research playground that has general applicability.

Loris Gaven (@lorisgaven) 's Twitter Profile Photo

I’m attending #ICML this week! We’ll be presenting MAGELLAN during the poster session on Thursday with Carta Thomas & Clément ROMAC @ ICML 2025. If you’re not in Vancouver, we recorded a talk presenting the paper last week; it’s available on YouTube (link below).

Murray Shanahan (@mpshanahan) 's Twitter Profile Photo

Very sad to learn of the death on 18th July of Margaret (Maggie) Boden, a titan of cognitive science and AI. I met her many times, and respected her greatly. theargus.co.uk/memorials/deat…
