charles blundell (@blundellcharles)'s Twitter Profile
charles blundell

@blundellcharles

ID: 781751678613659648

Joined: 30-09-2016 07:04:59

16 Tweets

354 Followers

423 Following

Berkeley AI Research (@berkeley_ai)

#TransferLearning is crucial for general #AI, and understanding what transfers to what is crucial for #TransferLearning. Taskonomy (#CVPR18 oral) is one step towards understanding transferability among #perception tasks. Live demo and more: taskonomy.vision

Richard Socher (@richardsocher)

Very excited to announce the Natural Language Decathlon benchmark and the first single joint deep learning model to do well on ten different NLP tasks including question answering, translation, summarization, sentiment analysis, ++ einstein.ai/research/the-n…

Google DeepMind (@googledeepmind)

Out-of-sample generalisation of memory in deep reinforcement learning agents: we demonstrate how two different kinds of memory help out-of-distribution generalisation, and propose a memory task suite. arxiv.org/abs/1910.13406

James Whittington (@jcrwhittington)

Hello! Very happy to share that The Tolman-Eichenbaum Machine (TEM) has been published. Many thanks to Tim Behrens and my other co-authors Tim Muller, Shirley Mark, Guifen Chen, @caswellcaswell, Neil Burgess for their help and support along the way! 1/15 cell.com/cell/fulltext/…

Jason Scott (@textfiles)

I think the way that David Crane fit 255 room configurations into an Atari 2600 cartridge is a story that should be told every Christmas github.com/johnidm/asm-at…
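
The story behind the tweet is Pitfall!: its 255 jungle screens are not stored in the 4 KB ROM but are regenerated on the fly from an 8-bit linear-feedback shift register, so each room's layout is a pure function of a one-byte counter that is stepped forward or backward as the player walks right or left. Below is a minimal Python sketch of that idea; the tap positions and the mapping from bits to room features are illustrative assumptions, not a transcription of Crane's 6502 code.

```python
def next_room(seed: int) -> int:
    """Advance an 8-bit LFSR one step (tap positions are illustrative)."""
    # XOR a few high bits together to form the new low bit, then shift left.
    bit = ((seed >> 3) ^ (seed >> 4) ^ (seed >> 5) ^ (seed >> 7)) & 1
    return ((seed << 1) | bit) & 0xFF

def describe_room(value: int) -> dict:
    """Decode one byte into hypothetical room features (field names assumed)."""
    return {
        "trees":    value & 0b00000111,         # low bits -> background pattern
        "hazard":   (value >> 3) & 0b00000111,  # middle bits -> pit/log/croc type
        "treasure": (value >> 6) & 0b00000011,  # high bits -> treasure variant
    }

# Walking the register from a fixed start value always reproduces the same
# sequence of rooms, so the whole map costs one byte of state instead of
# 255 stored layouts.
seed = 0xC4
for _ in range(5):
    print(hex(seed), describe_room(seed))
    seed = next_room(seed)
```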

Tim Behrens (@behrenstimb)

Oh dear. Look at this ridiculous attempt at summarising all the recent hippocampal state space models. Laughable. (👀 James Whittington, mccaffary, Jacob Bakermans) [2202.01682] How to build a cognitive map: insights from models of the hippocampal formation arxiv.org/abs/2202.01682

Google DeepMind (@googledeepmind)

Today in @nature, with EPFL, the first deep reinforcement learning system that can keep nuclear fusion plasma stable inside a tokamak, opening new avenues to advance nuclear fusion research. Paper: dpmd.ai/fusion-paper

Allison Tam (@allisontam_)

New paper! Language and large foundation models come together to drive semantically meaningful exploration. This idea helps RL agents learn faster in 3D environments, even when language annotations are unavailable (arxiv.org/abs/2204.05080) Read on 🔎⬇️

Ian Osband (@ianosband)

Excited to share some of our recent work! Fine-Tuning Language Models via Epistemic Neural Networks arxiv.org/abs/2211.01568 TL;DR: prioritise getting labels for your most *uncertain* inputs: match performance with 2x less data & better final performance. Discussion (1/n)
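
The TL;DR above is essentially uncertainty-prioritised active learning: spend the labelling budget on the inputs the model is least sure about. Below is a minimal sketch of that selection step, using disagreement across a small ensemble as a stand-in for the paper's epistemic neural network; the function names and scoring rule here are assumptions for illustration, not the epinet method itself (see arxiv.org/abs/2211.01568).

```python
import numpy as np

def predictive_disagreement(prob_samples: np.ndarray) -> np.ndarray:
    """prob_samples: (n_models, n_inputs, n_classes) class probabilities.

    Returns one uncertainty score per input: the variance of the predicted
    class probabilities across ensemble members, summed over classes.
    """
    return prob_samples.var(axis=0).sum(axis=-1)

def select_for_labeling(prob_samples: np.ndarray, budget: int) -> np.ndarray:
    """Indices of the `budget` most uncertain inputs (label these first)."""
    scores = predictive_disagreement(prob_samples)
    return np.argsort(-scores)[:budget]

# Toy example: 3 ensemble members, 5 unlabeled inputs, 2 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 5, 2))
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
print(select_for_labeling(probs, budget=2))  # indices of the 2 highest-disagreement inputs
```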

Bernhard Schölkopf (@bschoelkopf)

Another AI paradox: people are excited about LLMs, some even think that AGI is just around the corner. But some students are depressed, wondering how they can still get a PhD. Is it becoming pointless? Some personal notes on this. (1/8)

Tivadar Danka (@tivadardanka)

I described some of the most beautiful and famous mathematical theorems to Midjourney. Here is how it imagined them: 1. "The set of real numbers is uncountably infinite."

Tim Rocktäschel (@_rockt)

I am really excited to reveal what Google DeepMind's Open Endedness Team has been up to 🚀. We introduce Genie 🧞, a foundation world model trained exclusively from Internet videos that can generate an endless variety of action-controllable 2D worlds given image prompts.

Theo Weber (@theophaneweber)

Interested in a project at the intersection of large language models, self-improvement, reasoning + tool use, and computer security / code vulnerability? Our project is looking for a student researcher in that area (position is in London)! Please reach out and apply at

Ivan Fratric 💙💛 (@ifsecure)

New Project Zero blog post by Sergei Glazunov and Mark Brand: Project Naptime: Evaluating Offensive Security Capabilities of Large Language Models googleprojectzero.blogspot.com/2024/06/projec…

Google DeepMind (@googledeepmind)

Join our VP of Drastic Research and Gemini co-Tech Lead Oriol Vinyals and our podcast host Hannah Fry as they discuss the evolution of our AI models, from AlphaGo to Gemini. They also cover agentic capabilities and why giving AI access to tools could lead to a new era of

Google DeepMind (@googledeepmind)

Human generated data has fueled incredible AI progress, but what comes next? 📈 On the latest episode of our podcast, Hannah Fry and David Silver, VP of Reinforcement Learning, talk about how we could move from the era of relying on human data to one where AI could learn for