Felix Gimeno (@felixaxelgimeno)'s Twitter Profile
Felix Gimeno

@felixaxelgimeno

Research Engineer at Google DeepMind since 2018

ID: 2645793014

Link: https://orcid.org/0000-0002-1105-048X
Joined: 14-07-2014 15:52:26

23 Tweets

77 Followers

187 Following

Google DeepMind (@googledeepmind)'s Twitter Profile Photo

Introducing #AlphaCode: a system that can compete at average human level in competitive coding competitions like Codeforces. An exciting leap in AI problem-solving capabilities, combining many advances in machine learning! Read more: dpmd.ai/Alpha-Code 1/

Google DeepMind (@googledeepmind)'s Twitter Profile Photo

In Science Magazine, we present #AlphaCode - the first AI system to write computer programs at a human level in competitions. It placed in the top 54% of participants in coding contests by solving new and complex problems. How does it work? 🧵 dpmd.ai/alphacode-scie…

Jeff Dean (@jeffdean)'s Twitter Profile Photo

I’m very excited to share our work on Gemini today! Gemini is a family of multimodal models that demonstrate really strong capabilities across the image, audio, video, and text domains. Our most-capable model, Gemini Ultra, advances the state of the art in 30 of 32 benchmarks,

Google DeepMind (@googledeepmind)'s Twitter Profile Photo

AlphaCode was the first AI to write code at a human level in competitive programming. 🛠 Using a specialized version of Gemini, we created AlphaCode 2, which excels on this task. We estimate it performs better than 85% of participants on 12 recent @Codeforces contests. ↓

Rémi Leblond (@remileblond)'s Twitter Profile Photo

So excited to share what the team and I have been working on these last months! #AlphaCode 2 is powered by Gemini and performs better than 85% of competition participants in 12 contests on Codeforces! More details at goo.gle/AlphaCode2 Google DeepMind

Petar Veličković (@petarv_93)'s Twitter Profile Photo

Here is AlphaCode, successfully cracking a held-out Div1-level Codeforces task involving tricky use of dynamic programming _and_ hard-to-get-right modular arithmetic, Fermat's little theorem, and many other shiny competitive programming gems 💎♊️
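
The ingredients listed here are classic competitive-programming tools. As a hypothetical illustration (not AlphaCode's actual output), modular division under a prime modulus is usually handled with Fermat's little theorem, which gives a^(p-2) ≡ a^(-1) (mod p) for prime p and a not divisible by p:

    MOD = 10**9 + 7  # a prime modulus commonly used on Codeforces

    def mod_inverse(a: int, p: int = MOD) -> int:
        """Multiplicative inverse of a modulo the prime p, via Fermat's little theorem."""
        return pow(a, p - 2, p)

    def binomial_mod(n: int, k: int, p: int = MOD) -> int:
        """C(n, k) mod p, a typical building block in counting-style dynamic programming."""
        if k < 0 or k > n:
            return 0
        numerator = denominator = 1
        for i in range(k):
            numerator = numerator * ((n - i) % p) % p
            denominator = denominator * ((i + 1) % p) % p
        return numerator * mod_inverse(denominator, p) % p

    print(binomial_mod(10, 3))  # 120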

Horace He (@chhillee)'s Twitter Profile Photo

As mentioned previously, I found AlphaCode2 accounts, and through stalking their submission history, I manually performed the AlphaCode2 Codeforces evals. Overall, very impressive! I arrive at a rating of ~1650, which is the 85-90th percentile of CF users. (1/19)

Rémi Leblond (@remileblond)'s Twitter Profile Photo

Cool results from OpenAI: 89th percentile on Codeforces! A year ago #AlphaCode2 was at the 85th. Goes to show how hard this task is to climb! Really interesting to see that o1 successfully uses the AC2 playbook (diverse sampling, filtering + reranking), which works beyond coding!
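
The "AC2 playbook" mentioned here is, at a high level, the loop described in the AlphaCode write-ups: sample many candidate programs, filter them against the public example tests, then cluster behaviourally identical survivors and rerank. A rough sketch of that loop follows; every callable passed in is a hypothetical placeholder, not DeepMind's code or API:

    from collections import defaultdict
    from typing import Callable, List

    def select_submissions(
        problem: str,
        sample_program: Callable[[str], str],              # draws one candidate program from the model
        passes_public_tests: Callable[[str, str], bool],   # runs a candidate on the public examples
        behaviour_signature: Callable[[str, str], str],    # fingerprints a candidate's outputs on generated inputs
        num_samples: int = 1000,
        num_submissions: int = 10,
    ) -> List[str]:
        # 1. Diverse sampling: draw many candidate programs.
        candidates = [sample_program(problem) for _ in range(num_samples)]

        # 2. Filtering: keep only candidates that pass the public example tests.
        survivors = [c for c in candidates if passes_public_tests(c, problem)]

        # 3. Reranking: cluster candidates that behave identically, then submit one
        #    representative from each of the largest clusters.
        clusters = defaultdict(list)
        for c in survivors:
            clusters[behaviour_signature(c, problem)].append(c)
        ranked = sorted(clusters.values(), key=len, reverse=True)
        return [cluster[0] for cluster in ranked[:num_submissions]]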

The Nobel Prize (@nobelprize)'s Twitter Profile Photo

BREAKING NEWS The Royal Swedish Academy of Sciences has decided to award the 2024 #NobelPrize in Chemistry with one half to David Baker “for computational protein design” and the other half jointly to Demis Hassabis and John M. Jumper “for protein structure prediction.”

Jeff Dean (@jeffdean)'s Twitter Profile Photo

What a way to celebrate one year of incredible Gemini progress -- #1🥇across the board on overall ranking, as well as on hard prompts, coding, math, instruction following, and more, including with style control on. Thanks to the hard work of everyone in the Gemini team and

🇺🇦 Alex Polozov (@skiminok)'s Twitter Profile Photo

Welcome to ✨ Gemini 2.0! I am so thrilled about Flash as it allowed us to build the next generation of Code Agents experience: developers.googleblog.com/en/the-next-ch… Allowing every eng team or sole builder to focus their time on creation, not bugfixing or maintenance. 🦑 Jules, an

Silas Alberti (@silasalberti)'s Twitter Profile Photo

Wow we just ran Gemini 2.5 Pro on our evals and it got a new state of the art. Congrats to the Gemini team! Sharing preliminary results here and working on bringing it into Devin:

Google AI (@googleai)'s Twitter Profile Photo

🎉 It's a BIG day for Gemini 2.5

— 2.5 Flash and 2.5 Pro are now stable and generally available in AI Studio, Vertex AI, and the Google Gemini App
— We're launching a preview of the new 2.5 Flash-Lite, our most cost-efficient and fastest 2.5 model yet

More info on each model below ⬇️
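
For reference, a minimal sketch of calling the newly GA Flash model from AI Studio; it assumes the google-genai Python SDK and the "gemini-2.5-flash" model id, neither of which is stated in the tweet:

    # Assumes: pip install google-genai, and an API key from AI Studio.
    from google import genai

    client = genai.Client(api_key="YOUR_API_KEY")

    response = client.models.generate_content(
        model="gemini-2.5-flash",  # assumed model id for the GA 2.5 Flash
        contents="Summarise what changed in Gemini 2.5 Flash in one sentence.",
    )
    print(response.text)
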
Kaggle (@kaggle)'s Twitter Profile Photo

📢Introducing Kaggle Game Arena: a new, open benchmark platform where top AI models compete in complex, strategic games in streamed match-ups. We're charting new frontiers for trustworthy AI evaluation and it begins with chess — a classic proving ground for system intelligence.

Google DeepMind (@googledeepmind)'s Twitter Profile Photo

We have a long history of using games to measure progress in AI. 🎮 That’s why we’re helping unveil the @Kaggle Game Arena: an open-source platform where models go head-to-head in complex games to help us gauge their capabilities. 🧵

Jeff Dean (@jeffdean)'s Twitter Profile Photo

I’m really excited about our release of Gemini 3 today, the result of hard work by many, many people in the Gemini team and all across Google! 🎊 We’ve built many exciting new product experiences with it, as you’ll see today and in the coming weeks and months. You can find it

Oriol Vinyals (@oriolvinyalsml)'s Twitter Profile Photo

The secret behind Gemini 3? Simple: Improving pre-training & post-training 🤯 Pre-training: Contra the popular belief that scaling is over—which we discussed in our NeurIPS '25 talk with Ilya Sutskever and Quoc Le—the team delivered a drastic jump. The delta between 2.5 and 3.0 is
