Thomas Mahier (@thomasmahier) 's Twitter Profile
Thomas Mahier

@thomasmahier

CTO & co-founder of flint.media & generationia.flint.media

ID: 1431274718506721282

Link: https://generationia.flint.media/subscribe?utm_source=twitter&utm_medium=profiletoma

Joined: 27-08-2021 15:17:51

334 Tweets

76 Followers

84 Following

Eleanor Berger (@intellectronica) 's Twitter Profile Photo

📋 My current AI-assisted coding toolset 👇 ▨ IDE ▫️ Visual Studio Code + GitHub Copilot — this is my happy place, my workhorse. A system I know well, that's open and honest, that keeps improving, that gives me all the features and versatility I need for interactive coding. I mostly use

finbarr (@finbarrtimbers) 's Twitter Profile Photo

The canonical reinforcement learning textbook is available online for free and contains 80% of what you need to do RL as a practitioner: incompleteideas.net/book/the-book-… (The remaining 20% is reading the vLLM docs)

François Chollet (@fchollet) 's Twitter Profile Photo

Saying that deep learning is "just a bunch of matrix multiplications" is about as informative as saying that computers are "just a bunch of transistors" or that a library is "just a lot of paper and ink." It's true, but the encoding substrate is the least important part here.
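For illustration (not from the tweet): the "substrate" of a network really is a couple of matrix multiplications plus elementwise nonlinearities, which is exactly why the substrate tells you so little. A minimal sketch with randomly initialized weights:

```python
# Illustrative sketch (not from the tweet): a tiny two-layer MLP forward pass.
# The substrate is just matrix multiplications plus an elementwise nonlinearity;
# everything interesting lives in the learned values of W1, b1, W2, b2.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((784, 256)), np.zeros(256)   # layer 1 weights (random here)
W2, b2 = rng.standard_normal((256, 10)), np.zeros(10)     # layer 2 weights (random here)

def forward(x: np.ndarray) -> np.ndarray:
    h = np.maximum(x @ W1 + b1, 0.0)   # matmul, then ReLU
    return h @ W2 + b2                 # matmul, producing logits

logits = forward(rng.standard_normal((32, 784)))  # batch of 32 fake inputs
print(logits.shape)  # (32, 10)
```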

Andrej Karpathy (@karpathy) 's Twitter Profile Photo

In the era of pretraining, what mattered was internet text. You'd primarily want a large, diverse, high-quality collection of internet documents to learn from. In the era of supervised finetuning, it was conversations. Contract workers are hired to create answers for questions, a bit

François Chollet (@fchollet) 's Twitter Profile Photo

When a model gives you the right answer to a reasoning question, you can't tell whether it was via memorization or via reasoning.

A simple way to tell between the two is to tweak your question in a way that 1. changes the answer, 2. requires some reasoning to adapt to the
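A minimal sketch of that check (illustrative only; `ask` stands in for whatever model call you use, no specific client is assumed): pose the original question and a tweaked variant whose correct answer is different, and see whether the model adapts or repeats the memorized answer.

```python
# Minimal sketch of the memorization-vs-reasoning probe described above.
# `ask` is any callable that sends a prompt to your model and returns its answer.
from typing import Callable

def probe(ask: Callable[[str], str],
          original_q: str, original_a: str,
          tweaked_q: str, tweaked_a: str) -> str:
    """Ask the original question and a tweaked variant whose correct answer differs."""
    got_original = ask(original_q).strip()
    got_tweaked = ask(tweaked_q).strip()
    if got_original == original_a and got_tweaked == tweaked_a:
        return "adapted to the tweak (consistent with reasoning)"
    if got_original == original_a and got_tweaked == original_a:
        return "repeated the memorized answer on the tweaked question"
    return "inconclusive (original answer wrong or off-format)"

# Example with a fake model that has only memorized the original answer:
memorizer = lambda q: "5"
print(probe(memorizer,
            "Alice has 3 apples and buys 2 more. How many does she have?", "5",
            "Alice has 3 apples, buys 2 more, then eats 4. How many does she have?", "1"))
# -> "repeated the memorized answer on the tweaked question"
```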
Chris Barber (@chrisbarber) 's Twitter Profile Photo

I asked Jeremy Howard, anonymous, Jaime Sevilla, and tylercowen: "How can people get good at using AI?"

Jeremy Howard (@AnswerDotAI, @FastDotAI)
"Using AI tools correctly takes months and months of diligent study and practice. When you start doing it, you will be shit and you
Simon Willison (@simonw) 's Twitter Profile Photo

François Chollet I use this line about LLMs a lot - that they're a bunch of matrix multiplications - because I think it demystifies them. A multiple-GB file of floating point numbers you can download and run matrix multiplications against on your own machine is a bit less weird and frightening

Charlie O'Neill (@charles0neill) 's Twitter Profile Photo

Today, we’re launching Parsed. We are incredibly lucky to live in a world where we stand on the shoulders of giants, first in science and now in AI. Our heroes have gotten us to this point, where we have brilliant general intelligence in our pocket. But this is a local minimum. We

Hamel Husain (@hamelhusain) 's Twitter Profile Photo

I wish I could have the CoPilot he is using. Because mine has never worked on the most basic tasks. I just tried it again now.

Andrej Karpathy (@karpathy) 's Twitter Profile Photo

Transforming human knowledge, sensors and actuators from human-first and human-legible to LLM-first and LLM-legible is a beautiful space with so much potential and so much can be done...

One example I'm obsessed with recently - for every textbook pdf/epub, there is a perfect
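One narrow, hedged sketch of that direction (not Karpathy's method, just one small step): pulling the raw text of a PDF into plain chunks an LLM can ingest, using the pypdf library.

```python
# Hedged sketch: one small step toward an "LLM-legible" book, extracting a PDF's
# text into roughly fixed-size plain-text chunks with pypdf. Real pipelines would
# also need structure recovery (headings, equations, figures), which this skips.
from pypdf import PdfReader

def pdf_to_chunks(path: str, max_chars: int = 4000) -> list[str]:
    """Extract page text and pack it into chunks of at most ~max_chars characters."""
    reader = PdfReader(path)
    chunks, current = [], ""
    for page in reader.pages:
        text = page.extract_text() or ""
        if current and len(current) + len(text) > max_chars:
            chunks.append(current)
            current = ""
        current += text + "\n"
    if current:
        chunks.append(current)
    return chunks

# Usage (hypothetical file name):
# for i, chunk in enumerate(pdf_to_chunks("textbook.pdf")):
#     print(f"--- chunk {i}: {len(chunk)} chars ---")
```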
Dimitris Papailiopoulos (@dimitrispapail) 's Twitter Profile Photo

Language models should be trained to create their own tools! Language models can use tools, but they can't create their own tools at a complexity level similar to humans. What are tools? Tools are computational shortcuts for patterns you see repeatedly, or for tasks that just
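A toy sketch of what "creating a tool" could look like mechanically (illustrative only, not from the tweet): the model emits a small function for a pattern it keeps re-deriving, and the harness compiles and registers it for later calls.

```python
# Illustrative sketch (not from the tweet): a harness that lets a model "create a tool"
# by emitting Python source, which is compiled and registered for reuse.
# In a real system the source would come from the model; here it is hard-coded.
from typing import Callable

TOOLS: dict[str, Callable] = {}

def register_tool(name: str, source: str) -> None:
    """Compile model-emitted source and store the resulting function under `name`."""
    namespace: dict = {}
    exec(source, namespace)  # trusted-input assumption; sandbox this in practice
    TOOLS[name] = namespace[name]

# A "computational shortcut" the model might write after seeing the same pattern repeatedly:
model_emitted_source = '''
def compound_growth(principal, rate, years):
    """Repeatedly needed calculation turned into a reusable tool."""
    return principal * (1 + rate) ** years
'''

register_tool("compound_growth", model_emitted_source)
print(TOOLS["compound_growth"](1000, 0.05, 10))  # later calls just reuse the tool
```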

Deb Raji (@rajiinio) 's Twitter Profile Photo

OpenAI was started because its founders didn't trust Google/DeepMind to safely build AGI.. Anthropic was founded because its founders didn't trust OpenAI to safely build AGI... SSI was founded because its founders didn't trust OpenAI or Anthropic to safely build AGI.. What if...

Andriy Burkov (@burkov) 's Twitter Profile Photo

I don't know if you've noticed that if an LLM solves a coding problem you asked it to solve on the first attempt, it remains "smart" if you continue to ask for more solutions in the same session. On the other hand, if it does something unexpected in response to your first

Jeremy Howard (@jeremyphoward) 's Twitter Profile Photo

Maybe this is part of why I find GPT-5 on ChatGPT so annoying -- apparently its system prompt is explicitly set to *not* ask clarifying questions?!?

I find it really annoying the way it just goes off and tries to solve the world in one shot. I really want to iterate!
Philipp Schmid (@_philschmid) 's Twitter Profile Photo

Gemini 2.5 Flash Image (Nano Banana) best practices 🍌🍌🍌

- Be hyper-specific: The more detail you provide, the more control you have. Instead of "fantasy armor," describe it as "ornate elven plate armor, etched with silver leaf patterns, with a high collar and pauldrons shaped
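A hedged sketch of putting that advice into practice, assuming the google-genai Python SDK and "gemini-2.5-flash-image-preview" as the Nano Banana model id (verify both against the current docs); the specific prompt below is my own illustration, not the one from the post.

```python
# Hedged sketch: hyper-specific prompting for image generation with the google-genai SDK.
# Assumptions: GEMINI_API_KEY is set in the environment, and
# "gemini-2.5-flash-image-preview" is the served Nano Banana model id.
from google import genai

client = genai.Client()

vague = "fantasy armor"  # try this instead of `specific` to see how much control you lose
specific = (
    "Ornate elven plate armor, etched with silver leaf patterns, "
    "with a high collar and sculpted pauldrons, soft rim lighting, "
    "3/4 studio portrait on a neutral grey background"
)

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents=specific,
)

# Save any returned inline image parts to disk.
for i, part in enumerate(response.candidates[0].content.parts):
    if getattr(part, "inline_data", None):
        with open(f"armor_{i}.png", "wb") as f:
            f.write(part.inline_data.data)
```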
jason liu - vacation mode (@jxnlco) 's Twitter Profile Photo

I published a brain dump of all my thoughts on context engineering so far. What else should I read about? jxnl.co/writing/2025/0…

Simon Willison (@simonw) 's Twitter Profile Photo

beknighted Laughing Louder Jeffrey Emanuel Mitchell Hashimoto The entire web industry has been making this same mistake for about a decade now; it's sad to see GitHub fall into a trap they had previously avoided. See also: infrequently.org/series/reckoni…

rob🏴 (@rob_mcrobberson) 's Twitter Profile Photo

people who think all jobs are about to become obsolete have no idea how hard it is to actually integrate an LLM into a typical normie business workflow, or any kind of ML for that matter. it's a huge last-mile problem that only a 14 yo who doesn't work in the industry could ignore