Delphine (@delphinel)'s Twitter Profile

Delphine
@delphinel
ID: 18331726
Joined: 23-12-2008 11:57:38
2.2K Tweets · 899 Followers · 1.1K Following

Mario Nawfal’s Roundtable (@roundtablespace):

EVERYTHING CLAUDE CODE JUST OPEN SOURCED A FULL AI ENGINEERING SYSTEM.

28 agents, 116 skills, 59 commands, MCP integrations, hooks, rules, and even a built-in security scanner.
Robert Youssef (@rryssf_):

everyone's building multi-agent systems right now. multiple llms collaborating, checking each other's work, splitting tasks

researchers tested whether this actually helps across 180 controlled configurations. matched token budgets. multiple model families. four different task
Ryan Hart 🚀 (@thisdudelikesai):

🚨BREAKING: Sam Altman's OpenAI engineers just leaked their internal context management framework.

No courses. No paywalls. No BS.

Your agents are burning 4x the tokens they need and this kills that cold.

Here are the 7 principles they use internally that nobody's talking
Gilles Babinet (@babgi):

For once, an article that doesn't talk nonsense about employment and artificial intelligence. Quite the opposite: it points out that what is changing is the structure of work and of the tasks it is made up of, but that, for now, there is no network

🔥 Matt Dancho (Business Science) 🔥 (@mdancho84):

Stanford just dropped a 457 page report on AI. 

It's packed with data on: cost drops, efficiency, benchmarks, adoption.

This report is a cheat code for your career in 2026.

I pulled the most important charts + what they mean for your career: 🧵
Zach Morris Wilson (@eczachly):

You only need to read four books to truly get what’s going on in ML and data engineering:

- Fundamentals of Data Engineering by Joe Reis
- Designing Data-Intensive Applications by Martin Kleppmann
- AI Engineering by Chip Huyen
- Designing Machine Learning Systems by Chip Huyen

Ruben Hassid (@rubenhssd):

Don't type another prompt into Claude. 

Do these 9 simple things first:

1. Download the Claude app
Claude.ai works. But the desktop app is better.
Go to claude.com/download. Install it now.

2. Pick the Right Model
Select Opus 4.6. Turn on Extended Thinking.
Click models →
Akshay 🚀 (@akshay_pachaar):

A single 𝗖𝗟𝗔𝗨𝗗𝗘.𝗺𝗱 file just hit 15K GitHub stars.

(derived from Karpathy's coding rules)

Andrej Karpathy observed that LLMs make the same predictable mistakes when writing code: over-engineering, ignoring existing patterns, and adding dependencies you never asked for.
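The tweet doesn't reproduce the file itself, but a hypothetical sketch of the kind of rules such a CLAUDE.md encodes — targeting the exact failure modes named here (over-engineering, ignoring existing patterns, unrequested dependencies) — might look like this. This is an illustration, not the actual 15K-star file:

```
# CLAUDE.md — project rules (illustrative sketch, not the real file)

## Code style
- Write the simplest thing that works; no speculative abstractions.
- Follow the existing patterns in this repo before inventing new ones.
- Never add a dependency unless it was explicitly requested.

## Before changing code
- Read the surrounding files first and match their conventions.
- Ask when requirements are ambiguous instead of guessing.
```

Claude Code reads a CLAUDE.md at the repo root as persistent instructions, which is why a single well-tuned rules file can be shared and starred like a library.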
Alex Prompter (@alex_prompter):

Everyone assumes LLMs are the future of AI.

The permanent foundation. The layer everything else gets built on.

I’m not so sure.

The historical parallel that fits best isn’t the one most people want to hear.

LLMs are Edison’s DC power grid:
→ Genuinely revolutionary
→
Tech with Mak (@technmak):

Andrej Karpathy wrote something that every Claude Code user has felt but couldn't articulate.

Three quotes. Read them slowly.

"The models make wrong assumptions on your behalf and just run along with them without checking. They don't manage their confusion, don't seek
🔥 Matt Dancho (Business Science) 🔥 (@mdancho84):

🚨 RIP Prompt Engineering

Enter Context Engineering 2.0 

It completely reframes how we think about human-AI interactions. 

This is what you need to know (28 page PDF):
Zabihullah Atal (@zabihullahatal):

Stanford just released a 1.5-hour lecture on “LLM Architecture.” This is exactly what systems engineers at Anthropic and OpenAI need to understand at a deep level. Give it some time. This might be the highest-ROI learning you do this month.

Tech with Mak (@technmak):

These are literally the kind of LLM interview questions most candidates wish they had seen earlier.

A curated list of 50 LLM interview questions - shared by Hao Hoang.

What's covered:

Fundamentals:
→ Tokenization and why it matters
→ Attention mechanisms in transformers
→
🔥 Matt Dancho (Business Science) 🔥 (@mdancho84):

Is context engineering just a new name for RAG?

Not quite. But they're solving the same problem: building the right context for your LLM.

Here's how we got from one to the other — and why it matters for AI data scientists.
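The shared problem named here — assembling the right context for the model — can be sketched with a toy retriever. The keyword-overlap scoring below is a hypothetical stand-in for a real embedding-based retriever; `DOCS`, `retrieve`, and `build_context` are illustrative names, not any library's API:

```python
# Minimal sketch of the problem both RAG and context engineering solve:
# pick the most relevant text and place it in the model's context window.

DOCS = [
    "RAG retrieves documents and injects them into the prompt.",
    "Context engineering manages everything the model sees.",
    "Bananas are yellow.",
]

def retrieve(query, docs, k=2):
    """Rank docs by naive keyword overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def build_context(query):
    """Assemble the final prompt: retrieved snippets plus the question."""
    snippets = retrieve(query, DOCS)
    bullet_list = "\n".join(f"- {s}" for s in snippets)
    return f"Context:\n{bullet_list}\n\nQuestion: {query}"

print(build_context("What does context engineering manage?"))
```

RAG fixed the retrieval step; context engineering generalizes it to everything that ends up in the prompt (history, tools, instructions), which is why the two keep converging.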
Andrew Ng (@andrewyng):

AI-native software engineering teams operate very differently than traditional teams. The obvious difference is that AI-native teams use coding agents to build products much faster, but this leads to many other changes in how we operate. For example, some great engineers now play
Data Science Dojo (@datasciencedojo):

MCP and A2A are both agent protocols but they operate at completely different layers.

MCP (Model Context Protocol) is about giving one LLM access to external tools. The model stays in the driver's seat throughout: it receives your query, decides which tools to call, gets the
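The control flow described here — the model staying in the driver's seat, choosing tools, and receiving results back — can be sketched in a few lines. Everything below (`fake_model`, `TOOLS`, the dispatch loop) is an illustrative stand-in, not the real MCP SDK:

```python
# Sketch of the MCP-style loop: the model picks the tool, the host
# merely executes the call and feeds the result back to the model.

TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "get_time": lambda city: f"12:00 in {city}",
}

def fake_model(query, tool_result=None):
    """Stand-in for the LLM: first decides which tool to call, then answers."""
    if tool_result is None:
        # The model, not the host, chooses the tool and its arguments.
        return {"tool": "get_weather", "args": ["Paris"]}
    return {"answer": f"Model reply using: {tool_result}"}

def mcp_style_loop(query):
    step = fake_model(query)                       # 1. model receives the query
    tool = TOOLS[step["tool"]]                     # 2. host looks up the chosen tool
    result = tool(*step["args"])                   # 3. host executes it
    return fake_model(query, tool_result=result)   # 4. result returns to the model

print(mcp_style_loop("What's the weather in Paris?")["answer"])
```

A2A, by contrast, sits a layer up: instead of one model dispatching to passive tools, independent agents exchange tasks with each other.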