𝖌𝖎𝖘𝖙 (narcissus arc) (@lfgist)'s Twitter Profile
𝖌𝖎𝖘𝖙 (narcissus arc)

@lfgist

tired boï - hundsome gang - pfp by @untitled01ipynb - our father in langley hallowed be your name

ID: 1627397499240517632

Link: https://www.youtube.com/watch?v=3MgO0BLb6Cg · Joined: 19-02-2023 19:59:50

10.1K Tweets

1.1K Followers

1.1K Following

herbst (@hybridherbst)'s Twitter Profile Photo

I made a mini browser that sees the web like an AI. Introducing: md-browse. Free, open source. Go make your websites and docs AI-friendly! github.com/needle-tools/m…
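
As a rough sketch of the underlying idea only (not md-browse's actual implementation), here is a stdlib-only Python snippet that fetches a page and reduces it to the plain text an AI-facing browser might present. All names in it are mine, not the tool's.

```python
# Minimal sketch of the "see the web like an AI" idea: fetch HTML,
# drop script/style contents and tags, keep readable text.
# Illustration of the concept, not md-browse's implementation.
from html.parser import HTMLParser
from urllib.request import urlopen

class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> contents."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0  # depth inside script/style tags

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip > 0:
            self._skip -= 1

    def handle_data(self, data):
        if self._skip == 0 and data.strip():
            self.parts.append(data.strip())

def page_as_text(url: str) -> str:
    html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
    extractor = TextExtractor()
    extractor.feed(html)
    return "\n".join(extractor.parts)

if __name__ == "__main__":
    print(page_as_text("https://example.com")[:500])
```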

DHH (@dhh)'s Twitter Profile Photo

Kimi K2.5 at this kind of speed is just magic. Makes a man eye what kind of behemoth home cluster one would have to build to run this himself. Even if we saw no more AI progress, owning this kind of intelligence forever is incredibly alluring.

Harrison Kinsley (@sentdex)'s Twitter Profile Photo

Given the amount of slop lately, I'm cautious to say it, but it's true: I think we're finally at the point of actually usable and useful local LLMs as coding agents running on slower memory:

1. Ollama Claude Code
2. 50GB+ of memory (cpu/ram is very usable here)
3. Qwen3-Coder-Next 4bit+
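
A setup like the one above is typically driven through Ollama's local REST API, which listens on localhost:11434; POST /api/generate with "stream": false returns a single JSON object. A minimal sketch follows; the model tag qwen3-coder is an assumption and depends on what you actually pulled.

```python
# Minimal sketch: ask a locally served model (via Ollama's REST API)
# to act as a coding assistant. Assumes `ollama serve` is running and
# a model has been pulled, e.g. `ollama pull qwen3-coder` (tag assumed).
import json
from urllib.request import Request, urlopen

def ask_local_model(prompt: str, model: str = "qwen3-coder") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a stream
    }).encode("utf-8")
    req = Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_model("Write a Python function that reverses a string."))
```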

Sebastien Bubeck (@sebastienbubeck)'s Twitter Profile Photo

AIxMath is going through some extremely confused chatter at the moment. Everyone agrees on the facts, yet the interpretation oscillates between "I have never seen an AI have a brilliant idea & it probably will never happen" and "math is so over, look at this algebraic geometry…

Bartosz Naskręcki (@nasqret)'s Twitter Profile Photo

Making an app a day has become my habit recently. When I was much younger I used to make a lot of plans for apps etc., but most of the time I got stuck with the technologies and failed to build anything (though I learned a ton of programming). But now I really enjoy the interactions…

Damek (@damekdavis)'s Twitter Profile Photo

Closing the loop: the formalization has now been completed by Harmonic's Aristotle. I gave Aristotle a proof sketch written by GPT5.2 Pro. It took 15 minutes and generated ~200 lines of Lean. It compiled with warnings, so codex lightly edited the result. Link below.
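
The workflow described reads as a small generate-compile-repair loop. Below is a guess at its shape, not Aristotle's or codex's real interface: formalize and light_edit are hypothetical stand-ins, while `lake env lean <file>` is the standard command for checking one Lean 4 file inside a lake project.

```python
# Hedged sketch of a sketch -> formalize -> compile -> repair loop.
# `formalize` and `light_edit` are hypothetical stand-ins for the
# Aristotle / codex steps described in the post, not real APIs.
import subprocess
from pathlib import Path

def formalize(proof_sketch: str) -> str:
    """Hypothetical: turn an informal proof sketch into Lean 4 source."""
    raise NotImplementedError("stand-in for the Aristotle step")

def light_edit(lean_source: str, compiler_log: str) -> str:
    """Hypothetical: small LLM-driven edits guided by compiler output."""
    raise NotImplementedError("stand-in for the codex step")

def compile_lean(path: Path) -> subprocess.CompletedProcess:
    # Standard way to elaborate one file inside a Lean 4 / lake project.
    return subprocess.run(
        ["lake", "env", "lean", str(path)],
        capture_output=True, text=True,
    )

def close_the_loop(sketch: str, out: Path = Path("Proof.lean")) -> None:
    source = formalize(sketch)
    for _ in range(3):  # a few repair rounds at most
        out.write_text(source)
        result = compile_lean(out)
        if result.returncode == 0:
            return  # compiled (possibly with warnings, as in the post)
        source = light_edit(source, result.stdout + result.stderr)
```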

God of Prompt (@godofprompt)'s Twitter Profile Photo

🚨 Holy shit… Stanford just published the most uncomfortable paper on LLM reasoning I’ve read in a long time.

This isn’t a flashy new model or a leaderboard win. It’s a systematic teardown of how and why large language models keep failing at reasoning even when benchmarks say…

Fanghui Liu (@fanghui_sgra)'s Twitter Profile Photo

🚀 We present the first large-scale Lean 4 formalization of statistical learning theory from scratch!

Led by my student Yuanhe Zhang, in collaboration with Jason Lee.

📄 Paper: arxiv.org/abs/2602.02285
💻 GitHub: github.com/YuanheZ/lean-s…
🤗 Dataset: huggingface.co/collections/li…
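
For a flavor of what such a formalization can contain, here is a toy Lean 4 definition of empirical risk. It is illustrative only, assumes Mathlib, and is not taken from the linked repository.

```lean
import Mathlib

/-- Toy empirical risk: the average loss over a finite sample.
    Illustrative only; not from the linked repository. -/
def empiricalRisk {α : Type*} (loss : α → ℝ) (sample : List α) : ℝ :=
  (sample.map loss).sum / sample.length
```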

Sanjeev Arora (@prfsanjeevarora)'s Twitter Profile Photo

These mathematicians seem unaware that a single LLM call cannot solve a difficult problem, let alone an open problem. A single LLM call provides too little total compute. Difficult problems require orchestrated pipelines with many calls, e.g., as in this paper.
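
A hedged sketch of what "orchestrated pipelines with many calls" can look like: propose, critique, and revise in a loop, so total compute scales with rounds × width rather than a single forward pass. llm here is a hypothetical one-shot completion primitive, not any particular API.

```python
# Sketch of an orchestrated many-call pipeline (propose / critique /
# revise). `llm` is a hypothetical single-call primitive; the point is
# that total compute grows with rounds * width, not one forward pass.
def llm(prompt: str) -> str:
    """Hypothetical single LLM call (one unit of compute)."""
    raise NotImplementedError

def solve(problem: str, rounds: int = 4, width: int = 4) -> str:
    best = llm(f"Propose a solution to:\n{problem}")
    for _ in range(rounds):
        critique = llm(f"Find the most serious flaw in:\n{best}")
        revisions = [
            llm(f"Revise the solution to fix this flaw.\n"
                f"Solution:\n{best}\nFlaw:\n{critique}")
            for _ in range(width)
        ]
        # One more call selects the strongest revision for the next round.
        menu = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(revisions))
        choice = llm(f"Pick the best revision; reply with its number only:\n{menu}")
        best = revisions[int(choice) - 1]
    return best
```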

Alex Imas (@alexolegimas)'s Twitter Profile Photo

I think the best mental model for today's agents is Guy Pearce's character in one of Nolan's first films, Memento. He's got extreme amnesia, and needs to look up instructions for every single action from notes (on his body). 

Learning still happens, but there's no updating of…
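
The analogy maps cleanly onto how many agent loops are actually built: frozen weights, plus notes reread from disk on every step. A minimal sketch, with a hypothetical llm call standing in for the frozen model:

```python
# Memento-style agent loop: the model's weights never change; all
# "memory" is notes reread from disk each step and appended to after.
# `llm` is a hypothetical call to a frozen model.
from pathlib import Path

NOTES = Path("notes.txt")  # the tattoos / polaroids

def llm(prompt: str) -> str:
    """Hypothetical frozen-model call (no weight updates, ever)."""
    raise NotImplementedError

def step(task: str) -> str:
    notes = NOTES.read_text() if NOTES.exists() else ""
    # Every action starts by rereading the notes, like Leonard's tattoos.
    action = llm(f"Notes so far:\n{notes}\n\nTask: {task}\nNext action:")
    # "Learning" is appending a note, never updating the model itself.
    with NOTES.open("a") as f:
        f.write(f"Did: {action}\n")
    return action
```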