Jaisidh Singh (@jaisidhsingh)'s Twitter Profile
Jaisidh Singh

@jaisidhsingh

Deep learner • 🇩🇪🇿🇦🇲🇺🇦🇪🇮🇳

ID: 1296745006066098180

Link: https://jaisidhsingh.github.io · Joined: 21-08-2020 09:44:57

320 Tweets

83 Followers

503 Following

Vision Transformers (@vitransformer)

For builders:
Spam <system-reminder> tags
Front-load context before work
Add reminders in tool results
Use dynamic context injection

The magic isn't intelligence. It's obsessive focus management. (5/6)
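A minimal Python sketch of the "front-load context" and "reminders in tool results" ideas, assuming a plain string-based prompt pipeline; the function names and tag layout are illustrative assumptions, not any specific framework's API.

```python
# Hypothetical helpers illustrating the tips above; names and tag format
# are assumptions, not a real framework's API.

def wrap_tool_result(tool_name: str, result: str, reminders: list[str]) -> str:
    """Append a <system-reminder> block to a tool result so the model
    re-reads its current goals right next to the freshest context."""
    reminder_lines = "\n".join(f"- {r}" for r in reminders)
    return (
        f'<tool_result name="{tool_name}">\n{result}\n</tool_result>\n'
        f"<system-reminder>\n{reminder_lines}\n</system-reminder>"
    )

def front_load_context(task: str, files: dict[str, str]) -> str:
    """Put the stable context (task description, key files) at the top of the
    prompt before any work begins."""
    file_dump = "\n\n".join(f"## {path}\n{body}" for path, body in files.items())
    return f"<system-reminder>\nCurrent task: {task}\n</system-reminder>\n\n{file_dump}"

if __name__ == "__main__":
    print(wrap_tool_result(
        "run_tests",
        "3 passed, 1 failed: test_parser",
        reminders=["Fix the failing parser test", "Do not touch unrelated files"],
    ))
```

The same reminder block gets re-injected on every turn ("dynamic context injection"), so the model's attention stays pinned to the task rather than drifting with the conversation.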

VraserX e/acc (@vraserx)

GPT-5 just casually did new mathematics. Sebastien Bubeck gave it an open problem from convex optimization, something humans had only partially solved. GPT-5-Pro sat down, reasoned for 17 minutes, and produced a correct proof improving the known bound from 1/L all the way to …

Lucas Beyer (bl16) (@giffmana)

This is an unwise statement that can only make people confused about what LLMs can or cannot do. Let me tell you something: Programming is NOT about solving this kind of ad hoc automation problem. Yeah, by scraping available data and then clustering it, LLMs can sometimes solve …

Jaisidh Singh (@jaisidhsingh)

Something a lot of people are noticing, and I agree: GPT-5 writes crisper code than Claude. Less hand-holdy, heavily commented slop. Shorter code overall. Better formatting?

Jaisidh Singh (@jaisidhsingh)

Every time I start making a research poster thinking I know how to make these things, I’m instantly corrected by the abomination that is my first draft.