Venkat Raman (@venkat2811)'s Twitter Profile
Venkat Raman

@venkat2811

Software Engineer by passion & profession. 🇮🇳 🇩🇪 Hobbies: Photography, History & Aviation. Views are my own.

ID: 1103934805

https://venkat.ai · Joined: 19-01-2013 15:14:39

208 Tweets

155 Followers

905 Following

Andrej Karpathy (@karpathy)'s Twitter Profile Photo

How to become expert at thing:
1. iteratively take on concrete projects and accomplish them depth wise, learning “on demand” (ie don’t learn bottom up breadth wise)
2. teach/summarize everything you learn in your own words
3. only compare yourself to younger you, never to others

effectfully (@effectfully)'s Twitter Profile Photo

A type system? No no no no. Why would you go after types? If you show people types, they'll ask "how expressive?" -- and it will never be enough. The lang that was the JavaScript killer becomes another Elm. But if your lang has no types, you can say it's pre-types -- and it's a

restate (@restatedev)'s Twitter Profile Photo

We just published a new post on building production-grade serverless AI agents. Serverless is perfect for bursty agentic workloads, but you quickly hit limits for long-running, stateful tasks, like human approvals. Here’s how we fixed that 👇

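Not Restate's SDK and not the approach from the linked post, just a minimal asyncio sketch of the pattern being described: the agent's short, bursty work runs to a suspension point, and a separate callback later resolves the long-running human approval, so nothing has to sit inside a serverless function for hours. `ApprovalStore` and `run_agent_step` are hypothetical names.

```python
import asyncio
import uuid

class ApprovalStore:
    """Hypothetical in-memory stand-in for durable state; a real runtime
    would persist pending approvals so a waiting step survives restarts."""
    def __init__(self):
        self.pending: dict[str, asyncio.Future] = {}

    def create(self) -> str:
        approval_id = str(uuid.uuid4())
        self.pending[approval_id] = asyncio.get_running_loop().create_future()
        return approval_id

    def resolve(self, approval_id: str, approved: bool) -> None:
        self.pending[approval_id].set_result(approved)

    async def wait(self, approval_id: str) -> bool:
        return await self.pending[approval_id]

store = ApprovalStore()

async def run_agent_step(task: str) -> str:
    draft = f"drafted action for: {task}"   # the short, bursty LLM work (stubbed out)

    # Register a human approval and suspend until someone resolves it.
    # In a durable-execution runtime this wait point is checkpointed,
    # so no serverless instance has to stay alive while a human decides.
    approval_id = store.create()
    print(f"awaiting approval {approval_id} for: {draft}")
    approved = await store.wait(approval_id)
    return draft if approved else "rejected by reviewer"

async def main():
    step = asyncio.create_task(run_agent_step("refund customer #42"))
    await asyncio.sleep(0.1)                        # let the step reach its wait point
    store.resolve(next(iter(store.pending)), True)  # the "human" approves
    print(await step)

asyncio.run(main())
```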
Dylan Patel ✈️ ICLR (@dylan522p)'s Twitter Profile Photo

Today we are launching InferenceMAX! We have support from Nvidia, AMD, OpenAI, Microsoft, PyTorch, SGLang, vLLM, Oracle, CoreWeave, TogetherAI, Nebius, Crusoe, HPE, SuperMicro, and Dell. It runs every day on the latest software (vLLM, SGLang, etc.) across hundreds of GPUs, $10Ms of

Demis Hassabis (@demishassabis)'s Twitter Profile Photo

We processed over 1.3 Quadrillion tokens last month - that's 1,300,000,000,000,000 tokens! or to put it another way that's 500M tokens a second or 1.8 Trillion tokens an hour... 🤯

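The per-second and per-hour figures follow directly from the monthly total; a quick sanity check, assuming a 30-day month:

```python
tokens_per_month = 1.3e15            # 1.3 quadrillion tokens
seconds_per_month = 30 * 24 * 3600   # assumed 30-day month

per_second = tokens_per_month / seconds_per_month
per_hour = per_second * 3600

print(f"{per_second:.3g} tokens/s")  # ~5.02e+08, i.e. roughly 500M tokens a second
print(f"{per_hour:.3g} tokens/h")    # ~1.81e+12, i.e. roughly 1.8 trillion tokens an hour
```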
Marc Andreessen 🇺🇸 (@pmarca)'s Twitter Profile Photo

Years the commentariat has been convinced there is a tech bubble: 1995, 1996, 1997, 1998, 1999, 2000, 2002, 2003, 2004, 2005, 2006, 2007, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2022, 2023, 2024, 2025.

Jerason Banes (@classicgamertwr)'s Twitter Profile Photo

It’s a bit easier to just explain how we got here. In the old days, memory and CPU ran at the same speed. The laws of physics make it incredibly hard to ramp up the MHz of the CPU while keeping physically separated memory chips in sync. The distance is just too far for the
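One rough way to feel the gap the thread describes: sequential reads keep the CPU's caches and prefetcher fed, while random reads repeatedly stall on distant DRAM. A small NumPy timing sketch (the array size and indices are arbitrary choices, and exact numbers vary by machine):

```python
import time
import numpy as np

N = 20_000_000                      # ~160 MB of int64
data = np.arange(N, dtype=np.int64)

seq_idx = np.arange(N)              # sequential access order
rnd_idx = np.random.permutation(N)  # the same indices, shuffled

def timed_sum(indices):
    start = time.perf_counter()
    total = data[indices].sum()     # gather in the given order, then reduce
    return total, time.perf_counter() - start

_, t_seq = timed_sum(seq_idx)
_, t_rnd = timed_sum(rnd_idx)
print(f"sequential: {t_seq:.3f}s  random: {t_rnd:.3f}s  ratio ~{t_rnd / t_seq:.1f}x")
```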

Tom Blomfield (@t_blom)'s Twitter Profile Photo

Hearing from a lot of good founders that AI tools are writing most of their code now. Software engineers orchestrate the AI. They are also finding it extremely hard to hire because most experienced engineers have their heads in the sand and refuse to learn the latest tools.

Andrej Karpathy (@karpathy)'s Twitter Profile Photo

Excited to release new repo: nanochat! (it's among the most unhinged I've written). Unlike my earlier similar repo nanoGPT which only covered pretraining, nanochat is a minimal, from scratch, full-stack training/inference pipeline of a simple ChatGPT clone in a single,

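This is not nanochat's code, but the pretraining stage that both repos start from can be sketched as a toy next-token predictor trained with cross-entropy on raw text; here a character-level bigram model in PyTorch, purely as an illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny corpus and a character-level "tokenizer".
text = "hello world, this is a tiny corpus for a toy language model."
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text], dtype=torch.long)

class Bigram(nn.Module):
    """Predicts the next character from the current one via a lookup table."""
    def __init__(self, vocab_size):
        super().__init__()
        self.logits = nn.Embedding(vocab_size, vocab_size)

    def forward(self, idx):
        return self.logits(idx)

model = Bigram(len(chars))
opt = torch.optim.AdamW(model.parameters(), lr=1e-2)

for step in range(300):
    ix = torch.randint(0, len(data) - 1, (32,))  # random positions in the corpus
    x, y = data[ix], data[ix + 1]                # (current char, next char) pairs
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final training loss: {loss.item():.3f}")
```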
Angel Bogado 🌻 (@angaisb_)'s Twitter Profile Photo

Things that didn't exist three months ago:
- GPT-5 (Instant, Thinking, Mini, Nano, Pro)
- GPT-5 Codex
- GPT-OSS (120b & 20b)
- Opus 4.1
- Genie 3
- Sora 2
- Sonnet 4.5
- Nano-banana
- Seedream 4.0

I like doing these from time to time to feel the acceleration