Zacwatts (@gareth)'s Twitter Profile

Zacwatts

@gareth

ID: 4654150632

Joined: 26-12-2015 08:37:21

0 Tweets

0 Followers

1 Following

Lisan al Gaib (@scaling01) 's Twitter Profile Photo

GPT-5 was all about making investors happy

 - "optimize for inference cost"
 - "We could go make another giant model [...] and a lot of people would want to use it and we would disappoint them"  (this is about price/perf and model training costs)
- "If we didn’t pay for
Jeffrey Emanuel (@doodlestein) 's Twitter Profile Photo

I’m absolutely convinced that the smartest AI models out now, GPT-5 Pro and Grok 4 Heavy, are already smart enough, and surely knowledgeable enough about math and AI, to conceive and develop important theoretical and practical discoveries, given the right kind of clever prompting.

Harj Taggar (@harjtaggar) 's Twitter Profile Photo

Gemini 2.5 Pro is my favorite model, but man, the software around it sucks so much. Several times it’s switched to research mode on me and lost the chat history. Google continues to snatch defeat from the jaws of victory.

Amjad Masad (@amasad) 's Twitter Profile Photo

Every coding revolution I helped with was mocked before it won. Codecademy's "anyone can learn to code" was mocked before it became standard advice. React was “HTML in JS, eww”—now it runs the web. Replit: we said “anyone can build” for years, now it’s real—backlash begins!

Yasser (@yasser_elsaid_) 's Twitter Profile Photo

It’s so easy to become an expert in AI engineering if you’re technical. Shouldn’t take longer than a month. It’s one of the highest-ROI skills in the world right now, and it looks intimidating from the outside, so it’s scaring off your competition, which is perfect. It’s actually a

Haider. (@slow_developer) 's Twitter Profile Photo

historically, Google has done a poor job of marketing to consumers, but their technology is excellent. in terms of LLMs, they're close to OpenAI (at least based on what's public). they research nearly every AI field: image, video, audio, physics, chemistry, games, and more

Justine Moore (@venturetwins) 's Twitter Profile Photo

The easiest way to “level up” in your career is being early to something. This could be a seed stage startup, a new VC firm, the next big social app where you build an audience… There’s risk if it doesn’t work. But the reward is real.

Bindu Reddy (@bindureddy) 's Twitter Profile Photo

An unusually high number of humans tend to mistreat AI models. They constantly yell at and insult the LLM and force it to say or do things even when the AI refuses. Anthropic has decided to shut down these conversations. In the future, the model will refuse to respond unless

Aaron Levie (@levie) 's Twitter Profile Photo

The paradigm of AI subagents is going to be super interesting. There was probably some hope or belief that a universal agent would be able to handle everything you needed in a workflow by stuffing all the relevant context into the context window. But even with larger context
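
One way to read the subagent idea in code, as a purely hypothetical sketch (the Subagent type and route_task function are illustrations, not any real framework's API): an orchestrator hands each task to a specialist that sees only its own slice of context, rather than one universal agent holding everything.

    # Hypothetical subagent orchestrator: each subagent receives only the
    # slice of context relevant to its specialty, instead of one universal
    # agent getting the entire context window.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Subagent:
        name: str
        handles: set[str]               # task types this subagent covers
        run: Callable[[str, str], str]  # (task, focused_context) -> result

    def route_task(task_type: str, task: str, context: dict[str, str],
                   subagents: list[Subagent]) -> str:
        for agent in subagents:
            if task_type in agent.handles:
                # Pass only the context keyed to this task type, not everything.
                return agent.run(task, context.get(task_type, ""))
        raise ValueError(f"no subagent handles {task_type!r}")

    # Toy subagents standing in for LLM-backed workers.
    coder = Subagent("coder", {"code"}, lambda t, c: f"[coder] {t} (ctx: {c})")
    writer = Subagent("writer", {"docs"}, lambda t, c: f"[writer] {t} (ctx: {c})")

    context = {"code": "repo summary...", "docs": "style guide..."}
    print(route_task("code", "fix the failing test", context, [coder, writer]))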

Matt Shumer (@mattshumer_) 's Twitter Profile Photo

Super useful vibe coding prompt to get an agent unstuck when it's struggling to complete a tricky task: "I can help you gather information to fix this. Give me the prompt(s) you want me to run through GPT-5 w/ web search, and I'll give you the results."
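
A hypothetical sketch of that relay loop (ask_agent and ask_search_model are stand-ins for real model calls, not any actual API): the stuck agent emits research prompts, the human runs them through a search-enabled model, and the results go back into the agent's context.

    # Hypothetical human-in-the-loop relay for a stuck coding agent.
    # ask_agent and ask_search_model stand in for real model calls.
    def unstick(ask_agent, ask_search_model, max_rounds: int = 3) -> str:
        history = [
            "I can help you gather information to fix this. Give me the "
            "prompt(s) you want me to run through a web-search model, "
            "and I'll give you the results."
        ]
        reply = ""
        for _ in range(max_rounds):
            reply = ask_agent(history)         # agent answers or asks for research
            if "DONE" in reply:                # agent signals it's unstuck
                break
            results = ask_search_model(reply)  # human relays the prompts to search
            history.append(f"Search results:\n{results}")
        return reply

    # Toy stubs so the flow can be traced end to end.
    print(unstick(lambda h: "DONE: found the fix" if len(h) > 1
                  else "Search: latest API for X?",
                  lambda p: f"(results for: {p})"))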

Teknium (e/λ) (@teknium1) 's Twitter Profile Photo

Coming to realize how tightly integrated agentic code IDEs and the models are. Seems like GPT-5 sucks anywhere but Codex CLI (allegedly). Claude is good in Cursor & Claude Code. Gemini doesn't seem to have been well integrated with RL on anything, shucks

Haider. (@slow_developer) 's Twitter Profile Photo

i remain certain... LLMs are narrow AI. they are already powerful and will only grow stronger, but they will never lead to human-level AI. architecture is secondary; the most important factor is data. we may need "smart" data, and the best data comes from nature, not humans

Sumanth (@sumanth_077) 's Twitter Profile Photo

Google just dropped a new lightweight 270M model!

Gemma-3-270M is a lightweight, open-weight LLM that's perfect for task-specific fine-tuning with strong instruction-following.

This notebook explains how to build Gemma-3-270M from scratch using PyTorch, step by step.
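
The notebook itself isn't reproduced here, but as a minimal sketch of loading the released weights with Hugging Face transformers (the model id google/gemma-3-270m follows Google's release naming; treat the snippet as illustrative, not the notebook's code):

    # Minimal sketch: load Gemma-3-270M and run a task-style prompt.
    # Assumes `pip install transformers torch` and the "google/gemma-3-270m"
    # model id from Google's release.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("google/gemma-3-270m")
    model = AutoModelForCausalLM.from_pretrained("google/gemma-3-270m")

    prompt = "Classify the sentiment of: 'The battery life is terrible.'"
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=20)
    print(tok.decode(out[0], skip_special_tokens=True))

Task-specific fine-tuning would wrap the same model in a standard PyTorch training loop, which is the appeal of a 270M model: it fits on modest hardware.
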
Guillermo Rauch (@rauchg) 's Twitter Profile Photo

v0 is beloved by vibe coders, startups, and enterprises. Founders are replacing pitch decks with prototypes. Teams love the speed at which they can communicate new ideas. Enterprises are using it to reinvent themselves with AI. The way we work is changing.

Guillermo Rauch (@rauchg) 's Twitter Profile Photo

A "crowded" market is not really crowded if it's just a bunch of crappy options. Music players were crowded before iPod. Earbuds were crowded before the Airpods. (Inspired by meeting a founder today who's building in a "crowded" market and steamrolling everyone.)