maxfaber (@maxfaber_om)'s Twitter Profile
maxfaber

@maxfaber_om

🇦🇺🏃🧑‍💻 Music, Running, QA, Testing, Test Automation, Indie hacking, Mental health

ID: 107342337

Link: http://clickworks.me · Joined: 22-01-2010 07:29:13

3.3K Tweets

126 Followers

597 Following

Playwright (@playwrightweb):

📢 Meet Playwright CLI: a SKILL-friendly way to automate the browser. Learn more at github.com/microsoft/play…. Happy testing!
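
For a concrete feel, here is a minimal sketch of Playwright browser automation using its established Python library; the new CLI's own commands aren't quoted in the tweet, so the library API stands in here:

    from playwright.sync_api import sync_playwright

    # Launch a headless Chromium, load a page, and read its title.
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://example.com")
        print(page.title())  # prints "Example Domain"
        browser.close()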

Zac (@perceptualpeak):

@jeffscottworld Hi! I actually just whipped up a repo for this. Tell your Claude Code to implement it and it should be able to whip it right up: github.com/zacdcook/claud…

Marc Puig (@mpuig):

With Qwen's Qwen3-ASR 0.6 on Apple Silicon, local ASR finally became: • fast • accurate • predictable. So I built dictate.sh, a tiny on-device dictation tool, and wrote about what changed 👇 medium.com/p/local-speech…
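
A rough sketch of a dictate.sh-style flow: capture a short clip from the mic, hand it to a local ASR model, print the text. The transcribe function below is a hypothetical stand-in for whatever Qwen3-ASR binding the real tool uses on Apple Silicon:

    import sounddevice as sd
    import soundfile as sf

    SAMPLE_RATE = 16_000  # common ASR input rate
    SECONDS = 5

    def transcribe(wav_path: str) -> str:
        # Hypothetical placeholder: swap in an actual local ASR call here.
        raise NotImplementedError("plug in a local Qwen3-ASR backend")

    # Record a mono clip, write it to disk, then transcribe it.
    audio = sd.rec(int(SECONDS * SAMPLE_RATE), samplerate=SAMPLE_RATE, channels=1)
    sd.wait()                                 # block until recording ends
    sf.write("clip.wav", audio, SAMPLE_RATE)  # write mono 16 kHz WAV
    print(transcribe("clip.wav"))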

Qwen (@alibaba_qwen):

🚀 Introducing Qwen3-Coder-Next, an open-weight LM built for coding agents & local development.
What’s new:
🤖 Scaling agentic training: 800K verifiable tasks + executable envs
📈 Efficiency–Performance Tradeoff: achieves strong results on SWE-Bench Pro with 80B total params and …

Z.ai (@zai_org):

Introducing GLM-5: From Vibe Coding to Agentic Engineering

GLM-5 is built for complex systems engineering and long-horizon agentic tasks. Compared to GLM-4.5, it scales from 355B params (32B active) to 744B (40B active), with pre-training data growing from 23T to 28.5T tokens.

Koushik Sen (@koushik77):

Repo Optimizer: I let an AI agent optimize itself overnight. It cut its own cost by 98%. No manual tuning. No architecture redesign. Just a plain-English instruction and a feedback loop. I built a 69-line Python script that points my KISS framework's coding agent at its own …
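
A hedged sketch of the feedback loop described here: repeatedly hand a coding agent its own source plus a plain-English goal, and keep a change only if measured cost drops. The my-coding-agent and my-benchmark commands are hypothetical stand-ins, not the actual 69-line KISS script:

    import shutil
    import subprocess

    GOAL = "Cut this repo's per-run cost without breaking its tests."
    REPO, BACKUP = "agent-repo", "agent-repo.bak"

    def run_agent(instruction: str, repo: str) -> None:
        # Hypothetical: point a coding agent at its own source with a
        # plain-English instruction.
        subprocess.run(["my-coding-agent", "--repo", repo, "--task", instruction],
                       check=True)

    def measure_cost(repo: str) -> float:
        # Hypothetical: run a benchmark and read back the per-run cost.
        out = subprocess.run(["my-benchmark", repo],
                             capture_output=True, text=True, check=True)
        return float(out.stdout.strip())

    best = measure_cost(REPO)
    for _ in range(50):                      # "overnight" = many iterations
        shutil.rmtree(BACKUP, ignore_errors=True)
        shutil.copytree(REPO, BACKUP)        # checkpoint before each attempt
        run_agent(GOAL, REPO)
        cost = measure_cost(REPO)
        if cost < best:
            best = cost                      # keep the cheaper version
        else:
            shutil.rmtree(REPO)              # revert a regression
            shutil.copytree(BACKUP, REPO)
    print(f"best cost: {best:.4f}")

This is plain greedy hill-climbing: every iteration either improves the metric or is rolled back, so cost is monotonically non-increasing across the run.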