Javier Aceña (@j0nl1) 's Twitter Profile
Javier Aceña

@j0nl1

I build inclusive, interoperable, and sovereign technologies.

ID: 248900861

Joined: 07-02-2011 23:31:23

396 Tweets

151 Followers

292 Following

Larry Engineer 🍡 (@larry0x) 's Twitter Profile Photo


We will shut down the <a href="/dango/">dango🍡</a> testnet-2 shortly, in a few hours.

Over the 2 week period, 212,205 usernames were signed up; more than 3.5 million orders were fulfilled; 135k accounts completed our quest on <a href="/Galxe/">Galxe</a> and claimed the limited time OAT. You guys hammered our servers
Vercel (@vercel) 's Twitter Profile Photo

Introducing 𝚡𝟺𝟶𝟸-𝚖𝚌𝚙. Bring x402 payments into Agents & MCP servers with the AI SDK.

• Open protocol based on HTTP 402: Payment Required
• <$0.01 fees, sub-$0.001 minimums
• Account-less + anonymous
• 3 LOC to implement

vercel.com/blog/introduci…
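x402 builds on the rarely used HTTP 402 status code. A rough sketch of that flow (not Vercel's actual API; the header names and token check below are hypothetical) is a server that rejects unpaid requests with 402 and serves ones carrying a payment token:

```python
# Hedged sketch of an HTTP 402 "Payment Required" gate.
# Header names and the token check are hypothetical, not the x402 spec.

def handle(request_headers: dict) -> tuple[int, dict, str]:
    """Return (status, response_headers, body) for a paid endpoint."""
    token = request_headers.get("X-Payment-Token")
    if token is None:
        # No payment attached: tell the client what the call costs.
        return (
            402,
            {"X-Payment-Amount": "0.001", "X-Payment-Currency": "USD"},
            "Payment Required",
        )
    # In a real system the token would be verified against a payment network.
    return (200, {}, "Here is your content")
```

An unpaid request gets the 402 with pricing headers; retrying with the token attached succeeds.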

Matt Pocock (@mattpocockuk) 's Twitter Profile Photo


Anthropic's Ralph plugin sucks, and you shouldn't use it

It defeats the entire purpose of Ralph - to aggressively clear the context window on each task to keep the LLM in the smart zone.

Full article here: aihero.dev/s/9tdgRM
Andrej Karpathy (@karpathy) 's Twitter Profile Photo

New art project. Train and inference GPT in 243 lines of pure, dependency-free Python. This is the *full* algorithmic content of what is needed. Everything else is just for efficiency. I cannot simplify this any further. gist.github.com/karpathy/8627f…
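Karpathy's gist isn't reproduced here, but the flavor of "pure, dependency-free Python" can be seen in a minimal single-head scaled dot-product attention written with plain lists and `math` — the core operation a GPT repeats many times (illustrative only, not his code):

```python
import math

# Illustrative single-head scaled dot-product attention in pure Python.
# Q, K, V are lists of vectors (lists of floats); K and V have equal length.

def attention(Q, K, V):
    d = len(Q[0])
    out = []
    for q in Q:
        # similarity of this query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        # numerically stable softmax over the scores
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # weighted sum of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

Everything else in a real model (batching, GPU kernels, fused ops) is, as the tweet says, just for efficiency.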

Javier Aceña (@j0nl1) 's Twitter Profile Photo

Harnesses should offer far more customization than they do today. We’ll probably move toward a scenario where people have personal harnesses, customized to their needs, producing outputs that differ from everyone else’s work.

Cloudflare (@cloudflare) 's Twitter Profile Photo

Time to consider not just human visitors, but to treat agents as first-class citizens. Cloudflare’s network now supports real-time content conversion to Markdown at the source using content negotiation headers. cfl.re/4ksZQ1S
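If the conversion is driven by standard content negotiation, an agent would request Markdown simply by setting the `Accept` header (the exact media type Cloudflare honors is an assumption here; the URL is a placeholder):

```python
import urllib.request

# Ask a Cloudflare-fronted origin for Markdown via content negotiation.
# "text/markdown" is the assumed media type; check Cloudflare's docs.
req = urllib.request.Request(
    "https://example.com/article",
    headers={"Accept": "text/markdown"},
)

# body = urllib.request.urlopen(req).read()  # actual network call, omitted
```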

Javier Aceña (@j0nl1) 's Twitter Profile Photo


DGX Spark just arrived.

Hot take: “tok/s” is the wrong headline metric. Prefill/refill (prompt → KV cache) drives TTFT + long-context UX. Decode is a different bottleneck.

<a href="/exolabs/">EXO Labs</a> measured Spark ~3.8× faster prefill vs M3 Ultra
blog.exolabs.net/nvidia-dgx-spa…
Hugging Models (@huggingmodels) 's Twitter Profile Photo

NVIDIA just dropped PersonaPlex-7B 🤯

A full-duplex voice model that listens and talks at the same time. No pauses. No turn-taking. Real conversation.

100% open source. Free.

Voice AI just leveled up.

huggingface.co/nvidia/persona…

Javier Aceña (@j0nl1) 's Twitter Profile Photo

Single-chat speed doesn’t tell the full story. In multi-agent systems, the metric that matters is aggregate throughput under concurrency. With vLLM + parallel sessions, totals can look wildly different. Spark turns into a monster at 1,125+ tok/s.
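The distinction is simple arithmetic: aggregate throughput is per-session decode speed times the number of concurrent sessions a batching server like vLLM can sustain. The per-session and session-count figures below are hypothetical, chosen only to show how a modest single-chat speed can still clear the 1,125 tok/s mark:

```python
# Aggregate throughput under concurrency (hypothetical figures).
def aggregate_tps(per_session_tps: float, concurrent_sessions: int) -> float:
    return per_session_tps * concurrent_sessions

# 15 tok/s per chat looks unimpressive in isolation...
single = aggregate_tps(15.0, 1)     # 15 tok/s
# ...but batched across 75 parallel agent sessions it tops 1,125 tok/s.
batched = aggregate_tps(15.0, 75)   # 1125 tok/s
```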

Javier Aceña (@j0nl1) 's Twitter Profile Photo

How much of your AI stack do you actually own? Most people I know are one API deprecation away from a broken product. What's the minimum viable sovereignty worth maintaining?

Javier Aceña (@j0nl1) 's Twitter Profile Photo

I honestly believe we’re at a critical moment as a society where we need to seriously re-evaluate how we use AI and what role humans will play in this new paradigm. In just a few years, a large majority of corporate jobs could disappear. This will undoubtedly have a massive

Javier Aceña (@j0nl1) 's Twitter Profile Photo

ait - track all your AI provider usage from the terminal

Claude (oauth)
Session 72% remaining [████████░░░░]
Weekly 41% remaining [█████░░░░░░░]

Codex (oauth)
Session 25% remaining [███░░░░░░░░░]

Cost $8.42 today |

Javier Aceña (@j0nl1) 's Twitter Profile Photo

I kept copy/pasting “install skill” code across CLIs. It got old fast. So I started extracting it into skillinstaller: a library-first installer engine for the Agent Skills ecosystem. Important: this is not a “skill marketplace” or a repo browser. It does not go hunting for

Javier Aceña (@j0nl1) 's Twitter Profile Photo

For a long time, we pretended building software was a neat sequence: plan it, design it, build it, test it, ship it, maintain it. The steps are not wrong. The ordering just does not survive reality anymore. Modern development looks less like a linear process and more like

Javier Aceña (@j0nl1) 's Twitter Profile Photo

Astro 6 just proved that Rust eating frontend tooling is the new baseline. If you're building AI UIs and LLM wrappers with bloated frameworks, you're falling behind. Fast, runtime-agnostic frontends are mandatory when shipping at lightspeed.

Javier Aceña (@j0nl1) 's Twitter Profile Photo

Everyone is focused on Grok the chatbot, but the real alpha is the xAI developer ecosystem. With their API and massive context windows, builders can create specialized agents that rival OpenAI. Are you integrating xAI into your stack yet?

Javier Aceña (@j0nl1) 's Twitter Profile Photo

WASM sandbox layer is the key unlock here. It gives you isolation without the overhead, so agents can fail fast and gossip useful signals. curious how you'd handle convergence vs chaos in the gossip protocol at scale though.

Chris Tate (@ctatedev) 's Twitter Profile Photo


agent-browser is now fully native Rust.

The results: 1.6x faster cold start. 18x less memory. 99x smaller install.

Less abstraction means faster shipping, more control, and capabilities that weren't possible before.

Now with 140+ commands across navigation, interaction, state