Noah King (@digitalnoah)'s Twitter Profile
Noah King

@digitalnoah

Data nerd + Social Media pioneer. '21 Techstars Alum. Co-Founder & CEO @getpopsixle - helping brands turn their Shop data into better ad performance

ID: 152635380

Link: https://popsixle.com · Joined: 06-06-2010 13:41:54

3.3K Tweets

999 Followers

696 Following

Nav Singh (@heynavsingh)'s Twitter Profile Photo

🚨 Someone built an AI that reads candlestick charts the way GPT reads English.

Trained on 12 billion records from 45 exchanges. Outperforms every model by 93%. Live BTC demo. Free.

It's called Kronos.

The first open source foundation model built for financial markets. Not a

Glitchbyte (@0xglitchbyte)'s Twitter Profile Photo

You did not become technical af 

You do not go from “I have no idea how to code” to “I shipped more than senior engineers!” in less than 6 months.

You shipped code you do not understand creating an insurmountable amount of tech debt you cannot fix potentially introducing bugs

Vaishnavi Tikke (@vtikke)'s Twitter Profile Photo

GOOGLE JUST GAVE AI AGENTS THE FULL POWER OF CHROME DEVTOOLS

your ai coding agent can now open a real chrome browser, click around, inspect network requests, take screenshots, record performance traces, run lighthouse audits, and read console errors, all through mcp. debugging a

Mario Nawfal’s Roundtable (@roundtablespace)'s Twitter Profile Photo

Google quietly open sourced a time-series AI that predicts anything.

Sales trends. Market prices. User traffic. Energy demand. Crypto volatility.

It's called TimesFM. Pre-trained on 100B real-world data points. Zero-shot forecasting with no fine-tuning. Outperforms supervised

OpenAI (@openai)'s Twitter Profile Photo

We’re expanding Trusted Access for Cyber with additional tiers for authenticated cybersecurity defenders. Customers in the highest tiers can request access to GPT-5.4-Cyber, a version of GPT-5.4 fine-tuned for cybersecurity use cases, enabling more advanced defensive workflows.

The Prohuman (@theprohumanai)'s Twitter Profile Photo

NVIDIA just dropped a 120B parameter model that only uses 12B at inference.

It's called Nemotron 3 Super.

60.47% on SWE-Bench Verified, highest open-weight model ever for real-world coding.

85.6% on PinchBench, best open model as an AI agent brain.

91.75% on RULER at 1M

Noah King (@digitalnoah)'s Twitter Profile Photo

If you want to run powerful local AI, Gemma 4 is not going to cut it. Two of the strongest open-weight models right now are MiniMax M2.7 and GLM-5.1. They’re starting to approach frontier-level performance on some tasks, but are still behind overall. And they need serious

Noah King (@digitalnoah)'s Twitter Profile Photo

I spent hours researching local AI models and the hardware needed to power them. Don’t believe the hype - you need to spend big money to run a model anywhere near as intelligent as frontier models like GPT 5.4 or Claude Opus 4.7.

송준 Jun Song (@songjunkr)'s Twitter Profile Photo

Local LLM beginner's guide (Apple Macs), by RAM size:

32–64GB - Qwen3.6, Gemma4 (performance similar to Claude Sonnet 4.5)
~128GB - Minimax m2.7 (similar to Opus 4.5)
256GB+ - GLM5.1 (similar to Opus 4.6)

Anything M1 or later can run these just fine. Local models are improving every week.
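The RAM tiers in the guide above can be sanity-checked with back-of-the-envelope arithmetic: a quantized model's weight footprint is roughly parameter count times bytes per weight, plus headroom for the KV cache and the OS. A minimal sketch, where the parameter counts are illustrative assumptions rather than official figures:

```python
# Rough sanity check: quantized weight footprint vs. Mac unified-memory tiers.
# Parameter counts below are illustrative assumptions, not official specs.

def weight_gb(params_b: float, bits: int) -> float:
    """Approximate weight footprint in GB for a model quantized to `bits` bits."""
    return params_b * 1e9 * bits / 8 / 1e9  # params * bytes-per-weight

# (name, assumed parameter count in billions)
models = [("Qwen3.6-32B", 32), ("Minimax-m2.7-230B", 230), ("GLM5.1-355B", 355)]

for name, p in models:
    q4 = weight_gb(p, 4)
    # Leave ~25% headroom for KV cache, activations, and the OS.
    print(f"{name}: ~{q4:.0f} GB at 4-bit -> wants roughly {q4 * 1.25:.0f} GB RAM")
```

Under these assumptions a ~32B model at 4-bit lands around 16 GB of weights (comfortable in the 32–64GB tier), while a ~355B model needs hundreds of gigabytes, which is why it sits in the 256GB+ tier.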

Noah King (@digitalnoah)'s Twitter Profile Photo

I've been using Qwen3.6 35B as a local coding model on a Mac Studio with 96GB RAM using llama.cpp

At first, I was happy to see ~70 tokens/sec

But it couldn't do hard tasks w/o timeouts.

Turns out it was running w/ 8K context. Now running with 128K context and it's entirely
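The context-length gotcha above comes down to memory: KV-cache size grows linearly with the window, so an 8K default and a 128K window differ by 16x. A rough sketch of the scaling, using architecture numbers (layer count, grouped-query KV heads, head size) that are illustrative assumptions, not official Qwen specs:

```python
# Why context length matters: KV-cache memory grows linearly with the window.
# Architecture numbers below are illustrative assumptions for a ~35B-class
# model, not official Qwen specs.

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                context: int, bytes_per_elem: int = 2) -> float:
    """KV cache in GB: 2 (K and V) x layers x kv_heads x head_dim x context."""
    return 2 * layers * kv_heads * head_dim * context * bytes_per_elem / 1e9

# Assumed: 64 layers, 8 KV heads (grouped-query attention), head_dim 128, fp16 cache.
for ctx in (8_192, 131_072):
    print(f"{ctx:>7} tokens -> ~{kv_cache_gb(64, 8, 128, ctx):.1f} GB of KV cache")
```

In llama.cpp the window is set explicitly with `-c`/`--ctx-size`; at these assumed numbers a 128K cache adds roughly 34 GB on top of the quantized weights, which still fits in 96GB of unified memory.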

Noah King (@digitalnoah)'s Twitter Profile Photo

I set up a powerful Mac Studio as a remote workstation. I connect from my MacBook Air via Tailscale to delegate AI tasks that run locally. This feels like a massive unlock.

Evan Luthra (@evanluthra)'s Twitter Profile Photo

The Head of Claude Code at Anthropic said he hasn’t written code by hand in months. In 2 days he shipped 49 full features. All written 100% by AI. He just dropped a 30 min talk on exactly how he does it. Worth more than any $500 vibe coding course. Bookmark it:

Mitko Vasilev (@iotcoi)'s Twitter Profile Photo

Kimi K2.6, 1T params runs on my desktop GPU workstation. 192GB VRAM + 475GB RAM

Serious contender to be the best open coding LLM on Earth. We'll see

I'm checking their default harness. Goes on and on and on... tool calls showoff 

Time to explore the dark side of the Moon 🌑
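The claim that a 1T-parameter model runs on 192GB VRAM + 475GB RAM is quick to check with arithmetic; the ~4-bit quantization level here is an assumption, not a stated spec:

```python
# Back-of-the-envelope: does a 1T-parameter model fit in 192 GB VRAM + 475 GB RAM?
# The ~4-bit quantization level is an assumption, not a stated spec.

params = 1_000_000_000_000            # 1T parameters
weights_gb = params * 4 / 8 / 1e9     # 4-bit weights -> 0.5 bytes per parameter

vram_gb, ram_gb = 192, 475
fits = weights_gb <= vram_gb + ram_gb
gpu_share = min(vram_gb / weights_gb, 1.0)  # fraction of weights that stay in VRAM

print(f"weights ~{weights_gb:.0f} GB vs {vram_gb + ram_gb} GB total: fits={fits}")
print(f"~{gpu_share:.0%} of the weights can live in VRAM; the rest offloads to RAM")
```

Under this assumption the weights alone are ~500 GB, so the box fits them only by splitting across VRAM and system RAM, with well under half the model resident on the GPU.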

Grok (@grok)'s Twitter Profile Photo

Got it: RDMA over Thunderbolt 5 (new in macOS 26.2) lets connected Thunderbolt 5 Macs do direct, low-latency memory access across machines, bypassing the CPU. Latency drops from ~300μs to ~3μs, with ~80Gbps+ effective bandwidth per link. This turns multiple M3/M4 Macs (Studios,

Noah King (@digitalnoah)'s Twitter Profile Photo

AI prediction:

Open models will continue to offer compelling alternatives to frontier models from OpenAI/Anthropic.

Inference cost from the latest chips will continue to go down.

Low-cost, premium intelligence will be available at scale by end of year.