Narong Borijindargoon (@narong_bdg)'s Twitter Profile
Narong Borijindargoon

@narong_bdg

Ex-researcher, now VC @SCB10X_OFFICIAL • PhD @NTUsg

ID: 1301737980034347008

Joined: 04-09-2020 04:25:02

86 Tweets

97 Followers

516 Following

Kevin Zakka (@kevin_zakka)

I'm super excited to announce mjlab today!

mjlab = Isaac Lab's APIs + best-in-class MuJoCo physics + massively parallel GPU acceleration. Built directly on MuJoCo Warp with the abstractions you love.

Narong Borijindargoon (@narong_bdg)

The cost of intelligence keeps dropping. Instead of attending to every token, the model filters for the most important ones, just like the human brain focuses on certain keywords rather than every word. This slashes compute costs while keeping performance strong.
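
The tweet doesn't name a specific method, but the mechanism it describes matches top-k token selection in attention: score every token, then run the softmax and value mixing over only the highest-scoring few. A minimal NumPy sketch of that idea; all shapes and names below are illustrative, not from any particular paper:

```python
import numpy as np

def topk_attention(q, K, V, k=8):
    """Attend only to the k highest-scoring tokens instead of all n.

    q: (d,) query; K, V: (n, d) keys/values. Toy sketch of the idea --
    real sparse-attention schemes (block-sparse, learned selection)
    are more involved.
    """
    scores = K @ q / np.sqrt(q.shape[0])          # score every token once
    top = np.argpartition(scores, -k)[-k:]        # keep the k most relevant
    w = np.exp(scores[top] - scores[top].max())   # softmax over the subset only
    w /= w.sum()
    return w @ V[top]                             # mix k values, not n

rng = np.random.default_rng(0)
n, d = 128, 16
K, V = rng.normal(size=(n, d)), rng.normal(size=(n, d))
out = topk_attention(rng.normal(size=d), K, V)
print(out.shape)  # (16,)
```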

Dr. Datta M.D. (AIIMS Delhi) (@drdatta_aiims)

🚨 Just published! All frontier AI models have failed “Radiology’s Last Exam” - the toughest benchmark in radiology launched today!

✅ Board-certified radiologists scored 83%, trainees 45%, but the best performing AI from frontier labs, GPT-5, managed only 30%.

❌ These results…

Jeremy Howard (@jeremyphoward)

It's a strange time to be a programmer—easier than ever to get started, but easier to let AI steer you into frustration. We've got an antidote that we've been using ourselves with 1000 preview users for the last year: "solveit". Now you can join us. 🧵 answer.ai/posts/2025-10-…

Similarweb (@similarweb)

GenAI Traffic Share Update

Takeaways:
→ Gemini's rapid ascent continues.
→ Perplexity catches up with Grok.

🗓️ 12 months ago:
ChatGPT: 87.1%
Gemini: 6.5%
Perplexity: 1.7%
Claude: 1.7%
Copilot: 0.9%

🗓️ 6 months ago:
ChatGPT: 77.2%
DeepSeek: 7.6%
Gemini: 5.5%
Grok: 3.2%

Brave (@brave)

The security vulnerability we found in Perplexity’s Comet browser this summer is not an isolated issue. Indirect prompt injections are a systemic problem facing Comet and other AI-powered browsers. Today we’re publishing details on more security vulnerabilities we uncovered.

Pokee AI (@pokee_ai)

Open. Source. SOTA. Deep. Research. 🚀

Today, we’re releasing PokeeResearch-7B, a SOTA open-source deep research agent that outperforms all other 7B deep research agents. And we are open-sourcing both the weights and inference code on Hugging Face! We're additionally excited…

Narong Borijindargoon (@narong_bdg)

Big congratulations to the Hearvana.ai team on their $6M raise! Audio AI is the next frontier for human–AI augmentation, and Hearvana.ai is tackling one of the toughest and most exciting problems in deep tech — real-time, on-device audio AI that enhances how…

Jim Fan (@drjimfan)

Everyone's freaking out about vibe coding. In the holiday spirit, allow me to share my anxiety on the wild west of robotics. 3 lessons I learned in 2025.

1. Hardware is ahead of software, but hardware reliability severely limits software iteration speed. 

We've seen exquisite…

Karan Dalal (@karansdalal)

LLM memory is considered one of the hardest problems in AI.

All we have today are endless hacks and workarounds. But the root solution has always been right in front of us.

Next-token prediction is already an effective compressor. We don’t need a radical new architecture. The…
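
The compression claim has a concrete reading: under arithmetic coding, encoding a sequence with a predictive model costs about -log2 p(next token | context) bits per token, so better prediction directly means a smaller encoding. A toy sketch with a character bigram model standing in for the LLM; everything here is illustrative, not from the thread:

```python
import math
from collections import Counter

# Toy illustration of "next-token prediction is a compressor": under
# arithmetic coding, a sequence costs roughly -log2 p(next | context)
# bits per token, so a better predictor yields a smaller encoding.
# A character bigram model stands in for the LLM here.

text = "the cat sat on the mat. the cat sat on the hat."
pairs = Counter(zip(text, text[1:]))   # bigram counts
ctx = Counter(text[:-1])               # context (previous char) counts

def p(nxt, prev, alpha=1.0, vocab=256):
    # Laplace-smoothed probability of `nxt` given `prev`.
    return (pairs[(prev, nxt)] + alpha) / (ctx[prev] + alpha * vocab)

bits = sum(-math.log2(p(n, c)) for c, n in zip(text, text[1:]))
print(f"model cost: {bits:.0f} bits vs raw 8-bit encoding: {8 * (len(text) - 1)} bits")
```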

Tom Yeh (@proftomyeh)

I wrote a short story to explain to my students the evolution of PPO, DPO, GRPO, to GDPO (NVIDIA's new paper). 👇

This story is based on my own personal RL journey to become the family chef. 🍳 (when my wife was my girlfriend)

𝗣𝗣𝗢
I wanted to cook a new dish for our next…

Narong Borijindargoon (@narong_bdg)

The GPU vs ASIC debate is over.

Today at GTC, NVIDIA answered with both.

GPU handles prefill (heavy compute). 

Groq LPU handles decode (fast token generation via SRAM).

#nvidiagtc2026
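
A rough sketch of why the two phases suit different hardware: prefill is one big batched pass over the whole prompt (compute-bound, a throughput job), while decode appends one token at a time and re-reads a growing KV cache on every step (memory-bandwidth-bound, which is what SRAM-heavy chips target). The shapes and projection below are toy stand-ins, not NVIDIA's or Groq's actual stack:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, prompt_len = 64, 512
W_kv = rng.normal(size=(d_model, 2 * d_model))   # stand-in KV projection

def prefill(prompt_embeddings):
    # One large matmul over all prompt tokens at once: compute-bound.
    return prompt_embeddings @ W_kv              # (prompt_len, 2*d_model)

def decode_step(kv_cache, new_token_embedding):
    # One tiny matmul per token, but the whole cache is re-read each
    # step by attention: memory-bandwidth-bound.
    kv_new = new_token_embedding @ W_kv
    return np.vstack([kv_cache, kv_new[None, :]])

cache = prefill(rng.normal(size=(prompt_len, d_model)))  # phase 1: parallel
for _ in range(16):                                       # phase 2: sequential
    cache = decode_step(cache, rng.normal(size=d_model))
print(cache.shape)  # (528, 128)
```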

SemiAnalysis (@semianalysis_)

Disagg planning.

In inference, disaggregated prefill splits the compute-heavy prefill from decode. A similar thing is playing out in agentic coding: planning and execution are different cognitive tasks that favor different model profiles. Models that reason deeply aren't always…
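
A hedged sketch of that split applied to agents: one deliberate planning call to a reasoning-heavy model, then cheap, fast execution calls for each step. `call_model` and both model names are placeholders for whatever client and models you actually use:

```python
def call_model(model: str, prompt: str) -> str:
    """Placeholder for an LLM API call; swap in a real provider client."""
    return f"[{model}] response to: {prompt[:40]}..."

def solve(task: str) -> list[str]:
    # Phase 1 (analogous to prefill): one expensive planning pass on a
    # model that reasons deeply.
    plan = call_model("deep-reasoning-model", f"Break into steps: {task}")
    steps = [s for s in plan.split("\n") if s.strip()] or [plan]

    # Phase 2 (analogous to decode): many fast passes on a cheaper model
    # tuned for execution.
    return [call_model("fast-coding-model", f"Do this step: {s}") for s in steps]

for result in solve("add retry logic to the HTTP client"):
    print(result)
```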