Alan Mosca (@nitbix)'s Twitter Profile
Alan Mosca

@nitbix

co-founder & CTO @nplanHQ

Decision Making + Uncertainty + Risk + Forecasting + AI + Project management.

Sometimes 3D printing.

ID: 33967759

Link: http://nplan.io · Joined: 21-04-2009 17:12:45

1.1K Tweets

3.3K Followers

691 Following

Ted Werbel (@tedx_ai)'s Twitter Profile Photo

A few things that are pretty obvious to some AI researchers but that most don't want to believe: 1. 90% of the most impactful AI research is already on arXiv, X, or company blog posts 2. Q* aka Strawberry = STaR (Self-Taught Reasoner) with dynamic Self-Discover + something like DSPy for

Alan Mosca (@nitbix)'s Twitter Profile Photo

This is the type of insanity that nPlan is supposed to prevent - except they didn't want us on the program... I guess when this is the state of affairs, delays don't matter to those in charge, so it checks out.

Dmitrii Kovanikov (@chshersh)'s Twitter Profile Photo

Docker is stupid. “Sorry, we can’t deploy a single statically linked executable of size 10 MB, so LET’S JUST SHIP GIGABYTES OF DATA CONTAINING AN ENTIRE OS WITH ALL DEPENDENCIES FOR EVERY SINGLE SERVICE”
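The complaint above is avoidable in practice: a multi-stage build can ship only the statically linked executable in an otherwise empty image. Below is a minimal Dockerfile sketch, assuming a hypothetical Go service with a main package at the repository root; the base image tag and paths are illustrative, not from the original thread.

```dockerfile
# Build stage: compile a statically linked executable.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# CGO_ENABLED=0 removes the libc dependency, so the binary is fully static.
RUN CGO_ENABLED=0 go build -o /app/server .

# Final stage: `scratch` is an empty base with no OS layers at all.
FROM scratch
COPY --from=build /app/server /server
ENTRYPOINT ["/server"]
```

The final image is roughly the size of the binary itself, on the order of the 10 MB figure in the tweet, rather than gigabytes of OS and dependencies.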

Alan Mosca (@nitbix)'s Twitter Profile Photo

That's going to be a lot of projects! Major #AI policy updates:
• #Stargate: $500B from Oracle, OpenAI & SoftBank
• UK rolls out smart AI policies with smaller investments
• EO to repeal AI Safety
Funding will be mostly #datacenters & #SMR for energy. To keep projects on

Alan Mosca (@nitbix)'s Twitter Profile Photo

Most of #GovernmentEfficiency can be achieved by preventing delays and blowouts on megaprojects. Smarter risk management and forecasting will do that. Elon Musk: GAO and DCMA are best placed to achieve this with the right tools!

Alan Mosca (@nitbix)'s Twitter Profile Photo

Calm down, everyone. Did we learn nothing from the superconductor hype? Markets overreact to the latest news, forget the past, and jump to conclusions. Reality check on #DeepSeek:
* It’s not the 'end' of other #AI companies. Especially not OpenAI
* One benchmark ≠ universal

Alan Mosca (@nitbix)'s Twitter Profile Photo

DeepSeek’s cost-saving breakthroughs aren’t new ideas (MoE, MLA, fp8 training), but they’re the first to make them work together effectively.
* MoE (Mixture of Experts) increases training efficiency by activating only 37B of 600B model parameters, cutting compute by 80%.
* MLA
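To make the "37B of 600B parameters active" point concrete, here is a minimal top-k expert-routing sketch in PyTorch. The sizes are toy values and the router is a plain top-k softmax; this illustrates the general MoE mechanism, not DeepSeek's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy mixture-of-experts layer: each token is routed to only k of n experts."""
    def __init__(self, d_model=64, d_ff=256, n_experts=16, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)  # learned gating scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                            # x: (tokens, d_model)
        topk = self.router(x).topk(self.k, dim=-1)   # pick k experts per token
        weights = F.softmax(topk.values, dim=-1)     # renormalise over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topk.indices[:, slot] == e
                if mask.any():                       # only the selected experts ever run
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

moe = TopKMoE()
print(moe(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```

With k=2 of 16 experts, only about 1/8 of the expert parameters touch any given token; DeepSeek reportedly applies the same lever at much larger scale, alongside MLA and fp8 training to cut memory and bandwidth.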

Russell Kaplan (@russelljkaplan)'s Twitter Profile Photo

It's wild to watch a $600B market swing in the wrong direction on news that each unit of compute is actually *more* valuable than previously appreciated.

Alan Mosca (@nitbix)'s Twitter Profile Photo

"My DaTa is MOre VAluaBle To yoU tHaN YOUr pROducT Is tO Me." Stop. You sound like an idiot, and you have just revealed to everyone that you don't understand #AI as it exists after 2023.