intelpocik.base.eth πŸ‘½ 🍚 β›“ (Ø,G).ink πŸ‘οΈ (@intelpocik)'s Twitter Profile

@intelpocik

πŸ‘½
#SomniaNetwork

ID: 1066785584

Link: https://link3.to/bugattimusic · Joined: 06-01-2013 21:44:51

5.5K Tweets

329 Followers

2.2K Following

toly πŸ‡ΊπŸ‡Έ (@aeyakovenko):

If this works, I am pretty sure that the optimal formula for capital formation for early-stage startups is: 1) staking for long-term holders, 2) a day-1 TGE with 20%+ of tokens released, 3) better to have zero investors, but if you have some, unlock them all 100% on the same day one year after x.com/oleksandrvolsk…
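The unlock schedule in the tweet can be made concrete with a toy calculation. The total supply and investor allocation below are illustrative assumptions; only the 20% day-1 release and the single 100% investor unlock on the 1-year cliff come from the tweet.

```python
# Hypothetical vesting sketch. Total supply and the 15% investor
# allocation are made-up numbers; the 20% TGE release and the single
# 100% investor unlock at month 12 follow the tweet's formula.

def circulating_supply(month: int,
                       total: float = 1_000_000_000,
                       tge_fraction: float = 0.20,
                       investor_fraction: float = 0.15,
                       cliff_month: int = 12) -> float:
    """Circulating supply under the tweet's schedule."""
    supply = total * tge_fraction            # day-1 TGE release
    if month >= cliff_month:
        supply += total * investor_fraction  # one-shot investor unlock
    return supply

print(f"{circulating_supply(0):,.0f}")   # supply at TGE
print(f"{circulating_supply(12):,.0f}")  # supply after the 1-year cliff
```

The point of the single cliff is that there is one known supply step instead of a multi-year drip of small unlocks.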

intelpocik.base.eth πŸ‘½ 🍚 β›“ (Ø,G).ink πŸ‘οΈ (@intelpocik):

Top 5 breakpoints from yesterday's gensyn AMA w/ Ben Fielding. The "AWS for AI" narrative is dead; here is what comes next. → The "trust me" era of compute is over: the Reproducible Execution Environment (launching April) lets you run ML on Nvidia, AMD, or Intel and get the exact same…
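The tweet does not explain how Gensyn's Reproducible Execution Environment works, but the verification idea it enables can be sketched in a toy form: two parties run the same deterministic workload and compare output digests. The stand-in workload below uses integer arithmetic, which is bit-identical on any hardware; the hard part a real REE solves is getting the same property for floating-point ML ops across Nvidia, AMD, and Intel.

```python
import hashlib

# Toy illustration of hash-based verifiable compute (NOT Gensyn's
# actual protocol): prover and verifier run the same job and compare
# SHA-256 digests of the result.

def run_workload(seed: int, steps: int = 1000) -> bytes:
    """Deterministic stand-in for an ML job: a seeded 64-bit LCG."""
    state = seed
    for _ in range(steps):
        state = (state * 6364136223846793005 + 1442695040888963407) % 2**64
    return state.to_bytes(8, "big")

def attest(seed: int) -> str:
    """Digest the two parties would compare."""
    return hashlib.sha256(run_workload(seed)).hexdigest()

# Independent runs of the same job agree; different jobs do not.
assert attest(42) == attest(42)
assert attest(42) != attest(43)
print(attest(42)[:16])
```

If execution is bitwise reproducible, a digest mismatch is proof of a faulty or dishonest worker, which removes the need to trust the hardware owner.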

intelpocik.base.eth πŸ‘½ 🍚 β›“ (Ø,G).ink πŸ‘οΈ (@intelpocik):

Top 5 breakpoints from yesterday's gensyn AMA w/ Oguzhan Ersoy. Centralized AI training relies on near-infinite bandwidth and near-zero latency; the real world doesn't work like that. Here is how we fix it. 1/ The "Speed of Light" Problem: in a data center, GPUs talk via massive cables…
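Rough back-of-the-envelope numbers (mine, not from the thread) make the "speed of light" problem concrete: signals in optical fiber cover roughly 200 km per millisecond, so distance alone sets a latency floor that no hardware upgrade can remove.

```python
# Best-case round-trip time over optical fiber, ignoring switching and
# queueing delays. Light in fiber travels ~200 km/ms (about 2/3 of c).
C_FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Physical lower bound on round-trip latency for a fiber link."""
    return 2.0 * distance_km / C_FIBER_KM_PER_MS

print(round_trip_ms(0.1))   # GPUs 100 m apart in one data center: ~0.001 ms
print(round_trip_ms(8000))  # a ~8,000 km cross-continental link: ~80 ms
```

A gradient exchange that costs about a microsecond inside one building costs tens of milliseconds across an ocean, which is why decentralized training has to restructure communication rather than just buy faster links.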