Cats (@cats_cr)'s Twitter Profile
Cats

@cats_cr

cats are like dogs, except they're not/\///\\\////\\\\//\\\\\\\///////////\\\\\\\\\\\\ $dking, $tibbir, $kta //
discord.gg/webuildscore

ID: 817873379390849025

Link: http://dking.bot
Joined: 07-01-2017 23:19:43

9.9K Tweets

731 Followers

2.2K Following

elhllos (@elhllos)'s Twitter Profile Photo

This project is the Bitcoin of the AI world. The idea of decentralized training for AI models is genius and will give open source a massive push. The next bet is on the 400B; if they manage to train a model of that size, we can say that all the billions companies like Google and openAI are spending on data centers will have been money thrown on the ground.

Beaver 🦁 (@beaverd)'s Twitter Profile Photo

This is the largest model ever trained through decentralized compute/cooperation. It needs to win. If LLM training isn't decentralized soon, we will have a very big problem.

Perry E. Metzger (@perrymetzger)'s Twitter Profile Photo

Decentralized training is a potential nightmare for the Doomers because it means that you would literally need computer police searching individual homes to find people doing illegal computations. Nothing makes the insanity of their position more manifest.

Aakash Gupta (@aakashg0)'s Twitter Profile Photo

A quarter million dollars in GPUs is the entry ticket to “democratized” AI training. Each participant needed a minimum of 8x NVIDIA B200 GPUs. “Anyone with GPUs could join” is technically true the same way “anyone can buy a Gulfstream” is technically true. Twenty-plus

Danny Herrmann (@muffindannyh)'s Twitter Profile Photo

OpenAI spends billions on data centers. 70 random GPU holders just matched that on Bittensor. A 72 billion parameter model. Trained on commodity internet. No central cluster. No whitelist. Anyone with a GPU could join. And it competed with LLaMA-2-70B. On MMLU it actually beat
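
Some napkin math connects this tweet to the 8x B200 entry requirement quoted two tweets up. A minimal sketch using my own assumed figures (bf16 weights, typical mixed-precision Adam overhead, 192 GB of HBM per B200), not numbers published for the run:

```python
# Rough memory arithmetic for a 72B-parameter training run.
# Every constant here is an assumption for illustration, not a Templar figure.
PARAMS = 72e9

bf16_weights_gb = PARAMS * 2 / 1e9   # bf16 weights: 2 bytes per parameter
# Mixed-precision Adam commonly keeps fp32 master weights plus two fp32
# moment buffers: roughly 16 bytes per parameter including the bf16 copy.
train_state_gb = PARAMS * 16 / 1e9

B200_HBM_GB = 192                    # assumed HBM per B200
node_hbm_gb = 8 * B200_HBM_GB        # one participant's 8-GPU node

print(f"bf16 weights alone:   {bf16_weights_gb:,.0f} GB")  # ~144 GB
print(f"weights + Adam state: {train_state_gb:,.0f} GB")   # ~1,152 GB
print(f"8x B200 node HBM:     {node_hbm_gb:,} GB")         # 1,536 GB
# Before activations and gradients, the training state already nearly fills
# an 8-GPU node, which would explain a hard minimum like 8x B200.
```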

const (@const_reborn)'s Twitter Profile Photo

Joshua Field templar the understated part which no one really appreciates yet is that the system is fully incentivized and adversarial. no one, no lab, no team, no soul in the world has ever cracked that at any scale of model, and it's quite literally the linchpin of functional decentralized training.
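
A toy illustration of why "adversarial" is the hard part: naively averaging gradients from untrusted peers lets a single liar steer the model, while a Byzantine-robust aggregator such as a coordinate-wise median cannot be dragged far by a small minority. This sketch shows the general idea only; it is not Templar's actual mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(9, 4))  # 9 honest peer gradients
attacker = np.full((1, 4), -1e6)                      # 1 peer submits garbage
grads = np.vstack([honest, attacker])

naive = grads.mean(axis=0)          # ruined by the single attacker
robust = np.median(grads, axis=0)   # stays near the honest value of ~1.0

print("naive mean:", naive)
print("cw median :", robust)
```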

mikecontango | τ,τ (@mikecontango)'s Twitter Profile Photo

Please excuse the tardiness on this post... but this is perhaps as big an event for AI sovereignty as Bitcoin was for monetary sovereignty. You do not want the hearts and minds of billions of people controlled by an executive boardroom and an opaque black box of weights

templar (@tplr_ai)'s Twitter Profile Photo

TGIF #29 tomorrow! The Covenant-72B thread reached well beyond Bittensor this week. Distributed State and the full covenant team talk about what that traction means, where decentralized AI sits in the broader conversation, and what comes next. Miners, come celebrate with us.

Cats (@cats_cr)'s Twitter Profile Photo

I'd rather not trust your indexes. Stop blaming Openτensor Foundaτion. It's not their fault you didn't have proper guardrails in place. Reimbursing your investors with 10% of their losses is a fucking joke.

LunarCrush.com Social Data (@lunarcrush)'s Twitter Profile Photo

Bittensor just trained a 72-billion parameter language model across a fully decentralized network. No single company. No central data center. Just a global mesh of permissionless compute nodes collaborating on 1.1 trillion tokens. The model is called Covenant-72B, completed on

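One hedged sketch of what "permissionless collaboration" on a shared corpus can look like: each worker deterministically derives its data shard from public inputs, so any peer can verify which slice a node was supposed to train on. All names and sizes below are hypothetical, not the real Templar or Bittensor protocol:

```python
import hashlib

TOTAL_TOKENS = 1_100_000_000_000  # the 1.1T tokens cited above
SHARD_TOKENS = 1_000_000          # assumed tokens per shard
NUM_SHARDS = TOTAL_TOKENS // SHARD_TOKENS

def shard_for(worker_id: str, step: int) -> int:
    """Map (worker, step) to a shard index; same inputs give the same
    answer on every node, so no coordinator is needed."""
    digest = hashlib.sha256(f"{worker_id}:{step}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

print(shard_for("hotkey-abc", step=42))  # hypothetical worker id
```
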
Austin Barack (@austinbarack)'s Twitter Profile Photo

Templar completed the largest distributed and open LLM pre-training run in history. $SN3 is trading at a $40MM mcap. Right now it is not yet listed on any CEXs. A lot of alpha just researching what's interesting and leaning into onchain friction points before it is easy

Punisher ττ (@cryptozpunisher)'s Twitter Profile Photo

$TAO This article is interesting and highlights an important milestone for #Bittensor. For those who may not know yet, the Covenant-72B model mentioned in the article was trained on Subnet 3 Templar, one of the historical subnets of the network dedicated to distributed model

Ken Jon (@kenjonmiyachi)'s Twitter Profile Photo

the templar lore is epic:
- Legendary Novelty Search: when const came in mid Macrocosmos presentation, pumped that he got distributed training working: youtube.com/watch?v=UM5UhZ… (check around 32:00)
- caused some fud as other open source research was used as inspiration...