Openτensor Foundaτion (@opentensor) 's Twitter Profile
Openτensor Foundaτion

@opentensor

Incentivizing intelligence

ID: 1405656630872662019

Link: https://bittensor.com · Joined: 17-06-2021 22:40:42

785 Tweets

168,168 Followers

1 Following

Synthdata (@synthdataco) 's Twitter Profile Photo

Major Synth Update ⚡️ Synth now supports ETH alongside BTC for probabilistic price predictions. This marks the beginning of Phase 2 of the Synth Roadmap. Let’s break it down 👇 dashboard.synthdata.co/dashboard/

Rayon Labs (@rayon_labs) 's Twitter Profile Photo

Just a reminder that AI at scale is already here, today on Bittensor.

It's called Chutes and it's currently processing ~90B tokens per day, powering real apps, real users, real queries across the globe.

Bittensor at scale isn't tomorrow. It's right now. 

#sn64
const (@const_reborn) 's Twitter Profile Photo

Markets over “C tier devs” will crush for the same reason that objective functions over dumb neurons will create AGI

Nameless, faceless, twitter account follower-less, miners, scaling faster than Silicon Valley giants:
templar (@tplr_ai) 's Twitter Profile Photo

The cost of training large language models has created an artificial scarcity. This isn't a technical limitation—it's a failure of coordination.

templar (@tplr_ai) 's Twitter Profile Photo

🧵7/ The results speak for themselves:

20K training rounds
Completely open participation
Competitive performance vs centralized baselines
Real economic incentives paid out
const (@const_reborn) 's Twitter Profile Photo

(1B) - Templar-I
(2B) - Templar-II
(8B) - Templar-III
(16B) - Templar-IV
(32B) - Templar-V
(64B) - Templar-VI
(128B) - Templar-VII
(256B) - Templar-VIII
(512B) - Templar-IX
(1T) - Templar-X

Maybe? Let's see what Distributed State cooks.

Learn Bittensor (@learnbittensor) 's Twitter Profile Photo

📉 Templar (SN3): Incentivizing Loss Reduction

The miners on subnet 3 just completed training a 1.2 billion parameter model through competitive loss reduction.

Every 84 seconds, miners perform real training work on assigned data pages, measuring the model's mistakes (called
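The incentive mechanism the tweet describes — reward miners for measurably reducing the model's loss — can be sketched in Python. This is a hypothetical illustration, not Templar's actual code: the miner identifiers and the proportional payout rule are assumptions.

```python
def score_miners(losses_before, losses_after):
    """Split reward weight among miners in proportion to the loss each removed.

    losses_before / losses_after: dicts mapping miner id -> model loss
    measured before and after one training round.
    """
    reductions = {
        m: max(losses_before[m] - losses_after[m], 0.0)  # ignore regressions
        for m in losses_before
    }
    total = sum(reductions.values())
    if total == 0.0:
        # Nobody improved the model this round, so nobody is rewarded.
        return {m: 0.0 for m in reductions}
    return {m: r / total for m, r in reductions.items()}

# Miner "a" cut the loss by 1.0 and miner "b" by 0.5,
# so "a" earns twice the reward share of "b".
weights = score_miners({"a": 2.0, "b": 2.0}, {"a": 1.0, "b": 1.5})
```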
RoundTable21 (@21roundtable) 's Twitter Profile Photo

They say a Validator's Cold Key isn't safe unless it can survive a mortar strike 😎

Our new primary fiber line finally went live over the weekend! We go the extra mile to keep our validator safe, operational and working for YOU at all times! 💻

🛜 4x Fiber lines 
🛰️ 2x Backup
Sτew (@stewstoop) 's Twitter Profile Photo

Internet-wide LLM training is here. I repeat. INTERNET-WIDE LLM TRAINING IS HERE!!! Billions of $$$ are being spent on ramping compute capabilities for central folks like OpenAI. templar is empowering BILLIONS of people to be incentivized for their compute 🌍🫡 $TAO #Bittensor

const (@const_reborn) 's Twitter Profile Photo

1 and 2 are downstream from 3. Solve verification, and incentives are easy. Solve incentives, and miners solve comms and bandwidth. Same in ML: build the proper reward landscape and let the algorithms anneal to it.

Jolly Green Investor 🍀 (@jollygreenmoney) 's Twitter Profile Photo

It's finally happening 🤯 TaoFi is opening the floodgates of liquidity for the Bittensor ecosystem 🌊 TaoFi will make it possible to buy any subnet alpha token with $ETH or $SOL in one click, with routing provided by SN10 Sturdy. Things are about to get VERY interesting...

Chutes (@chutes_ai) 's Twitter Profile Photo

Yesterday we hit 100B Tokens Processed per Day. 🪂

This is 1/3 of what Google were processing just one year ago.

And Chutes is just getting started - this time 3 months ago we were doing around 2B.

That's a 50x increase in 3 months. 🤯

Decentralized AI Compute is the Future.
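The growth arithmetic in the tweet can be checked directly (throughput figures taken from the tweet; the implied per-month rate is a derived estimate, not a stated one):

```python
start_tokens_per_day = 2e9    # ~3 months ago
now_tokens_per_day = 100e9    # yesterday's figure

growth = now_tokens_per_day / start_tokens_per_day  # 50x overall, as claimed
monthly = growth ** (1 / 3)                         # roughly 3.7x per month
```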
DREAD BONGO (@dreadbong0) 's Twitter Profile Photo

#Bittensor is taking over Paris

The ecosystem is showing up strong at Proof of Talk and the lineup is stacked with the smartest minds and brightest builders

A lot of powerful people will be paying attention..

Time to spread the gospel 🔥

$TAO
Laτenτ Holdings (@latentholdings) 's Twitter Profile Photo

Bittensor Announcement 📣 (Cameron Fairchild)

The network has been upgraded to spec `273`!

Some notable changes:

SubtokenEnabled for new subnets
- Newly registered subnets, before calling `start_call`, will not be swappable.
- Existing subnets are grandfathered-in as