templar (@tplr_ai)'s Twitter Profile
templar

@tplr_ai

incenτivised inτerneτ-wide τraining

ID: 1896776987806740481

http://www.tplr.ai · Joined 04-03-2025 04:17:46

150 Tweets

1.1K Followers

7 Following

Grok (@grok)

BτComander templar Scaling an AI model from 1.2 billion to 8 billion parameters is like expanding a football team from a small local squad to a world-class club. A bigger team can cover more ground, execute complex plays, and adapt better, just as more parameters help an AI tackle tougher tasks and…

Teng Yan - Championing Crypto AI (@0xprismatic)

Just released a detailed deep dive on decentralized training. We cover a lot in there, but a quick brain dump while my thoughts are fresh:

So much has happened in the past 3 months and it's hard not to get excited
- Nous Research pre-trained a 15B model in a distributed fashion
Lucas 🛡️ (@onchainlu)

templar (sn3 on bittensor) also falls into the data-parallel bucket and is already live. in their latest run, they're using a permissionless network to train an 8B parameter model with refined incentive structures to improve training coordination.
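To make the "data-parallel" label concrete, here is a minimal sketch of what data parallelism means in general; it is not Templar's actual code and ignores the incentive layer and permissionless networking the tweet refers to. Each worker keeps a full replica of the model, computes gradients on its own data shard, and the gradients are averaged before the shared weights are updated.

```python
# Minimal data-parallel sketch (illustrative only, not Templar's implementation).
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(16, 1)                               # shared "global" model
workers = [copy.deepcopy(model) for _ in range(4)]     # one full replica per worker

# Each worker sees a different mini-batch: its shard of the data.
shards = [(torch.randn(8, 16), torch.randn(8, 1)) for _ in workers]

# Local forward/backward pass on every replica.
for replica, (x, y) in zip(workers, shards):
    loss = nn.functional.mse_loss(replica(x), y)
    loss.backward()

# "All-reduce" step: average the gradients across workers and apply one
# SGD update to the global model.
with torch.no_grad():
    for name, param in model.named_parameters():
        grads = torch.stack([dict(w.named_parameters())[name].grad for w in workers])
        param -= 0.01 * grads.mean(dim=0)
```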

Distributed State (@diststateandme)

_τao_moonwalker_ templar Hats off to the miners. Honestly, these guys are the SEALs of bittensor. We put them through the Gauntlet, and they always rise to the challenge.

Rayon Labs (@rayon_labs)

The approach:  

Templar (SN3) → Base model pretraining 
Gradients (SN56) → Instruct fine-tuning  

Proof of concept results: 
Templar 3B (mid-training) → Gradients instruct tuning → benchmarks rising across the board  

Plot twist: we're just getting started.
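As a rough illustration of the two-stage approach described above (base-model pretraining followed by instruct fine-tuning), the sketch below runs a tiny supervised fine-tuning step on a pretrained causal LM with Hugging Face transformers. This is not the actual Templar or Gradients pipeline, and the checkpoint name "templar/base-3b" is hypothetical, used only to stand in for a pretrained base model.

```python
# Hedged sketch of "base checkpoint -> instruct tuning"; not the SN3/SN56 code.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

base = "templar/base-3b"          # hypothetical base-model checkpoint, illustration only
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)
optim = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Toy prompt/response pair standing in for an instruct-tuning dataset.
pairs = [("Explain data parallelism.", "Each worker trains on its own data shard.")]

model.train()
for prompt, answer in pairs:
    batch = tok(prompt + "\n" + answer, return_tensors="pt")
    out = model(**batch, labels=batch["input_ids"])   # causal-LM loss over the pair
    out.loss.backward()
    optim.step()
    optim.zero_grad()
```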