Jimmy (@jzheng1994)'s Twitter Profile
Jimmy

@jzheng1994

@PrimeIntellect

ID: 779349826853011457

Joined: 23-09-2016 16:00:53

87 Tweets

497 Followers

1.1K Following

Prime Intellect (@primeintellect)'s Twitter Profile Photo

We are excited to share a preview of our peer-to-peer decentralized inference stack, engineered for consumer GPUs and high-latency networks — plus a research roadmap to scale it to a planetary-scale decentralized inference engine.

Prime Intellect (@primeintellect)'s Twitter Profile Photo

Releasing INTELLECT-2: We’re open-sourcing the first 32B parameter model trained via globally distributed reinforcement learning:
• Detailed Technical Report
• INTELLECT-2 model checkpoint
primeintellect.ai/blog/intellect…

Teortaxes▶️ (DeepSeek 推特🐋铁粉 2023 – ∞) (@teortaxestex)'s Twitter Profile Photo

all that remains to solve for unlocking truly massive scale is the question of aligning incentives to host rollouts. Also, we need to go even smaller as we go bigger. I would donate a 4090 whenever it's idling. When I bought a 1080 Ti, the first thing I did was install BOINC…

TBPN (@tbpn)'s Twitter Profile Photo

We asked Vincent Weisser about his vision for utilizing GPUs. "Every idling GPU is a market failure. Compute will be one of the biggest slices of GDP." "The goal is to have a fault-tolerant 'genius pool' of compute, ranging from H100s, A100s to even RTX 3090s." "Unutilized

Prime Intellect (@primeintellect)'s Twitter Profile Photo

Introducing PCCL, the Prime Collective Communications Library — a low-level communication library built for decentralized training over the public internet, with fault tolerance as a core design principle. In testing, PCCL achieves up to 45 Gbit/s of bandwidth across datacenters
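The fault-tolerance idea here — dropping failed peers from a collective operation and retrying with the survivors — can be sketched in a toy Python simulation. This is purely illustrative: the function names and the peer representation are invented for this sketch and do not reflect PCCL's actual API.

```python
def fault_tolerant_all_reduce(peers):
    """Sum-all-reduce that evicts dead peers and retries until it completes.

    `peers` maps name -> (local_value, is_alive). A real implementation would
    detect failures via network timeouts; here a dead peer simply raises
    when contacted.
    """
    members = list(peers)
    while True:
        try:
            total = 0.0
            for m in members:
                value, alive = peers[m]
                if not alive:
                    raise ConnectionError(m)  # stand-in for a network timeout
                total += value
            return members, total
        except ConnectionError as err:
            members.remove(err.args[0])  # evict the failed peer, retry with the rest

# Three peers hold local values; "b" (a flaky consumer GPU) has dropped out.
peers = {"a": (1.0, True), "b": (2.0, False), "c": (3.0, True)}
members, total = fault_tolerant_all_reduce(peers)
# members == ["a", "c"], total == 4.0
```

The point of the sketch is that the collective makes progress with whoever is still reachable instead of aborting the whole training step when one peer disappears.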

will brown (@willccbb)'s Twitter Profile Photo

fun fact: @primeintellect has around 20 employees total. everyone is exceptional at what they do. you have a lot of autonomy, and that comes with a lot of responsibility. we're hiring, but not rapidly. we want someone really, really good for this role. sound like fun?

Matthew Di Ferrante (@matthewdif)'s Twitter Profile Photo

if you're into pure maths and can code come join me at Prime Intellect - there's lots of fun and alpha in being able to reason about the parameter space through the tools of differential and algebraic geometry.

TBPN (@tbpn)'s Twitter Profile Photo

We asked Sholto Douglas from Anthropic about the costs of RL (Reinforcement Learning) runs. "In Dario Amodei's essay, he said that RL runs cost only $1M back in December." "RL is more naively parallelizable and scalable than pre-training." "With pre-training, you need

Prime Intellect (@primeintellect)'s Twitter Profile Photo

Launching SYNTHETIC-2: our next-gen open reasoning dataset and planetary-scale synthetic data generation run. Powered by our P2P inference stack and DeepSeek-R1-0528, it verifies traces for the hardest RL tasks. Contribute towards AGI via open, permissionless compute.
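The verification idea — keeping a generated reasoning trace only when its final answer matches a checkable ground truth — can be sketched as follows. The `#### answer` convention and the helper names are assumptions made for illustration, not the actual SYNTHETIC-2 trace format.

```python
import re

def extract_answer(trace: str):
    """Pull the last `#### <answer>` token out of a reasoning trace."""
    matches = re.findall(r"####\s*(\S+)", trace)
    return matches[-1] if matches else None

def verify(trace: str, gold: str) -> bool:
    """A trace passes verification iff its final answer equals ground truth."""
    return extract_answer(trace) == gold

trace = "12 * 7 = 84, then 84 + 6 = 90\n#### 90"
verify(trace, "90")  # True -> this sample would be kept in the dataset
```

Because the check is programmatic, any untrusted contributor's output can be accepted or rejected without re-running the model, which is what makes permissionless compute workable for this kind of data generation.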

Rohan Pandey (@khoomeik)'s Twitter Profile Photo

there is no excuse for any gpu on earth to be idling right now. every idle gpu can and should contribute to generating high-quality synthetic data for training the next generation of open-source reasoning models

Prime Intellect (@primeintellect)'s Twitter Profile Photo

We did it — SYNTHETIC-2 is complete.

A planetary-scale decentralized inference run generating 4M verified reasoning samples.

1,250+ GPUs joined in 3 days — from 4090s to H200s — creating data for complex RL tasks.

Full open-source release + technical report coming next week!

Kevin Lu (@_kevinlu)'s Twitter Profile Photo

So I think something else that doesn't get discussed much is the extrapolation of this inference : training trend

- 2015: back in the day, we would train one model per dataset, and inference it once (to obtain the eval result for our paper)
- 2020: with chatgpt, multi-task

Prime Intellect (@primeintellect)'s Twitter Profile Photo

Releasing SYNTHETIC-2: our open dataset of 4m verified reasoning traces spanning a comprehensive set of complex RL tasks and verifiers. Created by hundreds of compute contributors across the globe via our pipeline parallel decentralized inference stack. primeintellect.ai/blog/synthetic…
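Pipeline-parallel inference — each peer hosting a contiguous slice of the model's layers and forwarding activations to the next — can be sketched with a toy example. All names here are invented for illustration; this is not Prime Intellect's actual inference stack.

```python
def make_stage(layers):
    """A 'peer' hosting a slice of the model; each layer is just a function."""
    def run(x):
        for layer in layers:
            x = layer(x)
        return x
    return run

def pipeline_infer(stages, x):
    """Pass the activation through every peer's stage in order."""
    for stage in stages:
        x = stage(x)  # in a real deployment: a network hop to the next peer
    return x

# A toy 4-"layer" model split across 2 peers.
stages = [
    make_stage([lambda x: x + 1, lambda x: x * 2]),   # peer 1: layers 0-1
    make_stage([lambda x: x + 3, lambda x: x * 10]),  # peer 2: layers 2-3
]
pipeline_infer(stages, 1)  # ((1 + 1) * 2 + 3) * 10 = 70
```

Splitting by layers is what lets consumer GPUs that cannot hold a large model individually still serve it collectively: each peer only needs memory for its own slice, at the cost of one network hop per stage.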

Vincent Weisser (@vincentweisser)'s Twitter Profile Photo

We are releasing SYNTHETIC-2 — an open dataset of 4m verified reasoning traces of complex rl tasks and verifiers

The dataset was collaboratively generated by over 1,250 GPUs contributed across the globe via our pipeline-parallel decentralized inference

Johannes Hagemann (@johannes_hage)'s Twitter Profile Photo

wrapping up our team off-site. more hyped than ever about what we're building rn at prime. we're accelerating on all fronts and are on a very differentiated path towards open & decentralized superintelligence. we have lots of product launches planned for the open-source community

Vincent Weisser (@vincentweisser)'s Twitter Profile Photo

we’re hiring ai researchers, engineers, growth, interns etc at Prime Intellect. ping me if you want to work on open agi & frontier research infra for everyone