Njdeh Satourian (@satourian)'s Twitter Profile
Njdeh Satourian

@satourian

Thinking step-by-step at @cerebrassystems

ID: 1292764709473714177

Joined: 10-08-2020 10:08:43

178 Tweets

300 Followers

1.1K Following

Andrew Feldman (@andrewdfeldman)'s Twitter Profile Photo

Are you looking for a comparison of tool-calling capabilities across different services running Qwen Coder? GosuCoder developed a thoughtful set of tests with results below. Note that the top score is …, and that it did this while running 20X faster than …

Cerebras (@cerebrassystems)'s Twitter Profile Photo

The first Nvidia Blackwell results for GPT OSS 120B are out! Artificial Analysis ran Cerebras vs. Nvidia head to head:
Blackwell DGX B200: 1 user = 900 TPS, 10 users = 580 TPS
Cerebras CS-3: 1 user = 2790 TPS, 10 users = 2740 TPS

Also, Cerebras is live in production.

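The TPS figures in the tweet above imply that the gap widens with concurrency: the B200 loses throughput per user as load grows, while the CS-3 barely drops. A quick sketch of the implied per-user speedups, using only the numbers quoted in the tweet:

```python
# Per-user speedup implied by the tweet's TPS figures
# (Artificial Analysis, GPT OSS 120B head-to-head).
b200 = {"1 user": 900, "10 users": 580}    # Blackwell DGX B200, tokens/s/user
cs3 = {"1 user": 2790, "10 users": 2740}   # Cerebras CS-3, tokens/s/user

speedup = {k: cs3[k] / b200[k] for k in b200}
# speedup["1 user"] ≈ 3.1x, speedup["10 users"] ≈ 4.7x
```

So by these numbers the CS-3's advantage grows from roughly 3x at a single user to roughly 4.7x at ten concurrent users.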
Perceptron AI (@perceptroninc)'s Twitter Profile Photo

1/ Introducing Isaac 0.1 — our first perceptive-language model. 2B params, open weights. Matches or beats models significantly larger on core perception. We are pushing the efficient frontier for physical AI. perceptron.inc/blog/introduci…

Visual Studio Code (@code)'s Twitter Profile Photo

Cerebras Inference powers the world's top models - like Qwen3 Coder at 2000 tokens/s 🤯 And you can now access these models directly in Visual Studio Code with the extension and an API key (that you can get for free from cerebras.ai!) aka.ms/VSCode/Cerebras
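The tweet above describes the access path: an API key from cerebras.ai plus the Visual Studio Code extension. As a minimal sketch of the programmatic side, assuming Cerebras exposes an OpenAI-compatible chat-completions endpoint (the base URL and model identifier below are assumptions, not stated in the tweet), building a request could look like:

```python
# Sketch: build a chat-completions request against a Cerebras-hosted model.
# The endpoint URL and model name are assumptions for illustration.
import json
import urllib.request

API_KEY = "YOUR_CEREBRAS_API_KEY"  # placeholder; the tweet says keys are free from cerebras.ai

def build_request(prompt: str) -> urllib.request.Request:
    payload = {
        "model": "qwen-3-coder-480b",  # assumed model identifier
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://api.cerebras.ai/v1/chat/completions",  # assumed endpoint
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

req = build_request("Write a hello-world in C.")
```

Sending `req` with `urllib.request.urlopen` (with a real key) would return the completion; the VS Code extension wraps this same flow behind the editor UI.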

Bloomberg (@business)'s Twitter Profile Photo

Cerebras Systems said it closed a $1.1 billion funding round valuing the data center operator and maker of AI chips at $8.1 billion, including the dollars raised. bloomberg.com/news/articles/…

murat 🍥 (@mayfer)'s Twitter Profile Photo

i've arguably got more work value out of cerebras, thx to gpt-oss & qwen-coder, than i got from the incremental improvements from claude 3.5>4.5. a 3s turnaround time vs 3min is a qualitative difference. the slow smart model is still essential to get unstuck, but not every step along …

Cerebras (@cerebrassystems)'s Twitter Profile Photo

On Friday, Cerebras withdrew our S-1. It had become stale and no longer reflected the current state of our business. Our business and financial position have evolved significantly for the better since our initial filing in 2024:
• In 2024, we achieved record revenues.
• In …

Cerebras (@cerebrassystems)'s Twitter Profile Photo

🟧 Cerebras Inference self-serve is finally here 🟧
– Pay by credit card starting at $10
– Run Qwen3 Coder, GPT OSS & more at 2,000+ TPS
– 20x the speed of GPU-based model providers
Go ahead. Melt our wafers. cloud.cerebras.ai

Cerebras (@cerebrassystems)'s Twitter Profile Photo

Pay-as-you-go is now available on Amazon Web Services Marketplace. Use your AWS account—no upfronts, no lock-ins—to serve frontier models 20x faster than leading GPUs.

Cerebras (@cerebrassystems)'s Twitter Profile Photo

Today, Cognition released SWE-1.5 – the world’s fastest coding agent, powered by Cerebras. SWE-1.5 achieves frontier-level coding ability, comparable to Sonnet 4.5 and surpassing GPT-5. Cerebras and Cognition engineers worked hand in hand over the past few weeks, training a …

Andrew Feldman (@andrewdfeldman)'s Twitter Profile Photo

Once again Cerebras is the fastest in the world in serving frontier open-weight models. Artificial Analysis shows GLM-4.6 on Cerebras at > 1400 tps/user. By way of comparison to other frontier models: running on Cerebras, GLM-4.6 is 17x faster than Claude Sonnet 4.5, 25% …

Andrew Feldman (@andrewdfeldman)'s Twitter Profile Photo

Today the US Government reached an agreement with UAE's AI National Champion and Cerebras strategic partner, G42, enabling the Emirati firm to deploy Cerebras solutions in the UAE. We’re grateful for the Administration’s decision, which allows us to scale U.S.-built …

Cerebras (@cerebrassystems)'s Twitter Profile Photo

GLM-4.7 from Z.ai is live on Cerebras!
- Frontier intelligence for coding, tool-driven agents, and multi-turn reasoning
- Record coding speed: ~1,000 tokens per second (up to 1,700 TPS for other uses)
- Strong price-performance: ~10x higher than Sonnet 4.5

Njdeh Satourian (@satourian)'s Twitter Profile Photo

Last month, I closed out a project I started in 2023: publishing a biography of my paternal grandmother, Zvart. The story traces her ancestors' journeys from Trabzon (Ottoman Empire) and Krasnodar (Russian Empire and USSR), then her life in Iran, France, and eventually Canada. To …

Njdeh Satourian (@satourian)'s Twitter Profile Photo

Extremely proud to be a part of this incredible team today and every day. “OpenAI is partnering with Cerebras to add 750MW of ultra low-latency AI compute to the OpenAI platform.”

Andrew Feldman (@andrewdfeldman)'s Twitter Profile Photo

OpenAI and @Cerebras have signed a multi-year agreement to deploy 750 megawatts of Cerebras wafer-scale systems to serve OpenAI customers. This has been a decade in the making. Deployment begins in early 2026, and when fully rolled out, it will be the largest high-speed AI …

Cerebras (@cerebrassystems)'s Twitter Profile Photo

Cerebras Systems today announced the closing of a $1 billion Series H financing at a post-money valuation of approximately $23 billion. The round was led by Tiger Global, with participation from Benchmark, Fidelity Management & Research Company, Atreides Management, Alpha Wave …

Andrew Feldman (@andrewdfeldman)'s Twitter Profile Photo

Just one month after announcing our partnership with OpenAI, we’re launching our first model together: OpenAI Codex-Spark, powered by Cerebras. Codex-Spark is built for real-time software development. In coding, responsiveness is the product. It is not a nice-to-have.
