Andrey Cheptsov (@andrey_cheptsov)'s Twitter Profile
Andrey Cheptsov

@andrey_cheptsov

@dstackai. AI infra. Anti-k8s. Previously @JetBrains

ID: 13244412

Link: https://github.com/dstackai/dstack
Joined: 08-02-2008 12:13:50

8.8K Tweets

1.1K Followers

317 Following

Matthäus Krzykowski (@matthausk)'s Twitter Profile Photo

Andrey Cheptsov Constantin Dominique Paul With OSS LLMs - Europe’s on it. Llama was basically a (FAIR & HF) EU product. Qwen/DeepSeek-level work can happen here. I realise I sound like an old fart & I apologise - as an angel investor happy to back the teams building this. Am e.g. invested in Continue & Rasa.

<a href="/andrey_cheptsov/">Andrey Cheptsov</a> <a href="/consdi/">Constantin</a> <a href="/DominiqueCAPaul/">Dominique Paul</a> With OSS LLMs - Europe’s on it. Llama was basically an (FAIR &amp; HF) EU product. Qwen/DeepSeek-level work can happen here.
I realise I sound like an old fart &amp; I apologise - as an angel investor happy to back the teams building this. Am eg invested in Continue &amp; Rasa.
dstack (@dstackai)'s Twitter Profile Photo

dstack 0.19.39 is out. 🚀 This release focuses on bug fixes and documentation improvements.

* Docs can now be built locally, making contributions easier.
* Added /llms.txt and /llms-full.txt to help LLMs generate accurate commands and configs.
* Added AGENTS.md to help agents…

dstack (@dstackai)'s Twitter Profile Photo

Early access: B300 (Blackwell Ultra) VMs are now available on Verda (formerly DataCrunch) via dstack.

* Available on both on-demand and spot
* Up to 8× GPUs per VM
* Pricing: $4.95/h on-demand, $1.24/h spot

If you need Blackwell nodes for training, eval or inference, you…
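
For a sense of how such capacity is requested with dstack, here is a minimal fleet-configuration sketch. The GPU spec, backend choice, and names below are assumptions for illustration, not details from the announcement:

```yaml
# fleet.dstack.yml — hypothetical sketch, values assumed
type: fleet
name: b300-fleet        # example name, not from the tweet
nodes: 1
resources:
  gpu: B300:8           # up to 8× GPUs per VM, per the announcement
spot_policy: auto       # both on-demand and spot are offered
```

Something like `dstack apply -f fleet.dstack.yml` would then submit the request; `spot_policy: auto` lets dstack choose between the $4.95/h on-demand and $1.24/h spot options mentioned above.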
dstack (@dstackai)'s Twitter Profile Photo

A new case study is out: Toffee AI breaks down how they use dstack to manage AI inference across neoclouds like Runpod and vast.ai. One control plane, consistent deployments, much less infra overhead. Read it here: research.toffee.ai/blog/how-we-us…

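For a concrete sense of the "one control plane" claim, a dstack service configuration can pin a deployment to several backends at once. The sketch below is illustrative only; the image, model, and resource values are assumed, not taken from the case study:

```yaml
# service.dstack.yml — hypothetical sketch, values assumed
type: service
name: llm-inference
image: vllm/vllm-openai:latest       # example inference image
commands:
  - vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000
port: 8000
resources:
  gpu: 24GB                          # any GPU with at least 24 GB memory
backends: [runpod, vastai]           # restrict placement to these neoclouds
```

The same `dstack apply` workflow then deploys to whichever listed backend has capacity, which is the consistent-deployment story the tweet describes.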
Andrey Cheptsov (@andrey_cheptsov)'s Twitter Profile Photo

New blog from Toffee AI on how they use dstack to run model inference across multiple GPU clouds like Runpod and vast.ai. We built dstack for GPU-native orchestration, and messy GPU availability and pricing are a big part of that. I love seeing AI startups use it to tame…

Shubham Agrawal (@musickeeda)'s Twitter Profile Photo

Full blog here 👇 aerlabs.tech/blogs/dstack-g… A big shoutout to Andrey Cheptsov, the driving force behind dstack, for building something genuinely thoughtful and developer-first. 🙌

Nebius (@nebiusai)'s Twitter Profile Photo

November at Nebius brought yet another multi-billion hyperscaler deal and further expansion of our data center footprint, new leading benchmark results and deeper platform ecosystem integrations.

Read the digest: nebius.com/blog/posts/dig…
dstack (@dstackai)'s Twitter Profile Photo

Congrats to Nebius on the $3B Meta deal — a major boost for the cloud AI ecosystem. Exciting to see capacity scaling this quickly for both labs and startups!

dstack (@dstackai)'s Twitter Profile Photo

Read how Toffee AI simplified their multi-cloud GPU stack with dstack and now ship LLM and image-gen inference across GPU clouds, all while reducing GPU spend by 2-3x. Read the full case study: dstack.ai/blog/toffee/

dstack (@dstackai)'s Twitter Profile Photo

Excited to see dstack featured in Verda (formerly DataCrunch)'s monthly digest! We're thrilled about the dstack-Verda integration and how it simplifies training and inference orchestration for AI builders.