Sesterce (@sestercegroup)'s Twitter Profile
Sesterce

@sestercegroup

We are shaping the future of AI with high-performance GPU clusters,
ranging from 100 to 15,000 GPUs, offering unparalleled scalability and efficiency.

ID: 1047135204178104320

Link: https://www.sesterce.com/ · Joined: 02-10-2018 14:44:22

1.1K Tweets

1.1K Followers

181 Following

Sesterce (@sestercegroup)'s Twitter Profile Photo

At the Teratec Forum, Kacem Lounissi, Head of AI and MLOps at Sesterce, joined the roundtable on "Infrastructures and Data for AI Factories". He shared Sesterce’s approach to building scalable, efficient, and sovereign AI platforms. Thanks to Emmanuel Duteil for a

Sesterce (@sestercegroup)'s Twitter Profile Photo

Just dropped: 8x B200 bare-metal nodes — now live in Europe.
🔹 1440 GB GPU VRAM
🔹 2304 GB system RAM
🔹 No virtualization. Full control. EU-hosted.

Built for:
✅ Training reasoning-first LLMs
✅ Scaling inference at speed
✅ Distributed AI workloads
→ Live now on Sesterce
Sesterce (@sestercegroup)'s Twitter Profile Photo

Just 1 week before #GTCParis — we’re unlocking something big:

Cluster LUNA is now live — 128× NVIDIA H200 GPUs, 3 TB RAM/node, 500 TB VAST Data storage.

Hosted in Paris. Sovereign. SLURM-ready. No egress fees.
🔹 $1.99/h — GTC Paris exclusive
🔹 Only 1 cluster available.
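For readers wondering what "SLURM-ready" looks like in practice, a minimal multi-node GPU job script might resemble the sketch below. The partition-free layout, resource counts, and `train.py` entry point are illustrative assumptions, not Sesterce-specific values:

```shell
#!/bin/bash
# Hypothetical SLURM batch script for a multi-GPU training job.
# Node/GPU counts and the training command are illustrative only.
#SBATCH --job-name=train-llm
#SBATCH --nodes=2               # two H200 nodes
#SBATCH --gpus-per-node=8       # all 8 GPUs on each node
#SBATCH --mem=0                 # request all memory on each node
#SBATCH --time=24:00:00

srun python train.py --config config.yaml
```

Submitted with `sbatch train.slurm`; SLURM then schedules the job across the requested nodes.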
Sesterce (@sestercegroup)'s Twitter Profile Photo

If you're building any of these, you know that time-to-compute is everything.

That's why we've opened access to GH200 96GB GPUs at just $4.65/hour — fully on-demand.

No lock-in. No waiting. Just direct access to one of the most powerful AI chips out there — with 96GB of unified
Sesterce (@sestercegroup)'s Twitter Profile Photo

Got a bold idea for AI?

In 5 days at #GTC25, we’re kicking off the #LUNAChallenge — your shot to win a share of $20,000 in AI prizes.

No code needed. Just vision.
Stay tuned.

#PrivateAI #H200 #GTC25 #Sesterce
Sesterce (@sestercegroup)'s Twitter Profile Photo

🇫🇷 Today at #F5AppWorld Paris, our CEO Youssef El Manssouri joined a powerful roundtable on scaling secure & sovereign AI infrastructure.
🔹 With Ahmed Guetari (F5), Nathaniel Ives (NVIDIA)
🎙️ Moderated by Alix Leconte

Europe’s AI must run on trusted, high-performance foundations.
#Sesterce
Sesterce (@sestercegroup)'s Twitter Profile Photo

🚨 New Journal du Net benchmark: Sesterce ranks #1 in AI cloud pricing.

Lowest public prices on the market:
🔹 H100: $1.79/hr
🔹 H200: $2.48/hr
Cheaper than AWS, GCP, CoreWeave.

🔹 Sovereign infra
🔹 PUE 1.1 datacenters
🔹 Scale from 1 to 15,000 GPUs
➡️ journaldunet.com/cloud/1542459-…

Sesterce (@sestercegroup)'s Twitter Profile Photo

Choosing the right compute instance shouldn’t be a guessing game.

We broke it down so you can pick based on what you’re training, generating or deploying:
🔹 Training vs. inference
🔹 LLMs vs. vision
🔹 H100 vs. GB200

➡️ Tutorial: bit.ly/3TJUhQ5

What tutorial should we
Sesterce (@sestercegroup)'s Twitter Profile Photo

“France has 5 GW of unused energy — let’s build sovereign AI datacenters!”

At Sesterce, we believe France can be Europe’s AI powerhouse. By 2030, AI will need 100 GW — France must act now.

🎥 Watch Youssef El Manssouri on why France could lead Europe’s AI future: bit.ly/46bmqr1
#AI

Sesterce (@sestercegroup)'s Twitter Profile Photo

Want to serve a Hugging Face model like an API?

We wrote a step-by-step guide to do it with vLLM—fast, efficient, and OpenAI-compatible.

🔹 Works great on Sesterce
🔹 Easy to deploy
🔹 Ready for production

➡️ Full tutorial → bit.ly/45gHk6Y
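As a taste of what the tutorial covers: once `vllm serve <model>` is running, the server speaks the OpenAI chat-completions protocol. The sketch below builds such a request with the standard library only — the base URL, model name, and prompt are placeholders, not values from the tweet:

```python
# Sketch: build a request for a vLLM server's OpenAI-compatible
# /v1/chat/completions endpoint. Assumes `vllm serve <model>` is already
# running locally; URL, model name, and prompt are illustrative placeholders.
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Construct the HTTP POST request vLLM's OpenAI-compatible API expects."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("http://localhost:8000", "my-org/my-model", "Hello!")
# Sending it would be: urllib.request.urlopen(req) — omitted here, since it
# requires a live vLLM server.
```

Because the endpoint is OpenAI-compatible, the official `openai` client works against it too by pointing `base_url` at the server.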
Sesterce (@sestercegroup)'s Twitter Profile Photo

Already told a friend about Sesterce?
Might as well earn from it.

Every user gets a referral link.
Share it → earn 3% commission on every transaction your contacts make.

No catch. Just clean revenue for helping others scale GPU infra fast (and sovereign).

➡️
Sesterce (@sestercegroup)'s Twitter Profile Photo

Generic cloud slows research.

Competitive AI R&D needs custom GPU clusters:
✅ InfiniBand/NVLink
✅ Sovereign hosting
✅ Fleet observability

This is how AI innovation scales: bit.ly/41fxcc7
Sesterce (@sestercegroup)'s Twitter Profile Photo

Open models, without lock-in.

Deploy Pixtral, Llama 4, Qwen 3 on:
✅ Sovereign EU infra
✅ GPU-accelerated endpoints
✅ Full compliance, zero setup

Start building ➡️ bit.ly/4mdh7MV

#AIInference #SovereignAI #Llama4 #Qwen3 #Pixtral
Sesterce (@sestercegroup)'s Twitter Profile Photo

Deployed in France. Live in seconds.

No fine print, just sovereign GPU clusters:
✅ H200, H100, L40, L4
✅ Regions: Paris & Marseille
✅ Ubuntu-based envs
✅ No-commit pricing

Born in 🇫🇷. Built for global AI.
➡️ bit.ly/3H8ye30

#GPUCloud #SovereignAI #HPC #PrivateAI
Sesterce (@sestercegroup)'s Twitter Profile Photo

Infra should be fast. Not buried in tabs.

Sesterce Cloud is 100% CLI-native. No dashboards. No dropdowns. Just:
✅ GPU VMs & Bare Metal
✅ Region & type flags
✅ Script & CI/CD ready

🖥️ sesterce create --region=par4 --gpu=H200 --count=8

➡️ bit.ly/4lKSbLp
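The one-liner above extends naturally to scripting. A hedged sketch of driving such a CLI from a script — only `sesterce create --region=... --gpu=... --count=...` comes from the tweet; the variables and dry-run wrapper are illustrative assumptions:

```shell
#!/bin/bash
# Sketch: scripted provisioning with the sesterce CLI.
# The command and flags mirror the tweet; everything else is an assumption.
set -euo pipefail

REGION="par4"   # region code from the tweet (Paris)
GPU="H200"
COUNT=8

# Build the command as an array so flag values survive quoting intact.
CMD=(sesterce create --region="${REGION}" --gpu="${GPU}" --count="${COUNT}")

# Dry run: print the command instead of executing it, since running it
# would require an authenticated account.
echo "${CMD[@]}"
```

Swapping the `echo` for `"${CMD[@]}"` would execute the call, which is what makes the CLI usable from CI/CD pipelines.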

Sesterce (@sestercegroup)'s Twitter Profile Photo

Training gets the glory.

But inference runs the show.
Every chat, every recommendation, every AI task = inference.
Live. Continuous. Revenue-critical.

Still scaling like it's 2021?
You’ll hit latency walls fast.

Inference is your factory.
Architect like it.

#AIInfra
Sesterce (@sestercegroup)'s Twitter Profile Photo

What an amazing few days we had in San Francisco!

The Sesterce team was right in the thick of things at the PyTorch Summit (Oct 22-23), and let me tell you, the energy was absolutely buzzing. The big takeaway? The future of AI isn’t just about creating larger models; it’s about
Alex Zhang (@a1zhang)'s Twitter Profile Photo

The wait is over! We’re so excited to announce the GPU MODE x @NVIDIA kernel optimization competition for NVFP4 kernels on Blackwell B200s!

We will be awarding NVIDIA DGX Sparks & RTX 50XX-series GPUs for individual rankings on each problem, as well as a Dell Pro Max with