Cudo (@cudoventures) 's Twitter Profile
Cudo

@cudoventures

Disrupting the cloud and sharing the wealth. We’re building the largest global distributed platform and we’re doing it ethically!

ID: 948621541964701698

Website: http://CudoVentures.com · Joined: 03-01-2018 18:26:14

243 Tweets

903 Followers

1.1K Following

CUDOS (@cudos_) 's Twitter Profile Photo

If you’re building AI on proprietary APIs, you’re renting your future. 🔒

ASI:Cloud gives builders:
• Open-source LLMs
• Transparent GPU pricing
• Wallet-based access
• Full data + infra control

Discover what it really means to build #AI on infrastructure you control. 🌐

CUDO Compute (@cudo_compute) 's Twitter Profile Photo

Production #AI only scales when land, power, and compute move together.

Next week we’ll be in Las Vegas for <a href="/RedHat/">Red Hat</a> One ecosystem discussions. Dean and Mark from our team are attending.

CUDO is built to move AI from pilots into power-backed production. #LifeAtRedHat
CUDOS (@cudos_) 's Twitter Profile Photo

If you’re building #AI apps, models aren’t the hard part. Infrastructure is.

Luke GnX breaks down:
⚡ How AI compute works
⚡ Why GPUs matter
⚡ Inference + scaling costs

Watch his presentation from our hackathon with Imperial College London to see what your AI really runs on.

CUDO Compute (@cudo_compute) 's Twitter Profile Photo

48V is not a trend. It is math. 4x lower current than 12V means far lower resistive losses.

At 100 kW racks, 48V makes high density practical and improves cost per token.

Read our breakdown on why power architecture is now a performance lever: cudocompute.com/blog/designing…
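The voltage claim above can be checked with a few lines of arithmetic: for a fixed power draw, bus current scales as 1/V, and conduction (I²R) losses therefore scale as 1/V². A minimal Python sketch, using an assumed 1 mΩ distribution-path resistance purely for illustration (not a CUDO figure):

```python
# For a fixed power draw, raising the bus voltage cuts current
# proportionally, and resistive (I^2 * R) losses fall with the
# square of the current. Resistance value below is illustrative.

def bus_current(power_w: float, voltage_v: float) -> float:
    """Current drawn at a given bus voltage for a fixed power draw."""
    return power_w / voltage_v

def resistive_loss(current_a: float, resistance_ohm: float) -> float:
    """I^2 * R conduction loss in the distribution path."""
    return current_a ** 2 * resistance_ohm

RACK_POWER_W = 100_000       # 100 kW rack, as in the post
PATH_RESISTANCE_OHM = 0.001  # assumed 1 milliohm path (illustrative)

i12 = bus_current(RACK_POWER_W, 12)  # ~8333 A at 12V
i48 = bus_current(RACK_POWER_W, 48)  # ~2083 A at 48V

loss12 = resistive_loss(i12, PATH_RESISTANCE_OHM)
loss48 = resistive_loss(i48, PATH_RESISTANCE_OHM)

print(i12 / i48)        # 4.0  -> 4x lower current at 48V
print(loss12 / loss48)  # 16.0 -> 16x lower conduction loss
```

The 4x current reduction is what the tweet cites; squaring it is why the loss reduction (16x) is "far lower" rather than merely proportional.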
CUDO Compute (@cudo_compute) 's Twitter Profile Photo

#AI at scale is an operations problem before it is a model problem.

In Las Vegas today, CUDO Compute is in <a href="/RedHat/">Red Hat</a> ecosystem discussions on power density, activation timelines, and control at scale.

Dean &amp; Mark from our team are attending. #LifeAtRedHat
CUDO Compute (@cudo_compute) 's Twitter Profile Photo

Production #AI breaks where operations are ignored.

At #RedHatOne Las Vegas, discussions are on power density, activation timelines, and day two control as clusters scale.

Dean &amp; Mark from CUDO Compute are in <a href="/RedHat/">Red Hat</a> ecosystem discussions on what keeps systems stable once live.
CUDO Compute (@cudo_compute) 's Twitter Profile Photo

At modern rack densities, air cooling hits a wall.
Heat flux outruns airflow.

Direct-to-chip liquid cooling removes most of the GPU's heat. Air handles the rest.

Read our complete analysis on why cooling now limits #AI infrastructure: cudocompute.com/blog/designing…
CUDO Compute (@cudo_compute) 's Twitter Profile Photo

Scale isn’t a cloud vs on-prem debate.

Some workloads belong on-prem. Others fit the cloud.
The real work is optimizing how scale happens.

As @cudopete puts it, you don’t “finish” scaling.
You engineer for it.
CUDO Compute (@cudo_compute) 's Twitter Profile Photo

Progress happens when engineers, operators, partners, &amp; platforms share what’s working and what’s breaking.

<a href="/RedHat/">Red Hat</a> One was a reminder that production #AI is a team effort.

Dean &amp; Mark were on the ground at #LifeAtRedHat in Vegas, sharing real operator insight on production AI.
CUDO Compute (@cudo_compute) 's Twitter Profile Photo

#AI-ready is not a label. It means power and cooling never become the bottleneck.

If infrastructure is visible to the workload through throttling or slowdowns, the facility is already interfering.

See what an AI-ready design really requires: cudocompute.com/blog/designing…
CUDO Compute (@cudo_compute) 's Twitter Profile Photo

In multi-year #AI deployments, hardware is often less than half of the total cost.
Power, cooling, &amp; operations dominate.

Facility efficiency improvements can save millions &amp; unlock more usable compute.

We share the numbers behind AI infrastructure TCO: cudocompute.com/blog/designing…
CUDO Compute (@cudo_compute) 's Twitter Profile Photo

Underestimating compute is one of the fastest ways to kill #AI adoption.

Models are becoming more sophisticated, and capacity gaps manifest as latency, queues, and poor user experience.

As <a href="/cudopete/">Pete Hill</a> has warned, you have to plan for more than you think you’ll need.
CUDO Compute (@cudo_compute) 's Twitter Profile Photo

We’re attending <a href="/CiscoLive/">CiscoLive</a> next week in Amsterdam.

As #AI infrastructure scales, the discussion gets concrete: power headroom, site constraints, fabric design, and readiness.

<a href="/CudoPete/">Pete Hill</a> &amp; <a href="/HawkinsTech/">Matt Hawkins</a> will be on the ground with teams planning real 2026 deployments at #CiscoLive.
CUDO Compute (@cudo_compute) 's Twitter Profile Photo

Failure in #AI training usually first shows up as underperformance. Small power or thermal drift becomes longer step times at scale.

Full-path telemetry and fast controls protect throughput before it erodes.

See how telemetry becomes a performance tool: cudocompute.com/blog/designing…
CUDO Compute (@cudo_compute) 's Twitter Profile Photo

We are here at <a href="/CiscoLive/">CiscoLive</a> in Amsterdam.

The focus today is performance consistency at scale: keeping throughput predictable once workloads hit production.

<a href="/CudoPete/">Pete Hill</a> &amp; <a href="/HawkinsTech/">Matt Hawkins</a> are on site at #CiscoLive throughout the week.
CUDO Compute (@cudo_compute) 's Twitter Profile Photo

Data center slowdowns usually begin with a small operational drift. Left unchecked, it compounds into real outages.

Reliability involves playbooks, spares, and teams trained to execute under pressure.

We share why operations decide outcomes: cudocompute.com/blog/designing…
CUDO Compute (@cudo_compute) 's Twitter Profile Photo

Regulation will always lag behind innovation.

But designing for compliance can’t wait until later.

As <a href="/cudopete/">Pete Hill</a> points out, teams that plan for compliance early avoid costly rework when rules catch up. 

Build for it from day one.
CUDO Compute (@cudo_compute) 's Twitter Profile Photo

Data centers built a decade ago were never designed for modern #AI workloads.

Facilities engineered for 5-10 kW racks are now being pushed to sustained densities of 30-100 kW.

This breaks old assumptions fast. 
Read our thread on infrastructure bottlenecks below.
CUDO Compute (@cudo_compute) 's Twitter Profile Photo

Our cofounders <a href="/HawkinsTech/">Matt Hawkins</a> and <a href="/CudoPete/">Pete Hill</a> had an eventful week at <a href="/CiscoLive/">CiscoLive</a> EMEA in Amsterdam.

Big takeaway: #AI wins need infrastructure that performs within real power and deployment limits.

Thanks to everyone who connected with us and shared industry insights.
CUDO Compute (@cudo_compute) 's Twitter Profile Photo

Transformer lead times moving to 3–4 years forces a new question: which infrastructure decisions are reversible vs locked in? 

That’s the real risk model for #AI buildouts.

Read our full capacity planning framework: cudocompute.com/blog/ai-data-c…