@dMatrix (@dmatrix_ai)'s Twitter Profile

d-Matrix built a new energy-efficient way to run data centers, with fast AI inference, digital in-memory compute + ultra-high bandwidth throughput.

ID: 1496639770264584192

Link: https://www.dmatrix.ai/ · Joined: 24-02-2022 00:17:02

139 Tweets

440 Followers

177 Following

@dMatrix (@dmatrix_ai):

Delivering “ultra low-latency batched inference” meant rethinking accelerator design to achieve high memory bandwidth + high memory capacity in a no-compromise solution for the real world.

d-Matrix redesigned the datacenter for AI inference.
Some math: tinyurl.com/3f5attby
@dMatrix (@dmatrix_ai):

Standing room only at the UT Austin Hook 'Em House session on energy efficiency and AI with @dMatrix's Richard Ogawa. As we move into the age of reasoning and inference, AI's ever-growing energy demands in the data center call for a whole new architecture and technology.

@dMatrix (@dmatrix_ai):

d-Matrix Corsair's unique topology is critical for fast token generation speed. Four pairs of Corsair cards (i.e., 8 cards) can be connected with PCIe switches and scaled up to build an inference server that integrates easily into AI rack infrastructure. More: lnkd.in/g3sG5nVj

@dMatrix (@dmatrix_ai):

d-Matrix's Corsair is a groundbreaking AI inference platform designed for exceptional performance, efficiency, scalability, and ease of use - the world’s most efficient AI inference platform for datacenters. Take the 3-minute tour: lnkd.in/g-9-6R2t

Six Five Media (@thesixfivemedia):

The Six Five Pod | EP 255: From Intel to Innovation: Pat Gelsinger's New Ventures -- We're covering it ALL! 🚀 Patrick Moorhead & @DanielNewmanUV dive into Intel news, OpenAI's valuation, and chat with Pat Gelsinger about his new ventures post-Intel & the future of US chip…

@dMatrix (@dmatrix_ai):

🥂 Happy 10th, Playground Global!

Cheers to a fantastic AGM where we got to celebrate some of the most exciting innovations imaginable - and welcome friends, old & new.

We also took the stage to talk @dMatrix.

Check us out: d-matrix.ai
@dMatrix (@dmatrix_ai):

Thank you, Embedded.com, for a deep dive on d-Matrix's new architecture for Gen AI, which smashes through the memory wall with a unique chiplet-based design that innovates on memory-compute, making #AI #inference fast + efficient for the real world. embedded.com/breaking-the-m…

@dMatrix (@dmatrix_ai):

At d-Matrix we smashed the memory barrier with our new architecture for AI inference. We deliver ultra-low-latency batched inference at scale. Now #GenAI is commercially attainable for datacenters + enterprises near you! More: lnkd.in/gQ7S9aMA d-matrix.ai

@dMatrix (@dmatrix_ai):

Join @dMatrix CEO Sid Sheth this week as he steps through the infrastructure that will deliver on GenAI. If you are early in your #Enterprise #AI journey, this is the 'not to miss' room to get the answers.

Join us 5/2: lnkd.in/gtftce8y

#TiEcon2025 #TiESiliconValley
GigaIO (@giga_io):

Our SuperNODE platform, capable of supporting dozens of Corsair AI inference accelerators in a single node, delivers unprecedented scale and efficiency for next-generation AI inference workloads. Learn more: bit.ly/3YqISaM #NextGenAI #GenAI #AIworkloads @dMatrix

@dMatrix (@dmatrix_ai):

🌍 We are going global with Corsair!

See us at #COMPUTEX2025 Innovex, May 20-23 in Taipei. We'd love to share more about our innovative AI #Inference technology for Gen AI.

The new d-Matrix Corsair is ideal for modern #AI workloads. Learn how in Taiwan: d-matrix.ai

@dMatrix (@dmatrix_ai):

Delivering “ultra low-latency batched inference” meant rethinking AI accelerator design to achieve high memory bandwidth + high memory capacity in a no-compromise solution for the datacenter. More: lnkd.in/gsucDDzQ #inference #datacenter #energyefficient_AI

@dMatrix (@dmatrix_ai):

Excited that GPU MODE is hosting a live talk with d-Matrix Engineering on delivering low-latency batched inference with d-Matrix's novel architecture.

Gaurav Jain - Kernels
Akhil Arunkumar - Inference Engine
Satyam Srivastava - Architecture

Save your spot: lnkd.in/gSpK4Kvi

Napatech (@napatech):

NEWS: Napatech disclosed a design win with d-Matrix, the creator of Corsair™, the world’s most efficient Artificial Intelligence (AI) computing platform used for inferencing in datacenters. Read more here: napatech.com/media/press-re… #AI #SmartNIC #Datacenters

@dMatrix (@dmatrix_ai):

d-Matrix's Corsair is a groundbreaking AI inference platform designed for exceptional performance, efficiency, scalability, and ease of use - the world’s most efficient AI inference platform for datacenters. The 3-minute tour: lnkd.in/g-9-6R2t #Inference #AI #datacenter

@dMatrix (@dmatrix_ai):

See us @ Toronto ML Conf as Xin Wang steps through decades-held theory + practice of network quantization against new-era LLMs. Scaling laws reliably predict models, yet high uncertainty lingers after post-training quantization at inference-time deployment.

lnkd.in/dW8q7DrU

@dMatrix (@dmatrix_ai):

📣 d-Matrix to the rescue. While the #GPU has served us admirably, its capabilities fall short of what the future demands. Computing has grown exponentially over the years while memory technology has stalled, and that disparity keeps widening. Here is what's next: d-matrix.ai