DogeDesigner (@cb_doge)

'Optimus will be more valuable than everything else combined. Tesla's A.I. Inference efficiency is vastly better than any other company. There's no company even close to the efficiency of Tesla.'

- Elon Musk

Herbert Ong (@herbertong)

Tesla's humanoid bot Optimus is on track to transform factory operations with its enhanced finger sensors and efficient inference chips. It is set to be commercially available by the end of next year.

With plans for mass production of 5,000 bots, how will this impact production…

AlphaCall (@alphacallx)

The Layer-1 blockchain for AI

Nesa (@nesaorg)

💢 A FULLY END-TO-END ENCRYPTED AI INFERENCE NETWORK

Nesa is a lightweight layer-1 platform designed to address the critical need…

Identity V (@GameIdentityV)

Dear Visitors,
Spirit of the East, the Qilin steps forth! Truth & Inference 6th Anniversary gift box Qilin of the East pre-sale starts on April 28 at 10:30 AM (UTC+8)! Don’t miss out! NetEase Games Store (@neteasestore)
Here is the link: smartyoudao.com/collections/id…
#IdentityV #6thAnniversary

Intel (@intel)

Ready to jump to hyperspace with our accelerators? 🚀

Immerse yourself in the galaxy this May the 4th with the inference and efficiency of 3 AI accelerators.

Nosana (@nosana_ai)

AI Inference is becoming increasingly costly, escalating barriers to innovation.

Decentralizing GPU access helps reduce these expenses — making advanced AI development more accessible to all 🌐

ChawazZ🌙 (@Chawawanya)

have a sketchy surveyor/inference animation with wacky translation, my treat (i needed to get the song out of my head)

Rohan Paul (@rohanpaul_ai)

Run LLama 3 70B on a Single 4GB GPU - with airllm and layered inference 🔥

📌 And this is without using quantization, distillation, pruning or other model compression techniques.

layer-wise inference is essentially the 'divide and conquer' approach

📌 The reason large language…

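To make the idea concrete, here is a minimal, illustrative sketch of layer-wise ("divide and conquer") inference in plain PyTorch. It is not airllm's actual internals or API: the per-layer checkpoint paths and the load_layer() helper are hypothetical placeholders. The point is that only one decoder block is resident on the GPU at a time, so peak VRAM is bounded by a single layer plus activations rather than the full 70B parameter set.

import torch

NUM_LAYERS = 80  # Llama 3 70B has 80 decoder blocks
DEVICE = "cuda"

def load_layer(idx: int) -> torch.nn.Module:
    # Hypothetical helper: load a single decoder block's weights from disk.
    # A real library manages this splitting and loading for you; it is shown
    # here only to illustrate the technique.
    return torch.load(f"layers/block_{idx:02d}.pt", map_location="cpu")

@torch.no_grad()
def forward_layerwise(hidden_states: torch.Tensor) -> torch.Tensor:
    # Stream the model through the GPU one block at a time.
    for idx in range(NUM_LAYERS):
        block = load_layer(idx).to(DEVICE)        # only this layer occupies VRAM
        hidden_states = block(hidden_states.to(DEVICE))
        block.to("cpu")                           # evict it before loading the next
        del block
        torch.cuda.empty_cache()
    return hidden_states

In practice the library handles the layer splitting and loading; this loop only shows why a 4GB card can be enough.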

Prince Canuma (@Prince_Canuma)

LLaVA Llama-3 and Phi-3 now on MLX 🎉🚀

You can now run inference locally on your Mac.

pip install -U mlx-vlm

I’m getting ~50 tokens on an M3 Max.

Model cards 👇🏾
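
For orientation, a rough sketch of what local inference with mlx-vlm can look like in Python. The repo id below is a placeholder for one of the linked model cards, and the load()/generate() argument names are assumptions that may differ between mlx-vlm releases, so check the package README.

from mlx_vlm import load, generate

# Placeholder repo id: substitute one of the model cards linked above.
model, processor = load("mlx-community/<model-card-id>")

# Argument names are assumed and may vary by mlx-vlm version.
output = generate(
    model,
    processor,
    prompt="Describe this image.",
    image="example.jpg",
    max_tokens=100,
)
print(output)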
