Stefano Ermon (@stefanoermon)'s Twitter Profile
Stefano Ermon

@stefanoermon

Associate Professor of #computerscience @Stanford #AI #ML

ID: 1145851147

Link: http://www.cs.stanford.edu/~ermon/ · Joined: 03-02-2013 18:16:08

429 Tweets

16.16K Followers

366 Following

Andrew Ng (@andrewyng)'s Twitter Profile Photo

Transformers have dominated LLM text generation, and generate tokens sequentially. This is a cool attempt to explore diffusion models as an alternative, by generating the entire text at the same time using a coarse-to-fine process. Congrats Stefano Ermon & team!

Pika (@pika_labs)'s Twitter Profile Photo

Pika 2.2 is HERE, with 10s generations, 1080p resolution, and Pikaframes: key frame transitions anywhere from 1-10s. More transformation, more imagination. Try it at Pika dot art

NVIDIA AI Developer (@nvidiaaidev)'s Twitter Profile Photo

Speed + efficiency = the future of AI ⚡️ Mercury Coder running on NVIDIA H100 GPUs can hit over 1000 output tokens/second. That's a 5x speed increase for high-quality responses at low cost. Congrats to Inception Labs and welcome to the #NVIDIAInception program 🎊

Cartesia (@cartesia_ai)'s Twitter Profile Photo

We've raised a $64M Series A led by Kleiner Perkins to build the platform for real-time voice AI. We'll use this funding to expand our team, and to build the next generation of models, infrastructure, and products for voice, starting with Sonic 2.0, available today. Link below

Aditya Grover (@adityagrover_)'s Twitter Profile Photo

Ultra-fast generation is now a cURL away! Check out our website for more details on the Inception API and its integrations across platforms, including Continue and OpenRouter.
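The "a cURL away" claim above amounts to a plain HTTP call. Here is a minimal, hypothetical Python sketch of building such a request: the endpoint path, model name, and OpenAI-style payload shape are illustrative assumptions, not details confirmed in the thread; consult the official API docs for the real values.

```python
import json
import urllib.request

# Placeholder endpoint and credentials -- assumptions for illustration only.
API_URL = "https://platform.inceptionlabs.ai/v1/chat/completions"
API_KEY = "YOUR_API_KEY"

def build_request(prompt: str) -> urllib.request.Request:
    """Build a chat-completion request (OpenAI-style body is an assumption)."""
    body = {
        # Hypothetical model identifier.
        "model": "mercury-coder-small",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

# Usage: construct the request; sending it (urllib.request.urlopen) needs a real key.
req = build_request("Write a hello-world in Python.")
print(req.get_full_url())
```

The same request is one `curl -d '<json body>'` invocation with the `Authorization` and `Content-Type` headers set.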

Stefano Ermon (@stefanoermon)'s Twitter Profile Photo

They’re here. 🔥 Inception’s diffusion LLMs — lightning fast, state-of-the-art, and now public. Go build the future → platform.inceptionlabs.ai #GenAI #dLLMs #diffusion

Volodymyr Kuleshov 🇺🇦 (@volokuleshov)'s Twitter Profile Photo

You can now run Mercury diffusion language models directly in Continue.dev. Lightning-fast chat and completions—powered by parallel diffusion inference—now inside VS Code. blog.continue.dev/a-shift-to-par…

Matt Shumer (@mattshumer_)'s Twitter Profile Photo

WOW. Mercury coder feels almost as fast as models on Groq, but it runs on consumer hardware. Just wait till the Groq Inc team gets this on their platform... you'll be able to generate nicely-sized codebases in seconds. It's going to be insane.

Cline (@cline)'s Twitter Profile Photo

Inception Labs Mercury Coder Small Beta is now available in Cline. It's the first commercial diffusion LLM (dLLM), offering a different approach to text generation. It rivals models like Claude 3.5 Haiku and GPT-4o Mini in code quality while running significantly faster. 🧵

🧵
Volodymyr Kuleshov 🇺🇦 (@volokuleshov)'s Twitter Profile Photo

Watch the Mercury diffusion LLM in action 🧑‍💻 In this video, we vibe code a ViT in PyTorch using VS Code + Continue. The model nearly instantaneously generates big blocks of code based on user instructions.

Stefano Ermon (@stefanoermon)'s Twitter Profile Photo

🚀 Beyond thrilled to join forces with Microsoft as a founding partner of #NLWeb! Our ultra-fast Mercury diffusion LLM is powering lightning-quick, natural conversations for every website. The future of web interaction just got a major speed boost ⚡️