Blake Mizerany (@bmizerany)'s Twitter Profile
Blake Mizerany

@bmizerany

Engineer @ollama.

Previously Songbird, early @heroku, early @CoreOS, founder of Backplane (not Lady Gaga’s), @grax, founder @tierrun

ID: 2379441

Joined: 27-03-2007 00:38:07

4.4K Tweets

3.3K Followers

671 Following

Alex Reibman 🖇️ (@alexreibman)'s Twitter Profile Photo

Never underestimate the open source AI community. These cracked engineers are here to break the limits of what’s possible with local LLMs. We just witnessed some nutty inventions. Here’s what we saw at the ollama Open Source and Local AI meetup at Cerebral Valley (🧵):

Paul Copplestone — e/postgres (@kiwicopple)'s Twitter Profile Photo

Today we're adding native AI support in Supabase Edge Functions:

◆ Embedding models
◆ Large language models (powered by ollama)

We've removed the cold boot by placing the models inside the edge runtime, and we're rolling out a GPU-powered sidecar. See it in action:

ollama (@ollama)'s Twitter Profile Photo

.Mistral AI's Mixtral 8x22B Instruct is now available on Ollama!

ollama run mixtral:8x22b

We've updated the tags so the instruct model is the default. If you have pulled the base model, please update it by running `ollama pull`.

ollama (@ollama)'s Twitter Profile Photo

.Meta Llama 3 - The most capable openly available LLM to date!

ollama run llama3

ollama.com/library/llama3

If you pulled the Llama 3 model prior to this post, please update it using `ollama pull`.

Shital Shah (@sytelus)'s Twitter Profile Photo

You can also try out phi-3 quickly using ollama, running 100% locally on your machine even if you don't have a GPU 😎. Have fun!

ollama (@ollama)'s Twitter Profile Photo

We need your help to test a new backend performance improvement!

1. Pull a model you don't have yet (or remove it first). Examples:

ollama pull issue1736.ollama.dev/library/llama3…
ollama pull issue1736.ollama.dev/library/gemma:…
ollama pull issue1736.ollama.dev/library/mistral
ollama pull issue1736.ollama.dev/library/llava-…

Jessie Frazelle (@jessfraz)'s Twitter Profile Photo

Anyone saying we will fail, that CAD is already as fast as it possibly ever could be, that it will never work, or that you'll need decades, only gives me one of those mushroom power-ups and makes me stronger. I've spent my entire career proving people wrong. It's what makes it fun.

Brian Lopez (@brianmario)'s Twitter Profile Photo

In case anyone was wondering why I haven't been keeping up with my open source projects ;) winebusiness.com/news/people/ar…

ollama (@ollama)'s Twitter Profile Photo

Llama 3.2 is available on Ollama! It's lightweight and multimodal! It's so fast and good! 🥕

Try it:
1B: ollama run llama3.2:1b
3B: ollama run llama3.2

🕶️ Vision models are coming very soon!

ollama.com/library/llama3…

ollama (@ollama)'s Twitter Profile Photo

ollama run qwq

If you have previously downloaded the QwQ preview model, please update it directly via `ollama pull qwq`.

Thank you, Junyang Lin and Binyuan Hui. Let's go!