Awni Hannun (@awnihannun)'s Twitter Profile
Awni Hannun

@awnihannun

Machine Learning Research @Apple

ID: 245262377

Website: https://awnihannun.com/ · Joined: 31-01-2011 08:05:27

1.7K Tweets

16.2K Followers

245 Following

Awni Hannun (@awnihannun):

CTC (Connectionist Temporal Classification) loss is in MLX!

Runs on both the CPU and GPU.

pip install mlx-ctc

Extension code (h/t djphoenix): github.com/djphoenix/mlx-…

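For the curious, here is a minimal usage sketch in Python. The module and function names (mlx_ctc, ctc_loss) and the signature are assumptions based on the usual CTC convention; check the linked repo for the real API.

import mlx.core as mx
import mlx_ctc  # assumed import name; see the linked repo for the real one

# Toy inputs: 50 time steps, batch of 2, 28-class alphabet (log-probabilities).
logits = mx.random.normal(shape=(50, 2, 28))
log_probs = logits - mx.logsumexp(logits, axis=-1, keepdims=True)

targets = mx.array([[1, 2, 3], [4, 5, 0]])  # padded label sequences
input_lengths = mx.array([50, 50])
target_lengths = mx.array([3, 2])

# Assumed signature, mirroring the common CTC-loss convention
loss = mlx_ctc.ctc_loss(log_probs, targets, input_lengths, target_lengths)
print(loss)
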
Awni Hannun (@awnihannun):

Latest MLX Swift supports full NumPy-style array indexing (h/t github.com/davidkoski).

Docs: swiftpackageindex.com/ml-explore/mlx…

Quickly becoming a great option for any kind of numerical computing in Swift.

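To give a sense of what "full NumPy style" covers, here is a sketch using the Python mlx API, which the Swift bindings are presumably mirroring:

import mlx.core as mx

a = mx.arange(24).reshape(2, 3, 4)

print(a[1])                 # integer index drops the leading axis
print(a[:, ::2])            # strided slice
print(a[..., -1])           # ellipsis plus negative index
print(a[:, :, None].shape)  # None inserts a new axis

idx = mx.array([0, 2])
print(a[0, idx])            # integer-array ("fancy") indexing
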
Vaibhav (VB) Srivastav (@reach_vb):

On to 3000 models by the end of this quarter ;)

Seriously though, what an inspiration you, the team, and the MLX community are for the broader open-source ecosystem.

Awni Hannun (@awnihannun):

Wow, 1000 members in the 🤗 MLX Community.

And 300 pre-quantized / converted models. More added every day.

Check it out: huggingface.co/mlx-community

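Those community models come out of mlx_lm's converter. A sketch of producing a 4-bit model yourself (the Hugging Face repo name is illustrative):

from mlx_lm import convert

# Download the checkpoint, convert it to MLX format, and quantize to 4-bit.
# The result is written to ./mlx_model by default.
convert("mistralai/Mistral-7B-Instruct-v0.2", quantize=True)
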
Prince Canuma (@Prince_Canuma):

Mixtral 8x22B now on MLX🚀

You can run inference locally on your Mac (96GB+ unified memory).

> pip install -U mlx_lm

Model info:

🧠 141B total params (39B active)
🪟 65K context window
🕵🏾‍♂️ 8 experts, 2 per token
🤓 32K vocab size
✂️ Similar tokenizer to the 7B

Model card 👇🏾

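A sketch of running it with mlx_lm's Python API. The quantized repo name is illustrative; the full-precision weights won't fit in 96GB, so pick a 4-bit conversion:

from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mixtral-8x22B-4bit")  # illustrative repo name
text = generate(
    model,
    tokenizer,
    prompt="Explain mixture-of-experts routing in two sentences.",
    max_tokens=100,
)
print(text)
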
Prince Canuma (@Prince_Canuma):

LangChain + MLX integration is OUT 🎉

You can now use all of LangChain's features with MLX.

Thank you to Awni Hannun (MLX), Jacob Lee, Erick Friis, Harrison Chase, and the entire LangChain team for their hard work and for helping me contribute. 👏🏾

github.com/langchain-ai/l…
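A sketch of what the integration looks like, assuming the MLXPipeline wrapper in langchain_community (the model name is illustrative):

from langchain_community.llms.mlx_pipeline import MLXPipeline

# Load an MLX-community model through LangChain's wrapper,
# then call it like any other LangChain LLM.
llm = MLXPipeline.from_model_id(
    "mlx-community/quantized-gemma-2b-it",
    pipeline_kwargs={"max_tokens": 100},
)
print(llm.invoke("What is MLX?"))
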

Prince Canuma (@Prince_Canuma):

New Stable LM 2 models (1.6B & 12B) now on MLX 🚀

You can run inference and (Q)LoRA fine-tuning locally on your Mac.

> pip install -U mlx_lm

I’m getting 38-43 tokens/s for the 1.6B version on my M1 Air ⚡️

Model cards 👇🏾
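Fine-tuning runs through mlx_lm's LoRA entry point. A sketch of a local (Q)LoRA run, assuming a quantized community model and a data directory containing train.jsonl and valid.jsonl (both names illustrative):

> python -m mlx_lm.lora --model mlx-community/stablelm-2-1_6b-4bit --train --data ./data --iters 600
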
