Rohan Paul (@rohanpaul_ai)'s Twitter Profile
Rohan Paul

@rohanpaul_ai

I Build & Write AI stuff.

→ Join my LLM Newsletter - rohanpaul.substack.com

💼 AI Engineer

ID: 2588345408

Link: https://linktr.ee/rohanpaul · Joined: 25-06-2014 22:38:54

17.17K Tweets

30.30K Followers

374 Following

Rohan Paul (@rohanpaul_ai):

Transformer and the Human Brain - Andrej Karpathy. Video credit: original video from the "No Priors: AI, Machine Learning, Tech, & Startups" YouTube channel (link in comment).

Rohan Paul (@rohanpaul_ai):


LinkedIn's great Liger Kernel repo just released an updated version.

- New integrations: SFTTrainer, Axolotl, LLaMA-Factory
- New model support: Phi3 & Qwen2
- AutoModel API: meet AutoLigerKernelForCausalLM (see the sketch below)
- Enhanced FusedLinearCrossEntropy: now supports a bias term
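
For reference, a minimal sketch of what the AutoModel API looks like in use, based on the AutoLigerKernelForCausalLM class named above. The checkpoint name, dtype, and generation settings here are illustrative assumptions, not details from the release notes.

```python
# Sketch: loading a supported model through Liger Kernel's AutoModel API.
# Assumption: the checkpoint id and from_pretrained kwargs are illustrative.
import torch
from transformers import AutoTokenizer
from liger_kernel.transformers import AutoLigerKernelForCausalLM

model_id = "microsoft/Phi-3-mini-4k-instruct"  # any supported architecture, e.g. Phi3 or Qwen2

tokenizer = AutoTokenizer.from_pretrained(model_id)

# AutoLigerKernelForCausalLM mirrors transformers' AutoModelForCausalLM,
# patching in Liger's fused Triton kernels when the architecture is supported.
model = AutoLigerKernelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

inputs = tokenizer("Liger Kernel makes training", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```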
Rohan Paul (@rohanpaul_ai):

Andrej Karpathy on the importance of extremely small distilled models (even a 1B-parameter model should be good enough). Video credit: original video from the "No Priors: AI, Machine Learning, Tech, & Startups" YouTube channel (link in comment).

Rohan Paul (@rohanpaul_ai):


These 1.58-bit LLMs with llamafile are so interesting.

- Weights: -1, 0, +1 (like the Soviet Setun ternary computer) - see the sketch after this list
- 8 size options: 78MB to 1.3GB
- Strong benchmark performance on 4 CPUs
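
The {-1, 0, +1} weight format is ternary quantization in the style of BitNet b1.58. Below is a minimal sketch of absmean ternary quantization, assuming the published BitNet b1.58 formula; it is an illustration only, not the code these llamafile builds actually use.

```python
# Illustrative absmean ternary quantization (assumption: mirrors the
# BitNet b1.58 formula, not llamafile's own implementation).
import torch

def quantize_ternary(w: torch.Tensor, eps: float = 1e-5):
    """Map full-precision weights to {-1, 0, +1} plus a per-tensor scale."""
    scale = w.abs().mean().clamp(min=eps)   # absmean scale
    w_q = (w / scale).round().clamp(-1, 1)  # ternary weights
    return w_q, scale

w = torch.randn(4, 4)
w_q, scale = quantize_ternary(w)
print(w_q)                              # entries are -1.0, 0.0, or +1.0
print((w_q * scale - w).abs().mean())   # mean reconstruction error
```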

--------

llamafile is a new format introduced by Mozilla Ocho on Nov 20th 2023. It uses Cosmopolitan Libc to