@quantum_stat's Twitter Profile

@quantum_stat


Repos: index.quantumstat.com

ID: 970498980403712000

Joined: 05-03-2018 03:19:22

3.3K Tweets

1.1K Followers

138 Following

@quantum_stat


🚀✨ Run CodeGen on CPUs with this detailed Colab notebook! 📝

Explore how to sparsify and perform Large Language Model (LLM) inference using Neural Magic's stack, featuring Salesforce/codegen-350M-mono as an example.

Dive into these key steps:

1️⃣ **Installation**: Quickly set
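The sparsification step in the notebook is specific to Neural Magic's stack, but the core idea behind it, magnitude pruning, fits in a few lines of plain Python. This is a toy illustration only, not Neural Magic's actual algorithm; the `prune_magnitude` helper and the example weights are made up:

```python
def prune_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude weights until the requested
    fraction of entries is zero (toy unstructured magnitude pruning)."""
    n_prune = int(len(weights) * sparsity)
    # Sort indices by absolute value; the smallest entries get zeroed.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:n_prune]:
        pruned[i] = 0.0
    return pruned

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.002]
sparse_w = prune_magnitude(w, sparsity=0.5)
print(sparse_w)  # half of the entries are now exactly zero
```

Sparse weight tensors like this are what let a CPU runtime skip multiplications by zero, which is where the inference speedup comes from.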
@quantum_stat

🚀🚀 Hey, check out our blog on Hugging Face 🤗 regarding running LLMs on CPUs! The blog discusses how researchers at IST Austria & Neural Magic have cracked the code for fine-tuning large language models. The method, combining sparse fine-tuning and distillation-type losses,
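The distillation side of a method like this is commonly a KL divergence between the dense teacher's output distribution and the sparse student's. A minimal generic sketch of that loss term (not the exact formulation from the blog; the three-class logits are made up):

```python
import math

def softmax(logits):
    # Numerically stable softmax.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_kl(teacher_logits, student_logits):
    """KL(teacher || student): penalizes the sparse student for
    diverging from the dense teacher's output distribution."""
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Identical logits give zero divergence; mismatched logits give a positive loss.
print(distillation_kl([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # ~0.0
print(distillation_kl([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]))  # > 0
```

Minimizing this term during sparse fine-tuning pulls the pruned model's predictions back toward the dense model's.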

@quantum_stat


🚀🚀 Explore Sparsify's One-Shot Experiment Guide!

Discover how to quickly optimize your models with post-training algorithms for a 3-5x speedup. It's ideal when you need to sparsify a model on a tight schedule while still getting real inference speedups. 🔥

**FYI, this is what I used to
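One-Shot Experiments are driven from the Sparsify CLI. The invocation below is a sketch from memory, not a verified command: the flag names and values are illustrative and may differ across Sparsify versions, so check the guide before running it.

```shell
pip install sparsify

# One-Shot: post-training optimization, no retraining loop.
# (Flags are illustrative; verify against the One-Shot Experiment Guide.)
sparsify.run one-shot \
  --use-case text-classification \
  --model ./model.onnx \
  --data ./calibration-data \
  --optim-level 0.5
```

The appeal of the one-shot path is that it only needs a small calibration dataset and minutes of compute, rather than a full fine-tuning run.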
@quantum_stat

🌟First, want to thank everyone for pushing this model past 1,000 downloads in only a few days!! Additionally, I added bge-base models to MTEB. Most importantly, code snippets were added for running inference in the model cards for everyone to try out! huggingface.co/zeroshot/bge-s…
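The snippets in the model cards cover the bge-specific inference call; what you typically do with the resulting embeddings is compare them with cosine similarity. A generic pure-Python sketch of that comparison step (the short example vectors are made up; real bge-base embeddings are 768-dimensional):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 4-dim "embeddings" standing in for model outputs.
query_vec = [0.1, 0.3, 0.5, 0.2]
doc_vec = [0.1, 0.25, 0.55, 0.2]
print(cosine_similarity(query_vec, doc_vec))  # near 1.0 for similar texts
```

This score is what MTEB-style retrieval benchmarks ultimately rank documents by.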

@quantum_stat

Exciting News! 🚀 DeepSparse is now integrated with LangChain, opening up a world of possibilities in Generative AI on CPUs. LangChain, known for its innovative design paradigms for large language model (LLM) applications, was often constrained by expensive APIs or cumbersome
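At the time of this integration, LangChain exposed DeepSparse as an LLM wrapper. A sketch of how it was typically wired up; the import path has moved between LangChain versions, and the SparseZoo model stub below is a hypothetical placeholder, so substitute a real stub from Neural Magic's model zoo:

```python
# pip install deepsparse langchain
# Import path varies by LangChain version; this is a sketch, not a pinned API.
from langchain.llms import DeepSparse

# Hypothetical model stub; replace with a real SparseZoo stub or local model path.
llm = DeepSparse(model="zoo:example/sparse-llm-stub")
print(llm("def fibonacci(n):"))
```

The point of the integration is that the rest of a LangChain app (prompts, chains, agents) stays unchanged while inference runs locally on CPU instead of through a paid API.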

Raj KB (@rajkbnp)


I love the #ChatGPT Cheat Sheet by Ricky Costa (<a href="/Quantum_Stat/">@quantum_stat</a>)

which includes
🔹NLP Tasks
🔹Code
🔹Structured Output Styles
🔹Unstructured Output Styles
🔹Media Types
🔹Meta ChatGPT
🔹Expert Prompting

Get your hands on this amazing resource at: i.mtr.cool/ehyhxpfexx