
Unsloth AI
@unslothai
Open source LLM fine-tuning! 🦥 github.com/unslothai/unsl…
ID: 1730159888402395136
http://unsloth.ai 30-11-2023 09:40:46
305 Tweets
21.21K Followers
487 Following

We partnered with AI at Meta on a free notebook that turns your documents into high-quality synthetic datasets using Llama! Features: • Parses PDFs, websites, videos • Uses Llama to generate QA pairs + auto-filter data • Fine-tunes Llama on your dataset colab.research.google.com/github/unsloth…
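For context, the generate-then-filter step looks roughly like this. A minimal sketch assuming a Llama instruct model served through transformers; the model id, prompt wording, and the keep() filter are illustrative assumptions, not the notebook's actual code.

```python
# Sketch of the generate-then-filter loop the notebook automates.
# Model id, prompt, and filter rule are illustrative assumptions.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-3B-Instruct",  # any Llama instruct model works
)

def make_qa_pair(passage: str) -> str:
    """Ask Llama to turn one document passage into a QA pair."""
    messages = [{
        "role": "user",
        "content": f"Write one question and its answer using only this text:\n{passage}",
    }]
    out = generator(messages, max_new_tokens=256)
    return out[0]["generated_text"][-1]["content"]  # the assistant reply

def keep(qa: str) -> bool:
    """Crude auto-filter: drop empty or malformed generations."""
    return "?" in qa and len(qa.split()) > 8

passages = ["Sloths are the slowest mammals on Earth.", "..."]
dataset = [qa for p in passages if keep(qa := make_qa_pair(p))]
```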


You can now fine-tune Qwen3 (14B) for free with our notebook! Unsloth makes Qwen3 fine-tuning 2x faster with 70% less VRAM and 8x longer context lengths, with no accuracy loss. Guide: docs.unsloth.ai/basics/qwen3-h… GitHub: github.com/unslothai/unsl… Colab: colab.research.google.com/drive/1_ZJD6xq…
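The core of the notebook follows Unsloth's usual pattern: load the model in 4-bit, attach LoRA adapters, then hand it to trl's SFTTrainer. A minimal sketch; the hyperparameters and one-row dataset below are placeholder assumptions, see the linked Colab for the real settings.

```python
# Minimal Unsloth SFT sketch; hyperparameters and the toy dataset
# are placeholders, not the notebook's actual settings.
from unsloth import FastLanguageModel
from datasets import Dataset
from trl import SFTTrainer, SFTConfig

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-14B",
    max_seq_length=2048,
    load_in_4bit=True,   # 4-bit loading + LoRA is where the VRAM saving comes from
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                # LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset with one example in Qwen3's chat format.
dataset = Dataset.from_list([{"text": "<|im_start|>user\nHi!<|im_end|>\n"
                                      "<|im_start|>assistant\nHello!<|im_end|>"}])
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(per_device_train_batch_size=2, max_steps=60),
)
trainer.train()
```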



If you're running Qwen3 locally, this is worth a look. Unsloth AI's 30B quant scores 82.2% on MMLU-Pro (CS), same as Qwen3-32B, but runs 5× faster (~45 tok/s vs <10 tok/s). Source: Wolfram Ravenwolf
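If you want to try the same quant locally, here is a hypothetical llama-cpp-python sketch; the repo id and quant filename are assumptions, so check Unsloth's Hugging Face page for the actual GGUF names.

```python
# Hypothetical local run of the 30B quant with llama-cpp-python.
# repo_id and filename are assumptions; verify the real GGUF names.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="unsloth/Qwen3-30B-A3B-GGUF",  # assumed hub repo
    filename="*Q4_K_M.gguf",               # assumed quant level (glob pattern)
    n_ctx=8192,
    n_gpu_layers=-1,                       # offload all layers that fit on GPU
)
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize MMLU-Pro in one line."}],
)
print(reply["choices"][0]["message"]["content"])
```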



We're releasing a new advanced GRPO notebook for Qwen3. Learn about: • Fine-tuning Qwen3-Base to enable reasoning • Proximity scoring (closer answers = more reward) • Advanced GRPO templates • OpenR1 dataset • Pre-finetuning so GRPO can skip learning the format colab.research.google.com/github/unsloth…
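To make "proximity scoring" concrete: GRPO rewards are just Python functions that score each sampled completion. Below is a sketch in the style trl's GRPOTrainer expects; the number parsing and reward scale are illustrative assumptions, not the notebook's exact function.

```python
# Proximity-style reward sketch: exact answers get full reward, near
# misses get partial credit that decays with distance from the gold
# answer. Values are illustrative, not the notebook's exact scale.
import re

def extract_number(text: str):
    """Pull the first number out of a completion, if any."""
    m = re.search(r"-?\d+(?:\.\d+)?", text)
    return float(m.group()) if m else None

def proximity_reward(prompts, completions, answer, **kwargs):
    """Return one float per completion, as trl's GRPOTrainer expects.

    Assumes a chat-format dataset, where each completion is a list of
    message dicts and `answer` is a dataset column passed via kwargs.
    """
    rewards = []
    for completion, gold in zip(completions, answer):
        guess = extract_number(completion[0]["content"])
        if guess is None:
            rewards.append(-1.0)   # unparseable output is penalized
        elif guess == float(gold):
            rewards.append(3.0)    # exact match gets full reward
        else:
            rewards.append(1.5 / (1.0 + abs(guess - float(gold))))
    return rewards
```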


You can now fine-tune TTS models with Unsloth! Train, run and save models like Sesame-CSM and OpenAI's Whisper locally with our free notebooks. Unsloth makes TTS training 1.5x faster with 50% less VRAM. GitHub: github.com/unslothai/unsl… Docs & Notebooks: docs.unsloth.ai/basics/text-to…
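Loading a speech model for training follows Unsloth's general FastModel pattern. A rough sketch only: the model id and arguments below are assumptions, and the linked docs and notebooks have the exact setup.

```python
# Rough TTS fine-tuning setup following Unsloth's FastModel pattern.
# The model id and arguments are assumptions; see the linked docs.
from unsloth import FastModel

model, processor = FastModel.from_pretrained(
    model_name="unsloth/csm-1b",  # assumed hub id for Sesame-CSM (1B)
    max_seq_length=2048,
    load_in_4bit=False,           # small model; full precision is affordable
)
model = FastModel.get_peft_model(model, r=16, lora_alpha=16)
```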


You can now finetune Sesame-CSM (1B) for free with our notebook! Clone voices, learn new emotions, tones & styles. Unsloth makes TTS training 1.5x faster with 50% less VRAM and no accuracy loss. GitHub: github.com/unslothai/unsl… Guide: docs.unsloth.ai/basics/text-to… Colab: colab.research.google.com/github/unsloth…




Finetune DeepSeek-R1-0528-Qwen3 with GRPO using our free notebook! Our new reward function increases multilingual (or custom domain) response rates by 40%+. Unsloth makes R1 finetuning 2× faster with 70% less VRAM. GitHub: github.com/unslothai/unsl… Colab: colab.research.google.com/github/unsloth…
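The idea behind a multilingual reward can be sketched as a language-consistency check: reward completions written in the requested language. The use of langdetect and the score values below are illustrative assumptions, not the notebook's actual reward function.

```python
# Language-consistency reward sketch in the GRPOTrainer style.
# langdetect and the score values are illustrative assumptions.
from langdetect import detect
from langdetect.lang_detect_exception import LangDetectException

def language_reward(prompts, completions, target_lang, **kwargs):
    """Score each completion: positive if it is in the requested language.

    Assumes chat-format completions and a `target_lang` dataset column
    (e.g. "fr", "de") passed through by the trainer via kwargs.
    """
    rewards = []
    for completion, lang in zip(completions, target_lang):
        text = completion[0]["content"]
        try:
            rewards.append(2.0 if detect(text) == lang else -1.0)
        except LangDetectException:  # empty or undetectable text
            rewards.append(-1.0)
    return rewards
```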
