Qwen (@alibaba_qwen)'s Twitter Profile
Qwen

@alibaba_qwen

Open foundation models for AGI.

ID: 1753339277386342400

Link: https://qwen.ai/ | Joined: 02-02-2024 08:47:32

218 Tweets

74.74K Followers

4 Following

All Hands AI (@allhands_ai):

It's great to see that Qwen3 works out-of-the-box with OpenHands! We've heard from community members that Qwen3-30B-A3B also works quite well, and achieves reasonable speed (50-60 tokens/s) even on a Mac M1 processor.
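
For anyone who wants to sanity-check that tokens-per-second figure on their own machine, here is a minimal sketch using the openai Python client against a local OpenAI-compatible endpoint (for example, one exposed by Ollama or LM Studio). The base URL, port, and model tag below are assumptions that depend on how you serve Qwen3-30B-A3B locally.

```python
# Rough local throughput check against an OpenAI-compatible server.
# Assumptions: a local server (e.g. started by Ollama or LM Studio) is listening
# on http://localhost:11434/v1 and serves Qwen3-30B-A3B under the tag below.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

start = time.time()
resp = client.chat.completions.create(
    model="qwen3:30b-a3b",  # hypothetical local tag; adjust to whatever you pulled
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
elapsed = time.time() - start

print(resp.choices[0].message.content)
tokens = resp.usage.completion_tokens
print(f"{tokens} generated tokens in {elapsed:.1f}s -> {tokens / elapsed:.1f} tokens/s")
```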

Together AI (@togethercompute):

We’re thrilled to announce the launch of Qwen 3 on Together AI.

Qwen3 235B A22B, a state of the art hybrid reasoning model, is now available on the Together API. It excels in tool calling, coding, multi-lingual tasks, math, and general tasks.

Link & details below!
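
The Together API is OpenAI-compatible, so a call to the hosted model can look roughly like the sketch below. The exact model identifier is an assumption and should be checked against Together's model catalog.

```python
# Minimal sketch of calling Qwen3-235B-A22B through the Together API.
# Assumption: the model id below matches Together's catalog entry; check their
# model list and set TOGETHER_API_KEY in your environment first.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",
    api_key=os.environ["TOGETHER_API_KEY"],
)

resp = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B-fp8-tput",  # assumed identifier; verify on Together
    messages=[
        {"role": "user", "content": "Summarize the difference between AWQ and GPTQ quantization."}
    ],
)
print(resp.choices[0].message.content)
```
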
Qwen (@alibaba_qwen):

We’re excited to announce the release of Qwen2.5-Omni-3B, enabling developers with lightweight GPU accessibility!

🔹 Compared to Qwen2.5-Omni-7B model, the 3B version achieves a remarkable 50%+ reduction 🚀 in VRAM consumption during long-context sequence processing (~25k
Qwen (@alibaba_qwen):

We will be releasing quantized models of Qwen3 over the coming days. Today we release the AWQ and GGUF versions of Qwen3-14B and Qwen3-32B, which enable running the models with limited GPU memory.

Qwen3-32B-AWQ: huggingface.co/Qwen/Qwen3-32B…
Qwen3-32B-GGUF: huggingface.co/Qwen/Qwen3-32B…
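
As a sketch of what "limited GPU memory" deployment can look like, the AWQ checkpoint can be loaded offline with vLLM roughly as follows. The links in the post are truncated, so the full repository id below assumes the standard Qwen/Qwen3-32B-AWQ naming.

```python
# Sketch: serving the 4-bit AWQ checkpoint offline with vLLM.
# Assumption: the full repo id is Qwen/Qwen3-32B-AWQ (the post's link is truncated),
# and your GPU has enough memory for the quantized 32B weights plus KV cache.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen3-32B-AWQ")  # vLLM picks up the AWQ quantization from the model config
params = SamplingParams(temperature=0.6, max_tokens=256)

outputs = llm.generate(["Explain what AWQ quantization does in two sentences."], params)
print(outputs[0].outputs[0].text)
```
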
Qwen (@alibaba_qwen):

We’re officially releasing the quantized models of Qwen3 today!

Now you can deploy Qwen3 via Ollama, LM Studio, SGLang, and vLLM — choose from multiple formats including GGUF, AWQ, and GPTQ for easy local deployment.

Find all models in the Qwen3 collection on Hugging Face and
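
For the Ollama path specifically, a minimal sketch with the ollama Python package might look like this; the model tag is an assumption based on Ollama's usual naming, so check what you actually pulled.

```python
# Sketch: chatting with a locally pulled quantized Qwen3 via the ollama Python package.
# Assumptions: Ollama is installed and running, and a Qwen3 build has been pulled
# under the tag below (e.g. with `ollama pull qwen3:14b`); your tag may differ.
import ollama

response = ollama.chat(
    model="qwen3:14b",  # assumed tag; check `ollama list` for what is installed locally
    messages=[{"role": "user", "content": "Give one-line descriptions of GGUF, AWQ, and GPTQ."}],
)
print(response["message"]["content"])
```
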
Qwen (@alibaba_qwen):

After a few weeks of phased testing, Deep Research on Qwen Chat is now live and available for everyone! 🎉

Here's how to use it: Just ask something you're curious about — like "Tell me something about robotics." Qwen will then ask you to narrow it down — maybe history, theory,

Binyuan Hui (@huybery):

Parameter and inference-time scaling have already demonstrated that more compute brings more intelligence. 

🤔 But is there a new way to scale compute? The answer might be yes!

We propose Parallel Scaling—increasing parallel computation during training and inference. As an
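
The post is cut off, but the stated idea is to scale compute by running several slightly different copies of the same forward pass in parallel and learning to combine them. The snippet below is only a conceptual illustration of that idea in PyTorch, not the authors' implementation: the learnable per-stream input offsets, the shared backbone, and the learned aggregation weights are all assumptions made for exposition.

```python
# Conceptual sketch of parallel scaling: P learnable input transformations feed one
# shared backbone in parallel, and the P outputs are combined with learned weights.
# This is an illustrative toy, not the paper's implementation.
import torch
import torch.nn as nn


class ParallelScaled(nn.Module):
    def __init__(self, backbone: nn.Module, d_model: int, num_streams: int = 4):
        super().__init__()
        self.backbone = backbone                       # weights shared across streams
        self.num_streams = num_streams
        # One learnable additive offset per stream so the P passes differ.
        self.stream_offsets = nn.Parameter(torch.zeros(num_streams, d_model))
        # Learned logits for aggregating the P parallel outputs.
        self.aggregate_logits = nn.Parameter(torch.zeros(num_streams))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model). Replicate across streams: (P, batch, seq, d_model).
        xs = x.unsqueeze(0) + self.stream_offsets[:, None, None, :]
        # Fold streams into the batch dimension so the backbone runs them in parallel.
        p, b, s, d = xs.shape
        ys = self.backbone(xs.reshape(p * b, s, d)).reshape(p, b, s, d)
        # Weighted average over streams with learned weights.
        w = torch.softmax(self.aggregate_logits, dim=0)
        return torch.einsum("p,pbsd->bsd", w, ys)


# Tiny usage example with a stand-in backbone.
backbone = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
model = ParallelScaled(backbone, d_model=64, num_streams=4)
out = model(torch.randn(2, 10, 64))
print(out.shape)  # torch.Size([2, 10, 64])
```
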
Qwen (@alibaba_qwen):

We’re releasing the quantized models of Qwen2.5-Omni-7B today!

Find all models in the Qwen2.5-Omni collection on Hugging Face and ModelScope.
Hugging Face: huggingface.co/collections/Qw…
ModelScope: modelscope.cn/collections/Qw…

Enjoy!
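
To fetch one of the quantized checkpoints programmatically, a sketch with huggingface_hub is below; the repository id is an assumption since the collection links above are truncated, so verify it on the Hub first.

```python
# Sketch: downloading a quantized Qwen2.5-Omni-7B checkpoint from the Hub.
# Assumption: the repo id below follows Qwen's usual naming for GPTQ-Int4 releases;
# the collection links in the post are truncated, so check the exact id on Hugging Face.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("Qwen/Qwen2.5-Omni-7B-GPTQ-Int4")
print("Downloaded to:", local_dir)
```
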
Qwen (@alibaba_qwen):

Title: Modeling World Preference

Our research reveals that human preference modeling follows Scaling Laws, suggesting that diverse human preferences might share a unified representation. We propose "Modeling World Preference" to emphasize this potential for scalability. We

Qwen (@alibaba_qwen):

🚀 Qwen Web Dev just got even better!
✨ One prompt. One website. One click to deploy.
💡 Let your creativity shine — and share it with the world.
🔥 What will you build today?