Pierric Cistac (@pierrci)'s Twitter Profile
Pierric Cistac

@pierrci

🤗 Hub @huggingface

ID: 125798546

Link: http://hf.co · Joined: 23-03-2010 22:30:47

558 Tweets

1.1K Followers

992 Following

Simon Brandeis (@simonbrandeis):

An easy way to use open weights models without too much change in your workflow: use Hugging Face Inference Providers!

Compatible with the OpenAI client - with automatic provider selection 

📕 Docs: hf.co/docs/inference…
🔗 Snippet: ray.so/jLkS4WT
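
The snippet behind the ray.so link isn't reproduced here, but the pattern it describes is short. A minimal sketch, assuming the router endpoint documented at hf.co/docs/inference-providers; the model id is illustrative, not taken from the tweet:

```python
import os
from openai import OpenAI

# Point the standard OpenAI client at the Hugging Face Inference
# Providers router (endpoint per hf.co/docs/inference-providers).
client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],  # a Hugging Face access token
)

# Leaving the ":provider" suffix off the model id lets the router pick
# a provider automatically. The model id below is illustrative.
response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```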
Wauplin (@wauplin):

🚀 Introducing responses.js: a new open source project for building with Responses APIs, powered by Hugging Face Inference Providers!

responses.js is a lightweight Express.js server built on top of chat completion. It supports images, streaming, structured output, and tool calls.
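
As a rough sketch of what "built on top of chat completion" means in practice: since responses.js serves an OpenAI-compatible Responses API, the OpenAI client can target it directly. The port and model id below are assumptions, not details from the announcement:

```python
import os
from openai import OpenAI

# Assumes a responses.js instance running locally on port 3000.
client = OpenAI(
    base_url="http://localhost:3000/v1",
    api_key=os.environ["HF_TOKEN"],  # forwarded to Inference Providers
)

response = client.responses.create(
    model="Qwen/Qwen2.5-VL-7B-Instruct",  # illustrative model id
    input="Describe the Hugging Face Hub in one sentence.",
)
print(response.output_text)
```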
👋 Jan (@jandotai):

Jan v0.6.5 is out: SmolLM3-3B now runs locally

Highlights 💫
- Support for Hugging Face's SmolLM3-3B
- Fully responsive design across all screen sizes
- New layout for Model Providers

Update your Jan or download the latest.

Adrien Carreira (@xcid_):

Starting today you can run any of the 100K+ GGUFs on Hugging Face directly with Docker Run! 

All of it in a single line: docker model run hf.co/bartowski/Llam…

Excited to see how y'all will use it
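
The model URL in the tweet is truncated, so here is a hedged sketch of the same idea driven from Python; the GGUF repo reference and prompt are hypothetical and only illustrate the hf.co/<user>/<repo> shape:

```python
import subprocess

# Run a GGUF from the Hugging Face Hub via Docker Model Runner.
# The repo reference and prompt are hypothetical examples.
result = subprocess.run(
    [
        "docker", "model", "run",
        "hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF",
        "Say hello in one sentence.",
    ],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```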
clem 🤗 (@clementdelangue):

I'm notorious for turning down 99% of the hundreds of requests every month to join calls (because I hate calls!). The Hugging Face team saw an opportunity and bullied me into accepting to do a Zoom call with users who upgrade to pro. I only caved under one strict condition:
Wauplin (@wauplin):

Say hello to `hf`: a faster, friendlier Hugging Face CLI ✨

We are glad to announce a long-awaited quality-of-life improvement: the Hugging Face CLI has been officially renamed from huggingface-cli to hf!

So... why this change?
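
For anyone updating scripts, the visible change is the entry point name. A small sketch; the repo id is illustrative, and the assumption that this subcommand maps one-to-one across both spellings is mine, not the tweet's:

```python
import subprocess

# Old and new spellings of the same download command; the `hf` entry
# point replaces `huggingface-cli`.
old_cmd = ["huggingface-cli", "download", "bert-base-uncased"]
new_cmd = ["hf", "download", "bert-base-uncased"]

subprocess.run(new_cmd, check=True)
```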
Vaibhav (VB) Srivastav (@reach_vb):

Introducing Hugging Face Jobs - a *fully* managed way to run CPU and GPU jobs directly from your CLI or Python scripts ⚡

Leaving you room to try, experiment, and build without worrying about setting up and finding compute!

Starting a job is as simple as: hf jobs run
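
A minimal sketch of that one-liner invoked from Python; the image and command follow the pattern shown in the Jobs docs, and the GPU flavor name is an assumption:

```python
import subprocess

# Run a command in a Docker image on Hugging Face's managed compute.
subprocess.run(
    [
        "hf", "jobs", "run", "python:3.12",
        "python", "-c", "print('Hello from a Hugging Face Job!')",
    ],
    check=True,
)

# Requesting a GPU instead of the default CPU is a flag away, e.g.
# `hf jobs run --flavor a10g-small ...` (flavor name assumed).
```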

👋 Jan (@jandotai):

Jan v0.6.6 is out: Jan now runs fully on llama.cpp.
- Cortex is gone, local models now run on Georgi Gerganov's llama.cpp
- Toggle between llama.cpp builds
- Hugging Face added as a model provider
- Hub enhanced
- Images from MCPs render inline in chat

Update Jan or grab the latest.

👋 Jan (@jandotai):

Hugging Face 🤝 Jan

You can now use Hugging Face as a remote model provider in Jan. Go to Settings -> Model Providers -> add your Hugging Face API key. Then open a new chat and pick a model from Hugging Face.

Works with any model hosted on Hugging Face.

OpenAI Developers (@openaidevs):

Student credits for gpt-oss

With Hugging Face, we’re offering 500 students $50 in inference credits to explore gpt-oss. We hope these open models can help unlock new opportunities in class projects, research, fine-tuning, and more: tally.so/r/mKKdXX

Roo Code (@roo_code):

ICYMI: Roo Code now integrates with Hugging Face 🤗

Plug in your API key, explore 90+ models, and run them directly from your editor - no wrappers, no token copy-paste.

Try it now!

👋 Jan (@jandotai):

Introducing Jan-v1: a 4B model for web search, an open-source alternative to Perplexity Pro.

In our evals, Jan-v1 delivers 91% SimpleQA accuracy, slightly outperforming Perplexity Pro while running fully locally.

Use cases:
- Web search
- Deep Research

Built on the new version

célina (@hanouticelina):

Starting today, you can use Hugging Face Inference Providers directly in GitHub Copilot Chat in Visual Studio Code! 🔥

This means you can access frontier open-source LLMs like Qwen3-Coder, gpt-oss and GLM-4.5 directly in VS Code, powered by our world-class inference partners -

clem 🤗 (@clementdelangue):

🎉 We just crossed 500,000 public datasets on HF 🎉
- there is a new dataset shared every 60 seconds
- most datasets are text, images & audio but there's an increasing number of video, 3D, time-series, biology, chemistry and robotics ones
- 80% are loadable in one line of code
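
The "one line of code" in that last bullet refers to the `datasets` library; a quick sketch with an illustrative dataset id, not one named in the tweet:

```python
from datasets import load_dataset

# Load a public dataset from the Hub in one line.
# The dataset id is an example.
ds = load_dataset("stanfordnlp/imdb", split="train")
print(ds[0])
```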
Scaleway (@scaleway):

🚀 Scaleway is now officially listed as an Inference Provider on Hugging Face!

This means developers can now select Scaleway directly on the Hugging Face Hub to run serverless inference requests, with full flexibility to use either Hugging Face routing or their own Scaleway API
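
A hedged sketch of selecting Scaleway explicitly through the Hugging Face client; the provider string follows Inference Providers naming and the model id is illustrative:

```python
import os
from huggingface_hub import InferenceClient

# Route requests through a specific provider via Hugging Face routing.
client = InferenceClient(
    provider="scaleway",  # assumed provider id, per Inference Providers naming
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat_completion(
    model="meta-llama/Llama-3.1-8B-Instruct",  # illustrative model id
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)
```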
clem 🤗 (@clementdelangue):

Xet by Hugging Face is the most important AI technology that nobody is talking about!

Under the hood, it now powers 5M Xet-enabled AI models & datasets on HF which see hundreds of terabytes of uploads and downloads every single day.

What makes it super powerful is that it
Xuan-Son Nguyen (@ngxson):

A long-awaited feature has dropped! You can now edit GGUF metadata directly from Hugging Face, without having to download the model locally 🔥

Huge kudos to Mishig Davaadorj for implementing this! ❤️
Vaibhav (VB) Srivastav (@reach_vb):

The Hugging Face Hub team is on a tear recently:

> You can create custom apps with domains on Spaces
> Edit GGUF metadata on the fly
> 100% of the Hub is powered by Xet - faster, more efficient
> Responses API support for ALL Inference Providers
> MCP-UI support for HF MCP Server
>