Ella Charlaix (@ellacharlaix)'s Twitter Profile
Ella Charlaix

@ellacharlaix

ML Eng @huggingface

ID:3385680719

Joined: 21-07-2015 10:20:40

14 Tweets

619 Followers

219 Following

Leo Tronchon (@LeoTronchon)

Today we release Idefics2, our newest 8B Vision-Language Model!
💪 With only 8B parameters, Idefics2 is one of the strongest open models out there
📋 We used multiple OCR datasets, including PDFA and IDL from Ross Wightman and Pablo Montalvo, and increased resolution up to 980x980 to improve…

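For reference, a minimal sketch of how a released Idefics2 checkpoint could be loaded with the Transformers API; the checkpoint id, chat-template usage, and generation settings below are assumptions rather than details from the tweet:

```python
# Hypothetical usage sketch, not taken from the tweet.
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "HuggingFaceM4/idefics2-8b"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, device_map="auto")

image = Image.open("example.jpg")  # placeholder image path
messages = [{"role": "user",
             "content": [{"type": "image"},
                         {"type": "text", "text": "Describe this image."}]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```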
Julien Simon (@julsimon)

Interested in generating images with Hugging Face models? On @Intel CPUs? In less than 5 seconds? Our new blog post shows you how to optimize diffusion models with Optimum Intel, OpenVINO, IPEX, and more 🚀 🚀🚀

huggingface.co/blog/stable-di…
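For context, a rough sketch of what image generation on an Intel CPU with Optimum Intel and OpenVINO can look like; the checkpoint id and prompt are assumptions, and `pip install optimum[openvino]` is assumed:

```python
from optimum.intel import OVStableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"  # example checkpoint, swap in your own
# export=True converts the PyTorch weights to OpenVINO IR on the fly
pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True)
image = pipeline("sailing ship in a storm by Rembrandt").images[0]
image.save("ship.png")
```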

Ella Charlaix (@ellacharlaix)

Tired of your slow Stable Diffusion model? 🎨

Go up to 1.6x faster by statically reshaping your model with OpenVINO ⚡️

➡️ Give it a try huggingface.co/docs/optimum/i…

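A minimal sketch of the static-reshape trick with Optimum Intel, assuming a Stable Diffusion checkpoint and 512x512 outputs; actual speedups depend on your hardware:

```python
from optimum.intel import OVStableDiffusionPipeline

pipeline = OVStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", export=True  # example checkpoint
)
# Fix the input shapes so OpenVINO can compile a specialized static graph;
# all later calls must use the same batch size and resolution.
pipeline.reshape(batch_size=1, height=512, width=512, num_images_per_prompt=1)
pipeline.compile()
image = pipeline("a cozy cabin in the woods", height=512, width=512).images[0]
```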
Ella Charlaix (@ellacharlaix)

Speed-up inference by 2.4x with OpenVINO quantization 🚀

📕 Check out our blog post to see how we did it on a ViT!
huggingface.co/blog/openvino
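As a rough outline of post-training quantization with `OVQuantizer`, where the ViT checkpoint, calibration dataset, and preprocessing are assumptions and signatures have shifted across optimum-intel releases:

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from optimum.intel import OVQuantizer

model_id = "google/vit-base-patch16-224"  # example ViT checkpoint
model = AutoModelForImageClassification.from_pretrained(model_id)
processor = AutoImageProcessor.from_pretrained(model_id)

def preprocess_fn(examples):
    # Turn raw images into the pixel values the ViT expects.
    return processor(images=examples["image"])

quantizer = OVQuantizer.from_pretrained(model)
# A small calibration set is enough to estimate activation ranges.
calibration_dataset = quantizer.get_calibration_dataset(
    "food101", num_samples=300, dataset_split="train", preprocess_function=preprocess_fn
)
quantizer.quantize(calibration_dataset=calibration_dataset, save_directory="vit-quantized-ov")
```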

Anton Lozhkov (@anton_lozhkov)

🏭 The hardware optimization floodgates are open!🔥

Diffusers 0.3.0 supports an experimental ONNX exporter and pipeline for Stable Diffusion 🎨

To find out how to export your own checkpoint and run it with onnxruntime, check the release notes:

github.com/huggingface/di…

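A hedged sketch of what the ONNX pipeline looks like; the class shipped as `StableDiffusionOnnxPipeline` in Diffusers 0.3.0 and was later renamed, and the checkpoint/revision below are assumptions:

```python
from diffusers import OnnxStableDiffusionPipeline  # StableDiffusionOnnxPipeline in 0.3.0

pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="onnx",                  # pre-exported ONNX weights, if the repo provides them
    provider="CPUExecutionProvider",  # any onnxruntime execution provider works here
)
image = pipe("an astronaut riding a horse").images[0]
image.save("astronaut.png")
```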
Ella Charlaix (@ellacharlaix)

ONNX Runtime inference for Seq2Seq models has never been so easy! 🚀

With 🤗 Optimum, you can now export your T5 model to the ONNX format and perform ONNX Runtime inference using 🤗 Transformers pipelines.

➡️ Start here huggingface.co/docs/optimum/v…

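For context, a minimal sketch of the export-and-run flow; depending on your Optimum version the export flag is `export=True` or `from_transformers=True`, and the model id and task are just examples:

```python
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSeq2SeqLM

model_id = "t5-small"
model = ORTModelForSeq2SeqLM.from_pretrained(model_id, export=True)  # export to ONNX on the fly
tokenizer = AutoTokenizer.from_pretrained(model_id)

# ONNX Runtime model drops straight into a Transformers pipeline.
translator = pipeline("translation_en_to_fr", model=model, tokenizer=tokenizer)
print(translator("ONNX Runtime inference for Seq2Seq models has never been so easy!"))
```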
Philipp Schmid (@_philschmid)

Together with Lewis Tunstall, I gave a talk today at MLOps World about accelerating Transformers with Hugging Face Optimum🌍🚀

We covered how to bring DistilBERT to up to ~3x lower latency while keeping >99.7% accuracy🏎💨

📕 philschmid.de/static-quantiz…
⭐️github.com/philschmid/opt…

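As a rough companion sketch, this is what the graph-optimization half of that recipe can look like with Optimum and ONNX Runtime; the model id and optimization level are assumptions, and the static-quantization half is sketched further down:

```python
from optimum.onnxruntime import ORTModelForSequenceClassification, ORTOptimizer
from optimum.onnxruntime.configuration import OptimizationConfig

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)

optimizer = ORTOptimizer.from_pretrained(model)
optimization_config = OptimizationConfig(optimization_level=2)  # fuse attention / layer-norm ops
optimizer.optimize(save_dir="distilbert-optimized", optimization_config=optimization_config)
```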
Leandro von Werra (@lvwerra)

Evaluation is one of the most important aspects of ML, but today’s evaluation landscape is scattered and undocumented, which makes evaluation unnecessarily hard.

For that reason we are excited to release 🤗 Evaluate!

github.com/huggingface/ev…

Let’s take a tour:

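A minimal sketch of the 🤗 Evaluate API (`pip install evaluate`); the metric and toy labels are just an example:

```python
import evaluate

accuracy = evaluate.load("accuracy")
result = accuracy.compute(references=[0, 1, 1, 0], predictions=[0, 1, 0, 0])
print(result)  # e.g. {'accuracy': 0.75}
```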
Philipp Schmid (@_philschmid)

Optimum v1.2 adds ACCELERATED inference pipelines - including text generation - for onnxruntime🚀

Learn how to accelerate RoBERTa for question answering, including quantization and optimization with 🤗Optimum, in our blog 🦾🔥

📕huggingface.co/blog/optimum-i…
⭐️github.com/huggingface/op…

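For context, a hedged sketch of an ONNX Runtime-backed question-answering pipeline with Optimum; the checkpoint id is an assumption, and the quantization/optimization steps covered in the blog are omitted here:

```python
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForQuestionAnswering

model_id = "deepset/roberta-base-squad2"  # example RoBERTa QA checkpoint
model = ORTModelForQuestionAnswering.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
print(qa(question="What does Optimum accelerate?",
         context="Optimum accelerates Transformers inference with ONNX Runtime."))
```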
Ella Charlaix (@ellacharlaix)

You can now accelerate inference by applying quantization to models from the Hugging Face Hub 🔥

➡️ With 🤗 Optimum, you can easily apply static and dynamic quantization on your model before exporting it to the ONNX format 🤯

Start here 👉 huggingface.co/docs/optimum/m…

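A minimal sketch of dynamic quantization with Optimum and ONNX Runtime; signatures have changed across Optimum releases, and the model id and AVX512-VNNI config are assumptions:

```python
from optimum.onnxruntime import ORTModelForSequenceClassification, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)

quantizer = ORTQuantizer.from_pretrained(model)
# Dynamic quantization: weights are quantized ahead of time, activations at runtime.
dqconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
quantizer.quantize(save_dir="distilbert-dynamic-int8", quantization_config=dqconfig)
```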
Lewis Tunstall (@_lewtun)

TIL you can apply *static quantization* to Transformer models 🤯!

This technique finds the best quantization scheme by feeding data through the model to observe the activation patterns ahead of inference time 🔮

With 🤗Optimum, you can do this easily with any dataset from the Hub 🥳

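As a rough sketch of static quantization with a calibration set pulled from the Hub; the model id, GLUE/SST-2 dataset, and sample count are assumptions, and the signatures follow recent Optimum releases:

```python
from functools import partial
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification, ORTQuantizer
from optimum.onnxruntime.configuration import AutoCalibrationConfig, AutoQuantizationConfig

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

quantizer = ORTQuantizer.from_pretrained(model)
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=True, per_channel=False)

def preprocess_fn(examples, tokenizer):
    return tokenizer(examples["sentence"], padding="max_length", truncation=True)

# Feed a handful of Hub samples through the model to observe activation ranges.
calibration_dataset = quantizer.get_calibration_dataset(
    "glue",
    dataset_config_name="sst2",
    preprocess_function=partial(preprocess_fn, tokenizer=tokenizer),
    num_samples=64,
    dataset_split="train",
)
calibration_config = AutoCalibrationConfig.minmax(calibration_dataset)
ranges = quantizer.fit(
    dataset=calibration_dataset,
    calibration_config=calibration_config,
    operators_to_quantize=qconfig.operators_to_quantize,
)
quantizer.quantize(
    save_dir="distilbert-static-int8",
    calibration_tensors_range=ranges,
    quantization_config=qconfig,
)
```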
Hugging Face (@huggingface)

One small step for 🤗 Optimum, a giant leap for using 🤗 Transformers with Graphcore IPUs 🚀

With this initial release, start accelerating your training with IPUs. We're only getting started - star the repo to follow along! 🌟
github.com/huggingface/op…

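A heavily hedged sketch of what IPU fine-tuning with optimum-graphcore roughly looks like; the IPU config id, toy dataset, and training arguments are assumptions, and actual runs require Graphcore hardware and SDK:

```python
from datasets import Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments

model_id = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# Tiny toy dataset just to make the sketch self-contained.
raw = Dataset.from_dict({"text": ["great library!", "slow and buggy"], "label": [1, 0]})
train_dataset = raw.map(
    lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length", max_length=64)
)

ipu_config = IPUConfig.from_pretrained("Graphcore/bert-base-ipu")  # assumed Hub config id
args = IPUTrainingArguments(output_dir="bert-ipu", per_device_train_batch_size=2, num_train_epochs=1)

trainer = IPUTrainer(model=model, ipu_config=ipu_config, args=args, train_dataset=train_dataset)
trainer.train()
```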
François Lagunas (@madlag)

I will present 'Block Pruning for Faster Transformers' tomorrow at EMNLP21 (Session 9F), a paper we proposed with Ella Charlaix, Victor Sanh, and Sasha Rush at Hugging Face. I will describe how we managed to speed up fine-tuned transformers by more than 2.5x while preserving accuracy!

Hugging Face (@huggingface)

We are proud to collaborate with Intel AI to Accelerate AI for Production! 🤝🦾

Check out how to easily quantize and prune models with our new 🤗 Optimum library, integrating Intel LPOT:
huggingface.co/hardware

Reply if you spotted this billboard on US 101 by SFO this week! 😎
