ML explorations (@ml_explorations) 's Twitter Profile
ML explorations

@ml_explorations

ID: 1679398387190231040

Joined: 13-07-2023 07:52:43

47 Tweets

4 Followers

260 Following

AI at Meta (@aiatmeta) 's Twitter Profile Photo

Introducing SeamlessM4T, the first all-in-one, multilingual multimodal translation model. This single model can perform tasks across speech-to-text, speech-to-speech, text-to-text translation & speech recognition for up to 100 languages depending on the task. Details ⬇️

Sebastian Raschka (@rasbt) 's Twitter Profile Photo

It will be interesting to see how this LoRA-on-demand service compares to open-source LoRA on-prem. Here's a little reminder that open-source Llama 2 compares very favorably to ChatGPT / GPT-3.5

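For context on how LoRA works: it freezes the pretrained weight matrix W and learns only a low-rank update ΔW = B·A, scaled by α/r, so just r·(d_in + d_out) parameters are trained per layer. A minimal pure-Python sketch of that update on toy matrices (helper names are illustrative, not any library's API):

```python
def matmul(A, B):
    # naive matrix multiply, fine for small illustrative matrices
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_update(W, B, A, alpha, r):
    # LoRA: effective weight = W + (alpha / r) * (B @ A), with W frozen
    # and only the low-rank factors B (d_out x r) and A (r x d_in) trained
    scale = alpha / r
    delta = matmul(B, A)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# toy example: 4x4 frozen weight (identity), rank-1 adapter
W = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
B = [[1.0], [0.0], [0.0], [0.0]]   # 4x1
A = [[0.0, 2.0, 0.0, 0.0]]         # 1x4
W_adapted = lora_update(W, B, A, alpha=1.0, r=1)
print(W_adapted[0][1])  # the rank-1 product touches only this entry: 2.0
```

Because B·A has rank at most r, the adapter can be merged into W after training, so inference pays no extra cost — one reason on-prem LoRA fine-tuning of Llama 2 is attractive.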
merve (@mervenoyann) 's Twitter Profile Photo

AutoGPTQ is now natively supported in transformers! 🤩 AutoGPTQ is a library for GPTQ, a post-training quantization technique to quantize autoregressive generative LLMs. 🦜 With this integration, you can quantize LLMs with a few lines of code! Read more 👉 hf.co/blog/gptq-inte…

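To make the quantization idea concrete: GPTQ itself minimizes layer-wise reconstruction error using approximate second-order information, but the underlying primitive is mapping float weights to low-bit integers plus a scale. A toy round-to-nearest sketch of that quantize/dequantize step (function names are made up for illustration; this is not GPTQ's actual algorithm):

```python
def quantize_row(row, bits=4):
    # symmetric round-to-nearest quantization of one weight row:
    # map floats to signed integers in [-(2**(bits-1)-1), 2**(bits-1)-1]
    qmax = 2 ** (bits - 1) - 1          # 7 for 4-bit
    scale = max(abs(w) for w in row) / qmax
    if scale == 0.0:
        scale = 1.0                     # all-zero row: any scale works
    q = [round(w / scale) for w in row]
    return q, scale

def dequantize_row(q, scale):
    # recover approximate float weights from integers + one scale
    return [v * scale for v in q]

row = [0.7, -0.35, 0.1, 0.0]
q, s = quantize_row(row)
approx = dequantize_row(q, s)
err = max(abs(a - b) for a, b in zip(row, approx))
```

Round-to-nearest bounds the per-weight error at scale/2; GPTQ's contribution is compensating these rounding errors across a layer so accuracy holds even at 3-4 bits.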
Baptiste Rozière (@b_roziere) 's Twitter Profile Photo

Today, we release CodeLlama, a collection of base and instruct-finetuned models with 7B, 13B and 34B parameters. For coding tasks, CodeLlama 7B is competitive with Llama 2 70B and CodeLlama 34B is state-of-the-art among open models. Paper and weights: ai.meta.com/research/publi…

Phind (@phindsearch) 's Twitter Profile Photo

We beat GPT-4 on HumanEval with fine-tuned CodeLlama-34B! Here's how we did it: phind.com/blog/code-llam… 🚀 Both models have been open-sourced on Hugging Face: huggingface.co/Phind
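For readers comparing HumanEval numbers: scores on this benchmark are conventionally computed with the unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021) — the probability that at least one of k samples passes, given c correct out of n generated. A minimal sketch of that standard formula (not Phind's evaluation pipeline):

```python
from math import comb

def pass_at_k(n, c, k):
    # unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k),
    # i.e. 1 minus the probability that all k drawn samples fail,
    # given c of the n generated samples were correct
    if n - c < k:
        return 1.0  # too few failures left to fill k draws
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 10 samples per problem, 3 correct -> pass@1 = 0.3
p = pass_at_k(10, 3, 1)
```

With k=1 this reduces to the fraction of correct samples, which is why reported "HumanEval pass@1" numbers are directly comparable across models.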

AI at Meta (@aiatmeta) 's Twitter Profile Photo

New on Hugging Face — CoTracker simultaneously tracks the movement of multiple points in videos using a flexible design based on a transformer network — it models correlation of the points in time via specialized attention layers. 🤗 Try CoTracker ➡️ bit.ly/3swQFqt
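The "specialized attention layers" mentioned above build on ordinary scaled dot-product attention. As a toy illustration of how attention can mix one tracked point's features across timesteps (this is plain self-attention, not CoTracker's actual architecture; the inputs are made-up values):

```python
from math import exp, sqrt

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    es = [exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    # scaled dot-product attention: each query (here, a tracked point's
    # feature at one timestep) aggregates values from all timesteps,
    # weighted by query-key similarity
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / sqrt(d) for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out

# one point's feature vector at 3 timesteps (d=2); self-attend across time
F = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]]
mixed = attention(F, F, F)
```

Timesteps with similar features reinforce each other (here, timesteps 0 and 2), which is the basic mechanism for exploiting temporal correlation between tracked points.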

Davide Scaramuzza (@davsca1) 's Twitter Profile Photo

We are thrilled to share our groundbreaking paper published today in Nature: "Champion-Level Drone Racing using Deep Reinforcement Learning." We introduce "Swift," the first autonomous vision-based drone that beat human world champions in several fair head-to-head races! PDF

AK (@_akhaliq) 's Twitter Profile Photo

LLaSM: Large Language and Speech Model paper page: huggingface.co/papers/2308.15… Multi-modal large language models have garnered significant interest recently. Though, most of the works focus on vision-language multi-modal models providing strong capabilities in following

Paul Couvert (@itspaulai) 's Twitter Profile Photo

Canva now has incredible AI features. You can easily create visuals in seconds. I'll show you how to create AI-boosted designs on Canva:

AK (@_akhaliq) 's Twitter Profile Photo

Multimodal Foundation Models: From Specialists to General-Purpose Assistants paper page: huggingface.co/papers/2309.10… paper presents a comprehensive survey of the taxonomy and evolution of multimodal foundation models that demonstrate vision and vision-language capabilities,

Guillaume Lample @ NeurIPS 2024 (@guillaumelample) 's Twitter Profile Photo

Mistral 7B is out. It outperforms Llama 2 13B on every benchmark we tried. It is also superior to LLaMA 1 34B in code, math, and reasoning, and is released under the Apache 2.0 licence. mistral.ai/news/announcin…

merve (@mervenoyann) 's Twitter Profile Photo

There are many well-known "foundation models" for chat, but what about computer vision? 🧐 In this thread, we'll talk about a few of them 👇 🖼️ Segment Anything Model 🦉 OWL-ViT 💬 BLIP-2 🐕 IDEFICS 🧩 CLIP 🦖 Grounding DINO Let's go! ✨
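Several of the models listed (CLIP, OWL-ViT, BLIP-2) rely on scoring images against text in a shared embedding space via cosine similarity. A toy sketch of that scoring idea with made-up embedding values (real models produce these vectors with learned image and text encoders):

```python
from math import sqrt

def normalize(v):
    # scale a vector to unit length so dot product = cosine similarity
    n = sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cosine(u, v):
    return sum(a * b for a, b in zip(normalize(u), normalize(v)))

# hypothetical 3-d embeddings: one image, two candidate captions
img = [0.9, 0.1, 0.3]
cap_dog = [0.8, 0.2, 0.25]
cap_car = [0.1, 0.9, 0.0]

scores = {"a photo of a dog": cosine(img, cap_dog),
          "a photo of a car": cosine(img, cap_car)}
best = max(scores, key=scores.get)
```

Ranking captions this way is what enables zero-shot classification: no task-specific training, just comparing the image embedding against a prompt per candidate label.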

Andy Zou (@andyzou_jiaming) 's Twitter Profile Photo

LLMs can hallucinate and lie. They can be jailbroken by weird suffixes. They memorize training data and exhibit biases. 🧠 We shed light on all of these phenomena with a new approach to AI transparency. 🧵 Website: ai-transparency.org Paper: arxiv.org/abs/2310.01405

ML explorations (@ml_explorations) 's Twitter Profile Photo

Lots of magic in Canva! So many #AI #ML tools in one place to edit photos, create presentations, and much more… canva.com/newsroom/news/…

Benjamin Groessing (@begroe) 's Twitter Profile Photo

Just one hour since Canva dropped its new AI and the design world will never be the same 🤯 10 new features to 10x your productivity 🧵👇

Katja Vogt (@katvankatz) 's Twitter Profile Photo

It used to take me hours of work to turn my presentation slides into content. With Canva's newest update, you can get it done in seconds. I've created a short video walkthrough of how to use it. What's great about it: It allows you to repurpose assets you already have with a

Clémentine Fourrier 🍊 (@clefourrier) 's Twitter Profile Photo

New leaderboard powered by Decoding Trust (outstanding paper at NeurIPS!), to evaluate LLM safety, such as bias and toxicity, PII, and robustness 🚀 You can find it here: huggingface.co/spaces/AI-Secu… And the intro blog is here: huggingface.co/blog/leaderboa… Congrats to Bo Li!

merve (@mervenoyann) 's Twitter Profile Photo

In case you missed it, this week Hugging Face released IDEFICS3Llama, a vision-language model that is state-of-the-art for its size on many benchmarks 😍