
Carlos Lassance
@cadurosar
MTS @ Cohere, constantly trying to make Information Retrieval work better while making mistakes in the process.
ID: 969183405077401600
http://cadurosar.github.io 01-03-2018 12:11:45
256 Tweets
464 Followers
122 Following

Welcome Cohere For AI Command-R! The top trending model among over 500k open-access models! huggingface.co/CohereForAI/c4…


Wikipedia Embeddings in 300+ Languages. What could you build if your RAG has access to Wikipedia in all 300+ languages? Available for anyone to use, using our state-of-the-art multilingual embedding model: huggingface.co/datasets/Coher…
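A minimal sketch of how precomputed passage embeddings like these are typically used in a RAG retrieval step: dense search is just a dot product between a query embedding and the corpus matrix. The toy vectors and passages below are illustrative stand-ins, not the real dataset or model outputs.

```python
import numpy as np

# Toy stand-ins for precomputed passage embeddings; in practice these
# would be loaded from the multilingual Wikipedia embeddings dataset.
passages = ["Paris is the capital of France.",
            "Tokio ist die Hauptstadt von Japan.",
            "El Amazonas es el rio mas caudaloso."]
emb = np.array([[0.9, 0.1, 0.0],
                [0.1, 0.8, 0.1],
                [0.0, 0.2, 0.9]], dtype=np.float32)

def search(query_emb, corpus_emb, k=2):
    """Dense retrieval: rank passages by dot-product similarity."""
    scores = corpus_emb @ query_emb
    top = np.argsort(-scores)[:k]
    return [(int(i), float(scores[i])) for i in top]

# A hypothetical query embedding close to the first passage.
hits = search(np.array([1.0, 0.0, 0.0], dtype=np.float32), emb)
print(passages[hits[0][0]])  # best-matching passage
```

The retrieved passages would then be fed to the generator as context; the only multilingual-specific part is that the embedding model maps all languages into the same vector space.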


Just as splade-v3 comes out (huggingface.co/naver/splade-v3), splade++ achieves 1M monthly downloads again, pretty cool seeing this happen again! Always curious to know what people are doing with it, and congrats to the NAVER LABS Europe team @thibault_formal Stéphane Clinchant HerveDejean


Cohere Embed V3 - int8 & Binary Support. I'm excited to launch our native support for int8 & binary embeddings for Cohere Embed V3. They slash your vector DB cost 4x - 32x while keeping 95% - 100% of the search quality. txt.cohere.com/int8-binary-em…
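The 4x and 32x figures follow directly from the storage formats: float32 to int8 is 4 bytes down to 1 per dimension, and binary keeps only the sign bit, packing 8 dimensions per byte. A minimal sketch of both schemes (not Cohere's exact quantization, just the size arithmetic):

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.standard_normal((1000, 1024)).astype(np.float32)  # float32 baseline

# int8 quantization: scale each dimension into [-127, 127] -> 4x smaller.
scale = np.abs(emb).max(axis=0) / 127.0
emb_int8 = np.round(emb / scale).astype(np.int8)

# binary quantization: keep only the sign bit, packed 8 dims/byte -> 32x smaller.
emb_bin = np.packbits(emb > 0, axis=1)

print(emb.nbytes // emb_int8.nbytes)  # 4
print(emb.nbytes // emb_bin.nbytes)   # 32
```

The quality numbers come from how well dot products on the compressed vectors preserve the float32 ranking, which is why int8 (95-100%) sits above pure binary.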


People are asking me what to expect from "Faster Learned Sparse Retrieval with Block-Max Pruning" with Nicola Tonellotto and Torsten Suel: 10x faster safe retrieval w.r.t. MaxScore, and you can check the approximate retrieval trade-offs yourself on naver/splade-cocondenser-ensembledistil
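The core idea behind block-max pruning, in a heavily simplified single-list sketch (the paper's algorithm operates over many query terms and real index layouts): each posting block stores its maximum score, and a block is skipped entirely when that upper bound cannot beat the current k-th best score, so the result is still exact ("safe").

```python
import heapq

def block_max_topk(blocks, k):
    """Simplified block-max pruning over one posting list.

    `blocks` is a list of (block_max, [(doc_id, score), ...]) pairs.
    A block is scanned only if its precomputed max could still enter
    the current top-k; otherwise it is skipped entirely (safe pruning).
    """
    heap = []          # min-heap holding the best k scores seen so far
    scanned = 0        # how many postings we actually evaluated
    for block_max, postings in blocks:
        if len(heap) == k and block_max <= heap[0]:
            continue   # upper bound can't beat the k-th score: skip block
        for _, score in postings:
            scanned += 1
            if len(heap) < k:
                heapq.heappush(heap, score)
            elif score > heap[0]:
                heapq.heapreplace(heap, score)
    return sorted(heap, reverse=True), scanned

blocks = [(9.0, [(1, 9.0), (2, 7.5)]),
          (3.0, [(3, 3.0), (4, 2.0)]),   # skipped once top-2 floor >= 3.0
          (8.0, [(5, 8.0), (6, 1.0)])]
top, scanned = block_max_topk(blocks, k=2)
print(top, scanned)  # [9.0, 8.0] 4 -- the middle block was never scored
```

The speedup over MaxScore comes from making this skip decision at block granularity with tight per-block bounds, rather than per-posting.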


World First Binary Vector Database. Happy to announce the world's first binary Vector Database (for educational purposes). 32x less memory, 40x faster search. Github: github.com/cohere-ai/Bina…
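The speed gain of a binary vector database comes from replacing float dot products with Hamming distance on packed bits, which reduces to XOR plus popcount. A minimal brute-force sketch under that scheme (not the linked repo's implementation; random vectors stand in for real embeddings):

```python
import numpy as np

rng = np.random.default_rng(1)
docs = rng.standard_normal((5000, 256)).astype(np.float32)
db = np.packbits(docs > 0, axis=1)   # 256 bits -> 32 bytes per vector

# Lookup table: popcount of every byte value, so Hamming distance
# becomes an XOR followed by a table lookup and a row sum.
POPCOUNT = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint16)

def hamming_search(query, db, k=3):
    """Return indices of the k nearest packed vectors by Hamming distance."""
    dists = POPCOUNT[np.bitwise_xor(db, query)].sum(axis=1)
    return np.argsort(dists)[:k]

q = db[42]  # query with a known nearest neighbor: itself
print(hamming_search(q, db)[0])  # 42
```

In practice the binary pass is used as a cheap first stage, with int8 or float rescoring of the shortlist to recover most of the full-precision quality.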







Aya-Expanse, the strongest open-weights multilingual LLM, was just released by Cohere For AI. It beats Llama 70B on multilingual benchmarks, while being half the size and twice the speed.


Launch of Cohere Rerank 3.5 - Boost your Search. What is new: - Large gains in multilingual retrieval - Reasoning capabilities - Strong gains in Finance, eCommerce, and project management - New platforms: AWS Bedrock & Pinecone


Excited to share that Provence is accepted to #ICLR2025! Provence is a method for training an efficient & high-performing context pruner for #RAG, either standalone or combined with a reranker huggingface.co/blog/nadiinchi… w/ @thibault_formal Vassilina Nikoulina Stéphane Clinchant NAVER LABS Europe

