Guillaume Lample @ NeurIPS 2024 (@guillaumelample)'s Twitter Profile
Guillaume Lample @ NeurIPS 2024

@guillaumelample

Cofounder & Chief Scientist Mistral.ai (@MistralAI). Working on LLMs. Ex @MetaAI | PhD @Sorbonne_Univ_ | MSc @CarnegieMellon | X11 @Polytechnique

ID: 806058672619212800

Joined: 06-12-2016 08:52:18

541 Tweets

40.4K Followers

637 Following

Mistral AI (@mistralai):

Introducing Mistral Medium 3: our new multimodal model offering SOTA performance at 8X lower cost.

- A new class of models that balances performance, cost, and deployability.
- High performance in coding and function-calling.
- Full enterprise capabilities, including hybrid or…
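
For context on how a model like this is typically consumed, here is a minimal sketch of a chat completion call with the mistralai Python SDK; the v1 client interface and the `mistral-medium-latest` alias are assumptions based on Mistral's usual conventions, not details from the tweet.

```python
# Minimal sketch: calling a Mistral chat model through the official
# mistralai Python SDK (v1-style client). The model alias is an assumption.
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="mistral-medium-latest",  # assumed alias for Mistral Medium 3
    messages=[
        {"role": "user", "content": "Write a function that merges two sorted lists."},
    ],
)

print(response.choices[0].message.content)
```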
Mistral AI (@mistralai):

Introducing Le Chat Enterprise, the most customizable and secure agent-powered AI assistant for businesses, making AI real leverage for competitiveness.

- Integration with your company knowledge (starting with Gmail, Google Drive, SharePoint…)
- Ability to add frequently used…

Mistral AI (@mistralai):

Meet Document AI, our end-to-end document processing solution powered by the world’s best OCR model!

mistral.ai/solutions/docu…
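
As an illustration of how such an OCR model is typically invoked, below is a minimal sketch against Mistral's OCR endpoint via the Python SDK; the `mistral-ocr-latest` alias and the `ocr.process` call shape are assumptions drawn from the SDK's documented conventions, and the document URL is a placeholder.

```python
# Minimal sketch: running OCR on a hosted PDF with the mistralai Python SDK.
# Model alias and endpoint shape are assumptions, not from the tweet.
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

ocr_response = client.ocr.process(
    model="mistral-ocr-latest",  # assumed OCR model alias
    document={
        "type": "document_url",
        "document_url": "https://example.com/report.pdf",  # placeholder URL
    },
)

# Each page comes back as structured markdown.
for page in ocr_response.pages:
    print(page.markdown)
```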
Mistral AI (@mistralai):

Introducing Agents API: your go-to tool for building tailored agents to solve complex real-world problems! 

mistral.ai/news/agents-api
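
To make the announcement concrete, here is a minimal sketch of creating and querying an agent with the Python SDK; the `beta.agents.create` and `beta.conversations.start` calls, the model alias, and the agent configuration are assumptions based on the Agents API documentation linked in the tweet.

```python
# Minimal sketch: defining an agent and starting a conversation with it.
# Endpoint names and parameters are assumptions based on Mistral's Agents API docs.
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Create a simple agent with a built-in tool (hypothetical configuration).
agent = client.beta.agents.create(
    model="mistral-medium-latest",
    name="research-helper",
    description="Answers questions using web search.",
    instructions="Cite your sources when you use search results.",
    tools=[{"type": "web_search"}],
)

# Start a conversation; the API keeps conversation state server-side.
conversation = client.beta.conversations.start(
    agent_id=agent.id,
    inputs="Summarize the latest Mistral AI announcements.",
)

print(conversation.outputs[-1].content)
```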
Sam Rodriques (@sgrodriques):

Today we are releasing ether0, our first scientific reasoning model. We trained Mistral 24B with RL on several molecular design tasks in chemistry. Remarkably, we found that LLMs can learn some scientific tasks much more data-efficiently than specialized models trained from…

Mistral AI (@mistralai):

We're proud to announce Mistral Compute—an unprecedented AI infrastructure undertaking in Europe, and a strategic initiative that will ensure that all nation states, enterprises, and research labs globally remain at the forefront of AI innovation. 

Read more in the thread.
Mistral AI (@mistralai):

Introducing Mistral Small 3.2, a small update to Mistral Small 3.1 to improve:

- Instruction following: Small 3.2 is better at following precise instructions
- Repetition errors: Small 3.2 produces fewer infinite generations or repetitive answers
- Function calling: Small…
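
Since function calling is one of the highlighted fixes, here is a minimal sketch of a tool-calling request with the Python SDK; the tool schema follows the standard JSON-schema format Mistral's API uses, while the model alias and the `get_weather` function are illustrative assumptions.

```python
# Minimal sketch: function calling with a Mistral chat model.
# The tool definition and model alias are illustrative assumptions.
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.complete(
    model="mistral-small-latest",  # assumed alias for Mistral Small 3.2
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",
)

# If the model decided to call the tool, the call shows up here.
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    print(tool_calls[0].function.name, tool_calls[0].function.arguments)
```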
Mistral AI (@mistralai):

Introducing Devstral Small and Medium 2507! This latest update offers improved performance and cost efficiency, perfectly suited for coding agents and software engineering tasks.
Mistral AI (@mistralai):

In our continued commitment to open science, we are releasing the Voxtral Technical Report: arxiv.org/abs/2507.13264

The report covers details on pre-training, post-training, alignment, and evaluations. We also present analysis on selecting the optimal model architecture, which…