😷 ➡️ @FabienTarrade@sigmoid.social (@fabtar)'s Twitter Profile
😷 ➡️ @[email protected]

@fabtar

Sr. Data Scientist & Applied ML Scientist @AXA | ex 👨‍🔬 @CERN | 🗣️ mine | he/him | 🧠 Working on Machine Learning & NLP at scale 👨‍💻 Python/Cloud native

ID: 91852037

Link: http://fabien-tarrade.eu/ | Joined: 22-11-2009 19:33:06

3.3K Tweets

3.3K Followers

4.4K Following

David Rousseau (@dhpmrou):

You've run an ML challenge? Great! Now what? Check out "Towards impactful challenges with post-challenge papers, benchmarks..." arxiv.org/abs/2312.06036, a chapter (with A. Marot / @NehzUx) of the upcoming landmark book "AI Competitions and Benchmarks" #kaggle ChaLearn

Graham Neubig (@gneubig):

We're excited about all the interest in our Gemini report and are working to make it even better! This week we made major improvements, switching to the Mistral AI instruct model and working with the Gemini team to reproduce their results. Updates below.

Andreas Köpf (@neurosp1ke):

Today we release the final Open Assistant dataset, with data collected on open-assistant.io until Nov 5, 2023. OASST2: huggingface.co/datasets/OpenA… Thanks again to everyone who contributed to the project! It was a pleasure to work with all of you. Happy holidays! 💙🎅

😷 ➡️ @FabienTarrade@sigmoid.social (@fabtar):

I would like to wish you all a very happy, healthy, and prosperous New Year 2024 with the best success in your projects. May it be full of peace, joy, love and success for you and your loved ones. 🎆🥂

Sebastian Raschka (@rasbt):

Thanks, everyone, for all the support and positive words for my "Build a Large Language Model (from Scratch)" book! The next chapter on *coding self-attention, multi-head attention, and causal self-attention from scratch* is on the way and will be in the MEAP in a few weeks!

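The chapter topic above, causal self-attention, can be illustrated with a minimal single-head sketch in plain NumPy. This is an independent illustration under my own naming, not code from the book:

```python
import numpy as np

def causal_self_attention(x, w_q, w_k, w_v):
    """Single-head causal self-attention.

    x: (seq_len, d_model) token embeddings;
    w_q, w_k, w_v: (d_model, d_head) projection matrices.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(q.shape[-1])        # (seq_len, seq_len)
    # Causal mask: position t may only attend to positions <= t.
    future = np.triu(np.ones(scores.shape, dtype=bool), k=1)
    scores = np.where(future, -np.inf, scores)
    # Numerically stable softmax over the key axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                             # (seq_len, d_head)
```

Multi-head attention then simply runs several such heads with separate projection matrices and concatenates their outputs.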
Martin Görner (@martin_gorner):

The "Self-Extend" paper arxiv.org/abs/2401.01325 promises magic for your LLMs: extending the context window beyond what they were trained on. You can take an LLM trained on 2000 token sequences, feed it 5000 tokens and expect it to work. Thread 🧵 (SWA below=sliding window attn.)

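The core trick the paper describes is remapping relative positions: tokens inside a neighbor window keep their exact distances, while distant tokens share grouped (floor-divided) positions, so the model never sees a position index beyond its trained range. A toy sketch of that remapping, with illustrative parameter names of my own rather than the paper's implementation:

```python
def self_extend_rel_pos(rel, neighbor_window=512, group_size=4):
    """Remap a relative distance rel = query_pos - key_pos.

    Nearby tokens (rel <= neighbor_window) keep their exact distance;
    distant tokens share grouped positions via floor division, shifted
    so the two ranges join without a gap.
    """
    if rel <= neighbor_window:
        return rel
    shift = neighbor_window - neighbor_window // group_size
    return rel // group_size + shift
```

With neighbor_window=4 and group_size=2, for example, a raw distance of 10 maps to 7, so sequences longer than the training length only produce position indices the model has already seen.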
François Fleuret (@francoisfleuret):

"The little book of Deep Learning" update v1.2! Minor updates ("meta-parameter" -> "hyper-parameter") + fine-tuning in 3.6 + a new chapter 8. Draft below, comments are welcome.

Hamel Husain (@hamelhusain):

If you remember our Applied LLMs course, you'll love this. Today, we are making all these resources available for free to everyone! 📚 We did extra work to add learning tracks, resources, and notes to each lesson to maximize your learning. Link in next tweet

Swiss Python Summit (@pythonsummit):

Swiss Python Summit Early Bird Alert! 🇨🇭🐍🎉 Get your tickets on python-summit.ch before it's too late and save up to 40%! 🤩 #python #pythonsummit #EarlyBird #Discount

Emmanuel Ameisen (@mlpowered):

Sam and I did a deep dive into some of our recent results, discussing how language models plan, perform computations, and reason across languages! If you use LLMs like Claude and want to know how they work, I recommend giving it a listen. No technical background needed!

Lucas Beyer (bl16) (@giffmana):

You're in Zurich or its zone of influence (Lausanne, Paris, BXL, Munich, London, ...) and like AI + Robots? We (@openai) together with mimic, lokirobotics, and Zurich Builds are organizing a hackathon from Fri 9 May afternoon to Sun 11. Limited spots, more below:

Florian Tramèr (@florian_tramer):

Last year we launched the Swiss AI red-teaming network with ETH Zürich, EPFL, and multiple companies. We started off by discussing pragmatic prompt injection defenses. We compiled our thoughts into a list of "design patterns" that we think can be effective in many practical settings.

Divam Gupta (@divamgupta):

We launched on possibly the worst timing ever - right when OpenAI, Google DeepMind, and Anthropic all had major releases. Somehow we still made it to #1 on Hacker News. Thanks to everyone who checked us out! 🙏
