tom.ipynb (@tomialtilio) 's Twitter Profile
tom.ipynb

@tomialtilio

ID: 165542773

Joined: 11-07-2010 22:10:26

507 Tweets

89 Followers

1.1K Following

Ben Tossell (@bentossell) 's Twitter Profile Photo

A million things → what's happening in AI, summarised. Here's a bunch of cool shit that's going on. (plus Sir Benedict Cucumber batch h/t r/stablediffusion) ⤵

The Full Stack (@full_stack_dl) 's Twitter Profile Photo

Over the last few months of running the Full Stack Deep Learning course, we released one lecture video (+notes) each week and wrote an accompanying Twitter thread. That's a lot of content, so here's a 🧵 thread-of-threads 🧵 collecting all of them up.

meowbooks (@untitled01ipynb) 's Twitter Profile Photo

These 5 Habits Separate the best Machine Learning Engineers from the rest:
1. Distrusting the labels
2. Distrusting the evaluation metrics
3. Constantly honing office politics
4. Refusing to learn SQL
5. Telling everyone they know MLOps

Chris Levy (@cleavey1985) 's Twitter Profile Photo

Bojan Tunguz I’m not sure what tasks you are working on, but I found Philipp Schmid’s blog very useful. For example, I used ideas in one of his recent blog posts to train two models at work on a Lambda Labs H100. philschmid.de/getting-starte…

Mark Tenenholtz (@marktenenholtz) 's Twitter Profile Photo

5 quality of life changes I’d recommend for tabular problems:
• Change your baseline model to LightGBM
• Store your data in parquets
• Default to leave-one-out feature importance
• Use Polars for feature engineering
• If you have a GPU, use RAPIDS
Nice and quick wins.
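The "leave-one-out feature importance" item above is simple to sketch: retrain (or re-score) the model with each feature removed and measure how much the validation score drops. A minimal, library-agnostic version, assuming you supply a `score(features)` callable that fits and evaluates your model on a feature subset (the function name and toy weights below are illustrative, not from the tweet):

```python
def leave_one_out_importance(score, features):
    """Leave-one-out importance: how much the validation score drops
    when each single feature is removed from the full feature set."""
    baseline = score(features)
    return {
        f: baseline - score([g for g in features if g != f])
        for f in features
    }

# Toy stand-in for "retrain and score on a feature subset": each feature
# contributes a fixed amount to the validation score.
weights = {"age": 0.10, "income": 0.25, "noise": 0.0}

def score(features):
    return sum(weights[f] for f in features)

importance = leave_one_out_importance(score, list(weights))
```

In the toy run, removing "income" hurts the score most, so it gets the highest importance, while "noise" contributes nothing. With a real model, `score` would wrap a LightGBM fit plus a validation metric.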

Lucila (@papaspai) 's Twitter Profile Photo

Hi The Economist how many editors did this go through before you decided to frame a fairly interesting question through such a racist lens? Yours, an uneducated, useless worker.

Pau Labarta Bajo (@paulabartabajo_) 's Twitter Profile Photo

You don't need 20 GPUs to fine-tune a Large Language Model. Lit-Parrot is a Python library by Lightning AI ⚡️ that lets you fine-tune the latest 7B Falcon model using only 1 GPU. And the best part? It is just one pip install away from you 🦜↓ github.com/Lightning-AI/l…

Pau Labarta Bajo (@paulabartabajo_) 's Twitter Profile Photo

Interested in NLP? Gone are the days when NLP engineers trained models from scratch. Fine-tuning is the new training. But how do you fine-tune a Large Language Model (LLM) without breaking the bank? Here is the way 🧠↓

Luke Gessler (@lukegessler) 's Twitter Profile Photo

this paper's nuts. for sentence classification on out-of-domain datasets, all neural (Transformer or not) approaches lose to good old kNN on representations generated by.... gzip aclanthology.org/2023.findings-…

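The gzip trick the tweet refers to is normalized compression distance (NCD) plus a k-nearest-neighbours vote: strings that share structure compress better together, so their NCD is small. A minimal sketch of the idea (function names and the toy documents are mine, not from the paper):

```python
import gzip

def ncd(x: str, y: str) -> float:
    """Normalized compression distance between two strings,
    using gzip as a stand-in for an ideal compressor."""
    cx = len(gzip.compress(x.encode()))
    cy = len(gzip.compress(y.encode()))
    cxy = len(gzip.compress((x + y).encode()))
    return (cxy - min(cx, cy)) / max(cx, cy)

def knn_classify(query: str, labeled: list[tuple[str, str]], k: int = 3) -> str:
    """Label the query by majority vote among its k nearest
    neighbours under NCD."""
    nearest = sorted(labeled, key=lambda pair: ncd(query, pair[0]))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

docs = [
    ("the neural network learns its weights by gradient descent " * 4, "ml"),
    ("planets orbit the sun along elliptical paths in space " * 4, "astro"),
]
label = knn_classify(
    "the neural network updates weights via gradient descent " * 4, docs, k=1
)
```

No training, no parameters: the "model" is just gzip plus a distance sort, which is what makes the out-of-domain result so striking.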
tom.ipynb (@tomialtilio) 's Twitter Profile Photo

TIL: when building a container with Docker, run all the installs first and copy the directories in last. This saves a lot of time because the install layers stay cached.
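The layer-ordering idea above can be sketched as a minimal Dockerfile; the Python base image and filenames are illustrative assumptions, not from the tweet:

```dockerfile
FROM python:3.11-slim
WORKDIR /app

# Dependency layers first: the manifest changes rarely, so Docker
# reuses the cached install layer on most rebuilds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Application code last: edits here invalidate only this layer,
# not the expensive install above.
COPY . .
CMD ["python", "main.py"]
```

Copying only the dependency manifest before `RUN pip install` is the key move: a `COPY . .` placed before the install would invalidate the cache on every source edit.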