babyLM (@babylmchallenge)'s Twitter Profile
babyLM

@babylmchallenge

Train small large language models

ID: 1768701751056564224

Link: https://babylm.github.io/ · Joined: 15-03-2024 18:12:23

67 Tweets

301 Followers

61 Following

Akari Haga (@_akari000)'s Twitter Profile Photo

👶I am happy to announce that our paper "BabyLM Challenge: Exploring the Effect of Variation Sets on Language Model Training Efficiency" received the ✨Outstanding Paper Award✨ at babyLM!! #CoNLL2024 #EMNLP2024

Miyu Oba (@rodamille)'s Twitter Profile Photo

👶Happy to share that our paper on the effects of variation sets on LMs received the 👑Outstanding Paper Award👑 at babyLM! Huge congrats to Akari Haga, Akiyo Fukatsu, Arianna Bisazza and Yohei Oseki! Check out our paper👉: arxiv.org/abs/2411.09587 #EMNLP2024 #CoNLL2024

GroNLP (@gronlp)'s Twitter Profile Photo

🌴We had a great time at #EMNLP2024 presenting our works, meeting old friends, getting to know new people, and winning some prizes (Best Social Impact Award at #EMNLP2024 Main and babyLM award #CoNLL2024) 🤩

babyLM (@babylmchallenge)'s Twitter Profile Photo

Of course, babies only take small naps. See you at the BabyLM Workshop at EMNLP 2025 in Suzhou! More details soon here, on Slack, and on babylm.github.io. Until then, please share your suggestions for next year!
Suchir Salhan (@suchirsalhan)'s Twitter Profile Photo

Together with the CambridgeNLP BabyLM team, we extend the babyLM task beyond English and introduce acquisition-inspired techniques to improve curriculum learning strategies. Check out our group's paper: arxiv.org/abs/2410.22886.
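The paper itself spells out the acquisition-inspired strategies; as a rough illustration of what a curriculum over pretraining data looks like, here is a minimal sketch that orders a corpus by a simple difficulty proxy (sentence length). The length-based proxy and the helper name are illustrative assumptions, not the method from the paper.

# Minimal curriculum-learning sketch: present "easier" sentences first.
# The length-based difficulty proxy is an illustrative assumption, not
# the acquisition-inspired criterion used in the paper.
def curriculum_order(corpus: list[str]) -> list[str]:
    """Sort sentences from shortest to longest as a crude difficulty proxy."""
    return sorted(corpus, key=lambda s: len(s.split()))

corpus = [
    "The dog that the cat chased barked loudly at the mail carrier.",
    "The dog barked.",
    "The dog barked at the mail carrier.",
]
for stage, sentence in enumerate(curriculum_order(corpus), start=1):
    print(f"stage {stage}: {sentence}")

In a real training run, earlier stages of the curriculum would feed the model before later ones, rather than simply being printed.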

babyLM (@babylmchallenge)'s Twitter Profile Photo

Understanding efficient, cognitively inspired pretraining helps linguistics. Have anything relevant? Remember, this year's challenge is on, and it also introduces interaction and a workshop accepting related papers!

babyLM (@babylmchallenge)'s Twitter Profile Photo

Close your books, test time!
The evaluation pipelines are out, baselines are released, and the challenge is on.
There is still time to join, and we are excited to learn from you about pretraining and the gaps between humans and models.

*Don't forget to fast-eval on checkpoints
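For readers who want a feel for what a quick zero-shot eval on a checkpoint involves, here is a hand-rolled, BLiMP-style minimal-pair check. It is not the official BabyLM evaluation pipeline, and the checkpoint path is a placeholder; it only illustrates the underlying idea that the model should assign higher probability to the grammatical member of a minimal pair.

# Hand-rolled sketch of a BLiMP-style minimal-pair eval on a checkpoint.
# NOT the official BabyLM evaluation pipeline; the checkpoint path is a
# placeholder for any causal-LM directory saved during training.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

CHECKPOINT = "path/to/checkpoint"  # placeholder, not a real path

tok = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForCausalLM.from_pretrained(CHECKPOINT)
model.eval()

def sentence_logprob(text: str) -> float:
    """Sum of next-token log-probabilities for the whole sentence."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logprobs = torch.log_softmax(logits[:, :-1], dim=-1)  # predict token t+1
    targets = ids[:, 1:].unsqueeze(-1)
    return logprobs.gather(-1, targets).sum().item()

# The model "passes" the pair if the grammatical sentence scores higher.
good = "The cats on the mat are sleeping."
bad = "The cats on the mat is sleeping."
print("pass" if sentence_logprob(good) > sentence_logprob(bad) else "fail")

Running this over many pairs at several checkpoints gives the kind of fast, training-time signal the tweet is pointing at, without waiting for a full evaluation run.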