meetNLP - for Hamburg and the whole wide world (@hamburgnlp)'s Twitter Profile
meetNLP - for Hamburg and the whole wide world

@hamburgnlp

This Meetup group is a forum for everyone interested in Natural Language Processing (NLP) and related fields.

ID: 1301466689066934272

Joined: 03-09-2020 10:27:04

8 Tweets

49 Followers

64 Following

Google AI (@googleai):

Today we describe a #NaturalLanguageProcessing model that achieves near BERT-level performance on text classification tasks, while using orders of magnitude fewer model parameters. Learn all about it below: goo.gle/2FRyHqx

meetNLP - for Hamburg and the whole wide world (@hamburgnlp):

Are you enthusiastic about Natural Language Processing? Are you interested in hearing the latest findings from #NLP experts, discussing possible applications with the community, or just learning more about #NLP? Then take part in our #Meetup: meetup.com/de-DE/Hamburg-…

meetNLP - for Hamburg and the whole wide world (@hamburgnlp):

It's Meetup day! Anyone interested in NLP - please join us tonight for the 5th edition of the Hamburg Natural Language Processing Meetup starting at 18:30 CEST. Info on tonight's amazing speakers below. ⬇️⬇️ Link: meetu.ps/e/JPLw6/KkSxX/i

meetNLP - for Hamburg and the whole wide world (@hamburgnlp):

meetNLP's 6th edition is here. 29th June. 18:30 CEST. Online. NLP and beyond is this edition's topic. Tune in for talks on intellectual property & other key legal issues in NLP and what language models could learn from neuroscience. 👏 Sign up here: lnkd.in/dVx8-X5

Ofir Press (@ofirpress):

Since Transformer LMs were invented, we've wanted them to be able to read longer inputs during inference than they saw during training. Our Attention with Linear Biases enables this, in very few lines of code, without extra parameters or added runtime ofir.io/train_short_te… 🧵⬇

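For readers curious how the linked method works, here is a minimal PyTorch sketch of the core ALiBi idea (not the authors' reference code; function names and shapes are illustrative). Each attention head adds a penalty to its pre-softmax scores that grows linearly with the query-key distance, replacing positional embeddings entirely.

import torch

def alibi_slopes(n_heads: int) -> torch.Tensor:
    # Head-specific slopes form a geometric sequence: 2^(-8/n), 2^(-16/n), ...
    # (the paper's recipe for power-of-two head counts, simplified here).
    start = 2.0 ** (-8.0 / n_heads)
    return start ** torch.arange(1, n_heads + 1)

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    # bias[h, i, j] = -slope_h * (i - j) for past positions j <= i;
    # future positions are masked to -inf (causal attention).
    pos = torch.arange(seq_len)
    dist = pos[None, :] - pos[:, None]                  # dist[i, j] = j - i
    bias = alibi_slopes(n_heads)[:, None, None] * dist  # <= 0 for j <= i
    future = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
    return bias.masked_fill(future, float("-inf"))

def attention_with_alibi(q, k, v):
    # q, k, v: (batch, heads, seq, head_dim). Standard scaled dot-product
    # attention plus the ALiBi bias; no positional embeddings anywhere.
    b, h, n, d = q.shape
    scores = q @ k.transpose(-2, -1) / d ** 0.5
    scores = scores + alibi_bias(h, n).to(scores.dtype)
    return torch.softmax(scores, dim=-1) @ v

# Illustrative usage:
q = k = v = torch.randn(1, 8, 16, 64)
out = attention_with_alibi(q, k, v)  # shape (1, 8, 16, 64)

Because the bias depends only on relative distance, the same model can attend over sequences longer than those seen during training, which is the extrapolation property the tweet describes.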