NLPurr (@nlpurr)'s Twitter Profile
NLPurr

@nlpurr

SciComm of Academic NLP Papers | Research Scientist | Explainability, Prompting, Benchmarking, Metrics, Red-Teaming & Eval of LLMs

ID: 1543673611671638016

Link: https://nlpurr.github.io/ · Joined: 03-07-2022 19:13:15

936 Tweets

1.1K Followers

744 Following

NLPurr (@nlpurr)'s Twitter Profile Photo

If you keep your DMs open to everyone (as message requests), please note that twitter has changed the default option to requests from verified users only+people you follow. Make sure to shift it to the last option (if you did not keep it at the first option previously).

Alex Gu @ iclr (@minimario1729)'s Twitter Profile Photo

researchers should start making "collaborate with me" docs the same way people make "date me" docs (and no, neither a resume nor a CV counts)

Preetum Nakkiran (@preetumnakkiran)'s Twitter Profile Photo

careful about overfitting to lists like this. there are many ways to do good research -- my fav papers were born out of getting "stuck in rabbit holes" that no-one else went down...

David Pfau (@pfau)'s Twitter Profile Photo

Scientific work which cannot be replicated is failed scientific work. Work using closed methods that don't even allow the possibility of replication should be treated as marketing rather than science. Scientists who publish said work should have their reputations suffer.

David Pfau (@pfau)'s Twitter Profile Photo

These takes are going to reverse polarize me into being a Google defender. Do people just forget that Bard exists and was shipped to the public in like March?

Sasha Rush (@srush_nlp)'s Twitter Profile Photo

I attended a Google-hosted workshop today. Workshops like these are a great chance to spread their work. I enjoyed the talks immensely. However, for whatever reason, this was the gender breakdown. I'm posting because I think it's important that people know these statistics.

Yann LeCun (@ylecun)'s Twitter Profile Photo

Jim Fan Richard Sutton Animals and humans get very smart very quickly with vastly smaller amounts of training data. My money is on new architectures that would learn as efficiently as animals and humans. Using more data (synthetic or not) is a temporary stopgap made necessary by the limitations of our

Matthew Leavitt (@leavittron)'s Twitter Profile Photo

The next 10x in deep learning efficiency gains are going to come from intelligent intervention on training data. But tools for automated data curation at scale didn’t exist—until now. I’m so excited to announce that I’ve co-founded @DatologyAI, with Ari Morcos and Bogdan Gaza

Graham Neubig (@gneubig)'s Twitter Profile Photo

Researchers often have to ask for recommendation letters for visa/job applications, etc. I wrote a script that allows you to find who cites your papers frequently to create a list of potential letter writers: github.com/neubig/researc… Hope it's helpful, improvements are welcome!
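The core of such a script is simple frequency counting over citing papers. Below is a minimal sketch of that counting step only, assuming the citation records have already been fetched (e.g. from a citation API) into a list of dicts with an "authors" field; the function name `frequent_citers` and the data shape are illustrative assumptions, not the actual script at the link above.

```python
from collections import Counter

def frequent_citers(citing_papers, own_names, top_n=5):
    """Rank authors by how many of their papers cite you.

    citing_papers: list of dicts, each with an "authors" list of name strings.
    own_names: set of your own name variants, excluded from the ranking.
    """
    counts = Counter()
    for paper in citing_papers:
        # Count each author at most once per citing paper,
        # so a long author list does not inflate the tally.
        for name in set(paper.get("authors", [])):
            if name not in own_names:
                counts[name] += 1
    return counts.most_common(top_n)

# Toy example with three citing papers:
papers = [
    {"authors": ["A. Smith", "B. Jones"]},
    {"authors": ["A. Smith", "C. Lee"]},
    {"authors": ["A. Smith"]},
]
print(frequent_citers(papers, {"Me"}, top_n=1))  # [('A. Smith', 3)]
```

In practice you would also normalize name variants before counting, since the same author can appear under slightly different spellings across venues.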

David Pfau (@pfau)'s Twitter Profile Photo

OK, this is probably going to raise more questions than it answers, but I just want to put this out here so that no one ever says "we can just get around the data limitations of LLMs with self-play" ever again.

Sam Bowman (@sleepinyourhat)'s Twitter Profile Photo

📰 Excited to see this go out! 📰 LLMs generalize from succeeding at mundane opportunities for reward-seeking to pursuing more concerning ones.

Nathan Lambert (@natolambert)'s Twitter Profile Photo

On data-centric vs. algorithm-centric RLHF work: This year we've had two major projects for our state-of-the-art post-training pipelines (Tulu 2.5 and 3 soon) at Ai2. One has been more data-focused and one was focused on trying to get performance from PPO. It's amazing

Hila Gonen (@hila_gonen)'s Twitter Profile Photo

Do you like yellow? Then, according to LLMs, you are probably a school bus driver! Excited to share our new paper about Semantic Leakage in Language Models! Joint work with my wonderful collaborators @terra Alisa Liu luke Noah A. Smith Paper: gonenhila.github.io/files/Semantic… 1/10

Felix Hill (@felixhill84)'s Twitter Profile Photo

Do you work in AI? Do you find things uniquely stressful right now, like never before? Have you ever suffered from a mental illness? Read my personal experience of those challenges here: docs.google.com/document/d/1aE…