hyunji amy lee (@hyunji_amy_lee)

Where does generative retrieval have a significant advantage over bi-encoder retrieval? Our #EMNLP2022 paper 'Generative Multi-hop Retrieval' shows that the answer is 🦘multi-hop🦘 retrieval! (2.5x higher score when the # of hops is large, and the oracle # of hops is unknown) 1/9
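
The multi-hop loop the tweet contrasts with bi-encoder retrieval can be sketched as follows: each hop's query is conditioned on everything retrieved so far, rather than scored independently. `score` is a stand-in for a learned relevance model, and the corpus and query below are hypothetical toy data, not from the paper.

```python
def multi_hop_retrieve(query, corpus, score, hops=2):
    """Greedy multi-hop retrieval: each hop conditions on prior results."""
    context, retrieved = query, []
    for _ in range(hops):
        candidates = [d for d in corpus if d not in retrieved]
        best = max(candidates, key=lambda d: score(context, d))
        retrieved.append(best)
        context = context + " " + best  # condition the next hop on this result
    return retrieved

def overlap(ctx, doc):
    # toy relevance: number of shared lowercase words
    return len(set(ctx.lower().split()) & set(doc.lower().split()))

corpus = [
    "the sequel to Film X is Film Y",
    "Film Y was directed by Jane Doe",
    "Film X stars John Roe",
]
hops = multi_hop_retrieve("who directed the sequel to Film X", corpus, overlap)
```

The second hop only finds the "directed by" document because the first hop's result ("Film Y") has been folded into the query, which is exactly where a bi-encoder scoring each document against the original query alone struggles.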

Yixiao Song (@yixiao_song)

How much Chinese linguistic knowledge do large pretrained language models encode? SLING (to appear at #EMNLP2022) investigates this question and presents a high-quality dataset with 38K minimal pairs covering 9 Chinese linguistic phenomena. (1/6)

📄Paper: arxiv.org/abs/2210.11689
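
The minimal-pair setup can be sketched in a few lines: a model is counted correct on a pair when it assigns a higher log-probability to the acceptable sentence than to the minimally different unacceptable one. The scores below are hypothetical; in practice they would come from a pretrained (Chinese) LM.

```python
def minimal_pair_accuracy(pairs):
    """pairs: list of (logprob_acceptable, logprob_unacceptable) tuples."""
    correct = sum(1 for good, bad in pairs if good > bad)
    return correct / len(pairs)

# hypothetical per-sentence log-probabilities for three minimal pairs
pairs = [(-12.3, -15.1), (-20.0, -19.5), (-8.7, -9.9)]
acc = minimal_pair_accuracy(pairs)  # 2 of the 3 pairs are scored correctly
```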

Huiyin Xue (@HuiyinXue)

👏Happy to announce that our paper 'HashFormers: Towards Vocabulary-independent Pre-trained Transformers' got accepted at #EMNLP2022. All thanks to my favourite supervisor @nikaletras🙏🍺

Paper👀👇: arxiv.org/abs/2210.07904.
[1/k]
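
A minimal sketch of the vocabulary-independent idea: hash each token string into a fixed number of embedding buckets instead of looking it up in a fixed vocabulary. The bucket count and hash function here are assumptions for illustration, not the paper's exact scheme.

```python
import hashlib

NUM_BUCKETS = 4096  # embedding table size, independent of any vocabulary

def token_to_bucket(token: str) -> int:
    """Map an arbitrary token string to a stable embedding-bucket id."""
    digest = hashlib.md5(token.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_BUCKETS

# works for any token, including ones never seen during pretraining
ids = [token_to_bucket(t) for t in "any token even unseen-ones".split()]
```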

Yucheng Li (@liyucheng_2)

Mind-blowing paper I came across at #EMNLP2022. I highly recommend it. The idea is so cool that I couldn't help but check out the code right after chatting with the cool author @oren_sultan!
#EMNLP2022livetweet

Han Guo (@HanGuo97)

While I'm not at #EMNLP2022, we have two works on the intersection of RL + NLP.

RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning
(arxiv.org/abs/2205.12548)

Efficient (Soft) Q-Learning for Text Generation with Limited Good Data
(arxiv.org/abs/2106.07704)

Mingkai Deng (@mdeng34)

#EMNLP2022 RLPrompt uses Reinforcement Learning to optimize *discrete* prompts for any LM

w/ many nice properties:
* prompts transfer trivially across LMs
* gradient-free for LM
* strong improvements vs manual/soft prompts

Paper arxiv.org/abs/2205.12548
Code github.com/mingkaid/rl-pr…
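
The "gradient-free for LM" property can be illustrated with a toy REINFORCE loop: treat the discrete prompt as the action of a small policy and update the policy from task reward alone, never backpropagating through the (frozen) LM. The single-token "prompt", vocabulary, and reward values below are all hypothetical stand-ins for a real downstream metric.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["great", "terrible", "movie", "review"]
logits = np.zeros(len(vocab))                 # policy over one prompt token
rewards = np.array([1.0, 0.1, 0.4, 0.3])      # pretend task reward per token

lr = 0.5
for _ in range(300):
    probs = np.exp(logits) / np.exp(logits).sum()
    i = rng.choice(len(vocab), p=probs)       # sample a candidate prompt
    advantage = rewards[i] - probs @ rewards  # reward minus expected reward
    grad = -probs.copy()
    grad[i] += 1.0                            # d log p(i) / d logits
    logits += lr * advantage * grad           # no gradient through the LM

best_prompt = vocab[int(np.argmax(logits))]
```

Because only the small policy is updated, the same loop works with any black-box LM behind the reward function, which is also why the learned prompts can be tried on other LMs.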

Swaroop Mishra (@Swarooprm7)

Excited about EMNLP 2022 in-person this week! 
Here's a summary of my posters (thanks to all my collaborators). Please DM if you have any questions or just want to chat.
#EMNLP2022 #NLProc

Nazneen Rajani (@nazneenrajani)

Our paper on Systematic Error Analysis and Labeling (SEAL) 🦭 has been accepted at EMNLP demo track 🎉

Problem: How can we help users find systematic bugs in their models?

E.g., an image classification model on low-light images, or a sentiment classifier on gym reviews

#emnlp2022
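
The error-grouping idea can be sketched by slicing evaluation data and surfacing slices with unusually high error rates. SEAL itself discovers groups by analyzing high-loss examples rather than relying on hand-picked labels; the `feature` tags below are hypothetical.

```python
from collections import defaultdict

def error_slices(examples, min_rate=0.5):
    """examples: list of (feature, is_error); returns high-error slices."""
    groups = defaultdict(list)
    for feature, is_error in examples:
        groups[feature].append(is_error)
    return {f: sum(errs) / len(errs) for f, errs in groups.items()
            if sum(errs) / len(errs) >= min_rate}

# hypothetical eval results for an image classifier
examples = [("low_light", True), ("low_light", True), ("low_light", False),
            ("daylight", False), ("daylight", False), ("daylight", True)]
bugs = error_slices(examples)  # flags the low-light slice as a systematic bug
```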

Jiacheng Liu (Gary) (@liujc1998)

Can LMs introspect the commonsense knowledge that underpins the reasoning of QA?

In our #EMNLP2022 paper, we show that relatively small models (<< GPT-3), trained with RLMF (RL with Model Feedback), can generate natural language knowledge that bridges reasoning gaps. ⚛️

(1/n)

Haoran Xu (@fe1ixxu)

How easy is it to get big gains for your model? Just run the model multiple times and minimize the difference between the passes! The secret is more balanced parameter contribution. Check out our #EMNLP2022 paper 'The Importance of Being Parameters: An Intra-Distillation Method for Serious Gains'!

Nayeon Kim (@Koo_ony)

Finally, I met Yejin Choi at the conference! Thank you for accepting the photo request. I met her at the Ritz-Carlton with a view of the beautiful mosque last night. I was so honored that I couldn't sleep! As a Korean, I am proud of her! @YejinChoinka #EMNLP2022

Leonie Weissweiler (@LAWeissweiler)

I'll be presenting 'The Better Your Syntax, the Better Your Semantics?' at 11am Sunday in the poster session at #EMNLP2022! Stop by if you want to find out how Construction Grammar can contribute to LM probing. #NLProc
📄arxiv.org/abs/2210.13181
🎥 underline.io/events/342/ses…

Negar Foroutan (@negarforoutan)

Do multilingual language models (MultiLMs) learn different languages using the same subset of parameters? In our #EMNLP2022 #NLProc paper, we show that MultiLMs are composed of language-neutral representations that jointly encode multiple languages.

Hongxin Zhang (@icefox1104)

Demonstrations composed of RANDOM tokens can still work? YES!

In our #EMNLP2022 paper (w/@StevenyzZhang,@Diyi_Yang,@RoyZhang13), we design pathological demonstrations to investigate “Robustness of Demonstration-based Learning Under Limited Data Scenario” arxiv.org/abs/2210.10693

Eric (@ericmitchellai)

🧵Language models have an unfortunate tendency to contradict themselves.

Our #emnlp2022 oral presents Consistency Correction w/ Relation Detection (ConCoRD), which overrides low-confidence LM predictions to boost self-consistency & accuracy.

Paper/code: ericmitchell.ai/emnlp-2022-con…
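
A heavily simplified sketch of the override step: when two yes/no predictions contradict each other, flip the lower-confidence one. ConCoRD itself formulates this as a MaxSAT problem over NLI-detected relations; the questions, confidences, and pairwise resolution rule below are hypothetical simplifications.

```python
def flip(ans):
    return "no" if ans == "yes" else "yes"

def resolve(preds, contradictions):
    """preds: {question: (answer, confidence)};
    contradictions: (q1, a1, q2, a2) pairs that cannot both hold."""
    preds = dict(preds)
    for q1, a1, q2, a2 in contradictions:
        if preds[q1][0] == a1 and preds[q2][0] == a2:
            # override the lower-confidence member of the contradicting pair
            weaker = q1 if preds[q1][1] < preds[q2][1] else q2
            ans, conf = preds[weaker]
            preds[weaker] = (flip(ans), conf)
    return preds

preds = {"Is a sparrow a bird?": ("yes", 0.92),
         "Can a sparrow fly?": ("no", 0.55)}
contradictions = [("Is a sparrow a bird?", "yes", "Can a sparrow fly?", "no")]
fixed = resolve(preds, contradictions)  # the low-confidence "no" is flipped
```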

Sachin Kumar (@shocheen)

Super excited to introduce our #EMNLP2022 paper!

MuCoLa: Gradient-based Constrained Sampling from Language Models.

With @biswajitsc and Yulia Tsvetkov @tsvetshop

arxiv.org/abs/2205.12558 (1/7)