MIT NLP Group (@mitnlp)'s Twitter Profile
MIT NLP Group

@mitnlp

MIT Natural Language Processing group.

ID: 908046666980306947

Website: http://nlp.csail.mit.edu · Joined: 13-09-2017 19:16:10

29 Tweets

904 Followers

22 Following

David Alvarez Melis (@elmelis)'s Twitter Profile Photo

Our paper: "Gromov-Wasserstein Alignment of Word Embedding Spaces" is now available (arxiv.org/abs/1809.00013). TL;DR: The Gromov-Wasserstein distance provides a simple, principled objective to align (w/o supervision) word embedding spaces, even of different dimensionality!
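The objective the tweet refers to can be sketched concretely. Below is a toy numpy illustration (not the paper's optimization procedure; the point clouds and variable names are made up) of the Gromov-Wasserstein discrepancy: it compares only intra-space distances, so a coupling that matches corresponding points incurs near-zero cost even when the two spaces have different dimensionality.

```python
import numpy as np

def pairwise_dists(X):
    """Euclidean distance matrix of a point cloud (one row per point)."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def gw_cost(D1, D2, T):
    """Gromov-Wasserstein discrepancy of a coupling T between two spaces,
    given only their intra-space distance matrices D1 (n x n) and D2 (m x m):
    sum_{i,j,k,l} (D1[i,k] - D2[j,l])^2 * T[i,j] * T[k,l]."""
    diff = D1[:, None, :, None] - D2[None, :, None, :]
    return float((diff ** 2 * T[:, :, None, None] * T[None, None, :, :]).sum())

# Toy "embedding spaces": the second is a permuted, rotated copy of the
# first, padded to a different dimensionality (distances are unchanged).
X = np.array([[0., 0.], [1., 0.], [0., 2.], [3., 1.], [2., 3.], [4., 0.]])
perm = np.array([2, 0, 3, 1, 5, 4])
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Y = np.hstack([X[perm] @ R.T, np.zeros((len(X), 1))])  # now 3-D

D1, D2 = pairwise_dists(X), pairwise_dists(Y)
n = len(X)

# Coupling encoding the true correspondence: X[perm[j]] <-> Y[j].
T_true = np.zeros((n, n))
T_true[perm, np.arange(n)] = 1.0 / n
T_id = np.eye(n) / n  # a wrong matching, for contrast

print(gw_cost(D1, D2, T_true))  # ~0: the true alignment has no distortion
print(gw_cost(D1, D2, T_id))    # > 0: a mismatched coupling distorts distances
```

Since only distance matrices enter the objective, no shared coordinate system (or supervision) is needed; the paper optimizes over couplings T, which this sketch deliberately leaves out.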
Yujie Qian (@yujie_qian)'s Twitter Profile Photo

Our paper "GraphIE: A Graph-Based Framework for Information Extraction" has been accepted to #NAACL2019. We study how to model the graph structure of the data in various IE tasks. Joint work with @esantus, Jiang Guo, Zhijing Jin, and Regina Barzilay. (arxiv.org/abs/1810.13083)

MRQA Workshop (@mrqa_workshop)'s Twitter Profile Photo

Development datasets released! 6 in-domain and 6 out-of-domain, including BioASQ, DROP, DuoRC, RACE, RelationExtraction, TextbookQA! Also released BERT baseline results. All the information is at github.com/mrqa/MRQA-Shar…. Check it out and let us know if you have questions! #mrqa2019

Tal Schuster (@talschuster)'s Twitter Profile Photo

Our #emnlp2019 paper is now on arXiv: arxiv.org/abs/1908.05267

* Extending #FEVER (fact-checking) eval dataset to eliminate bias.
* Regularizing the training to alleviate the bias.

Coauthors: Darsh Shah, Serene Yeo, Daniel Filizzola, @esantus, Regina Barzilay

@emnlp2019 #nlproc
YujiaBao (@yujia_bao)'s Twitter Profile Photo

Few-shot Text Classification with Distributional Signatures. What happens if you take meta-learning for vision and apply it to NLP? Prototypical Networks with lexical features perform worse than nearest neighbors on new classes. How can we do better? ;)
arxiv.org/abs/1908.06039
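For readers unfamiliar with the two baselines the tweet compares, here is a toy numpy sketch (made-up 2-D "embeddings"; not the paper's method, features, or datasets) of the prototypical-network and nearest-neighbor decision rules on a single 2-way, 2-shot episode:

```python
import numpy as np

def nn_predict(support_emb, support_lab, query_emb):
    """1-nearest-neighbor: each query takes the label of its closest support example."""
    dists = np.linalg.norm(support_emb[None, :, :] - query_emb[:, None, :], axis=-1)
    return support_lab[dists.argmin(axis=1)]

def proto_predict(support_emb, support_lab, query_emb):
    """Prototypical-network style: each query takes the label of the closest class mean."""
    classes = np.unique(support_lab)
    protos = np.stack([support_emb[support_lab == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(protos[None, :, :] - query_emb[:, None, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# A tiny episode with invented embeddings: two support points per class.
support = np.array([[1.0, 0.0], [0.9, 0.2], [0.0, 1.0], [0.1, 0.9]])
labels = np.array([0, 0, 1, 1])
queries = np.array([[0.8, 0.1], [0.2, 0.8]])

print(nn_predict(support, labels, queries))     # [0 1]
print(proto_predict(support, labels, queries))  # [0 1]
```

On this toy data the two rules agree; the paper's observation is that with lexical features on real few-shot text tasks they can diverge, with nearest neighbors ahead.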
Tal Schuster (@talschuster)'s Twitter Profile Photo

Are we protected from GPT-2- or #GROVER-style models generating fake content?
What happens if they are also used legitimately as writing assistants?
Check our new report: arxiv.org/abs/1908.09805
with Roei Schuster, @Darsh71307636, Regina Barzilay.
#NLProc #emnlp2019 #FakeNews #GPT2
Darsh J Shah (@darshj_shah)'s Twitter Profile Photo

Check out our new paper (arxiv.org/pdf/1909.13838…): "Automatic Fact-Guided Sentence Modification", a method to automatically modify the factual information in a sentence. Joint work with old account, Prof. Regina Barzilay.

MIT NLP Group (@mitnlp)'s Twitter Profile Photo

If you're at @emnlp2019, don't miss our talks:

Towards Debiasing Fact Verification Models
* Wednesday 15:42 (2B)
* Tal Schuster, Darsh J Shah

Working Hard or Hardly Working: Challenges of Integrating Typology into Neural Dependency Parsers
* Thursday 15:30 (201A)
* Adam Fisch

Shiyu Chang (@codeterminator)'s Twitter Profile Photo

#NeurIPS2019 Our work with MIT improves the interpretability of NLP models with an adversarial class-wise rationalization technique, which can find explanations towards any given class. Poster: Tue @ East Exhibition Hall B + C #1. MIT-IBM Watson AI Lab, David Cox, MIT CSAIL, Mo Yu

Tal Schuster (@talschuster)'s Twitter Profile Photo

In our IEEE S&P paper, led by Roei Schuster, we control the embeddings of words by introducing minimal changes to the pretraining data (e.g., #Wiki edits). This #word_embeddings attack affects many downstream #NLProc tasks! Cornell Tech, MIT CSAIL. Link: arxiv.org/abs/2001.04935

Tal Schuster (@talschuster)'s Twitter Profile Photo

Is your Fact Verification model robust enough? Consider adding #VitaminC 🍊

Check out our new #NAACL2021 paper: "Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence" with Adam Fisch and Regina Barzilay

🔗 arxiv.org/abs/2103.08541

#NLProc #FakeNews📰
🧵1/N
Adam Fisch (@adamjfisch)'s Twitter Profile Photo

New #NAACL2021 paper out on robust fact verification. Sources like Wikipedia are continuously edited with the latest information. In order to keep up, our models need to be sensitive to these changes in evidence when verifying claims. Work with Tal Schuster and Regina Barzilay!

Tal Schuster (@talschuster)'s Twitter Profile Photo

New preprint with Adam Fisch, T. Jaakkola, and Regina Barzilay. We present Consistent Accelerated Inference via 𝐂onfident 𝐀daptive 𝐓ransformers (CATs)

CATs can speed up inference 😺 while guaranteeing consistency 😼. The code is available🙀
🔗people.csail.mit.edu/tals/static/Co…

#NLProc
Adam Fisch (@adamjfisch)'s Twitter Profile Photo

Large pre-trained Transformers are great, but expensive to run. But making them more efficient (e.g., early exits) can give undesirable performance hits. In our new work, we speed up inference while guaranteeing consistency with the original model up to a specifiable tolerance.
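A schematic numpy sketch of the general idea, simplified to a single early-exit layer and an empirical agreement rate on held-out data. This is an illustrative stand-in, not the paper's calibration procedure, and all names and numbers here are invented:

```python
import numpy as np

def calibrate_exit_threshold(conf_cal, agree_cal, tolerance):
    """Pick the smallest confidence threshold such that, among calibration
    examples that would exit early (confidence >= threshold), the early
    prediction agrees with the full model's at rate >= 1 - tolerance.

    conf_cal  : early-layer confidence per calibration example
    agree_cal : 1 if early prediction == full-model prediction, else 0
    """
    order = np.argsort(-conf_cal)            # most confident first
    agree = agree_cal[order].astype(float)
    rates = np.cumsum(agree) / np.arange(1, len(agree) + 1)
    ok = np.where(rates >= 1.0 - tolerance)[0]
    if len(ok) == 0:
        return np.inf                        # never exit early
    k = ok.max()                             # largest exit set that still agrees enough
    return conf_cal[order][k]

# Made-up calibration data: one early layer vs. the full model.
conf = np.array([0.99, 0.95, 0.90, 0.80, 0.70, 0.60])
agree = np.array([1, 1, 1, 0, 1, 1])

tau = calibrate_exit_threshold(conf, agree, tolerance=0.25)
print(tau)  # 0.6: all six examples may exit; 5/6 agreement meets the 75% target
```

A lower tolerance raises the threshold, so fewer inputs exit early but early predictions agree with the full model more often; that is the speed/consistency trade-off the tweet describes.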

YujiaBao (@yujia_bao)'s Twitter Profile Photo

Need to debias your new task? Learn how, from your old one.

Check out our #ICML2022 paper "Learning Stable Classifiers by Transferring Unstable Features" with <a href="/CodeTerminator/">Shiyu Chang</a> and <a href="/BarzilayRegina/">Regina Barzilay</a> 

Paper -> arxiv.org/abs/2106.07847
Code -> github.com/YujiaBao/tofu
Anastasios Nikolas Angelopoulos (@ml_angelopoulos)'s Twitter Profile Photo

I’m thrilled to announce Conformal Risk Control: a way to bound quantities other than coverage with conformal prediction.

arxiv.org/abs/2208.02814

Check out the worked examples in CV and NLP!

The best part is: it’s exactly the same algorithm as split conformal prediction🤯🧵1/5
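Since the thread notes the algorithm is exactly split conformal prediction, here is a minimal numpy sketch of that recipe for regression (the residuals are hypothetical, and `split_conformal_qhat` is an illustrative name, not from the paper):

```python
import numpy as np

def split_conformal_qhat(residuals, alpha):
    """Split conformal prediction: return the ceil((n+1)(1-alpha))-th
    smallest calibration score. Intervals yhat +/- qhat then cover the
    true value with probability >= 1 - alpha, assuming exchangeability."""
    n = len(residuals)
    k = int(np.ceil((n + 1) * (1 - alpha)))  # rank of the conformal quantile
    return np.sort(residuals)[k - 1]

# Hypothetical calibration residuals |y_i - model(x_i)| for 10 held-out points.
scores = np.arange(1, 11) / 10.0            # 0.1, 0.2, ..., 1.0
qhat = split_conformal_qhat(scores, alpha=0.2)
print(qhat)  # 0.9: the ceil(11 * 0.8) = 9th smallest score

# Prediction interval for a new point with model output yhat = 2.5:
yhat = 2.5
print(yhat - qhat, yhat + qhat)             # symmetric interval around yhat
```

Conformal risk control keeps this calibrate-then-threshold structure but, per the paper, lets the controlled quantity be a general monotone risk rather than miscoverage.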