John Wieting (@johnwieting2) 's Twitter Profile
John Wieting

@johnwieting2

Senior Research Scientist @GoogleDeepMind 🧠. PhD @LTIatCMU.

ID: 4341731914

Joined: 01-12-2015 16:48:47

105 Tweets

548 Followers

298 Following

Jiao Sun (@sunjiao123sun_) 's Twitter Profile Photo

Can LLMs generate exactly 5 words? No. How about 5 sentences? No. How about 5 paragraphs? No 🤷🏻‍♀️ In arxiv.org/abs/2310.14542, we evaluate the performance of LLMs on various controlled generation tasks, including numerical planning, story generation, and paraphrase generation. (1/n)

Benjamin Muller (@ben_mlr) 's Twitter Profile Photo

Excited to be presenting our work on **Evaluating and Modeling Attribution for Cross-Lingual Question Answering** at #EMNLP2023 in Singapore. Updated Paper: arxiv.org/abs/2305.14332 We're also releasing the XOR-AttriQA dataset: github.com/google-researc… 🧵

ACL 2025 (@aclmeeting) 's Twitter Profile Photo

ACL announcement: "The ACL Executive Committee has voted to significantly change ACL's approach to protecting anonymous peer review. The change is effective immediately." (1/4) #NLProc

John Wieting (@johnwieting2) 's Twitter Profile Photo

There are so many accounts (bots?) on X/Twitter posting ChatGPTish responses or the same irrelevant responses across posts. It seems fairly obvious how these could be filtered out. I wonder why nothing is done. I thought it'd improve over time, but it really hasn't.

Nandan Thakur (@beirmug) 's Twitter Profile Photo

Excited to share that SWIM-IR has been accepted at #NAACL2024! 🍻 I'm quite delighted with this work as it was completed during my internship at Google AI! Thanks to all my mentors and colleagues! ❤️ Time to celebrate and hopefully see you all in Mexico! 🇲🇽 🏖️

Yapei Chang (@yapeichang) 's Twitter Profile Photo

Is it possible to have a watermark that reliably detects LLM-generated text, is robust to paraphrasing attacks, preserves quality, and can be applied to any LLM without access to logits? Check out PostMark, a method with all these properties! arxiv.org/abs/2406.14517  🧵below:

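The key property PostMark claims is that the watermark is applied *after* generation, so it needs no access to model logits. As a rough intuition for how a post-hoc scheme of this kind can work, here is a hypothetical toy sketch (a drastic simplification, not PostMark's actual algorithm, and `SECRET_WORDS` is an invented placeholder): the embedder inserts words from a secret list into the finished text, and the detector checks what fraction of that list appears.

```python
import hashlib

# Toy post-hoc watermark sketch. The embedder appends secret marker words
# chosen deterministically from a hash of the text; the detector flags text
# in which enough marker words appear. No model logits are touched.

SECRET_WORDS = ["cobalt", "lattice", "orchard", "quiver", "saffron"]

def embed_watermark(text: str) -> str:
    """Pick marker words from the text's hash and append them."""
    digest = int(hashlib.sha256(text.encode()).hexdigest(), 16)
    chosen = [w for i, w in enumerate(SECRET_WORDS) if (digest >> i) & 1]
    if not chosen:  # guarantee at least one marker word survives
        chosen = [SECRET_WORDS[digest % len(SECRET_WORDS)]]
    return text + " " + " ".join(chosen)

def detect(text: str, threshold: float = 0.2) -> bool:
    """Flag text if enough secret words are present."""
    tokens = set(text.lower().split())
    hits = sum(w in tokens for w in SECRET_WORDS)
    return hits / len(SECRET_WORDS) >= threshold
```

A real scheme like PostMark chooses insertion points and words so that quality is preserved and the signal survives paraphrasing; this sketch only illustrates why logits are unnecessary, since both embedding and detection operate purely on the output text.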