dilara
@dilarafsoylu
member of cooking staff @StanfordNLP
ID: 1485454823281463297
24-01-2022 03:33:37
63 Tweets
288 Followers
1.1K Following
I'm joining Cornell University this fall as an Assistant Professor of Computer Science! Looking forward to working with students and colleagues at Cornell Computer Science, @cornellCIS, on generative models, controllable generation, and creative applications like #musictechnology
Come see #ACL2024's most beautiful poster, being presented by Julie Kallini ✨ right now at poster 7! Refuting Chomsky's NYT Op-Ed with experimental work as an added bonus ✨🌈
Congratulations to Julie Kallini ✨ and coauthors on an #ACL2024 Best Paper Award for their paper Mission: Impossible Language Models!
I'm incredibly proud that Aya received an #ACL2024 Best Paper Award 🥹. Huge congratulations to the Aya team and the Cohere For AI community, who made this possible by extending the frontiers of LLMs to multilingual settings, building the Aya Model and Aya Dataset 🌿🌏
causalgym won an area chair award and an outstanding paper award at ACL 😁 thanks to my very cool advisors Christopher Potts and Dan Jurafsky
.Stanford NLP Group awards at #ACL2024
▸ Best paper award: Julie Kallini ✨ et al.
▸ Outstanding paper award: Aryaman Arora et al.
▸ Outstanding paper award: Weiyan Shi et al.
▸ Best societal impact award: Weiyan Shi et al.
▸ 10-year test of time award: Christopher Manning et al.
Congratulations! 🥂
Honored to receive a Best Paper Award at #ACL2024 for "Mission: Impossible Language Models"! 🚀 Big thank you to my co-authors Isabel Papadimitriou, Richard Futrell, Kyle Mahowald, and Christopher Potts!
Some personal news: I'm thrilled to have joined @Databricks Mosaic Research as a Research Scientist last month, before I start as MIT faculty in July 2025! Expect increased investment in the open-source DSPy community, new research, & a strong emphasis on production concerns 🧵.
The Linear Representation Hypothesis is now widely adopted despite its highly restrictive nature. Here, Csordás Róbert, Atticus Geiger, Christopher Manning & I present a counterexample to the LRH and argue for more expressive theories of interpretability: arxiv.org/abs/2408.10920