
rachneet
@rachneet4
Ph.D. Candidate @UKPLab @TUDarmstadt | NLP researcher | Explainable AI | Safe AI | rachneet.github.io
ID: 939805826
10-11-2012 19:54:45
77 Tweets
155 Followers
214 Following

We are proud to announce that the contribution »Sensitivity, Performance, Robustness: Deconstructing the Effect of Sociodemographic Prompting« by Tilman Beck, Hendrik Schuff, Anne Lauscher (she/her) (Universität Hamburg) and Iryna Gurevych (UKP Lab) has just been awarded the #EACL2024 Social Impact Award!

Happening tomorrow at 9 AM at the MWE-UD Workshop @ LREC-COLING! Harish will deliver the first keynote, on the semantics and reasoning of LLMs 🤖📖

»Are Emergent Abilities in Large Language Models just In-Context Learning?« by Sheng Lu (UKP Lab), Irina Bigoulaeva, Rachneet Singh Sachdeva (@Rachneet4), Harish Tayyar Madabushi (BathNLP), and Iryna Gurevych (8/🧵) #ACL2024NLP (arXiv coming soon) x.com/UKPLab/status/…


If you’re at ACL and interested in learning more about emergent abilities in LLMs, please meet my amazing colleagues: Bob, Irina Bigoulaeva, and Harish Tayyar Madabushi.

The hardest part about fine-tuning LLMs is that people generally don't have high-quality labeled data. Today, Databricks introduced TAO, a new fine-tuning method that needs only inputs, no labels necessary. Best of all, it actually beats supervised fine-tuning on labeled data.
