
Miles Williams
@miles_wil
PhD student at @SheffieldNLP
ID: 1511078683720794113
https://github.com/mlsw 04-04-2022 20:30:21
11 Tweets
127 Followers
221 Following



The generative AI race has a dirty secret. Dr Nafise Sadat Moosavi, from Sheffield Comp Sci, comments on the need to make large language models like the ones used by Google and Microsoft more efficient. wired.co.uk/article/the-ge…


Pruning Parameters = Pruning Hallucinations! Our latest paper reveals the sweet spot: up to 50% pruned, the more you prune, the lower the hallucination risk. It's a buy one get one free for #LLMs. Nikos Aletras George Chrysostomou Miles Williams arxiv.org/abs/2311.09335
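
As a rough illustration of the kind of pruning discussed above, here is a minimal magnitude-pruning sketch in PyTorch that zeroes out 50% of a layer's weights. This is an assumed, generic pruning recipe for illustration only, not the specific method evaluated in the paper.

# Minimal magnitude-pruning sketch (illustrative only; not the exact method
# from the paper, which studies hallucination in pruned LLMs).
import torch
import torch.nn as nn

def magnitude_prune_(linear: nn.Linear, sparsity: float = 0.5) -> None:
    """Zero out the `sparsity` fraction of weights with the smallest magnitude."""
    weight = linear.weight.data
    k = int(weight.numel() * sparsity)
    if k == 0:
        return
    # Threshold = k-th smallest absolute weight value.
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = weight.abs() > threshold
    weight.mul_(mask)

# Example: prune every linear layer of a toy model to 50% sparsity.
model = nn.Sequential(nn.Linear(768, 3072), nn.GELU(), nn.Linear(3072, 768))
for module in model.modules():
    if isinstance(module, nn.Linear):
        magnitude_prune_(module, sparsity=0.5)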

#PhDstudentship available in #LLMs at Sheffield NLP (Sheffield Comp Sci), one of the UK's largest #NLP research centres. * 3.5yrs tuition waiver & stipend * knowledge editing, model compression, or your proposal! DM for details! Apply now: shorturl.at/xCM36 #PhD #AI #PhDposition


Job opportunity Sheffield NLP: I'm looking for a #postdoc (24 months) in #NLProc. The (very) broad topic is addressing LLM limitations (e.g. hallucinations, "reasoning", interpretability, etc.). If you are interested, drop me an email or DM. Apply: jobs.ac.uk/job/DHP918/res…



Synthetic calibration data (for pruning and quantization) generated by the LLM itself is a better approximation of the pre-training data distribution than "external" data. Really cool work by Miles (Miles Williams) and George (George Chrysostomou) to be presented at #NAACL2025. Link to the paper below.
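
To make the self-calibration idea concrete, below is a small illustrative sketch of sampling calibration sequences from the model itself with Hugging Face transformers. The model name, sample count, sequence length, and decoding settings are assumptions for illustration, not the exact recipe from the paper.

# Illustrative "self-calibration" sketch: sample calibration sequences from
# the model being compressed instead of using an external corpus.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for the LLM to be pruned/quantized
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def generate_calibration_set(num_samples: int = 128, seq_len: int = 512):
    """Sample sequences from the model itself to use as calibration data."""
    samples = []
    bos = torch.tensor([[tokenizer.bos_token_id]])
    for _ in range(num_samples):
        out = model.generate(
            bos,
            max_length=seq_len,
            do_sample=True,   # stochastic sampling approximates the model's
            top_p=0.95,       # own output distribution
            pad_token_id=tokenizer.eos_token_id,
        )
        samples.append(out)
    return samples

calibration_data = generate_calibration_set()
# These samples would then feed a calibration-based pruning or quantization
# method, e.g. one that collects activation statistics layer by layer.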


Gutted to miss #NAACL2025, but Miles Williams will be there presenting the following papers: Main: Self-Calibration for Language Model Quantization and Pruning; RepL4NLP: Vocabulary-Level Memory Efficiency for LM Fine-Tuning. Check them out!