Liang Ding
@liangdingnlp
NLP/ML Researcher (developing LLMs and exploring their applications) & Ex-@JD_Corporate @TencentGlobal @Sydney_Uni. Opinions are my own.
ID: 1056872928631877636
http://liamding.cc 29-10-2018 11:38:36
386 Tweets
604 Followers
1.1K Following
Nice work by Minghao Wu! Mimicking human expert behaviours in specific tasks (like last year, when we encouraged LLMs to evaluate translations in a human-like way by analyzing errors; see x.com/liangdingNLP/s…) and implementing them with carefully designed LLM agents is a
Thanks so much for your post, Aran Komatsuzaki! In this work, we present a 🔥zero-shot prompting strategy🔥 to enhance 🔥math & reasoning🔥 performance, pushing the SOTA performance of several of the best LLMs.
Liang Ding Indeed! Well, you never know what becomes relevant. I believe diffusion models and multimodal tokenization techniques are forms of non-autoregressive generation as well. Speculative decoding reminds me of techniques that came up in this context too. Let's keep moving and building models!