Bin Wu (@binwu_cs)'s Twitter Profile
Bin Wu

@binwu_cs

PhD Student at @UCL and WI (@ucl_wi_group), @Bloomberg
#DataScience Ph.D. Fellow. ML/NLP/IR.

ID: 1727371246180929536

Joined: 22-11-2023 16:59:38

6 Tweets

29 Followers

99 Following

Xi Wang (@wangxieric)'s Twitter Profile Photo

This Friday, our PhD student @ZhengxiangShi from the Web Intelligence Group (WI) will give a talk entitled "Aligning Language Models with Downstream Tasks: Insights from a Language Modeling Perspective", based on his recent publication at NeurIPS 2023. Registration: [tinyurl.com/uclwitalk-7]. #UCL

Zhengyan Shi (@zhengyan_shi)'s Twitter Profile Photo

anton Sebastian Raschka Hamel Husain Ashutosh Mehra Dan Becker We show that in many scenarios, applying loss to instructions can substantially improve the performance of instruction tuning on various NLP and open-ended generation tasks. In the most advantageous case, it boosts AlpacaEval 1.0 performance by over 100%. arxiv.org/abs/2405.14394

Tech At Bloomberg (@techatbloomberg)'s Twitter Profile Photo

Congratulations to UCL Computer Science + Web Intelligence Group (WI)’s Bin Wu on being one of the 2023-2024 Bloomberg #DataScience Ph.D. Fellows! Learn more about Bin’s research focus and our latest cohort of Fellows: bloom.bg/4bvM8WO #AI #ML #NLProc #LLMs

Zhengyan Shi (@zhengyan_shi)'s Twitter Profile Photo

Thanks, Sebastian, for sharing our work (arxiv.org/abs/2405.14394). For instruction tuning, masking user prompts is generally beneficial. However, if instruction-tuning data is limited and completions are short, including the prompt loss during training can be advantageous.
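The masking choice discussed above can be sketched as a labeling decision at data-preparation time. A minimal illustration, using the common convention (e.g. in PyTorch's cross-entropy loss) that tokens labeled -100 are ignored by the loss; the function name and token IDs are illustrative, not from the paper:

```python
IGNORE_INDEX = -100  # conventional "ignore this token in the loss" label


def build_labels(prompt_ids, completion_ids, mask_prompt=True):
    """Build per-token training labels for instruction tuning.

    mask_prompt=True  -> loss is computed only on completion tokens
                         (the usual setup: user prompts are masked out).
    mask_prompt=False -> prompt tokens also contribute to the loss,
                         the variant found helpful when data is limited
                         and completions are short.
    """
    if mask_prompt:
        prompt_labels = [IGNORE_INDEX] * len(prompt_ids)
    else:
        prompt_labels = list(prompt_ids)
    return prompt_labels + list(completion_ids)


# Example: a 3-token prompt followed by a 2-token completion.
print(build_labels([11, 12, 13], [21, 22]))                    # [-100, -100, -100, 21, 22]
print(build_labels([11, 12, 13], [21, 22], mask_prompt=False)) # [11, 12, 13, 21, 22]
```

Whether to mask is then just a flag on the data pipeline; the model and loss function are unchanged, since the loss implementation skips the -100 positions.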

Tech At Bloomberg (@techatbloomberg)'s Twitter Profile Photo

At #ACL2025NLP this week, researchers & engineers from Bloomberg's #AI Engineering group co-authored an #AgenticAI paper in the Findings of the ACL, while researchers from its CTO Office helped organize the Generation Evaluation & Metrics Workshop on #LLM evaluation. bloom.bg/4534qN6 #NLProc #GenAI

Tech At Bloomberg (@techatbloomberg)'s Twitter Profile Photo

Bloomberg Data Science Fellow Bin Wu presents "A Joint Optimization Framework for Enhancing Efficiency of Tool Utilization in LLM Agents" in today's #ACL2025NLP Session 12: IP Posters (Findings Posters 4) session (11:00-12:30 CEST). bloom.bg/3H62nzR (1/4) #AgenticAI
