Qingyun Wang (@eagle_hz) 's Twitter Profile
Qingyun Wang

@eagle_hz

ID: 865346024193351681

Website: http://eaglew.github.io/ · Joined: 18-05-2017 23:19:04

40 Tweets

154 Followers

407 Following

Carlos E. Perez (@intuitmachine) 's Twitter Profile Photo

1/n Implanting Novelty:  How to Harness AI to Pump Up the Novelty of Your Next Hypothesis

A lone researcher toils late into the night, scouring scientific papers for that elusive spark of inspiration for her next big hypothesis. If only there was a way to automate this tedious…
Ziqian Lin (@myhakureimu) 's Twitter Profile Photo

How do we understand dual operating modes of in-context learning (ICL), task learning and retrieval?

📜 our paper:

> A new probabilistic model for pretraining.
> Theoretical analysis of dual operating modes.
> Explanation and prediction of two real-world LLM phenomena.

1/n 🧵
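
To make the two modes concrete, here is a minimal probe sketch in Python (illustrative only, not the paper's setup; `query_model` is a placeholder for whatever completion API is in use): in retrieval mode the model falls back on a task it already knows from pretraining, so flipping the demonstration labels barely changes its predictions, whereas in task-learning mode the predictions follow the in-context labels.

```python
# Illustrative probe of the two ICL operating modes (task retrieval vs. task learning).
# `query_model` is a placeholder for an LLM completion call; swap in your own API.

def build_prompt(demos, query):
    """Format (text, label) demonstrations plus a query into a few-shot prompt."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in demos]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

def query_model(prompt: str) -> str:
    raise NotImplementedError("plug in an actual LLM API here")

demos = [("I loved this movie.", "positive"),
         ("Terrible acting and a dull plot.", "negative")]
flipped = [(text, "negative" if label == "positive" else "positive")
           for text, label in demos]
query = "An absolute delight from start to finish."

# Retrieval mode: the demonstrations mainly *locate* a task the model already
# learned in pretraining, so flipping their labels changes little.
# Task-learning mode: the model infers the mapping from the demonstrations,
# so its prediction flips along with the labels.
print("original labels ->", query_model(build_prompt(demos, query)))
print("flipped labels  ->", query_model(build_prompt(flipped, query)))
```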
Tom Hope (@hoper_tom) 's Twitter Profile Photo

Akari Asai Interestingly, we’ve found RALM effective in several scientific tasks where inference/extrapolation is required: In 2022, we used RALM to predict clinical outcomes (Aakanksha Naik ✈️ NAACL 2024). Recently, we used retrieval of inspirations to generate hypotheses (x.com/hoper_tom/stat…) Qingyun Wang

Kangwook Lee (@kangwook_lee) 's Twitter Profile Photo

LLMs excel at in-context learning; they identify patterns from labeled examples in the prompt and make predictions accordingly.

Many believe more in-context examples are better.

However, that's not always true if the early ascent phenomenon occurs.
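
A rough sketch of how the early ascent effect could be observed (an assumed evaluation loop, not the authors' code; `llm_label` and the toy data are placeholders): sweep the number of in-context examples k and track accuracy, which under early ascent dips at small k before recovering as k grows.

```python
# Sketch of an evaluation loop for spotting early ascent: measure few-shot
# accuracy as a function of the number of in-context examples k.
# `llm_label`, `pool`, and `test_set` are placeholders, not a real benchmark.

def build_prompt(examples, query):
    shots = [f"Input: {x}\nLabel: {y}" for x, y in examples]
    shots.append(f"Input: {query}\nLabel:")
    return "\n\n".join(shots)

def llm_label(prompt: str) -> str:
    raise NotImplementedError("replace with a call to the model under test")

def accuracy_at_k(pool, test_set, k):
    """Few-shot accuracy using the first k demonstrations from `pool`."""
    demos = pool[:k]
    correct = sum(llm_label(build_prompt(demos, x)).strip() == y for x, y in test_set)
    return correct / len(test_set)

pool = [("example input", "label A"), ("another input", "label B")]  # placeholder demonstrations
test_set = [("held-out input", "label A")]                           # placeholder evaluation pairs

# Under early ascent, accuracy dips at small k (the few demonstrations steer the
# model toward the wrong pretrained task) before recovering at larger k.
for k in [0, 1, 2, 4, 8, 16, 32]:
    print(k, accuracy_at_k(pool, test_set, k))
```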
Qingyun Wang (@eagle_hz) 's Twitter Profile Photo

Excited to share our paper on fine-grained few-shot entity extraction, accepted to #EACL2024 Findings! A self-validation module, which reconstructs the original input from the extracted entities, can improve performance dramatically! 📰 shorturl.at/cvyEQ 💻 shorturl.at/bhJK6
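
The self-validation idea, as described in the tweet, can be sketched roughly as follows (an illustrative reconstruction check, not the paper's implementation; `extract_entities` and `reconstruct_from_entities` are placeholder LLM calls):

```python
# Rough sketch of self-validation for few-shot entity extraction: extract
# entities, reconstruct the input from them, and use similarity between the
# reconstruction and the original text as a validation/filtering signal.
# `extract_entities` and `reconstruct_from_entities` are placeholder LLM calls,
# not the paper's actual prompts or models.
from difflib import SequenceMatcher

def extract_entities(text: str) -> list[str]:
    raise NotImplementedError("LLM-based fine-grained entity extraction goes here")

def reconstruct_from_entities(entities: list[str]) -> str:
    raise NotImplementedError("LLM-based reconstruction of the input goes here")

def self_validated_extract(text: str, threshold: float = 0.6, max_retries: int = 2):
    """Keep an extraction only if the reconstructed input resembles the original;
    otherwise re-sample (assumes a stochastic decoder) up to `max_retries` times."""
    entities = []
    for _ in range(max_retries + 1):
        entities = extract_entities(text)
        reconstruction = reconstruct_from_entities(entities)
        if SequenceMatcher(None, text, reconstruction).ratio() >= threshold:
            return entities
    return entities  # fall back to the last attempt if validation keeps failing
```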

Kung-Hsiang Steeve Huang (@steeve__huang) 's Twitter Profile Photo

📢 Excited to share our latest work: a comprehensive survey on chart understanding! We dive into the evolution of datasets, vision-language models, challenges, and future directions in this vibrant field 📊.

📝: arxiv.org/abs/2403.12027
💻: github.com/khuangaf/Aweso…

1/n
Yangyi Chen (@yangyichen6666) 's Twitter Profile Photo

🎉 🎉 🎉  Happy to share that our work “Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models” got accepted to #NAACL2024 main! 

Paper: arxiv.org/abs/2309.04461
Heng Ji (@hengjinlp) 's Twitter Profile Photo

ACL2024 workshop on Language+Molecules CFP: Submit a paper (deadline May 17) and play with our dataset/join the shared task! github.com/language-plus-… Created by the amazing PhD students Carl Edwards and Qingyun Wang

Language + Molecules Workshop (@lang_plus_mols) 's Twitter Profile Photo

🚀 Launching the Language + Molecules Workshop at ACL 2024! Dive into the integration of molecules & natural language for breakthroughs in drugs, materials, & more. 🧪⚛️ language-plus-molecules.github.io

Hongyi Liu (@lhtie) 's Twitter Profile Photo

Will source domain knowledge always improve learning in the target domain? No. Check out our novel methods for named entity recognition in life science domains. Our paper has been accepted to the #NAACL2024 main conference. Code & datasets: shorturl.at/doIR6 Full paper: shorturl.at/xDRV7

Tom Hope (@hoper_tom) 's Twitter Profile Photo

Now accepted at the ACL 2025 main conference! Looking forward to presenting our work on LLM scientific direction generation and the fundamental limitations of SOTA LLMs for generating truly novel, creative scientific ideas (despite the hype and poorly evaluated papers).

Qingyun Wang (@eagle_hz) 's Twitter Profile Photo

🌟 Excited to share that I had the honor of leading the tutorial on AI-augmented research! 🌟 A big thank you to everyone who participated and made this tutorial a success! Please feel free to contact me if you want to learn more about this topic!

Yangyi Chen (@yangyichen6666) 's Twitter Profile Photo

Check out our recent work on LLM alignment regarding confidence expression and, most importantly, enabling LLMs to say why they are uncertain!

Qingyun Wang (@eagle_hz) 's Twitter Profile Photo

Big thanks to @YiFung10 for presenting our transfer learning named entity recognition paper today at the #NAACL2024 oral session! Please take a further look at our paper and contact us if you are interested in a discussion! aclanthology.org/2024.naacl-lon…

May Fung (@may_f1_) 's Twitter Profile Photo

Qingyun Wang Strongly recommend checking out our Blender Lab's new paper, "Named Entity Recognition Under Domain Shift via Metric Learning for Life Sciences" (NAACL'24), if you haven't already! A great pleasure to step in for the oral presentation & share research findings from Hongyi Liu + Qingyun Wang 🤗

Heng Ji (@hengjinlp) 's Twitter Profile Photo

We have won two NAACL2024 Outstanding Paper Awards! Congratulations to Chi Han, Shizhe Diao, Yi Fung, Xingyao Wang, Yangyi Chen, and all students and collaborators! Chi Han will be on the academic job market next year!
arxiv.org/pdf/2308.16137
arxiv.org/pdf/2311.09677
Xinya Du (@xinya16) 's Twitter Profile Photo

Introducing MLR-Copilot: autonomous machine learning research with LLM agents, which
→ generate research ideas
→ implement experiments
→ execute implementation with human feedback

📑 Paper: arxiv.org/abs/2408.14033
🔨 Code: github.com/du-nlp-lab/MLR…
🤗 Demo: huggingface.co/spaces/du-lab/…
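
The three-stage loop described above could be wired together roughly like this (a schematic sketch based only on the tweet's description, with placeholder agent functions rather than MLR-Copilot's actual API):

```python
# Schematic of the three-stage agent loop described above:
# generate research ideas -> implement experiments -> execute with human feedback.
# All agent functions are placeholders, not MLR-Copilot's actual API.

def generate_research_ideas(literature: str) -> list[str]:
    raise NotImplementedError("LLM agent that proposes research ideas")

def implement_experiment(idea: str) -> str:
    raise NotImplementedError("LLM agent that writes experiment code for the idea")

def execute_experiment(code: str) -> str:
    raise NotImplementedError("sandboxed execution returning logs/results")

def research_loop(literature: str, max_rounds: int = 3) -> str:
    """Run idea -> implementation -> execution, revising with human feedback."""
    idea = generate_research_ideas(literature)[0]
    code = implement_experiment(idea)
    results = execute_experiment(code)
    for _ in range(max_rounds):
        feedback = input("Reviewer feedback (empty line to accept): ").strip()
        if not feedback:
            break
        # Fold human feedback back into the idea and re-run the experiment.
        code = implement_experiment(f"{idea}\n\nFeedback: {feedback}")
        results = execute_experiment(code)
    return results
```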

Xingyao Wang (@xingyaow_) 's Twitter Profile Photo

Excited to share that All Hands AI has raised $5M -- and it's finally time to announce a new chapter in my life: I'm taking a leave from my PhD to focus full-time on All Hands AI. Let's push open-source agents forward together, in the open!