Yoonjoo Lee (@yoonjoo_le2) 's Twitter Profile
Yoonjoo Lee

@yoonjoo_le2

PhD Student in @kixlab_kaist at KAIST. Prev: @allen_ai, @adobe, @LG_AI_Research. Interested in HCI + NLP

ID: 1327528147693174784

Link: http://yoonjoolee.com/ · Joined: 14-11-2020 08:26:03

150 Tweets

1.1K Followers

824 Following

Mohit Iyyer (@mohitiyyer) 's Twitter Profile Photo

Lots of recent work focuses on 𝐚𝐮𝐭𝐨𝐦𝐚𝐭𝐢𝐜 detection of LLM-generated text. But how well do 𝐡𝐮𝐦𝐚𝐧𝐬 fare? TLDR: ppl who frequently use ChatGPT for writing tasks are elite at spotting AI text! See our paper for more (and congrats to Jenna Russell on her first paper!!)

Minsuk Chang (@minsuk_chang) 's Twitter Profile Photo

New paper alert! 📢 While much LLM research focuses on reasoning and planning, we asked: can we equip them with interaction capabilities? We explored this by fine-tuning LLMs to use conversational cues like backchanneling ("uh-huh," "I see"). arxiv.org/abs/2501.18103

Yoonjoo Lee (@yoonjoo_le2) 's Twitter Profile Photo

Thank you! I really enjoyed my visit to VTech and was happy to be the first guest for Virginia Tech CHCI's weekly seminar! I learned so much from inspiring chats with professors and students. Special thanks to Myounghoon Jeon (전명훈) 🕯🕯 and Sang Won Lee for hosting and making my visit so smooth!😀🙌

Hyunwoo Kim (@hyunw_kim) 's Twitter Profile Photo

🚨New Paper! So o3-mini and R1 seem to excel on math & coding. But how good are they on other domains where verifiable rewards are not easily available, such as theory of mind (ToM)? Do they show similar behavior patterns?🤔What if I told you it's...interesting, like the below?🧵

Haijun Xia (@haijunxia) 's Twitter Profile Photo

🎥 New Talk: “Generative, Malleable, and Personal User Interfaces”. This talk describes our work toward the long-held vision in the field of human-computer interaction. Talk Link: youtu.be/MbWgRuM-7X8 We are committed to making it as real as we can.

hyunji amy lee (@hyunji_amy_lee) 's Twitter Profile Photo

🤔MoE models show high performance in language modeling. What about retrieval tasks? In our AAAI paper, “RouterRetriever: Routing over a Mixture of Expert Embedding Models,” we show that combining multiple domain-specific experts consistently outperforms single embedding models.

Ai2 (@allen_ai) 's Twitter Profile Photo

We’re excited to share some updates to Ai2 ScholarQA: 🗂️ You can now sign in via Google to save your query history across devices and browsers. 📚 We added 108M+ paper abstracts to our corpus - expect to get even better responses! ✨ The backbone model has been updated to the

Raymond Fok (@rayrayfok) 's Twitter Profile Photo

We are looking for CS researchers to participate in a study exploring how AI can change the way we do literature reviews. 📚🧑‍🎓 Time: ~90 min, remote Compensation: $60 USD Sign up here: forms.gle/Pzw6YUhVUaZsS6… Daniel Weld Amy Zhang Joseph Chee Chang Marissa Radensky Pao Siangliulue Jonathan Bragg

John Joon Young Chung (@john_jyc) 's Twitter Profile Photo

#CHI2025 paper from Midjourney Storytelling Lab 🫱🧸 🦖🫲What if we can tell a story by playing with toys? 🫱🧸 🦖🤖What if we can collaborate with AI on toy-playing+storytelling? Toyteller enables toy-playing-based storytelling with AI. Learn more: mj-storytelling.github.io/project/toytel… (1/9)

Manya Wadhwa (@manyawadhwa1) 's Twitter Profile Photo

Evaluating language model responses on open-ended tasks is hard! 🤔 We introduce EvalAgent, a framework that identifies nuanced and diverse criteria 📋✍️. EvalAgent identifies 👩‍🏫🎓 expert advice on the web that implicitly address the user’s prompt 🧵👇

Prithviraj (Raj) Ammanabrolu (@rajammanabrolu) 's Twitter Profile Photo

Introducing TALES - Text Adventure Learning Environment Suite A benchmark of a few hundred text envs: science experiments and embodied cooking to solving murder mysteries. We test over 30 of the best LLM agents and pinpoint failure modes +how to improve 👨‍💻pip install tale-suite

Seungone Kim @ NAACL2025 (@seungonekim) 's Twitter Profile Photo

🏆Glad to share that our BiGGen Bench paper has received the best paper award at NAACL HLT 2025! x.com/naaclmeeting/s… 📅 Ballroom A, Session I: Thursday May 1st, 16:00-17:30 (MDT) 📅 Session M (Plenary Session): Friday May 2nd, 15:30-16:30 (MDT) 📅 Virtual Conference: Tuesday

Yoonseo Choi (@yoon0u0) 's Twitter Profile Photo

I'm in Yokohama for #CHI2025 to present my work #Proxona! 😊 I'll be presenting at 9:12 AM in Room G314+G315. If you're interested in user simulation & prototyping, data-driven personas, conversational interactions, or supporting sensemaking and ideation, please come!

Philippe Laban (@philippelaban) 's Twitter Profile Photo

🆕paper: LLMs Get Lost in Multi-Turn Conversation In real life, people don’t speak in perfect prompts. So we simulate multi-turn conversations — less lab-like, more like real use. We find that LLMs get lost in conversation. 👀What does that mean? 🧵1/N 📄arxiv.org/abs/2505.06120

Yoonseo Choi (@yoon0u0) 's Twitter Profile Photo

Hello! Introducing 📢 공약쏙쏙 📢, a system built at KIXLAB! (promisescope.com) 🤔 What is 공약쏙쏙? Based on #2025PresidentialElection pledges, it uses an LLM (GPT-4o) to automatically show the social impact each pledge could have and how it might affect "me". (1/3)

Peiling Jiang (@peilingjiang) 's Twitter Profile Photo

AI should not replace browsing, but scale it. #Orca turns the web into your canvas and personal workspace. Work across dozens of pages, delegate to AI agents by your side, and synthesize on the fly. Welcome to 𝗕𝗿𝗼𝘄𝘀𝗶𝗻𝗴 𝗮𝘁 𝗦𝗰𝗮𝗹𝗲 hci.ucsd.edu/orca w/Haijun Xia

Sherry Tongshuang Wu (@tongshuangwu) 's Twitter Profile Photo

We all agree that AI models/agents should augment humans instead of replace us in many cases. But how do we pick when to have AI collaborators, and how do we build them? Come check out our #ACL2025NLP tutorial on Human-AI Collaboration w/ Diyi Yang Joseph Chee Chang, 📍7/27 9am@ Hall N!

Hita K (@_hitakam) 's Twitter Profile Photo

Are you a researcher in CS or a CS-adjacent field who could use help in refining your research ideas? Want to try our new AI-powered tool that helps with just that in a paid user study? Details and sign up here! forms.gle/UPFjyJ59uuZ5Zb…