Meng Chen (@mengchen_24)'s Twitter Profile
Meng Chen

@mengchen_24

phd @UTCompSci • interests={hci, ai, creativity} • prev. @DesignLabUCSD @NotreDame

ID: 1436518638127828995

Link: http://meng-chen.com · Joined: 11-09-2021 02:37:07

105 Tweets

181 Followers

204 Following

Meng Chen (@mengchen_24):

Super interesting work by Tuhin Chakrabarty. It is kinda shocking yet reasonable to see how genAI performs poorly in creative writing.
Thought: rather than relying on AI for generating creative content, it makes more sense to use it atomically as a tool to refine/inspire/…
Meng Chen (@mengchen_24):

From Hawaiian shirt to cap and gown:
1) #CHI2024. Chatting with new people & old friends.
2) Commencement. Graduated from University of Notre Dame.
What a crazy week…
Jeongeon Park (@jeongeonp_):

[Please RT] We are looking for {grad students, researchers} who will be giving a {conference, defense, or any academic} talk soon, and planning a rehearsal! Participate in our study and receive a $20 Gift Card. Sign up here: ucsd.co1.qualtrics.com/jfe/form/SV_di…

Meng Chen (@mengchen_24):

Excited to see Ningzhi Tang presenting our latest work at #VLHCC2024! Interested in how developers validate and repair LLM-generated code & how the provenance of code (AI-generated or not) affects it? Check out our paper📄! arxiv.org/pdf/2405.16081

Ananya (@xananyagm):

New vision-to-language models can provide detailed image descriptions on-demand but are context-free. How can we make them context-aware? Check out our #ASSETS2024 paper: “Context-Aware Image Descriptions for Web Accessibility”
🔗ananyagm.com/context-aware-…
📄arxiv.org/abs/2409.03054…
Meng Chen (@mengchen_24):

Sangho is THE BEST mentor and collaborator! As a mentor, he’s always down to answer any questions from students and to offer advice. As a researcher, he asks valuable questions and produces top-notch work. Check his post out!!

Amy Pavel (@amypavel):

Vision language models that power apps like BeMyAI now give really long responses to Q's about images ("What is this?", "Can I wear these together?"). What is in these long answers? Are the long answers useful? Check out Mina Huh's 🎉Oral Spotlight🎉 talk today at #COLM2024!

Zheng Zhang (@roryzzhang):

📢After years at Notre Dame CSE with amazing advisor Toby J. Li😺 (he/him), I'm on the job market now! My research focuses on developing interactive AI-powered systems that provide adaptive, context-sensitive support for cognitive tasks. Please learn more about my work here: zhengzhang.me

Mina Huh (@mina1004h):

Recent AI models can suggest endless video edits, offering many alternatives to video creators. But how can we easily sift through them all? 

In our #CHI2025 paper, we propose VideoDiff, an AI video editing tool designed for editing with alternatives.
Yue Jiang (on the job market) (@yuejiang_nj):

I’m hiring students who are interested in multimodal generative AI / UI agent-related topics. My current vision is that we should have human-in-the-loop controllable UI generation models that adapt UIs to the diverse needs of designers and users + enhance human creativity.

Mina Huh (@mina1004h):

Last week, we visited the Texas School for the Blind and Visually Impaired (TSBVI) and introduced AVscript to their Film and Media class. 🎥🎬 Students tried editing videos with our tool!
Fangcong Yin (@fangcong_y10593):

Solving complex problems with CoT requires combining different skills.

We can do this by:
🧩Modifying the CoT data format to be “composable” with other skills
🔥Training models on each skill
📌Combining those models

This leads to better 0-shot reasoning on tasks involving skill composition!
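The tweet doesn't say how the per-skill models are combined, so the following is only a minimal sketch of one plausible reading: merging the parameters of skill-specific fine-tuned copies of the same base model. The toy Linear layers and the interpolation weight alpha are illustrative assumptions, not the paper's method.

    import torch.nn as nn

    def merge_state_dicts(sd_a, sd_b, alpha=0.5):
        # Linearly interpolate two state dicts with identical keys and shapes.
        return {k: alpha * sd_a[k] + (1.0 - alpha) * sd_b[k] for k in sd_a}

    # Toy stand-ins for two copies of one base model, each fine-tuned on a skill.
    skill_a, skill_b = nn.Linear(8, 8), nn.Linear(8, 8)

    combined = nn.Linear(8, 8)
    combined.load_state_dict(merge_state_dicts(skill_a.state_dict(), skill_b.state_dict()))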
Leo Liu (@zeyuliu10):

LLMs trained to memorize new facts can’t use those facts well.🤔

We apply a hypernetwork to ✏️edit✏️ the gradients for fact propagation, improving accuracy by 2x on a challenging subset of RippleEdit!💡

Our approach, PropMEND, extends MEND with a new objective for propagation.
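As a rough illustration of the gradient-editing idea only (the MLP editor, the toy shapes, the dummy fact-injection loss, and the update rule below are all assumptions, not PropMEND's actual architecture): a hypernetwork takes a raw fine-tuning gradient and rewrites it before it is applied as the parameter update.

    import torch
    import torch.nn as nn

    class GradientEditor(nn.Module):
        # Hypernetwork that rewrites a flattened gradient before it is applied.
        def __init__(self, dim):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

        def forward(self, grad):
            return self.net(grad)

    layer = nn.Linear(4, 4)
    editor = GradientEditor(layer.weight.numel())

    # Raw gradient from a dummy "inject this fact" loss.
    x, target = torch.randn(2, 4), torch.randn(2, 4)
    loss = nn.functional.mse_loss(layer(x), target)
    grad = torch.autograd.grad(loss, layer.weight)[0]

    # Edit the gradient, then apply it as the update.
    edited = editor(grad.flatten()).view_as(grad)
    with torch.no_grad():
        layer.weight -= 0.1 * edited

In the real method, the editor itself would be trained so that the edited update propagates the new fact rather than merely memorizing it, per the tweet's "new objective for propagation".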
Sangho Suh (@sangho_suh):

The #DynamicAbstractions Reading Group returns this Friday! 🎉
📍Topic: Simulations and Understanding
🗓️ Date & Time: June 27, 12pm EST / 9am PST
🎙️ Speaker: Andy Matuschak (andymatuschak.org)
✉️ Full letter: buttondown.com/dynamic_abstra…
🔁 Please #RT to share!
🧵 More:

Manya Wadhwa (@manyawadhwa1):

Happy to share that EvalAgent has been accepted to #COLM2025 Conference on Language Modeling 🎉🇨🇦
We introduce a framework to identify implicit and diverse evaluation criteria for various open-ended tasks!
📜 arxiv.org/pdf/2504.15219