
Robin Jia
@robinomial
Assistant Professor @CSatUSC | Previously Visiting Researcher @facebookai | Stanford CS PhD @StanfordNLP
ID: 1012392833834029056
https://robinjia.github.io/ 28-06-2018 17:50:35
273 Tweets
3.3K Followers
865 Following

🎉Congrats to Aryan Gulati & Ryan Wang for receiving Honorable Mentions for the CRA Outstanding Undergraduate Researcher Awards! Aryan, a former CAIS++ co-president, was mentored by CAIS Associate Director Swabha Swayamdipta. Ryan worked with CAIS faculty Robin Jia. viterbischool.usc.edu/news/2025/03/f…

Really proud of this interdisciplinary LLM evaluation effort led by Wang Bill Zhu. We teamed up with oncologists from USC Keck SOM to understand LLM failure modes on realistic patient questions. Key finding: LLMs consistently fail to correct patients’ misconceptions!

I’ll be at NAACL HLT 2025 this week. Excited to meet old and new friends!

At NAACL HLT 2025 this week! I’ll be presenting our work on LLM domain induction with Jesse Thomason on Thu (5/1) at 4pm in Hall 3, Section I. Would love to connect and chat about LLM planning, reasoning, AI4Science, multimodal stuff, or anything else. Feel free to DM!

Check out Wang Bill Zhu’s excellent work on combining LLMs with symbolic planners at NAACL on Thursday! I will also be at NAACL Friday–Sunday, looking forward to chatting about LLM memorization, interpretability, evaluation, and more.

If an LLM’s hallucinated claim contradicts its own knowledge, it should be able to retract the claim. Yet, it often reaffirms the claim instead. Why? Yuqing Yang dives deep to show that faulty model internal beliefs (representations of “truthfulness”) drive retraction failures!
