
Chirag Agarwal
@_cagarwal
Assistant Professor @UVA; PI of Aikyam Lab; Prev - @Harvard, @Adobe @BoschGlobal @thisisUIC ; Increasing the sample size of my thoughts
ID: 2202622188
https://chirag-agarwall.github.io/
19-11-2013 06:50:42
295 Tweets
1.1K Followers
511 Following

“We found that if you ask the LLM, surprisingly it always says that I'm 100% confident about my reasoning.” Chirag Agarwal examines the (un)reliability of chain-of-thought reasoning, highlighting issues in faithfulness, uncertainty & hallucination.

Google DeepMind India is hiring for a research scientist role in multicultural & multimodal modeling. Strong candidates with proven research experience are encouraged to apply. I shall be at #icassp2025 Hyderabad on Apr 8, happy to meet and chat, pls DM. job-boards.greenhouse.io/deepmind/jobs/…


Thank you for summarizing this work, Rohan Paul 🙏

The ICML Conference is around the corner! Are you presenting any papers or hot takes in Trustworthy ML? Share your work in this thread and we’ll retweet! 🚀

Congrats to the LOGML Summer School for running an in-person grad ML summer school at Imperial College London and opening up research opportunities for students worldwide! 👏 logml.ai Thanks to the organizers, Valentina Giunchiglia, and mentors Guadalupe Gonzalez, Chirag Agarwal, Ruthie Johnson, and Yasha Ektefaie


Had a great time interacting with the students from IIIT Hyderabad, discussing the (un)reliability of CoT reasoning and multimodal explainability. Thank you for the invite, Ponnurangam Kumaraguru “PK”!


⏰ ONLY 9 DAYS LEFT! ⏰ Submit to the 3rd Regulatable ML Workshop at the NeurIPS Conference. Call for Papers: regulatableml.github.io/cfp/ Also seeking reviewers! Fill out our form: forms.gle/JxHmzEZuwKgtSd… Don't wait - submit now! #CallForPapers #CallForReviewers #NeurIPS2025 #MLSafety #TechPolicy

