infoseeking (@infoseeking1)'s Twitter Profile
infoseeking

@infoseeking1

Welcome to the official Information Seeking/Retrieval/Behavior group Twitter at Rutgers University.

ID: 887535601

Link: http://infoseeking.org · Joined: 17-10-2012 20:45:36

697 Tweets

266 Followers

32 Following

infoseeking (@infoseeking1):

How can we make AI decisions more understandable? Excited to share our latest publication! We review existing studies on counterfactual explanations and propose a rubric for evaluating algorithms👉dl.acm.org/doi/10.1145/36… By our amazing InfoSeekers and lab director Chirag Shah

infoseeking (@infoseeking1):

Prof. Chirag Shah, quoted in EDUCAUSE Review, breaks down AI search issues:
Retrieval: returns answers even when none exist
Generation: creates responses without fact-checking
Natural Language Interface: builds false trust
Read more: er.educause.edu/articles/2024/… #AI #AISearch

infoseeking (@infoseeking1):

New Research Alert! Curious about how LLMs can better understand human intentions & emotions? Want to explore their limitations in Theory of Mind reasoning and ways to bridge the gap? Check out the study by our InfoSeeker Maryam Amirizaniani & Prof. Chirag Shah dl.acm.org/doi/10.1145/36…

infoseeking (@infoseeking1):

How do people trust AI? Our latest paper explores cognitive and affective trust in human-AI interactions, introducing validated scales for both and examining how they shape overall user trust. Check it out by InfoSeeker Ruoxi (Anna) Shang & Prof. Chirag Shah 🔗ojs.aaai.org/index.php/AIES…

infoseeking (@infoseeking1):

Our lab director Chirag Shah receiving the "Research in Information Science Award" at #ASIST24 in Calgary. Congratulations! asist.org/2024/06/24/chi…

infoseeking (@infoseeking1):

Lab Director Chirag Shah sits down with University of Washington News to talk about AI agents: what they are, what they could do, and what challenges lie ahead. washington.edu/news/2024/12/1…

Chirag Shah (@chirag_shah):

New book alert, targeted at researchers and graduate students in information retrieval and GenAI applications: "Information Access in the Era of Generative AI," edited by Ryen White and Chirag Shah, with chapters written by many prominent IR & AI scholars. link.springer.com/book/10.1007/9…

Communications of the ACM (@cacmmag):

"Panmodal Information Interaction," by @Ryen_White (Microsoft Research) & @Chirag_Shah (@UW_iSchool), says emerging modalities fueled by foundation models such as Google's Gemini App and OpenAI's GPT-4o have created new possibilities for information interaction. bit.ly/3DImDWL

infoseeking (@infoseeking1):

New Research Alert! How can we make recommender systems smarter about what users actually want? Check out this new study by our InfoSeeker Maryam Amirizaniani & Prof. Chirag Shah introducing a Task-based Graph Neural Network (TGNN) dl.acm.org/doi/abs/10.114… #RecommendationSystems

infoseeking (@infoseeking1):

What's so different about AI agents this time around? Watch our lab director Chirag Shah talk about the "AI Agents Renaissance" for the University of Washington AI Perspectives Speaker Series: youtube.com/watch?v=KWPx4o…

Chirag Shah (@chirag_shah):

It's time to think beyond prompt engineering and look at sustainable, replicable, generalizable, explainable, and trustworthy use of LLMs. See my latest article in Communications of the ACM that demonstrates how we can start to do this with humans in the loop: cacm.acm.org/research/from-…