
Max Lamparth
@mlamparth
Postdoc at @Stanford, @StanfordCISAC, Stanford Center for AI Safety, SERI. | Focusing on interpretable, safe, and ethical AI/LLM decision-making. Find me on 🦋
ID: 1588663024969125888
http://www.maxlamparth.com
Joined: 04-11-2022 22:43:21
536 Tweets
684 Followers
679 Following


The Helpful, Honest, and Harmless (HHH) principle is key for AI alignment but current interpretations miss contextual nuances. CISAC postdoc Max Lamparth & colleagues propose an adaptive framework to prioritize values, balance trade-offs, & enhance AI ethics arxiv.org/abs/2502.06059


Thank you for featuring our work! Great collaboration with Declan Grabb, MD and the team. We created a dataset that goes beyond medical exam-style questions and studies the impact of patient demographics on clinical decision-making in psychiatric care across fifteen language models.


In their latest blog post for Stanford AI Lab, CISAC Postdoc @mlamparth and colleague Declan Grabb dive into MENTAT, a clinician-annotated dataset tackling real-world ambiguities in psychiatric decision-making. ai.stanford.edu/blog/mentat/


Elvis Dohmatob Let's reframe your narrative: what I get is that you were very well aware of the paper, including the final updated version that got submitted and then accepted at COLM, and you refused to cite it because you were upset about an earlier draft of that paper that was sent to you for


