
Daniel Zhang
@dzhang105
AI governance, geopolitics, tech policy research @StanfordHAI | Previously @indexingai | Occasional movie tweet | Views mine.
ID: 588179581
23-05-2012 10:18:22
451 Tweets
522 Followers
843 Following

We are hiring a postdoc to work on policy x foundation models! stanford.io/4aqP7jb Come work with us! We do rigorous and influential work as a small team with big impact! Advisers: Percy Liang + Dan Ho. Appointments: Center for Research on Foundation Models, Stanford RegLab, Stanford HAI.

New paper alert! Caroline Meinhardt and I lay out a set of provocations regarding the impact of existing data privacy regulations in the US and EU on artificial intelligence. Our motivating question: can we have both data privacy and AI?

Just announced: Susan Rice and Michael Kratsios have joined Stanford HAI as Distinguished Visiting Fellows. They will contribute to HAI’s research efforts, exploring AI's global implications and understanding the fundamentals of human-centered AI. stanford.io/3U6M1Lf

How can we foster human-centered AI in ASEAN? Stanford HAI and The Asia Foundation convened scholars, policymakers, and other global stakeholders in Cambodia to discuss how ASEAN can align its policies and coordination mechanisms to maximize AI benefits and minimize risks. 1/2

Are you attending this year’s #ICML2024? Stanford HAI is co-hosting a policy social on Thursday at 5:30 pm. Learn more here ↘️

New: Stanford HAI, Stanford Online, and Apolitical recently launched a new education program for public servants. This course equips participants with the knowledge to navigate AI’s opportunities and challenges. Enroll now: bit.ly/4e8kKPC

As someone who is associated with evidence-based AI policy, and is a frequent target of David Krueger's ire, let me share what I agree with in this thread (a lot) and what I disagree with (some important stuff). Points of strong agreement: Calling for evidence-based policy — on

We are excited to welcome Yejin Choi to Stanford HAI's vibrant community of scholars! In this Q&A, she talks about what she hopes to accomplish as our latest senior fellow: stanford.io/4jkogda

During her keynote at the Paris #AIActionSummit, our Co-Director Fei-Fei Li challenged us to rethink how we approach the governance of AI. "It’s essential that we govern on the basis of science, not science fiction." More from her Financial Times op-ed: ft.com/content/3861a3…

📢 New white paper: Scholars from Stanford HAI, The Asia Foundation, and the University of Pretoria map the current landscape of technical approaches to developing LLMs that perform better for, and better represent, low-resource languages. (1/4) ↘️ hai.stanford.edu/policy/mind-th…

Interested in LLM evaluation reliability & efficiency? Check out our ICML’25 paper, Reliable and Efficient Amortized Model-based Evaluation (arxiv.org/abs/2503.13335), w/ Percy Liang, Bo Li, Sanmi Koyejo, Yuheng Tu, Virtue AI, Stanford AI Lab, the Stanford Trustworthy AI Research (STAIR) Lab, and the Center for Research on Foundation Models. 🧵1/9

The White House’s AI Action Plan lays out a market-driven vision for U.S. AI leadership — prioritizing infrastructure, open innovation, and technical evals while limiting oversight. HAI scholars assess its impacts on the public sector, governance, the workforce, and more. hai.stanford.edu/news/inside-tr…

In a new Science paper, top scholars from Stanford University, UC Berkeley, Princeton University, and other leading institutions urge policymakers to adopt an evidence-based approach to AI policy. Lead author Rishi Bommasani explains what it entails and why it matters: hai.stanford.edu/news/top-schol…

“When only a few have the resources to build and benefit from AI, we leave the rest of the world waiting at the door,” said Stanford HAI Senior Fellow Yejin Choi during her address to the United Nations Security Council. Read her full speech here: hai.stanford.edu/policy/yejin-c…
