Max Kleiman-Weiner (@maxhkw)'s Twitter Profile
Max Kleiman-Weiner

@maxhkw

professor @UW computational cognitive scientist working on social and artificial intelligence. cofounder @CSM_ai. priors: PhD @MIT founder @diffeo (acquired)

ID: 288194586

Link: http://faculty.washington.edu/maxkw/ | Joined: 26-04-2011 12:13:13

847 Tweets

4.4K Followers

786 Following

Common Sense Machines (@csm_ai)'s Twitter Profile Photo

🚀 We are excited to release PBR textured meshes in CSM Cube, taking us one step closer to production-ready assets! ✨🔥 🖼️ Inputs: images, sketches, or text 🗺️ Outputs: 3D Mesh; Albedo, metallic, and roughness maps Available now to all Maker+ users on CSM AI!
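
For readers wiring these outputs into a standard asset pipeline, here is a minimal sketch (not CSM's API) of packing the albedo, metallic, and roughness maps into a glTF-style PBR material with trimesh. The file names are placeholders, the maps are assumed to share a resolution, and the exported mesh is assumed to carry UVs.

```python
# A minimal sketch (not CSM's API) of packing Cube-style outputs -- a mesh
# plus albedo, metallic, and roughness maps -- into a glTF PBR material with
# trimesh. File names are placeholders; the mesh is assumed to carry UVs.
import numpy as np
import trimesh
from PIL import Image

mesh = trimesh.load("asset.obj", force="mesh")            # exported mesh with UVs
albedo = Image.open("albedo.png").convert("RGB")          # base color map
metallic = np.array(Image.open("metallic.png").convert("L"))
roughness = np.array(Image.open("roughness.png").convert("L"))  # same resolution assumed

# glTF convention: roughness in the G channel, metallic in the B channel.
mr = np.zeros((*metallic.shape, 3), dtype=np.uint8)
mr[..., 1] = roughness
mr[..., 2] = metallic

material = trimesh.visual.material.PBRMaterial(
    baseColorTexture=albedo,
    metallicRoughnessTexture=Image.fromarray(mr),
)
mesh.visual = trimesh.visual.TextureVisuals(uv=mesh.visual.uv, material=material)
mesh.export("asset_pbr.glb")                              # binary glTF with PBR textures
```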

Natasha Jaques (@natashajaques)'s Twitter Profile Photo

Uploaded a recent "talk" / rant about RL fine-tuning of LLMs to youtube: youtube.com/watch?v=NTSYgb…. Covers some of my lab's recent work on personalized RLHF, as well as some mild Schmidhubering about my own early contributions to this space

Zhijing Jin✈️ ICLR Singapore (@zhijingjin)'s Twitter Profile Photo

🌍 How do #LLMs handle trolley problems across cultures? We test them with 98K dilemmas in 107 languages, grounded in 40M+ human moral judgments. 💡 Spotlight @ICLR2025 in Singapore✈️ | Best Paper at the Pluralistic Alignment Workshop #NeurIPS2024 📄 Paper: arxiv.org/abs/2407.02273 🧵👇

Kunal Jha (@kjha02)'s Twitter Profile Photo

Our new paper (first one of my PhD!) on cooperative AI reveals a surprising insight: Environment Diversity > Partner Diversity. Agents trained in self-play across many environments learn cooperative norms that transfer to humans on novel tasks. shorturl.at/fqsNN🧵

Max Kleiman-Weiner (@maxhkw)'s Twitter Profile Photo

Awesome new work from my lab led by Kunal Jha scaling cooperative AI! True cooperation requires adapting to both unfamiliar partners and novel environments simultaneously. Agents trained with Cross-Environment Cooperation (CEC) tackle this challenge and get us closer to agents that
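
As a rough illustration of the recipe (not the paper's code), the sketch below trains a single agent by self-play across many randomly generated environments, which is the "environment diversity" ingredient the thread highlights. The toy coordination game and tabular learner are placeholders.

```python
# A toy sketch of the recipe (not the paper's code): train one agent by
# self-play across many randomly generated environments. The coordination
# game and tabular learner below are illustrative placeholders.
import random
from collections import defaultdict

class ToyCoopEnv:
    """One-shot two-player coordination game whose payoff varies per seed."""
    def __init__(self, seed):
        self.good = random.Random(seed).randint(0, 1)   # rewarded joint action

    def step(self, a, b):
        return 1.0 if a == b == self.good else 0.0

class TabularSelfPlayAgent:
    """Learns a per-environment action preference from its own play."""
    def __init__(self):
        self.q = defaultdict(lambda: [0.0, 0.0])

    def act(self, env_id, eps=0.3):
        if random.random() < eps:                        # explore
            return random.randint(0, 1)
        return max((0, 1), key=lambda a: self.q[env_id][a])

    def update(self, env_id, action, reward, lr=0.5):
        self.q[env_id][action] += lr * (reward - self.q[env_id][action])

agent = TabularSelfPlayAgent()
for env_id in range(100):                                # environment diversity
    env = ToyCoopEnv(seed=env_id)
    for _ in range(200):                                 # self-play: both seats are `agent`
        a, b = agent.act(env_id), agent.act(env_id)
        r = env.step(a, b)
        agent.update(env_id, a, r)
        agent.update(env_id, b, r)
```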

Zhijing Jin✈️ ICLR Singapore (@zhijingjin)'s Twitter Profile Photo

I will present our Spotlight #ICLR2025 paper this Sat April 26 at 10am-12:30pm in Singapore! Feel free to drop by Poster #535 in Hall 3 + Hall 2B. Title: "Language Model Alignment in Multilingual Trolley Problems." Location: iclr.cc/virtual/2025/p… Paper PDF: openreview.net/forum?id=VEqPD…

Natasha Jaques (@natashajaques)'s Twitter Profile Photo

"What Makes ChatGPT Chat?" youtu.be/KvTGUI4Tznw. Gave this talk for new UW CSE undergrad admits and their parents. Explains AI, LLMs, and RLHF for a layperson audience. Somehow it absolutely killed with the parents, so maybe your mom would like it if you want her to understand

Vincent Liu (@vincentjliu)'s Twitter Profile Photo

The future of robotics isn't in the lab – it's in your hands. Can we teach robots to act in the real world without a single robot demonstration? Introducing EgoZero. Train real-world robot policies from human-first egocentric data. No robots. No teleop. Just Aria glasses and

Jing-Jing Li (@drjingjing2026)'s Twitter Profile Photo

Excited to share that our SafetyAnalyst paper has been accepted to #icml2025! SafetyAnalyst provides a novel way to determine if some AI behavior would be safe. It’s accurate, interpretable, transparent, and steerable. 1/7

Tianyi (Alex) Qiu (@tianyi_alex_qiu)'s Twitter Profile Photo

Is AI locking humanity into false beliefs?🔒 "The Lock-in Hypothesis," our new paper at ICML, asks if human-LLM feedback loops erode belief diversity and entrench false ideas, causing "stagnation by algorithm." We model it, simulate it, and find early evidence in the wild. [1/n]
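
A back-of-the-envelope way to see the dynamic the abstract describes (this is a toy sketch, not the paper's model): fit a "model" to the population's current beliefs, let people partially adopt its output, repeat, and watch belief diversity collapse.

```python
# Toy simulation of the human-model feedback loop described above (a sketch,
# not the paper's model): the model is fit to current beliefs, people move
# partway toward its output, and belief variance shrinks every round.
import numpy as np

rng = np.random.default_rng(0)
beliefs = rng.normal(0.0, 1.0, size=1000)     # initial spread of beliefs

for step in range(20):
    model_output = beliefs.mean()             # "model" trained on current human data
    trust = 0.3                               # fraction of each belief moved toward the model
    beliefs = (1 - trust) * beliefs + trust * model_output
    print(f"step {step:2d}  belief variance = {beliefs.var():.4f}")
```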

Kevin Ellis (@ellisk_kellis)'s Twitter Profile Photo

New paper: World models + Program synthesis by Wasu Top Piriyakulkij
1. World modeling on-the-fly by synthesizing programs w/ 4000+ lines of code
2. Learns new environments from minutes of experience
3. Positive score on Montezuma's Revenge
4. Compositional generalization to new environments
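
A tiny illustration of the general idea (not the paper's system, which synthesizes far larger programs): treat a world model as a program, and synthesize it by keeping candidates that are consistent with observed experience. The environment, experience data, and hypothesis space below are made up for the example.

```python
# Tiny illustration of "world modeling by program synthesis" (not the paper's
# system): enumerate candidate transition programs and keep the ones
# consistent with observed (state, action, next_state) experience.
from itertools import product

# Placeholder experience from an unknown 1-D environment.
experience = [(0, +1, 1), (1, +1, 2), (2, -1, 1), (5, +1, 6)]

# Candidate "programs": next_state = a * state + b * action + c
candidates = [
    (lambda s, act, a=a, b=b, c=c: a * s + b * act + c, (a, b, c))
    for a, b, c in product([0, 1, 2], [-1, 0, 1], [-1, 0, 1])
]

def consistent(program):
    return all(program(s, act) == s_next for s, act, s_next in experience)

world_models = [params for prog, params in candidates if consistent(prog)]
print("programs consistent with experience:", world_models)   # [(1, 1, 0)]
```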

Wilka Carvalho (@cogscikid)'s Twitter Profile Photo

To help computational cognitive scientists engage with more naturalistic experiments, I've made NiceWebRL, a Python library for designing human-subject experiments that leverage machine reinforcement learning environments. github.com/KempnerInstitu…
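
For context on the kind of data such an experiment records, here is a generic episode-loop sketch using gymnasium as a stand-in environment. This is not NiceWebRL's API (the library provides the web interface and experiment structure); in a real study the action would come from the participant rather than random sampling, and the log format here is a placeholder.

```python
# Generic sketch of the data a human-subject RL experiment records: an
# episode loop with per-step logging, using gymnasium as a stand-in
# environment. This is NOT NiceWebRL's API; the action source and log
# format are placeholder assumptions.
import json
import gymnasium as gym

env = gym.make("FrozenLake-v1")
log = []

for episode in range(3):
    obs, info = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()            # placeholder for a participant keypress
        next_obs, reward, terminated, truncated, info = env.step(action)
        log.append({"episode": episode, "obs": int(obs),
                    "action": int(action), "reward": float(reward)})
        obs, done = next_obs, terminated or truncated

with open("participant_log.json", "w") as f:
    json.dump(log, f)
```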