Meaning Alignment Institute (@meaningaligned)'s Twitter Profile
Meaning Alignment Institute

@meaningaligned

The Meaning Alignment Institute is a research organization with the goal of ensuring human flourishing in the age of AGI.

ID: 1323223144522489856

Link: https://meaningalignment.org/ · Joined: 02-11-2020 11:19:49

40 Tweets

1.1K Followers

17 Following

RadicalxChange (@radxchange):

Happy Friday! Check out our latest episode with Joe Edelman of Meaning Alignment Institute! 

▪️Simplecast: bit.ly/xCsJE23
▪️Apple Podcasts: apple.co/49nWbNi
▪️Spotify: spoti.fi/3B44x08

🎙️ Can AI shape our moral decisions? In the latest RadicalxChange(s) ep, Matt & Joe…

David Duvenaud (@davidduvenaud):

New paper: What happens once AIs make humans obsolete?

Even without AIs seeking power, we argue that competitive pressures will fully erode human influence and values.

gradual-disempowerment.ai

with Jan Kulveit, Raymond Douglas, Nora Ammann, Deger Turan, and David Krueger 🧵

Meaning Alignment Institute (@meaningaligned):

We're very excited to collaborate with ARIA on this program 🚀 Expect some big announcements from us soon, detailing the work we'll be doing together. Stay tuned!

Joe Edelman (@edelwax):

A big part of why AI is threatening is: market forces. Just look at what the 'attention economy' did to social media, or the short-term wins of LLM sycophancy, or the product races among AI labs, or the markets for AI boyfriends and girlfriends. What can we do about this?

Ryan Lowe (@ryan_t_lowe):

yay!!!!! a concrete proposal by Meaning Alignment Institute for how we can re-align markets with what people really care about, using 'market intermediaries'

more to come soon 😈

Ryan Lowe (@ryan_t_lowe):

Introducing: Full-Stack Alignment 🥞

A research program dedicated to co-aligning AI systems *and* institutions with what people value.

It's the most ambitious project I've ever undertaken.

Here's what we're doing: 🧵

Iason Gabriel (@iasongabriel):

Check out this great new initiative + paper led by Ryan Lowe 🥞, Joe Edelman 🥞, xuan (ɕɥɛn / sh-yen), Oliver Klingefjord 🥞 & the fine folks at Meaning Alignment Institute!

Using rich representations of value we aim to make headway on some of the most pressing AI alignment challenges!

See: full-stack-alignment.ai

Joe Edelman (@edelwax):

In 2017, I was working to change FB News Feed's recommender to use “thick models of value” (per the paper we just released). Mark Zuckerberg even promised he'd make Facebook “Time Well Spent”. That effort was thwarted by the (1) market dynamics of the attention economy, (2) the US…

Oliver Klingefjord (@klingefjord):

Aligning an AI system, or a recommender system, in isolation is playing whack-a-mole. The real issue we're facing is "full-stack", and requires solutions that tackle the problems at all levels

Ryan Lowe (@ryan_t_lowe):

I guess now is also a good time to announce that I've officially joined Meaning Alignment Institute!! I'll be working on field building for full-stack alignment -- helping nurture this effort into a research community with excellent vibes that gets shit done weeeeeeeeeee 🚀🚀

xuan (ɕɥɛn / sh-yen) (@xuanalogue):

Ever since I started thinking seriously about AI value alignment in 2016-7, I've been frustrated by the inadequacy of utility+RL theory to account for the richness of human values.

Glad to be part of a larger team now moving beyond those thin theories towards thicker ones.

David Duvenaud (@davidduvenaud):

Ryan Lowe 🥞 of Meaning Alignment Institute spoke on "Co-Aligning AI and Institutions". Their “Full-stack Alignment” work argues that alignment strategy needs to consider the institutions in which AI is developed and deployed.

paper: full-stack-alignment.ai
video: youtube.com/watch?v=8AUDmo…