Gus Docker
@gusdocker
Podcast Host @FLI_org
ID: 1484131048183238659
https://futureoflife.org 20-01-2022 11:50:33
110 Tweets
230 Followers
328 Following
📻 New on the FLI Podcast! Composer & Fairly Trained CEO Ed Newton-Rex joins Gus Docker to discuss: 📚 The issue of AI models trained on copyrighted data; 🎵 AI music; 🏢 The industry’s attitude towards rights; ⚖️ What fairer approaches to creatives' work and AI could look like;
What happens after superintelligence? 👇 📻 New on the FLI Podcast, Anders Sandberg from the Mimir Center for Long Term Futures Research joins host Gus Docker for an episode to discuss what superintelligence will mean for human psychology, markets, and governance; physical bottlenecks to it; and more. 🔗 ⬇️
📻🆕 Daniel Kokotajlo, AI Futures Project Executive Director who co-wrote AI 2027, joined Gus Docker for an episode of the FLI Podcast on: ❌ Why the AI race ends in disaster ⌨️ The implications of automated coding ❓ If AI is inherently risky And much more. 🔗 Listen at the link below:
📻 New on the FLI Podcast! ➡️ Forethought's Tom Davidson and host Gus Docker discuss the growing threat of AI-enabled coups: how AI could empower small groups to overthrow governments and seize power. 🔗 Listen to the full interview at the link in the replies below:
📻 New on the FLI Podcast, Calum Chace joins Gus Docker to discuss: 💼 How AI could replace humans, especially in the workforce; 🧠 How "cognitive automation" compares to historical tech revolutions; 🏫 AI redefining education; And much more. 🔗 Listen at the link in the replies:
"This is still quite early... Between now and five years from now, AI's gonna go from what it is now, to being the #1 economic, political, social issue. The front page every day will be to do with AI." -80,000 Hours founder Benjamin Todd, on the latest FLI Podcast. 🔗 Link in the
"I'm very confused about what the rationale is of AGI company leadership at the moment... mindlessly scaling and mindlessly accelerating is not the answer to solving our problems, it's more of a 'hopium'". 📻Esben Kran (Apart Research, Seldon) joined host Gus Docker on the FLI
📻 Economist Basil Halperin on the latest FLI podcast episode: 📢 "It's hard to get away from the idea that there will be skyrocketing inequality in a truly transformative AI scenario, but skyrocketing inequality might still be consistent with everyone being better off." 🔗
🆕 "As we continue to build technology that is designed to replace rather than to augment, we move closer and closer towards a world where people just don't matter. And then of course you're reliant on other forces [...] it's a very precarious situation to be in." -Luke Drago
Listen to @Parmy's full conversation with host Gus Docker: youtube.com/watch?v=ROC-0k…
"If you feel like, 'hey, we're actually not hitting certain alignment things right now and we're using misaligned models to try and align models of the future'... probably good to speak up now." -Karl Koch, founder of the AI Whistleblower Initiative AIWI - The AI Whistleblower Initiative, on the latest
"Better futures - namely, trying to make the future better, conditional on there being no catastrophe - is in at least the same ballpark of priority as reducing existential risk itself." New on the FLI Podcast, Forethought senior research fellow William MacAskill joins host
🆕"If the final input at the end of the day that informs regulation is what the public wants and who they vote for, then at some point the money stops working for you." -The Midas Project's Tyler Johnston on the FLI Podcast w/ Gus Docker, discussing how to hold Big AI accountable 🔗👇
🚨 "Why are companies building these things? The REAL reason, the goal, is to not give people the tools that will just make them more productive, but to replace people." -Anthony Aguirre on the FLI Podcast with host Gus Docker ⬇️ 🎥