
Leonard Dung
@leonarddung1
Philosopher of cognition at the Ruhr-University Bochum. I work mainly on consciousness, AI, and animals.
ID: 1428294477211480067
https://sites.google.com/view/leonard-dung/home
Joined: 19-08-2021 09:55:26
178 Tweets
470 Followers
593 Following
Matthew Barnett: Suppose we automate all human knowledge workers completely. The only economic power a person has is derived from financial and social capital. What happens to those humans who lack either? Are they serfs forever? UBI is a bad answer because it can be taken away as easily as it
There is zero evidence that China is seriously pursuing superintelligence, or is pursuing the global dominance described here. Meta is paying these organisations to scare American policymakers into opposing regulation of AI. This is commercial propaganda. papers.ssrn.com/sol3/papers.cf…
The “Manhattan Project” framing of AI alignment--as a binary, technical challenge that can be solved such that AI takeover is averted--is misleading. It's neither clear-cut nor fully operationalizable. New paper with Leonard Dung in Mind and Language: onlinelibrary.wiley.com/doi/10.1111/mi…

Great paper by Simon Friederich and Leonard Dung! We agree that "solving the alignment problem" is not obviously the solution to reduce existential risk. We also agree that we should not frame xrisk reduction in terms of a Manhattan Project. "One possible outcome is to conclude
Can benefits for species with small welfare ranges outweigh significant human goods? My new paper explores this issue and argues that the answer is plausibly ‘yes’. Now available online first: journals.publishing.umich.edu/jpe/news/207/ For the preprint PDF, see philpapers.org/rec/LOHTMI-2