New paper:
The possibility of AI welfare and moral patienthood—that is, of AI systems with their own interests and moral significance—is no longer a sci-fi issue. It's a very real possibility in the near term.
And we need to start taking it seriously.
We’re starting a Fellows program to help engineers and researchers transition into doing frontier AI safety research full-time.
Beginning in March 2025, we'll provide funding, compute, and research mentorship to 10–15 Fellows with strong coding and technical backgrounds.
As AI models become more complex and more capable, is it possible that they’ll have experiences of their own?
It’s an open question. We recently started a research program to investigate it.
💡 Leading researchers and AI companies have raised the possibility that AI models could soon be sentient.
I’m worried that too few people are thinking about this. Let’s change that.
I’m excited to announce a Digital Sentience Consortium. Check out these funding opportunities. 👇