Rory Greig
@rorygreig1
Research Engineer at Google DeepMind, interested in AI Alignment and Complexity Science.
ID: 2654227392
17-07-2014 16:32:02
1.1K Tweets
605 Followers
4.4K Following
Great talk by Wolf Barfuss for Cooperative AI Foundation - there's a lot of value in bringing together the machine learning community and the complex systems/multi-agent econ fields, and this is currently heavily neglected. youtu.be/gHR6xv3xiqE?si…
Foundations: Why Britain Has Stagnated. A new essay by Ben Southwood, Samuel Hughes & me. Why the UK's ban on investment in housing, infrastructure and energy is not just a problem. It is *the* problem. And how fixing it is the defining task of our generation. ukfoundations.co
I’ve been saying for a while that stagnation is a policy choice: the UK can choose to be rich! Sam Bowman, Ben Southwood + Samuel Hughes lay out the case in brilliant, horrifying detail. Read it, send it to your MP and ask them what they’re going to do about it
When I joined Google DeepMind last year, I came across this incredible group of people working on deliberative alignment, and managed to convince them to join my team in a quest to account for viewpoint and value pluralism in AI. Their Science paper is on AI-assisted deliberation
We're looking for strong ML researchers and software engineers. You *don't* need to be an expert on AGI safety; we're happy to train you. Learn more: alignmentforum.org/posts/wqz5CRzq… Research Engineer role: boards.greenhouse.io/deepmind/jobs/… Research Scientist role: boards.greenhouse.io/deepmind/jobs/…
Excited to share Google DeepMind's AGI safety and security strategy to tackle risks like misuse and misalignment. Rather than high-level principles, this 145-page paper outlines a concrete, defense-in-depth technical approach: proactively evaluating & restricting dangerous