
Jason Hoelscher-Obermaier (in Singapore)
@jasonobermaier
Co-Director @apartresearch | AI safety research lead | Physics PhD | co-designing a better future
ID: 1029109598228295680
https://www.linkedin.com/in/jas-ho/ 13-08-2018 20:57:02
347 Tweets
163 Followers
685 Following

Joe O'Brien ...also, we should stop limiting the research dollars so much.
To me, Apart Research stands as an example of what free, global, and accessible AI safety research can really look like, empowering individuals across the world to push for truly independent research agendas within impactful arenas of AI safety. This is proven both in our

another insightful top prio from the report: "Strengthening talent pipelines: Alleviate limited availability of specialized talent by sponsoring upskilling efforts" exactly what Apart Research does


I want to respectfully (I hope) disagree with Kareem Carr, Statistics Person here. The quest for AGI is not comparable to earlier Silicon Valley chases for big data or big-data analysis, or in fact any other technological hype we have seen before. Space travel, nuclear fusion, AR -- none of them

We are already seeing AI agents like OpenAI's Codex in production, and the number of agents will keep increasing. But what risks do multi-agent systems pose? Our guest, Jason Hoelscher-Obermaier, Director of Research at AI safety research lab Apart Research, highlighted 3 major risks:

Just made the biggest donation I could afford to Apart Research. Fantastic org building a meritocratic talent pipeline & making real impact on AI safety. Check them out & make a donation if you also want the future w/ AI to go well. Link below.

Neel Nanda I suspect though that in <=5 years' time automated research assistants/scientists will get so good and fast that we'll all mostly be doing AI-assisted peer review
