Roger Grosse
@rogergrosse
ID: 3301643341
30-07-2015 14:30:37
996 Tweets
10.1K Followers
782 Following
With a powerful technology like AI, training in ethics and safety is vital for emerging AI researchers and developers. #DLRL2024 yesterday featured an engaging panel on AI safety with Canada CIFAR AI Chairs at the Vector Institute Sheila McIlraith and Roger Grosse.
One of the joys of teaching is seeing your students' projects turn into interesting papers. Here's some very nice work by David Glukhov and collaborators on the challenges of ensuring harmlessness of LLMs that can be queried obliquely and repeatedly.
The field of AI safety emphasizes AI should “do no harm.” But lethal autonomous systems used in warfare are already causing harm. How should we think about purposely harmful AI? SRI Grad Fellow Michael Zhang writes about a panel exploring this topic: uoft.me/aJh
What is "safe" AI? Why is it difficult to achieve? Can LLMs be hacked? Are the existential risks of advanced AI exaggerated—or justified? Join us next week on Sept. 10 to hear from AI experts Karina Vold,Roger Grosse,Sedef Akinli Kocak, and Sheila McIlraith. 🔗 uoft.me/aLB