Roger Grosse
@rogergrosse
ID: 3301643341
30-07-2015 14:30:37
996 Tweets
10.1K Followers
782 Following
With a powerful technology like AI, training in ethics and safety is vital for emerging AI researchers and developers. #DLRL2024 yesterday featured an engaging panel on AI safety with Canada CIFAR AI Chairs at the Vector Institute Sheila McIlraith and Roger Grosse.
One of the joys of teaching is seeing your students' projects turn into interesting papers. Here's some very nice work by David Glukhov and collaborators on the challenges of ensuring harmlessness of LLMs that can be queried obliquely and repeatedly.
The field of AI safety emphasizes AI should "do no harm." But lethal autonomous systems used in warfare are already causing harm. How should we think about purposely harmful AI? SRI Grad Fellow Michael Zhang writes about a panel exploring this topic: uoft.me/aJh
Look, here's the thing about free speech: YES, it's not "absolute". Even the most hardcore free speech advocates agree that there are exceptions. Extreme case: telling e.g. Russia about UK military secrets is "just" a speech act, but it is (and should be) illegal in UK law.
What is "safe" AI? Why is it difficult to achieve? Can LLMs be hacked? Are the existential risks of advanced AI exaggerated, or justified? Join us next week on Sept. 10 to hear from AI experts Karina Vold, Roger Grosse, Sedef Akinli Kocak, and Sheila McIlraith. uoft.me/aLB