Mike Kiser (@_mikekiser)'s Twitter Profile
Mike Kiser

@_mikekiser

Privacy advocate, identity aficionado, chronoptimist. A poor man's cross between Ira Glass and Peter Sagal. Wearer of many, many hats. (Thou/Thee/Thine)

ID: 2344555644

Link: https://mikekiser.org · Joined: 15-02-2014 04:45:04

1.1K Tweets

931 Followers

1.1K Following

Mike Kiser (@_mikekiser)'s Twitter Profile Photo

A5: AI is great as an educated assistant, but not as a source of truth. Cite the use of AI tools whenever they’re used; it’s ethical and also helps others gauge risk more appropriately. Use the right AI model for the right task and keep the use cases clear and direct. #eWEEKchat

A6: Beyond being a boon for the chip industry, it means that dedicated hardware will continue to evolve. Wonder what this means for the aforementioned Chromebook phenomenon? (I know that I was thrilled when I could access GPUs on my personal laptop.) #eWEEKchat

A8: My personal concerns about using AI tools center around what I would call the "death of authenticity." Issues with the accuracy of responses weaken what is true or authentic, and there's also a temptation to claim human authorship of AI creations—to be inauthentic. #eWEEKchat

A9: Democratization of AI is well underway; our personal machines are providing gateway access to using AI. We're being directed into using AI as a default option, sometimes without us even knowing of its use. #eWEEKchat

A10: Stay practical: identify your needs, keep your use cases narrow, and then use AI tools ethically, citing their use when appropriate. Remember that ultimately, technology is neither good nor bad – it’s what we do with it that determines whether it’s used or abused. #eWEEKchat

A2. There is an assumption that it will increase productivity and produce better results than the human equivalents. #eWEEKchat

A3. Security itself. Cost and training are fairly well-known, but we’re still feeling our way around the risk model for AI usage (including data, agents acting “on behalf of” others, and the like) #eWEEKchat

A4. Being clear on use cases (complete with side effects) and tracking down the provenance of training data. Isolation of models for both of these factors is key. #eWEEKchat

A5. Don’t lose sight of transparency. If you don’t understand how your model came to the conclusions it did, then you’re flying blind. #eWEEKchat

A6. They’ll need fewer staff for the mundane work: writing documents, summarizing reports. We’ll still need humans to analyze and interpret results, just as we have with past waves of automation. #eWEEKchat

A7. AI is democratizing the hacker community and expanding it to “normal people”; the easiest example of this is today’s easy access to deepfakes. So no, it won’t make things safer, per se. #eWEEKchat

A8. Oddly, a lack of human-created content to train models on; without real-world human input, models tend to regress and eventually collapse. #eWEEKchat