Eoin Wickens (@enwckns)'s Twitter Profile
Eoin Wickens

@enwckns

Security for AI @ HiddenLayer

All words are, well, they're just, like, my opinion, man.

ID: 740610559675117568

Joined: 08-06-2016 18:24:52

279 Tweets

216 Followers

426 Following

Steve YARA Synapse Miller (@stvemillertime)'s Twitter Profile Photo

Great detection rules are about hitting a "sweet spot" that is somewhere before the point of diminishing returns, after which a rule can become "overfit" and functionally no better than a hash. #100daysofYARA
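
A quick way to make that sweet spot concrete, using the yara-python bindings (the rule names, strings, and sample bytes below are invented for the demo): the first rule keys on family-wide behavior, while the second pins an exact byte run and is functionally just a hash.

```python
import yara

# A "sweet spot" rule: keys on family-wide behavior, not one sample.
GENERAL = r"""
rule suspicious_downloader {
    strings:
        $ua  = "User-Agent: Mozilla/4.0" ascii
        $url = /https?:\/\/[a-z0-9.-]+\/[a-z]{4,8}\.exe/ nocase
    condition:
        $ua and $url
}
"""

# An "overfit" rule: an exact byte run from one sample, pinned to
# offset 0 -- functionally no better than matching the file's hash.
OVERFIT = r"""
rule one_sample_only {
    strings:
        $exact = { 4D 5A 90 00 03 00 00 00 04 00 00 00 FF FF 00 00 }
    condition:
        $exact at 0
}
"""

rules = yara.compile(source=GENERAL + OVERFIT)

# A slightly mutated variant: the overfit rule misses it, while the
# generalized rule still fires.
sample = (b"MZ\x90\x00 ... User-Agent: Mozilla/4.0 ... "
          b"http://evil-cdn.example/update.exe")
print([m.rule for m in rules.match(data=sample)])  # ['suspicious_downloader']
```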

HiddenLayer (@hiddenlayersec)'s Twitter Profile Photo

📅 SAVE THE DATE: HiddenLayer’s 2024 AI Threat Landscape Report will be released on March 6th. Sign up to be the first to preview the report & join us in a webinar discussion as we share some of the report’s most important findings 👉 hubs.ly/Q02kGr2Q0 #Security4AI

HiddenLayer (@hiddenlayersec)'s Twitter Profile Photo

In our latest publication, Eoin Wickens & Kasimir Schulz show how an attacker could send malicious pull requests to any repository on Hugging Face by hijacking the Safetensors conversion bot — with a single malicious model, the conversion service can be compromised.

Tom Bonner (@thomas_bonner)'s Twitter Profile Photo

Our researchers discovered that the Hugging Face PyTorch to Safetensors conversion service could easily be compromised by attackers, who could tamper with models and leak the token used to create pull requests from the official bot. hiddenlayer.com/research/silen…
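
The underlying bug class is Python pickle deserialization: legacy PyTorch .bin checkpoints are pickle archives, and unpickling attacker-controlled bytes can invoke arbitrary callables. A minimal sketch of that primitive, with a harmless echo standing in for a real payload (the actual conversion-bot exploit chain is more involved):

```python
import os
import pickle

class MaliciousModel:
    """Stand-in for a tampered object embedded in a .bin checkpoint."""
    def __reduce__(self):
        # pickle calls os.system("echo pwned") while loading this object
        return (os.system, ("echo pwned",))

blob = pickle.dumps(MaliciousModel())

# Anything that unpickles untrusted bytes executes the payload as a
# side effect of loading -- torch.load() on a .bin file included,
# unless weights-only loading is enforced.
pickle.loads(blob)
```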

HiddenLayer (@hiddenlayersec)'s Twitter Profile Photo

📅 SAVE THE DATE: HiddenLayer’s 2024 AI Threat Landscape Report will be released on March 6th. We're excited to have Eoin Wickens, our Technical Research Director and one of the report’s authors, on the webinar. Pre-register 👉 hubs.ly/Q02lqKfN0

The Hacker News (@thehackersnews)'s Twitter Profile Photo

🤖 Security researchers have uncovered a new #vulnerability in Hugging Face's Safetensors conversion service that could lead to supply chain attacks, compromising user-submitted models. Read details: thehackernews.com/2024/02/new-hu… #cybersecurity #hacking #technews

HiddenLayer (@hiddenlayersec)'s Twitter Profile Photo

🚀 Product Launch: Introducing HiddenLayer's AI Detection & Response for Generative AI. We're thrilled to bring this new capability to our award-winning platform, extending our end-to-end security to orgs deploying LLM-based applications 📄 hubs.ly/Q02pY8M_0 #genai #LLM

AI Village @ DEF CON (@aivillage_dc)'s Twitter Profile Photo

AI Village is back for DEF CON 32! We're looking for talks on all things ML + Security, but this year we're getting small! "Smart" devices, AVs, on-device facial recognition, and more! Show us how you broke them! Submission deadline is 12-May-2024! aiv2024.hotcrp.com

HiddenLayer (@hiddenlayersec)'s Twitter Profile Photo

We're thrilled to have Marta & Eoin Wickens returning to #BSides SF this year. Make sure you catch their new presentation on 5/5, "Insane in the Supply Chain: Threat modeling for attacks on AI Systems." 🎬 hubs.ly/Q02sjhqf0 Our full #RSAC schedule 👉 hubs.ly/Q02sjgCX0

Noah Giansiracusa (@profnoahgian)'s Twitter Profile Photo

(a) this is fascinating (b) I hate to think how messed up science is going to get as people use LLMs for things they really shouldn’t, which evidently includes any kind of random sampling.
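
The random-sampling failure is easy to make concrete: digits "randomly" picked by an LLM tend to be heavily skewed toward a few favorites, which a plain chi-square statistic exposes. A sketch with invented data (the llm_sampled_digits list below is hypothetical, not measured output from any model):

```python
import random
from collections import Counter

def chi_square_uniform(samples, k=10):
    """Chi-square statistic of samples against a uniform distribution
    over k outcomes; larger values mean stronger deviation."""
    expected = len(samples) / k
    counts = Counter(samples)
    return sum((counts.get(i, 0) - expected) ** 2 / expected for i in range(k))

# Hypothetical digits from repeatedly prompting an LLM for "a random
# digit" -- skewed toward a couple of favorite values.
llm_sampled_digits = [7] * 40 + [3] * 20 + [0, 1, 2, 4, 5, 6, 8, 9] * 5

prng_digits = [random.randrange(10) for _ in range(100)]

print("LLM :", chi_square_uniform(llm_sampled_digits))  # 120.0 -> clearly biased
print("PRNG:", chi_square_uniform(prng_digits))         # small, near df = 9
```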

Tom Bonner (@thomas_bonner)'s Twitter Profile Photo

Very nice work from Abraxus and Kieran Evans in discovering CVE-2024-27322, a vulnerability in R's deserialization library that can lead to "R-bitrary" code execution when deserializing untrusted data. hiddenlayer.com/research/r-bit…

HiddenLayer (@hiddenlayersec)'s Twitter Profile Photo

Our SAI team uncovered a #0day deserialization vulnerability in the popular statistical programming language R, widely used within #government and #MedicalResearch. This could be used as part of a #supplychainattack. Learn more 👇hubs.ly/Q02vkG4w0 #Security4AI

Mihai Maruseac (@mihaimaruseac)'s Twitter Profile Photo

Model storage under attack (techcrunch.com/2024/05/31/hug…). Models are uninspectable, so the only solution to prevent tampering is to sign them. OpenSSF has a model signing SIG as part of the AI/ML WG. Both biweekly meetings are in the OpenSSF calendar. Also, github.com/sigstore/model…
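
Stripped to its essence, signing a model means hashing every file in the model directory into a manifest, signing that manifest once, and re-checking digests at load time. An illustrative stand-in using plain SHA-256 (this is not the sigstore model-signing API itself, which adds keyless signing and verification on top):

```python
import hashlib
import json
from pathlib import Path

def digest_model_dir(model_dir: str) -> dict:
    """Hash every file under the model directory into a manifest."""
    manifest = {}
    for path in sorted(Path(model_dir).rglob("*")):
        if path.is_file():
            rel = str(path.relative_to(model_dir))
            manifest[rel] = hashlib.sha256(path.read_bytes()).hexdigest()
    return manifest

def verify(model_dir: str, signed_manifest_path: str) -> bool:
    """Compare current digests against a previously signed manifest.
    In a real deployment, the manifest's signature would be verified
    first (e.g. via sigstore) before its contents are trusted."""
    signed = json.loads(Path(signed_manifest_path).read_text())
    return digest_model_dir(model_dir) == signed
```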

LABScon (@labscon_io)'s Twitter Profile Photo

✍️ #LABScon24 workshop: The AI-talian Job: Hands-on attacks on AI Systems, by Travis Smith & Eoin Wickens (HiddenLayer). labscon.io/speakers/eoin-… labscon.io/speakers/travi…

cje (@caseyjohnellis)'s Twitter Profile Photo

i was pretty bummed to miss LABScon actual this year on account of ❤️‍🩹🇦🇺 things #iykyk …but then this showed up out of the blue yesterday 🤩🥹🙏

HiddenLayer (@hiddenlayersec)'s Twitter Profile Photo

Our latest research highlights that even well-intentioned solutions can have vulnerabilities. We found that the watermarking service used by AWS to combat misinformation in digital content generated by its Titan AI model had a vulnerability. Read more 👉 hiddenlayer.com/research/attac…

Mihai Maruseac (@mihaimaruseac)'s Twitter Profile Photo

All of this has happened before (vulnerabilities, lack of authn/authz, data leaks). All of this is happening again (ML security issues). I really recommend Eoin Wickens's keynote at SCORED 24 about all the security issues found in ML and what we can do to avoid a bleak future.