Walter H. Haydock (@walter_haydock) 's Twitter Profile
Walter H. Haydock

@walter_haydock

Security leader and entrepreneur | @HarvardHBS grad | @USMC veteran | Tweets at the intersection of AI, security, privacy, and compliance

ID: 1519659916448997376

linkhttp://policy.stackaware.com calendar_today28-04-2022 12:51:21

1,1K Tweet

298 Followers

380 Following

Walter H. Haydock (@walter_haydock) 's Twitter Profile Photo

Companies, with vulnerability management policy, that requires fixing "high and critical CVEs in 30 days": 50+% Of 1.6M orgs, those actually fixing all "high and critical CVEs (56% of them, per the NVD) in 30 days": ~0% (Cyentia Institute & SecurityScorecard data below)

Companies, with vulnerability management policy, that requires fixing "high and critical CVEs in 30 days": 50+%

Of 1.6M orgs, those actually fixing all "high and critical CVEs (56% of them, per the NVD) in 30 days": ~0%

(Cyentia Institute & SecurityScorecard data below)
Walter H. Haydock (@walter_haydock) 's Twitter Profile Photo

Tune in TODAY to learn how ISO 42001 and HITRUST's AI security certification compare! Ryan Patrick and I will discuss the frameworks, how they differ, and how they complement each other. Register below: hitrustalliance.net/webinars/stack…

Walter H. Haydock (@walter_haydock) 's Twitter Profile Photo

The American AI regulation wave has already started. Colorado leads with SB-205. But there are issues: -> Its governor signed the state AI Act with reservations -> A bipartisan task force highlighted many gaps -> Legislators talked amending it, but didn't -> Enforcement starts

Walter H. Haydock (@walter_haydock) 's Twitter Profile Photo

Last week, the CISO of a $500 million ARR SaaS company told me this about AI risk assessments: "Right now it's a pretty time-consuming effort." With engineering teams requesting 3-4 new AI tools a week, his security analysts were completely under water. And there was an even

Walter H. Haydock (@walter_haydock) 's Twitter Profile Photo

What makes an AI system reliable? Not just whether it works, but that you KNOW it worked. As we build out the StackAware AI governance standard, one of the first categories we looked at is: -> Reliability -> Monitoring -> Observability -> Logging These characteristics of AI

Walter H. Haydock (@walter_haydock) 's Twitter Profile Photo

Copilot Studio is powerful, but can be dangerous. It makes every employee an AI engineer: -> Great for productivity -> Nerve-wracking for security And Microsoft makes it hard to turn off! Check out this clip for an overview of the biggest risk.

Walter H. Haydock (@walter_haydock) 's Twitter Profile Photo

With the U.S. state AI regulation moratorium dead, a wave of new laws is coming. Most impactful? The Colorado Artificial Intelligence Act (SB24-205). It puts in place a range of requirements for "High-risk artificial intelligence systems." This describes any AI system whose

Walter H. Haydock (@walter_haydock) 's Twitter Profile Photo

AI transparency, explainability, and interpretability. Common terms. Rarely defined. Here's how I do it: 1. Transparency Disclosure of an AI system’s: -> development processes -> operational use -> data sources -> limitations in a way that allows stakeholders to understand:

Walter H. Haydock (@walter_haydock) 's Twitter Profile Photo

If I want to get ISO 42001 certified, do all my 3rd party AI services also need to be ISO 42001 compliant? No. Unlike HITRUST or other standards, ISO 42001 does not have a clearly-defined inheritance model. While the (mostly optional) Annex A controls have requirements

Walter H. Haydock (@walter_haydock) 's Twitter Profile Photo

StackAware's completely revamped data sheet is live! Check it out, and if: -> You are a security, privacy, or compliance leader -> In healthcare, life sciences, or B2B SaaS -> Who needs to manage AI risk DM me "DATA" to discuss how we can help.

StackAware's completely revamped data sheet is live!

Check it out, and if:

-> You are a security, privacy, or compliance leader
-> In healthcare, life sciences, or B2B SaaS
-> Who needs to manage AI risk

DM me "DATA" to discuss how we can help.
Walter H. Haydock (@walter_haydock) 's Twitter Profile Photo

Who can achieve ISO 42001 certification? Basically any organization, including AI: -> Model developers and trainers -> Service providers -> Users StackAware itself doesn't train or fine-tune AI systems, but we make heavy use of them on a daily basis. See this video for

Walter H. Haydock (@walter_haydock) 's Twitter Profile Photo

How much transparency is "enough" for an AI system? The StackAware AI governance standard tells you. It maps to ISO/IEC 42001:2023 Annex A controls: -> 5.4 -> 6.2.4 -> 7.2 -> 9.3 and aligns with regulations like the EU AI Act and Colorado SB-205. Here’s how we implement this

Walter H. Haydock (@walter_haydock) 's Twitter Profile Photo

Regulation drives activity. But does it drive progress? In cybersecurity and AI governance, that's questionable. I've served in 2 out of 3 branches of the U.S. government, and "best practices" that are actually "best" aren't coming from there. As a wave of State-level

Walter H. Haydock (@walter_haydock) 's Twitter Profile Photo

Why the CISO of a $500 million ARR software company says he needs ISO 42001 certification: "When we talk about AI governance...it's just hearsay." Simply claiming to have AI policies, procedures, and controls was essentially "marketing at that point." He wanted external

Walter H. Haydock (@walter_haydock) 's Twitter Profile Photo

When do AI-powered firms work with StackAware? The top 3 reasons CISOs reach out: Emerging technologies, especially AI, shake up the security landscape. As companies rush to integrate AI, they must address key challenges: 1. Product launches. Security and compliance teams are

Walter H. Haydock (@walter_haydock) 's Twitter Profile Photo

AI 10x's your data governance problems. It adds a whole dimension of things to track like: -> Preparation and cleaning -> Intended use -> Provenance -> Quality -> Bias In this clip I propose a way for companies to tackle this problem. And this free guide gives you a

Walter H. Haydock (@walter_haydock) 's Twitter Profile Photo

ISO 42001 as a concept? Revolutionary. As a structured document? A disaster. Obviously written by committee (where everyone had his/her "say"), it's nearly impossible to track every requirement in an organized way. Especially because the Annex A controls overlap with each

Walter H. Haydock (@walter_haydock) 's Twitter Profile Photo

The AI attack surface is expanding—fast. Models can: -> Leak data -> Misbehave under pressure -> Create new attack surfaces StackAware's answer: Relentless AI Red Teaming. -> It’s not a scan -> It’s not a one-time audit -> It’s a continuous, full-coverage assault on your AI

Walter H. Haydock (@walter_haydock) 's Twitter Profile Photo

Healthcare is high-risk, high-reward in terms of AI use. Here are my top 3 security recommendations: 1. Zero Data Retention by 3rd party systems Keeping protected health information (PHI) and other sensitive data in as few places as possible reduces the risk of breach. 2.

Walter H. Haydock (@walter_haydock) 's Twitter Profile Photo

StackAware passed its ISO 42001 surveillance audit! And the ANAB witnessed it. My key learnings: 1. Process management We had a minor non-conformity (now corrected) due to a failure to discuss a change in external issues during management review (and document it). This came