Aaron Grattafiori (@dyn___)'s Twitter Profile
Aaron Grattafiori

@dyn___

Ex-GenAI Red Team Lead @ Meta. Ex-Security Red Team Lead. Ex-Principal Consultant and Researcher @ iSEC Partners/NCC Group.

ID: 2369252168

Link: http://hack.industries · Joined: 02-03-2014 18:45:27

19.19K Tweets

5.5K Followers

2.2K Following

doomer (@uncledoomer)'s Twitter Profile Photo

the engineers who design these machines for weird random physical tasks in manufacturing are so cool. "hey we need a bottle stander upper machine, can you do it?" "sure thing, it'll be a big wheel and we'll slap em. we'll just slap the shit out of em"

BSidesDenver (@bsidesden)'s Twitter Profile Photo

⚠️ “One month out!” We’re in the final stretch before BSides Denver — are you in? 🔐 Reserve your badge → bsidesden.org/event-details-…

vx-underground (@vxunderground)'s Twitter Profile Photo

That Israeli intelligence dude who got arrested in Las Vegas, while attending BLACKHAT, for (allegedly) trying to lure a minor, had the arrest documents unsealed and released online today.

The documents are so cooked, dude mentioned to the detective (and on the record) he has

solst/ICE (@icesolst)'s Twitter Profile Photo

Blackhat training: ADVANCED ADVERSARIAL TACTICS IN NATION STATE OFFENSIVE TRADECRAFT - AI THRUNTING APPROACH ($4,800)

Real APT: hey click on this

Alex Plaskett (@alexjplaskett)'s Twitter Profile Photo

Looks like the high risk V8 bug (CVE-2025-9132) found by Big Sleep AI was an OOB write: chromium.googlesource.com/v8/v8/+/848d7f…

Suha (@suhackerr)'s Twitter Profile Photo

New post and tool! Attackers can break production AI systems by using image scaling to hide multi-modal prompt injections from users. 🧵 for more info on what broke, how this works, and our new tool to try this out yourself

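The linked thread is truncated here, but the underlying trick is easy to sketch: build an oversized image whose content changes when a serving pipeline downscales it before the model sees it. Below is a minimal illustration, assuming a pipeline that downscales with nearest-neighbor sampling; file names and sizes are hypothetical, and this is not the linked tool's actual implementation.

```python
# Minimal sketch of an image-scaling prompt-injection carrier.
# Assumes the target pipeline downscales with nearest-neighbor
# sampling; file names and sizes below are hypothetical.
from PIL import Image

SCALE = 8  # the cover image is SCALE times larger than the model's view

def embed(cover: Image.Image, payload: Image.Image) -> Image.Image:
    """Overwrite exactly the pixels nearest-neighbor downscaling will
    sample, so the payload only appears after resizing."""
    out = cover.copy()
    w, h = payload.size
    for y in range(h):
        for x in range(w):
            # PIL's NEAREST samples near the center of each
            # SCALE x SCALE source block.
            out.putpixel((x * SCALE + SCALE // 2, y * SCALE + SCALE // 2),
                         payload.getpixel((x, y)))
    return out

# payload.png: the injected instructions rendered as a small image
# (e.g. 64x64); cover.png: a benign image exactly SCALE times larger.
payload = Image.open("payload.png").convert("RGB")
cover = Image.open("cover.png").convert("RGB")

stego = embed(cover, payload)
stego.save("looks_benign_at_full_size.png")

# What a downscaling pipeline (and therefore the model) would see:
stego.resize(payload.size, Image.NEAREST).save("what_the_model_sees.png")
```

Against real bilinear or bicubic resampling, the cover pixels around each sample point have to be optimized rather than overwritten one by one, which is what makes dedicated tooling useful.
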
Aaron Grattafiori (@dyn___)'s Twitter Profile Photo

"The batch script file contains several functionalities but most notably, it creates a fake subdirectory %SystemDrive%\Windows<space>\System32\. Note there is a <space> between Windows and System32" Nice.

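For context on why that trailing space matters: normal Win32 path handling strips trailing spaces from path components, so a directory named "Windows " can only be created through the \\?\ extended-length prefix, which skips normalization. A minimal Python sketch of the primitive follows (Windows-only, needs rights to create directories under C:\, and belongs in a throwaway VM).

```python
# Recreates the "mock trusted directory" primitive from the quoted
# analysis: a folder literally named "Windows " (trailing space)
# holding a fake System32. Sketch only; Windows-specific, run in a VM.
import os

# The \\?\ prefix disables Win32 path normalization, so the trailing
# space after "Windows" survives instead of being silently stripped.
fake_windows = r"\\?\C:\Windows "   # note the trailing space
os.mkdir(fake_windows)
os.mkdir(fake_windows + r"\System32")

# Naive path checks can now mistake files dropped here for residents
# of the real, trusted C:\Windows\System32.
print(os.listdir(fake_windows))     # ['System32']
```
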
Rachel Tobac (@racheltobac)'s Twitter Profile Photo

If you're building an AI tool like ChatGPT and you do not have safeguards to STOP the simulation and yes-man behavior when a user tells you they're planning on harming themselves or others, you've failed.

This should be at the very top of the priority list.

Ivan Fratric 💙💛 (@ifsecure)'s Twitter Profile Photo

If you're keeping an eye on the Big Sleep issue tracker (goo.gle/bigsleep) you might have noticed that the detailed reports for some bugs (e.g. issuetracker.google.com/issues/4351567…) are now public. Note however that all reports are lovingly crafted by a human and not AI-generated.

Riley Walz (@rtwlz)'s Twitter Profile Photo

I reverse engineered the San Francisco parking ticket system. I can see every ticket seconds after it's written

So I made a website. Find My Friends? AVOID THE PARKING COPS.

AISecHub (@aisechub)'s Twitter Profile Photo

How does the AI Kill Chain break attacks into actionable stages? - developer.nvidia.com/blog/modeling-… by Rich Harang

AI-powered applications are introducing new attack surfaces that traditional security models don't fully capture, especially as these agentic systems gain autonomy. The

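The post is cut off above, but the general shape of a kill-chain model is easy to illustrate: name the stages, then attach detections and mitigations to each link. A hypothetical Python sketch follows; the stage names are generic placeholders, not necessarily the stages NVIDIA's framework defines.

```python
# Hypothetical sketch of staging attacks on an AI-powered app.
# Stage names are illustrative placeholders, not necessarily the
# stages defined in the linked NVIDIA post.
from enum import Enum, auto

class Stage(Enum):
    RECON = auto()    # probing the app, enumerating tools and context
    INJECT = auto()   # getting attacker content into model input
    EXECUTE = auto()  # the model acts on the injected instructions
    PERSIST = auto()  # payload survives via memory or RAG stores
    IMPACT = auto()   # exfiltration, fraud, destructive actions

# Mapping concrete observations to stages lets detections and
# mitigations attach to a specific link in the chain.
DETECTIONS: dict[Stage, list[str]] = {
    Stage.INJECT: ["scan retrieved docs/images for instruction-like text"],
    Stage.EXECUTE: ["require approval for high-risk tool calls"],
    Stage.PERSIST: ["audit writes to long-term memory and RAG indexes"],
    Stage.IMPACT: ["egress controls on tools that can move data"],
}

for stage in Stage:
    for control in DETECTIONS.get(stage, []):
        print(f"{stage.name}: {control}")
```
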
clearbluejar (@clearbluejar)'s Twitter Profile Photo

Breaking down the patch for CVE-2025-43400, a FontParser vulnerability in the latest macOS Tahoe and iOS 26.0.1 update. The issue: A malicious font could cause an out-of-bounds write, leading to memory corruption. Let's look at the fix. 🧵

blasty (@bl4sty)'s Twitter Profile Photo

can we please get the libxml2 and ffmpeg people some cold cash, lambo's and decent quality blow as a token of appreciation for all the ASAN splats we throw over the fence and want to have fixed pronto? I know one man's trash (CVE's) is another man's treasure, but we gotta respect

Jackson Atkins (@jacksonatkinsx)'s Twitter Profile Photo

This just dropped and nobody's talking about it yet. 🥇

Nvidia's "GenCluster" won IOI 2025 gold with open-source gpt-oss-120b.

The first time an open model ever matched the big labs.

The breakthrough? Scalable test-time compute for hard problems.

Here's how it works:

-
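
The thread ends mid-list above. As a hedged illustration, "scalable test-time compute" for problems with checkable answers usually means some variant of best-of-N sampling with verification; the sketch below assumes stand-in generate and test-runner functions and is not GenCluster's actual pipeline.

```python
# Hedged sketch of best-of-N test-time compute: sample many candidate
# programs, keep those that pass known tests, then pick one to submit.
# generate() and passes_public_tests() are stand-ins, not GenCluster's
# actual pipeline.
import random

def generate(problem: str, temperature: float) -> str:
    """Stand-in for a model call (e.g. to gpt-oss-120b) that returns
    candidate solution code for the problem."""
    raise NotImplementedError("plug in your model client here")

def passes_public_tests(code: str, tests: list[tuple[str, str]]) -> bool:
    """Stand-in: run `code` on (input, expected_output) pairs in a
    sandbox and report whether every pair passes."""
    raise NotImplementedError("plug in a sandboxed runner here")

def best_of_n(problem: str, tests: list[tuple[str, str]],
              n: int = 64) -> str | None:
    # More samples = more test-time compute = better odds that at
    # least one candidate clears the tests on a hard problem.
    survivors = [c for c in (generate(problem, 0.8) for _ in range(n))
                 if passes_public_tests(c, tests)]
    # Real systems cluster surviving candidates by behavior and vote;
    # this sketch just picks one survivor at random.
    return random.choice(survivors) if survivors else None
```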