Pillar Security (@pillar_sec) 's Twitter Profile
Pillar Security

@pillar_sec

Pillar enables teams to rapidly adopt AI with minimal risk by providing a unified AI security layer across the organization.

ID: 1828408717491998722

linkhttps://www.pillar.security/ calendar_today27-08-2024 12:26:44

21 Tweet

44 Takipçi

11 Takip Edilen

Pillar Security (@pillar_sec) 's Twitter Profile Photo

Our latest blog post explores strategies for effectively red-teaming AI agents in complex multi-agent environments. Learn why dynamic threat modeling is a crucial phase before starting your adversarial resistance exercise: pillar.security/blog/red-teami…

Our latest blog post explores strategies for effectively red-teaming AI agents in complex multi-agent environments. Learn why dynamic threat modeling is a crucial phase before starting your adversarial resistance exercise: pillar.security/blog/red-teami…
Pillar Security (@pillar_sec) 's Twitter Profile Photo

We are excited to announce our strategic partnership with Tavily to secure web access for AI agents! Our partnership integrates Pillar's adaptive guardrails with Tavily's robust search engine, ensuring only verified, secure data reaches users and models in real-time. Learn

We are excited to announce our strategic partnership with <a href="/tavilyai/">Tavily</a> to secure web access for AI agents!

Our partnership integrates Pillar's adaptive guardrails with Tavily's robust search engine, ensuring only verified, secure data reaches users and models in real-time. Learn
Pillar Security (@pillar_sec) 's Twitter Profile Photo

We’re thrilled to announce that Pillar Security has been selected for the Amazon Web Services x CrowdStrike Cybersecurity Accelerator, in collaboration with NVIDIA ! This incredible opportunity enables us to showcase our technology to industry leaders while learning from some of the

We’re thrilled to announce that Pillar Security has been selected for the <a href="/awscloud/">Amazon Web Services</a>  x <a href="/CrowdStrike/">CrowdStrike</a>  Cybersecurity Accelerator, in collaboration with <a href="/nvidia/">NVIDIA</a> ! 
This incredible opportunity enables us to showcase our technology to industry leaders while learning from some of the
Pillar Security (@pillar_sec) 's Twitter Profile Photo

We are thrilled to be featured in Gartner 2025 Market Guide for AI Trust, Risk, and Security Management (AI TRiSM)! AI security is transforming at hyperspeed and we're excited to continue shaping this innovative space.

We are thrilled to be featured in <a href="/Gartner_inc/">Gartner</a> 2025 Market Guide for AI Trust, Risk, and Security Management (AI TRiSM)! 
AI security is transforming at hyperspeed and we're excited to continue shaping this innovative space.
Pillar Security (@pillar_sec) 's Twitter Profile Photo

Our latest blog explores how multimodal AI systems expand attack surfaces beyond text to include images, audio and video. Each new modality creates entry points for hackers that most existing guardrails aren't designed to protect. pillar.security/blog/securing-…

Our latest blog explores how multimodal AI systems expand attack surfaces beyond text to include images, audio and video. Each new modality creates entry points for hackers that most existing guardrails aren't designed to protect. pillar.security/blog/securing-…
Pillar Security (@pillar_sec) 's Twitter Profile Photo

The rise of #VibeCoding together with developers' inherent "automation bias" creates the perfect attack surface. We discovered a New Rules File Backdoor attack, that allows hackers to poison AI-powered tools like #GitHub Copilot & #Cursor , and inject hidden malicious code into

The Hacker News (@thehackersnews) 's Twitter Profile Photo

⚠️The rise of "Vibe Coding" together with developers' inherent "automation bias" creates the perfect attack surface. 🛑New Rules File Backdoor attack, discovered by Pillar Security, lets hackers poison AI-powered tools like GitHub Copilot & Cursor, injecting hidden malicious code

⚠️The rise of "Vibe Coding" together with developers' inherent "automation bias" creates the perfect attack surface.

🛑New Rules File Backdoor attack, discovered by <a href="/Pillar_sec/">Pillar Security</a>, lets hackers poison AI-powered tools like GitHub Copilot &amp; Cursor, injecting hidden malicious code
Pillar Security (@pillar_sec) 's Twitter Profile Photo

The Model Context Protocol (MCP) represents an exciting advancement in AI capabilities that has quickly gained traction in the past few weeks. In our latest blog, we explore key security risks and harmful scenarios businesses should consider before implementing MCP.

The Model Context Protocol (MCP) represents an exciting advancement in AI capabilities that has quickly gained traction in the past few weeks. 
In our latest blog, we explore key security risks and harmful scenarios businesses should consider before implementing MCP.
Pillar Security (@pillar_sec) 's Twitter Profile Photo

AI Developers & Security Teams: Your GitHub Copilot and Cursor Might Be Silently Compromised! Recently, our researchers have identified a novel AI supply chain attack that can manipulate code editors like GitHub Copilot and Cursor. This technique embeds hidden malicious

Pillar Security (@pillar_sec) 's Twitter Profile Photo

🚀 Pillar Security Raises $9M to Help Companies Build and Run Secure AI Software in the Intelligence Age 🚀 We're thrilled to announce our $9M Seed round, led by Shield Capital, Golden Ventures, Ground Up Ventures, and strategic angel investors! As AI adoption accelerates,

Allie Howe (@vtahowe) 's Twitter Profile Photo

Last week I dropped a blog on security tools for the AI Engineer If AI Engineer is in your title checkout - Invariant Labs to scan MCP servers for malicious tools - Pensar ⌘ for agentic SAST style vulns - Pillar Security to scan cursor rules files for hidden characters

Pillar Security (@pillar_sec) 's Twitter Profile Photo

The paradigm has shifted: in the age of AI, your data is no longer passive—it's executable. Modern LLMs turn every prompt, retrieved document, model context, or tool output into live instructions. This creates new attack surfaces that traditional security isn't built for. In our

HackerNoon | Learn Any Technology (@hackernoon) 's Twitter Profile Photo

Pillar Security (Pillar Security) raises $9M to secure #AI software, tackling risks traditional tools miss. Its platform redefines cybersecurity for the Intelligence Age. hackernoon.com/cyber-startup-…

Pillar Security (@pillar_sec) 's Twitter Profile Photo

You've integrated AI into your product, and your customers are demanding answers around security and privacy? Drawing from real-world security reviews and direct customer interactions, we've distilled the most frequent and impactful AI security questions your customers are

You've integrated AI into your product, and your customers are demanding answers around security and privacy? 

Drawing from real-world security reviews and direct customer interactions, we've distilled the most frequent and impactful AI security questions your customers are
The Hacker News (@thehackersnews) 's Twitter Profile Photo

🚨 Your AI agent might already be vulnerable. @Pillar_Sec just launched a full-lifecycle AI defense platform—built by ex-offensive and defensive cyber ops—to catch threats before code is even written. From threat modeling to runtime guardrails, this flips AI security on its

🚨 Your AI agent might already be vulnerable.

@Pillar_Sec just launched a full-lifecycle AI defense platform—built by ex-offensive and defensive cyber ops—to catch threats before code is even written.

From threat modeling to runtime guardrails, this flips AI security on its
Pillar Security (@pillar_sec) 's Twitter Profile Photo

Our new blog explores the latest LLM jailbreak techniques we've tracked in 2025 so far, combining our threat research with recent academic papers and public disclosures. We analyze five distinct attack categories: from Policy Puppetry exploits that masquerade as system

Pillar Security (@pillar_sec) 's Twitter Profile Photo

Breaking down the Amazon Q incident through the lens of the SAIL Framework ⛵ We wrote a blog that examines the incident using SAIL, our AI security framework, to map the attack across 7 phases, revealing critical weaknesses and preventive controls Check it out:

Pillar Security (@pillar_sec) 's Twitter Profile Photo

Another exciting achievement! Pillar Security has been recognized as a Sample Vendor in the July 2025 Gartner Hype Cycle for Application Security, featured in two key categories - AI Security Testing and AI Runtime Defense. A common challenge we hear from security teams is

Another exciting achievement! Pillar Security has been recognized as a Sample Vendor in the July 2025 <a href="/Gartner_inc/">Gartner</a>  Hype Cycle for Application Security, featured in two key categories - AI Security Testing and AI Runtime Defense.

A common challenge we hear from security teams is
Pillar Security (@pillar_sec) 's Twitter Profile Photo

Pillar's Guardrails are now live on LiteLLM (YC W23)! You can now integrate Pillar's Guardrails into your LiteLLMm proxy and ensure safety and compliance for your AI agentic workflows. What we cover: ✅ Prompt Injection Protection: Block malicious manipulations before they reach your

Pillar's Guardrails are now live on <a href="/LiteLLM/">LiteLLM (YC W23)</a>! 

You can now integrate Pillar's Guardrails into your LiteLLMm proxy and ensure safety and compliance for your AI agentic workflows.

What we cover:
✅ Prompt Injection Protection: Block malicious manipulations before they reach your
Pillar Security (@pillar_sec) 's Twitter Profile Photo

New research from Pillar Security: We've been tracking indirect prompt injection attacks in the wild, and the gap between "demo" and "weaponized exploit" is closing fast. We predict that indirect prompt injection will become one of the most significant LLM attack vectors by