Preamble (@preambleai) 's Twitter Profile
Preamble

@preambleai

Cybersecurity, privacy, and compliance solutions for AI. Supporting national security and regulated industries. The team that discovered prompt injections.

ID: 1346222870142312454

linkhttps://www.preamble.com calendar_today04-01-2021 22:32:46

209 Tweet

691 Followers

394 Following

Preamble (@preambleai) 's Twitter Profile Photo

❓Biggest misconception about AI security you encounter? • AI systems are inherently secure • XDR solution has us protected • Only large orgs need to worry • Our AI provider is protecting us • We don't worry about prompt injections since we don't host public facing AI

Jeremy McHugh, DSc. (@jer_mchugh) 's Twitter Profile Photo

White House AI Plan Embraces Open Source & Security "The U.S. government has a responsibility to ensure AI systems, especially for national security, are protected from malicious inputs." Plan’s priorities: - Building a secure, competitive American AI marketplace - Maximizing

Preamble (@preambleai) 's Twitter Profile Photo

Guide for AI Security Testing 1️⃣ Install: .exe or .dmg from github.com/preambleai/pro…` 2️⃣ Configure API keys or Ollama models 3️⃣ Select security-focused test scenarios 4️⃣ Run security tests 5️⃣ Iterate on payloads Full documentation at our GitHub! 🔧

Preamble (@preambleai) 's Twitter Profile Photo

Why focus on hybrid AI threat research? Traditional cybersecurity + AI manipulation creates attack vectors that evade current tools. Ex: Prompt injections bypass WAFs with malicious code. More research & real-world testing is vital.

Preamble (@preambleai) 's Twitter Profile Photo

Question for the community: Besides X, how do you stay current with AI security developments? • Academic papers? • Conferences? • Industry reports and white papers? • Open source projects? • Professional networks and communities? • Something else?

Preamble (@preambleai) 's Twitter Profile Photo

Last week, Preamble's Prompt Injection 2.0 paper & open-source toolkit showed LLM vulnerabilities turning into agentic attacks as dev outpaces security. In a chat w/ Fed Vice Chair Bowman, OpenAI CEO Sam Altman shares concerns about prompt injections. Watch: youtube.com/live/tScbQiavm…

Jeremy McHugh, DSc. (@jer_mchugh) 's Twitter Profile Photo

It’s great to see the frontier labs involving the broader community for red teaming. It’s also a great time to try out our open source prompt injector tool to join the challenge. Just download the model from Ollama and get started.