Cigdem Patlak (@cigdempatlak)'s Twitter Profile
Cigdem Patlak

@cigdempatlak

Technology professional | Discoveries about Responsible AI, AI Safety + Standards/Regulations, AI Red-Teaming, Generative AI

ID: 13684412

Joined: 19-02-2008 18:50:20

2.2K Tweets

2.2K Followers

1.1K Following

Learn Prompting (@learnprompting)'s Twitter Profile Photo

🚨FINAL COMPETITION UPDATE🚨 

Our final challenges are three universal challenges and a BONUS round!  🧵

We bumped the prize pool to $65,000 and extended the competition until June 19th @ midnight EST.
Anthropic (@anthropicai)'s Twitter Profile Photo

New on the Anthropic Engineering blog: how we built Claude’s research capabilities using multiple agents working in parallel. We share what worked, what didn't, and the engineering challenges along the way. anthropic.com/engineering/bu…

IEEE WIE (@ieeewie)'s Twitter Profile Photo

🌐 Register Now!
Join our expert panel on Ethical & Responsible AI – part of the 2025 IEEE WIE Emerging Innovation & Entrepreneurship Series.
Speakers: Dr. Srinivas Padmanabhuni, Neelima Vobugari
📅 June 25, 2025, 11 AM EST
🔗 bit.ly/ethicalandmora…
#EthicalAI  #IEEEWIEDay
Andy Konwinski (@andykonwinski)'s Twitter Profile Photo

Today, I’m launching a deeply personal project. I’m betting $100M that we can help computer scientists create more upside impact for humanity.
Built for and by researchers, including Jeff Dean & Joelle Pineau on the board, Laude Institute catalyzes research with real-world impact.
AI Security Institute (@aisecurityinst)'s Twitter Profile Photo

🧵 AI Systems are developing advanced cyber capabilities. This means they’re helping strengthen defences - but can also be used as threats. To keep on top of these risks, we need more rigorous evaluations of agentic AI, which is why we’re releasing Inspect Cyber 🔍

Cigdem Patlak (@cigdempatlak)'s Twitter Profile Photo

A practical Red Teaming Playbook just released by HumaneIntelligence and UNESCO 🏛️ #Education #Sciences #Culture 🇺🇳 - test AI systems for social good: unesdoc.unesco.org/ark:/48223/pf0… #AIRedTeaming #AIForGood

Learn Prompting (@learnprompting)'s Twitter Profile Photo

The Pliny x HackAPrompt submissions have been completely open sourced. Interested in seeing how different models performed? Check out our Model Leaderboard: hackaprompt.com/pliny-track. The entire dataset can be downloaded from Hugging Face: huggingface.co/datasets/hacka…

Alex Albert (@alexalbert__)'s Twitter Profile Photo

Introducing Anthropic courses.

We've launched a free educational platform to help you learn everything about Claude - from using the Anthropic API to MCP to Claude Code best practices.
Cigdem Patlak (@cigdempatlak)'s Twitter Profile Photo

As part of the Spring 2025 Policy Primer cohort, we joined 4 other teams tackling big issues, from homelessness in Santa Fe to AI-powered transit in Utah. Check out all 5 proposals here: aspenpolicyacademy.org/projects/

Cigdem Patlak (@cigdempatlak)'s Twitter Profile Photo

Proud to share that @aspenpolicyacad just published my Policy Primer team project! Check out - Smart Commutes, Smarter Cities: A Government and Business Partnership aspenpolicyacademy.org/project/smart-… #AIPolicy #TechPolicy

AI Security Institute (@aisecurityinst)'s Twitter Profile Photo

📢Introducing the Alignment Project: A new fund for research on urgent challenges in AI alignment and control, backed by over £15 million.
▶️ Up to £1 million per project
▶️ Compute access, venture capital investment, and expert support
Learn more and apply ⬇️

HackAPrompt (@hackaprompt)'s Twitter Profile Photo

We partnered w/ OpenAI, Anthropic, & Google DeepMind to show that the way we evaluate new models against Prompt Injection/Jailbreaks is BROKEN

We compared Humans on @HackAPrompt vs. Automated AI Red Teaming

Humans broke every defense/model we evaluated… 100% of the time🧵
MLCommons (@mlcommons)'s Twitter Profile Photo

🚨 NEW: We tested 39 AI models for security vulnerabilities.
Not a single one was as secure as it was "safe."
Today, MLCommons is releasing the industry's first standardized jailbreak benchmark. Here's what we found 🧵
1/6
AI Security Institute (@aisecurityinst)'s Twitter Profile Photo

We collaborated with Lakera AI to design the backbone breaker benchmark (b³) – an open-source evaluation for LLM agents.

It's built on more than 19,000 crowdsourced adversarial attacks and uses 'threat snapshots' to identify vulnerabilities without modelling full workflows 👇
Microsoft Research (@msftresearch)'s Twitter Profile Photo

AI agents are transforming digital marketplaces, mediating discovery and transactions between consumers and businesses. The new Magentic Marketplace provides an open-source, extensible simulation environment for studying different agentic market designs: msft.it/6014tyz9q

Anthropic (@anthropicai)'s Twitter Profile Photo

We believe this is the first documented case of a large-scale AI cyberattack executed without substantial human intervention. It has significant implications for cybersecurity in the age of AI agents. Read more: anthropic.com/news/disruptin…

Aspen Policy Academy (@aspenpolicyacad)'s Twitter Profile Photo

📅3 Days Left to Apply!
📣 Don’t miss your chance to join our annual, paid Science and Technology Policy Fellowship. Applications are due at 11:59pm PT on Thursday, February 5.
🔗Learn more and apply here: aspenpolicyacademy.org/program/scienc…
#TechPolicy #AspenPolicyAcademy #ApplyNow
MLCommons (@mlcommons)'s Twitter Profile Photo

📢Announcing the AILuminate Global Assurance Program. 
MLCommons + Google, Microsoft, Qualcomm, & KPMG are building a structured framework for AI risk measurement:
🔧 Benchmarking as a Service 
🏷️ AILuminate Risk Labels 
🌍 A Global Framework
Because verifying AI reliability