Rich Harang (@rharang) 's Twitter Profile
Rich Harang

@rharang

Using bad guys to catch math since 2010. Principal Security Architect (AI/ML) at NVIDIA. He/him. Personal account and opinions: `from std_disclaimers import *`.

ID: 195915277

Link: https://scholar.google.com/citations?user=TPkC91wAAAAJ&hl=en
Joined: 27-09-2010 21:59:36

3.3K Tweets

3.3K Followers

721 Following


I'm going to go one step further: I don't think jailbreaking / prompt injection in the LLM space is a fixable problem with LLMs as they currently exist. We have to design secure applications that account for the way that LLMs *actually* work, not the way we *wish* they did.
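One common way to apply that design principle is to treat all model output as untrusted input and validate it against an allowlist before acting on it, rather than assuming the model will follow instructions. The sketch below is illustrative only: `call_llm`, `dispatch`, and `ALLOWED_ACTIONS` are hypothetical names, and the stubbed model call simulates attacker-influenced output.

```python
# Illustrative sketch: treat LLM output as untrusted data.
# `call_llm` is a hypothetical stand-in for any completion API;
# an injected document could make the model emit anything here.

ALLOWED_ACTIONS = {"summarize", "translate", "classify"}

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call. Here it returns
    # attacker-influenced output to simulate prompt injection.
    return "delete_all_files"

def dispatch(llm_output: str) -> str:
    """Never execute model output directly; check it against an allowlist."""
    action = llm_output.strip().lower()
    if action not in ALLOWED_ACTIONS:
        return "refused: action not in allowlist"
    return f"running: {action}"

print(dispatch(call_llm("Pick an action for this document.")))
```

The point is architectural, not a fix for injection itself: the model can still be manipulated, but the application constrains what any model output is allowed to do.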