Chenchen Gu (@chenchenygu)'s Twitter Profile
Chenchen Gu

@chenchenygu

CS @stanford

ID: 1682451437136265216

Link: https://chenchenygu.github.io/
Joined: 21-07-2023 18:05:30

10 Tweets

71 Followers

104 Following


Prompt caching lowers inference costs but can leak private information from timing differences.

Our audits found 7 API providers with potential leakage of user data.

Caching can even leak architecture info—OpenAI's embedding model is likely a decoder-only Transformer!
🧵1/9
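
The audit described in the tweet relies on a timing side channel: a cache hit skips prompt prefill, so a repeated prompt tends to return measurably faster than a fresh one. Below is a minimal sketch of that kind of statistical timing audit — not the authors' actual methodology. The function name, latency figures, and test parameters are all hypothetical; it uses a simple permutation test on median latencies, with simulated timings standing in for real API measurements.

```python
import random
import statistics

def audit_caching(cold_latencies, repeat_latencies, n_perm=2000, alpha=0.01):
    """Permutation test: are repeated prompts significantly faster than
    cold prompts? A significant gap suggests the provider is caching
    prompts, i.e. a timing side channel is present.

    Returns True when the observed median-latency gap is unlikely
    under the no-caching null hypothesis (p < alpha)."""
    observed = statistics.median(cold_latencies) - statistics.median(repeat_latencies)
    pooled = list(cold_latencies) + list(repeat_latencies)
    n_cold = len(cold_latencies)
    rng = random.Random(0)  # fixed seed for a reproducible audit
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = statistics.median(pooled[:n_cold]) - statistics.median(pooled[n_cold:])
        if diff >= observed:
            count += 1
    p_value = (count + 1) / (n_perm + 1)
    return p_value < alpha

# Hypothetical latencies in seconds: cache hits skip prefill, so they
# cluster well below cold requests.
cold = [0.80 + i * 0.001 for i in range(25)]   # fresh prompts
hit  = [0.30 + i * 0.001 for i in range(25)]   # repeated (possibly cached) prompts

print(audit_caching(cold, hit))   # → True  (timing gap is significant)
print(audit_caching(cold, cold))  # → False (no gap, no evidence of caching)
```

In a real audit the latency samples would come from timing actual API calls, and the prompt-reuse pattern would be chosen to probe whether one user's cached prompt speeds up another user's request — that cross-user speedup is what leaks private information.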