Prateek Mittal (@prateekmittal_) 's Twitter Profile
Prateek Mittal

@prateekmittal_

Professor at Princeton. Focused on privacy, cybersecurity, AI and machine learning, public interest technologies.

ID: 276494400

linkhttps://www.princeton.edu/~pmittal/ calendar_today03-04-2011 13:34:11

1,1K Tweet

2,2K Takipçi

352 Takip Edilen

Princeton Center for Information Technology Policy (@princetoncitp) 's Twitter Profile Photo

Congratulations to Prateek Mittal, professor in Princeton University Electrical & Computer Engineering, for being named a 2024 Association for Computing Machinery Distinguished Member🎉These are awarded for technical & professional achievements & contributions in computer science & information technology

Congratulations to <a href="/prateekmittal_/">Prateek Mittal</a>, professor in
<a href="/Princeton/">Princeton University</a> Electrical &amp; Computer Engineering, for being named a 2024 <a href="/TheOfficialACM/">Association for Computing Machinery</a> Distinguished Member🎉These are awarded for technical &amp; professional achievements &amp; contributions in computer science &amp; information technology
Prateek Mittal (@prateekmittal_) 's Twitter Profile Photo

How can we enhance trustworthiness of AI summaries? This could be a good use case for applying AI robustness techniques that were developed in the context of data poisoning. The idea is that LLM outputs should not depend too much on any single data source, say one user’s

Matthew Green is on BlueSky (@matthew_d_green) 's Twitter Profile Photo

Ok, look people: Signal as a *protocol* is excellent. As a service it’s excellent. But as an application running on your phone, it’s… an application running on your consumer-grade phone. The targeted attacks people use on those devices are well known.

Ok, look people: Signal as a *protocol* is excellent. As a service it’s excellent. But as an application running on your phone, it’s… an application running on your consumer-grade phone. The targeted attacks people use on those devices are well known.
Princeton Computer Science (@princetoncs) 's Twitter Profile Photo

Until now, pricing structure on rideshare apps has been opaque for both drivers and riders. 🚗 To help fix this, the The Workers' Algorithm Observatory and researchers from Princeton University created the FairFare app to crowdsource payment info from drivers. Now, a new law in Colorado mandates transparency.

Prateek Mittal (@prateekmittal_) 's Twitter Profile Photo

Thinking intervention is a new paradigm for controlling LLMs. The idea is deceptively simple: we can do thought engineering in model’s reasoning space. This has nicely applications to safety alignment, instruction hierarchy, instruction following, and more 👇

Massimo (@rainmaker1973) 's Twitter Profile Photo

Dennis Ritchie, the man who invented C, co-created Unix, and is largely regarded as effectively influencing every software system we use on a daily basis. His death was largely ignored, overshadowed by Steve Jobs' death, one week before.

Dennis Ritchie, the man who invented C, co-created Unix, and is largely regarded as effectively influencing every software system we use on a daily basis. 

His  death was largely ignored, overshadowed by Steve Jobs' death, one week before.
Prateek Mittal (@prateekmittal_) 's Twitter Profile Photo

Delighted to share that two papers from our group Princeton Engineering got recognized by the ICLR 2026 award committee. Our paper, "Safety Alignment Should be Made More Than Just a Few Tokens Deep", received the ICLR 2025 Outstanding Paper Award. This paper showcases that many AI

Prateek Mittal (@prateekmittal_) 's Twitter Profile Photo

Last week, I shared two #ICLR2025 papers that were recognized by their Award committee. Reflecting on the outcome, I thought it might be interesting to share that both papers were previously rejected by #NeurIPS2024. I found the dramatic difference in reviewer perception of

Princeton University (@princeton) 's Twitter Profile Photo

Princeton engineers have identified a universal weakness in AI chatbots that allows users to bypass safety guardrails and elicit directions for malicious uses, from creating nerve gas to hacking government databases. bit.ly/3SzRto7

Mengdi Wang (@mengdiwang10) 's Twitter Profile Photo

AI risk is real. Paper from Princeton AI Lab shows it’s shockingly easy to jailbreak genome-focused LLMs—opening doors to dangerous misuse. We must build strong safeguards now. Check out our call on Nature Biotech that maps out the AI guardrail technologies needed to mitigate