Nikhil Parthasarathy (@nikparth1) 's Twitter Profile
Nikhil Parthasarathy

@nikparth1

Research Scientist @GoogleDeepMind making multi-modal learning more efficient. PhD from the Simoncelli lab @NYU_CNS @FlatironCCN. BS/MS @Stanford.

ID: 940688534524088326

Joined: 12-12-2017 21:03:18

385 Tweets

678 Followers

254 Following

Tyler Zhu (@tyleryzhu) 's Twitter Profile Photo

All this talk about world models but how strong are their perception abilities really? Can they track w/ occlusions, reason over 1hr+ videos, or predict physical scenarios? Test your models in the 3rd Perception Test Challenge at #ICCV2025 w/ prizes up to 50k EUR! DDL: 6 Oct 25

Demis Hassabis (@demishassabis) 's Twitter Profile Photo

We've now been given permission to share our results and are pleased to have been part of the inaugural cohort to have our model results officially graded and certified by IMO coordinators and experts, receiving the first official gold-level performance grading for an AI system!

Nikhil Parthasarathy (@nikparth1) 's Twitter Profile Photo

By far one of the biggest things I've noticed since moving out of the US is just how much I implicitly normalized or got used to gun violence... The fact that a mass shooting is a frequent occurrence in a modern, progressive and not war-torn country is utterly embarrassing.

Nikhil Parthasarathy (@nikparth1) 's Twitter Profile Photo

Supposedly San Francisco is building the future but just flew into SFO and have now waited 1 hour for checked bags ... The inefficiency is especially jarring since we also recently came back from a trip to Zurich where our bags were out before we even got to the baggage claim..

Nikhil Parthasarathy (@nikparth1) 's Twitter Profile Photo

Incredible work from the Genie team. This level of controllable generation could be a huge step towards being able to train embodied reasoning systems.

Nikhil Parthasarathy (@nikparth1) 's Twitter Profile Photo

This is pretty awesome and striking to me right now, given that I've just been reviewing for NeurIPS and it's literally impossible to improve work like this given the constraints and timeline of the review process.. perhaps this is the future of paper review...

Aleksander Holynski (@holynski_) 's Twitter Profile Photo

Something fun we discovered: you can use #Genie3 to step into and explore your favorite paintings. Here's a short visit to Edward Hopper's "Nighthawks".

Kohitij Kar (@kohitijkar) 's Twitter Profile Photo

(1/7) 🚨 New preprint! 🚨 Sabine Muzellec and I go beyond one-way ANN–brain mapping to ask: 👉 Can brain activity predict ANN units? We call this Reverse Predictivity — it reveals hidden misalignments between high-performing ANNs and the primate brain. shorturl.at/kylPl

Nikhil Parthasarathy (@nikparth1) 's Twitter Profile Photo

What does it say about the state of AI "science" that Anthropic made a big deal about adding error bars to evals and now AI2 has to write a paper to show that SNR of benchmarks is probably a good thing to measure? 🤔

Nikhil Parthasarathy (@nikparth1) 's Twitter Profile Photo

Totally agree with one caveat: you have to have enough specialized data to train on that you are confident is similar to your test distribution. The prior in a big pre-trained generalist will help when either there is a train/test mismatch or you're living in a low-data regime.

Nikhil Parthasarathy (@nikparth1) 's Twitter Profile Photo

Great point! Another way I like to think about it: training on more data might improve zero-shot performance by making more of your test distribution covered by your training, but unless you can guarantee that coverage, few-shot adaptation performance is really what matters.

Nikhil Parthasarathy (@nikparth1) 's Twitter Profile Photo

The Perception Test challenge at next month's #ICCV2025 now has an interpretability track! Read Tyler Zhu's thread for more details and how to submit!

Vivek Natarajan (@vivnat) 's Twitter Profile Photo

During Thanksgiving break last year, our AI co-scientist team at Google DeepMind and Google Research (Juraj Gottweis, Alan Karthikesalingam) met Prof José R Penadés and Tiago Costa of @ImperialCollege. They were nearing a breakthrough on how bacteria share resistance genes and proposed a test for our AI

Nikhil Parthasarathy (@nikparth1) 's Twitter Profile Photo

My inner monologue when reading this: "wait was o1-preview before or after 4o? And was that before or after o3? What about GPT-4.5? Was there an o2 or o4? Hmm maybe we should stop humans from naming models and let GPT 5 pro do it instead... " 😂