Oishi Deb (@deboishi)'s Twitter Profile
Oishi Deb

@deboishi

DPhil Candidate @Oxford_VGG & @OxfordTVG in @Oxengsci, @CompSciOxford & @KelloggOx, RG Chair at @ELLISforEurope
@GoogleDeepMind Scholar, Ex @RollsRoyceUK

ID: 1922373703

Link: https://bit.ly/48glsrT · Joined: 01-10-2013 05:22:30

429 Tweets

600 Followers

680 Following

Oishi Deb (@deboishi)'s Twitter Profile Photo

I am delighted to be a chair for an ELLIS Reading Group on Mathematics of Deep Learning along with Linara and Sidak Pal Singh. The link to join the group is here - bit.ly/3fjh8la, looking forward to meeting new people! Oxford Comp Sci Engineering Science, Oxford

Jascha Sohl-Dickstein (@jaschasd)'s Twitter Profile Photo

Have you ever done a dense grid search over neural network hyperparameters? Like a *really dense* grid search? It looks like this (!!). Bluish colors correspond to hyperparameters for which training converges, reddish colors to hyperparameters for which training diverges.
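The grid-search picture described above can be sketched with a toy stand-in for training: run SGD with momentum on a 1-D quadratic and record, for each (learning rate, momentum) pair, whether the iterates stay bounded. The quadratic objective, the grid ranges, and the divergence threshold below are illustrative choices of mine, not the setup from the tweet.

```python
import numpy as np

def training_converges(lr, momentum, steps=200):
    """Toy proxy for training: SGD with momentum on f(w) = 0.5 * w^2.
    Returns True if the iterates stay bounded, False if they blow up."""
    w, v = 1.0, 0.0
    for _ in range(steps):
        grad = w                      # d/dw of 0.5 * w^2
        v = momentum * v - lr * grad
        w = w + v
        if not np.isfinite(w) or abs(w) > 1e6:
            return False              # diverged
    return True

# Dense 2D grid over learning rate and momentum (illustrative ranges).
lrs = np.linspace(0.01, 3.0, 50)
moms = np.linspace(0.0, 0.99, 50)
grid = np.array([[training_converges(lr, m) for lr in lrs] for m in moms])

# grid is a boolean image: True = converged ("bluish"), False = diverged ("reddish").
print(f"{grid.mean():.0%} of the grid converges")
```

Plotting `grid` with an image viewer (e.g. `matplotlib.pyplot.imshow`) reproduces the kind of convergence/divergence map the tweet refers to, with a sharp but irregular boundary between the two regions.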

Carl Doersch (@carldoersch)'s Twitter Profile Photo

We're very excited to introduce TAPNext: a model that sets a new state of the art for Tracking Any Point in videos, by formulating the task as Next Token Prediction. For more, see: tap-next.github.io 🧵

Oishi Deb (@deboishi)'s Twitter Profile Photo

Happy to share "Organoid-ICLIP: Class Imbalance-Aware Vision-Language Learning for Organoid Mitosis Classification" accepted at #ICIP2025 happening in Alaska, US - cmsworkshops.com/ICIP2025/paper…

Tim Franzmeyer (@frtimlive)'s Twitter Profile Photo

What if LLMs knew when to stop? 🚧 HALT finetuning teaches LLMs to only generate content they’re confident is correct. 🔍 Insight: Post-training must be adjusted to the model’s capabilities. ⚖️ Tunable trade-off: Higher correctness 🔒 vs. More completeness 📝 with AI at Meta 🧵

Hirokatsu Kataoka | 片岡裕雄 (@hirokatukataoka)'s Twitter Profile Photo

We’ve released the CVPR 2025 Report! hirokatsukataoka.net/temp/presen/25… Compiled during CVPR in collaboration with LIMIT.Lab, cvpaper.challenge, and Visual Geometry Group (VGG), this report offers meta insights into the trends and tendencies observed at this year’s conference. #CVPR2025

ELLIS (@ellisforeurope)'s Twitter Profile Photo

Meet Oishi Deb, ELLIS PhD Student 🎓 at University of Oxford 🏴󠁧󠁢󠁥󠁮󠁧󠁿 & Google DeepMind. She works on computer vision, generative & responsible AI, and chairs ELLIS PhD Reading Groups on CV & deep learning theory. Career high: She won a £25K grant from Sky to advance ML & AI! 👏 #WomenInELLIS

ELLIS (@ellisforeurope)'s Twitter Profile Photo

🎓 New cohort of PhDs and Postdocs selected for 2024/2025 👏 Out of 3200 applicants, 90 exceptional candidates were selected to join the 2024/2025 cohort. Congratulations to all new students and welcome to the Program! 🔗 Read the full article: bit.ly/4nf3rld

Oishi Deb (@deboishi)'s Twitter Profile Photo

Chuanxia has excellent leadership and mentoring skills; I highly recommend applying to Chuanxia's new lab, Physical Vision Group (physicalvision.github.io)!

Mikita Balesni 🇺🇦 (@balesni)'s Twitter Profile Photo

A simple AGI safety technique: AI’s thoughts are in plain English, just read them. We know it works, with OK (not perfect) transparency! The risk is fragility: RL training, new architectures, etc. threaten transparency. Experts from many orgs agree we should try to preserve it:

philip (@philiptorr)'s Twitter Profile Photo

Super happy to be one of the organizers of this, eurips.cc, now an option to officially present your papers in Europe and save greenhouse gases! Please repost and spread the word!

Oishi Deb (@deboishi)'s Twitter Profile Photo

Delighted to announce Mingdeng Cao from UTokyo | 東京大学 is our guest speaker at ELLIS Reading Group on Sun, 27th July 10am, presenting "Towards Consistent Image Synthesis and Editing with Diffusion Models" - ellis.eu/events/ellis-r…

naveen manwani (@naveenmanwani17)'s Twitter Profile Photo

🚨 Paper Alert 🚨 ➡️ Paper Title: Articulate3D: Zero-Shot Text-Driven 3D Object Posing 🌟 Few pointers from the paper 🎯 Authors of this paper proposed a training-free method, “Articulate3D”, to pose a 3D asset through language control. 🎯 Despite advances in vision and language