Caner Hazirbas (@drhazirbas) 's Twitter Profile
Caner Hazirbas

@drhazirbas

Research Scientist @MetaAI

ID: 1283545242

Link: http://hazirbas.com · Joined: 20-03-2013 15:29:44

1.1K Tweets

293 Followers

345 Following

Ross Wightman (@wightmanr) 's Twitter Profile Photo

Did you know that image models are sensitive to the image interpolation they are trained with? timm's default is to randomly switch between Pillow bilinear and bicubic; this results in weights that are more robust to interpolation differences.

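A minimal sketch of checking that sensitivity at inference time with timm; the model name, image path, and the transform override are illustrative assumptions, not from the tweet:

```python
# Hedged sketch: compare a pretrained timm model's predictions when the same
# image is resized with bilinear vs. bicubic interpolation.
# "resnet50" and "example.jpg" are placeholders.
import timm
import torch
from PIL import Image
from timm.data import resolve_data_config, create_transform

model = timm.create_model("resnet50", pretrained=True).eval()
config = resolve_data_config({}, model=model)

img = Image.open("example.jpg").convert("RGB")
probs = {}
for interp in ("bilinear", "bicubic"):
    # Build an eval transform that differs only in the resize interpolation.
    tfm = create_transform(**{**config, "interpolation": interp})
    with torch.no_grad():
        logits = model(tfm(img).unsqueeze(0))
    probs[interp] = logits.softmax(-1)

# Weights that are robust to interpolation keep these distributions close.
print((probs["bilinear"] - probs["bicubic"]).abs().max())
```

On the training side, timm's create_transform with is_training=True also accepts interpolation='random', which is the alternating behavior described in the tweet.
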
Imtiaz Humayun (@imtiazprio) 's Twitter Profile Photo

How are Deep Neural Networks black boxes if you can visualize them in an 'exact' manner? Our new #CVPR23 paper presents a fast and scalable PyTorch toolbox to visualize the linear regions, aka the partition + decision boundary, of any DNN (red🔻)! bit.ly/splinecam 🧵 1/N

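The SplineCam API is not reproduced here; as a toolbox-independent sketch of the underlying idea, one can color a 2-D input grid by the ReLU activation pattern each point induces in a small MLP, since every distinct pattern corresponds to one linear region of the partition (the network and grid below are illustrative assumptions):

```python
# Hedged sketch (not the SplineCam toolbox): enumerate linear regions of a toy
# ReLU MLP by recording which units are active at each point of a 2-D grid.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(
    nn.Linear(2, 16), nn.ReLU(),
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, 1),
)

# Dense grid over the input square [-2, 2]^2.
xs = torch.linspace(-2, 2, 400)
grid = torch.stack(torch.meshgrid(xs, xs, indexing="ij"), dim=-1).reshape(-1, 2)

patterns = []
h = grid
with torch.no_grad():
    for layer in net:
        h = layer(h)
        if isinstance(layer, nn.ReLU):
            patterns.append((h > 0).to(torch.int8))  # active units at each point
codes = torch.cat(patterns, dim=1)

# Each unique activation code is one linear region of the network's partition.
_, region_id = torch.unique(codes, dim=0, return_inverse=True)
print("linear regions visited by the grid:", region_id.max().item() + 1)
# region_id.reshape(400, 400) can be rendered with matplotlib imshow to see the partition.
```

SplineCam computes the regions exactly rather than by grid sampling; the grid here only approximates the partition the paper visualizes.
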
Zhiheng Li (@zhi_heng_li) 's Twitter Profile Photo

Interested in shortcut learning, spurious correlation, bias mitigation, or OOD? Check out our #CVPR2023 paper: A Whac-A-Mole Dilemma: Shortcuts Come in Multiples Where Mitigating One Amplifies Others. Come and talk to us in person on Thursday morning. Our poster ID is 341.

DeepAI (@deepai) 's Twitter Profile Photo

🤩Lowkey Goated When #PrivacyConscious Is The Vibe👌 Check out this groundbreaking paper from Caner Hazirbas, Cristian Canton et al. that uses Full-Body Person Synthesis to de-identify pedestrian datasets! deepai.org/publication/da…

Tom Goldstein (@tomgoldsteincs) 's Twitter Profile Photo

Cool paper from my friends at Rice. They look at what happens when you train generative models on their own outputs…over and over again. Image models survive 5 iterations before weird stuff happens. arxiv.org/abs/2307.01850 Credit: Sina Alemohammad, Imtiaz Humayun, @richbaraniuk

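As a toy analogue of that self-consuming loop (not the paper's image-model setup), one can fit a simple Gaussian "generator" and repeatedly retrain it on its own samples; the fitted statistics drift over generations:

```python
# Hedged sketch: a "generative model" that just fits a Gaussian to its data,
# then is retrained on its own samples each generation. The fitted mean/std
# wander over generations, a crude analogue of the degradation the paper studies.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=5000)  # the original "real" data

for generation in range(1, 11):
    mu, sigma = data.mean(), data.std()        # "train" the model on current data
    data = rng.normal(mu, sigma, size=5000)    # next generation trains on its own samples
    print(f"generation {generation}: mean={mu:+.3f}, std={sigma:.3f}")
```
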
Caner Hazirbas (@drhazirbas) 's Twitter Profile Photo

Not only is the registration fee A LOT, now we have to pay for lunch boxes too?! Crazy, isn't it? iccv2023.thecvf.com/lunchboxes.ord… #iccv2023 #iccv #computervision

Caner Hazirbas (@drhazirbas) 's Twitter Profile Photo

#NeurIPS'23 spotlight award. Why does performance degrade across incomes or geographies? Now we have the opportunity to evaluate our algorithms using the factor annotations on Dollar Street. Explore our annotations on dollarstreetfactors.metademolab.com arxiv.org/abs/2304.05391

Caner Hazirbas (@drhazirbas) 's Twitter Profile Photo

I am very proud to see "Casual Conversations v2" listed as one of the key responsible AI projects accomplished as part of FAIR at Meta. #meta #fair #responsibleAI ai.meta.com/blog/fair-prog…

Andrew Ng (@andrewyng) 's Twitter Profile Photo

The LVM (large vision model) revolution is coming a little after the LLM (large language model) one, and will transform how we process images. But there’s an important difference between LVMs and LLMs: - Internet text is similar enough to proprietary text documents that an LLM

Megan Richards (@megan_richards_) 's Twitter Profile Photo

Excited to share our #NeurIPS2023 Spotlight paper using factor annotations to explain model mistakes across geographies / incomes (including our release of new annotations for DollarStreet!) Led by Laura Gustafson, with Melissa Hall Caner Hazirbas Diane Bouchacourt Mark Ibrahim.

elvis (@omarsar0) 's Twitter Profile Photo

Trustworthiness in LLMs A comprehensive study (100+ pages) of trustworthiness in LLMs, discussing challenges, benchmarks, evaluation, analysis of approaches, and future directions. One of the greater challenges of taking current LLMs into production is trustworthiness. This

Caner Hazirbas (@drhazirbas) 's Twitter Profile Photo

We find vision language models are 4−13x more likely to harmfully classify individuals with darker skin tones—a bias not addressed by progress on standard vision benchmarks or model scale. arxiv.org/abs/2402.07329 Alicia Sun Mark Ibrahim #responsibleai #computervision

Caner Hazirbas (@drhazirbas) 's Twitter Profile Photo

Announcing the 1st Workshop on Responsible Data at #CVPR2024! Meta, Google, Princeton and Sony AI are organizing the first Workshop on Responsible Data at CVPR'24. We will have exciting talks, panel discussions and poster sessions! Looking forward to seeing y'all! responsibledata.github.io

Caner Hazirbas (@drhazirbas) 's Twitter Profile Photo

Our paper has been accepted to the ICLR 2024 Workshop on Reliable and Responsible Foundation Models. #iclr #responsibleai #fairness #ccv2