Yannic Neuhaus (@neuhausyannic)'s Twitter Profile
Yannic Neuhaus

@neuhausyannic

ML PhD student at University of Tübingen

ID: 1147937004786769920

Joined: 07-07-2019 18:34:44

21 Tweets

30 Followers

270 Following

francesco croce (@fra__31)'s Twitter Profile Photo

Happy to share our work “Robust Semantic Segmentation: Strong Adversarial Attacks and Fast Training of Robust Models”, where we explore various aspects of adversarial robustness in semantic segmentation.

paper: arxiv.org/abs/2306.12941

1/n
Intelligent Systems (@mpi_is)'s Twitter Profile Photo

Researching #AI #MachineLearning #Robotics or #HCI? Join our elite #PhD program - a partnership with @uni_stuttgart & Universität Tübingen! Applications accepted until Nov 15, 2023 at imprs.is.mpg.de/application #Hiring #Job #KI #Tübingen #Stuttgart
Valentyn Boreiko 🇺🇦 (@valentynepii)'s Twitter Profile Photo

Excited to share our #ICCV2023 BRAVO workshop paper on the SCROD pipeline!
SCROD allows fine-grained control of object pose and appearance and can thereby identify systematic errors of object detectors in rare situations such as the one shown below.
Christian Schlarmann (@chs20_)'s Twitter Profile Photo

📢❗[ICML 2024 Oral] We introduce FARE: A CLIP model that is adversarially robust in zero-shot classification and enables robust large vision-language models (LVLMs)

Paper: arxiv.org/abs/2402.12336
Code: github.com/chs20/RobustVLM
Huggingface: huggingface.co/collections/ch…

🧵1/n
Maximilian Müller (@mueller_mp)'s Twitter Profile Photo

TL;DR: OOD detection methods are very sensitive to the training hyperparameters of ViTs.

I evaluated ~300 ViTs for OOD detection on ImageNet. Many ViTs with high downstream accuracy failed to detect simple noise patterns ("unit tests") when paired with Mahalanobis distance.
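The "unit test" idea above can be sketched in a few lines. This toy NumPy version (all features, sizes, and noise levels are hypothetical stand-ins, not the actual ViT features from the evaluation) fits the standard Mahalanobis detector and checks that pure noise scores clearly higher than in-distribution samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for ViT features: 3 ID classes, 64-dim Gaussians.
n_per_class, dim, n_classes = 200, 64, 3
means = rng.normal(0, 5, size=(n_classes, dim))
train = np.concatenate([rng.normal(means[c], 1.0, size=(n_per_class, dim))
                        for c in range(n_classes)])
labels = np.repeat(np.arange(n_classes), n_per_class)

# Fit class means and a shared (tied) covariance, as in the standard
# Mahalanobis OOD detector.
class_means = np.stack([train[labels == c].mean(0) for c in range(n_classes)])
centered = train - class_means[labels]
cov = centered.T @ centered / len(train)
prec = np.linalg.inv(cov + 1e-6 * np.eye(dim))

def mahalanobis_score(x):
    # OOD score = min over classes of the squared Mahalanobis distance
    # (lower = more in-distribution).
    d = x[:, None, :] - class_means[None, :, :]          # (n, C, dim)
    return np.einsum("ncd,de,nce->nc", d, prec, d).min(1)

# "Unit test": pure noise should score clearly higher than ID samples.
id_scores = mahalanobis_score(rng.normal(means[0], 1.0, size=(100, dim)))
noise_scores = mahalanobis_score(rng.normal(0, 20.0, size=(100, dim)))
print(noise_scores.mean() > id_scores.mean())  # noise flagged as OOD
```

With real ViT embeddings the failure mode described above is exactly this check going wrong: the noise scores fail to separate from the ID scores.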
Václav Voráček (@vaclavvoracekcz)'s Twitter Profile Photo

Confidence intervals use fixed data, unsuitable for adaptive analysis.

Confidence sequences compute new confidence intervals as new data arrives, allowing a minimal number of samples to be used to test whether, e.g., the mean exceeds a certain constant. #NeurIPS2024
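A minimal sketch of the idea, assuming bounded [0,1] observations and using a simple union-bound construction (not the paper's tighter sequences): run a Hoeffding interval at every step with a shrinking error budget, so coverage holds simultaneously at all sample sizes and one may stop as soon as the lower bound clears the threshold.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, threshold = 0.05, 0.5

# Hypothetical stream of [0,1]-bounded observations with true mean 0.7.
stream = rng.beta(7, 3, size=5000)

# A simple (not tight) confidence sequence: run a Hoeffding interval at
# every t with error budget alpha_t = alpha / (t (t+1)); summing alpha_t
# over t gives alpha, so a union bound makes the coverage hold for all
# sample sizes simultaneously, and stopping adaptively stays valid.
total, stopped_at = 0.0, None
for t, x in enumerate(stream, start=1):
    total += x
    alpha_t = alpha / (t * (t + 1))
    radius = np.sqrt(np.log(2.0 / alpha_t) / (2 * t))
    lower = total / t - radius
    if lower > threshold:
        stopped_at = t
        break

print(stopped_at)  # stops after finitely many samples since the true mean is 0.7
```

A fixed-sample Hoeffding interval with the same alpha would be narrower at any single t, but peeking at it repeatedly and stopping adaptively would invalidate its coverage; the shrinking budget is what buys anytime validity.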
Christian Schlarmann (@chs20_)'s Twitter Profile Photo

📢 Check out our new report: we show that a recently proposed defense against adversarial attacks is not robust. We circumvent gradient masking issues of the proposed model by attacking a slightly adapted surrogate model and then transferring the perturbations.
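The transfer idea can be illustrated with a toy example (linear models and all numbers are hypothetical; the actual report attacks a slightly adapted surrogate of the defended network): craft the perturbation on a differentiable surrogate whose decision boundary is close to the target's, then apply it to the target.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, eps = 32, 1.0

# Hypothetical setup: the defended "target" model is hard to attack
# directly (e.g. due to gradient masking), so we attack a similar
# surrogate and transfer the perturbation.
w_target = rng.normal(size=dim)
w_surrogate = w_target + rng.normal(scale=0.1, size=dim)  # close decision boundary

def predict(w, x):
    return (x @ w > 0).astype(int)

# A correctly classified positive example.
x = rng.normal(size=dim)
if x @ w_target <= 0:
    x = -x

# FGSM-style step on the surrogate: move against the sign of the
# surrogate's gradient w.r.t. its (linear) logit, then transfer.
x_adv = x - eps * np.sign(w_surrogate)

# Clean vs. transferred-adversarial prediction of the target model.
print(predict(w_target, x[None])[0], predict(w_target, x_adv[None])[0])
```

Because the surrogate's gradient sign mostly agrees with the target's, the perturbation crafted on the surrogate also flips the target's prediction, without ever differentiating through the (possibly masked) target.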

Maximilian Müller (@mueller_mp)'s Twitter Profile Photo

Mahalanobis++: Improving OOD Detection via Feature Normalization

Our latest work has been accepted to ICML and is now also on arXiv!

We explain why Mahalanobis-based OOD detection led to varied results and show that l2 normalization improves its performance consistently.
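A toy NumPy sketch of the effect (features and sizes hypothetical, not the paper's actual experiments): when feature norms vary strongly across samples, the raw Mahalanobis score largely tracks the norm, while l2-normalizing features before fitting removes that nuisance dependence.

```python
import numpy as np

rng = np.random.default_rng(3)
dim, n = 64, 500

# Hypothetical features whose norms vary a lot across samples, which can
# distort Mahalanobis distances.
feats = rng.normal(size=(n, dim)) * rng.lognormal(0, 1, size=(n, 1))

def l2_normalize(x, eps=1e-8):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def fit_mahalanobis(x):
    # Fit mean and covariance, return a squared-distance scorer.
    mu = x.mean(0)
    c = x - mu
    cov = c.T @ c / len(x)
    prec = np.linalg.inv(cov + 1e-6 * np.eye(dim))
    return lambda q: np.einsum("nd,de,ne->n", q - mu, prec, q - mu)

score_raw = fit_mahalanobis(feats)
score_norm = fit_mahalanobis(l2_normalize(feats))

# How strongly does each score correlate with the (nuisance) feature norm?
norms = np.linalg.norm(feats, axis=1)
corr_raw = np.corrcoef(norms, score_raw(feats))[0, 1]
corr_norm = np.corrcoef(norms, score_norm(l2_normalize(feats)))[0, 1]
print(round(corr_raw, 2), round(corr_norm, 2))
```

The raw score's correlation with the norm is large, while the normalized score's is near zero: after normalization the distance reflects direction in feature space rather than scale.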
Christian Schlarmann (@chs20_)'s Twitter Profile Photo

Excited to announce FuseLIP: an embedding model that encodes image+text into a single vector. We achieve this by tokenizing images into discrete tokens, merging these with the text tokens and subsequently processing them with a single transformer.
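A toy sketch of the early-fusion step (codebook, vocabulary sizes, and token ids are all hypothetical, not FuseLIP's actual tokenizer): quantize image patches to discrete codebook indices, offset them past the text vocabulary, and concatenate them with the text tokens so that a single transformer could process the fused sequence.

```python
import numpy as np

rng = np.random.default_rng(4)
text_vocab, image_vocab = 1000, 512  # hypothetical vocabulary sizes

def quantize_image(patches, codebook):
    # Nearest-codebook-entry assignment, as in a VQ-style image tokenizer.
    d = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)

codebook = rng.normal(size=(image_vocab, 16))
patches = rng.normal(size=(49, 16))       # e.g. a 7x7 grid of patch features

# Shift image token ids past the text vocabulary so the two id spaces
# never collide in the shared embedding table.
image_tokens = quantize_image(patches, codebook) + text_vocab
text_tokens = np.array([5, 42, 7, 999])   # hypothetical tokenized caption

fused = np.concatenate([image_tokens, text_tokens])
print(fused.shape, fused.max() < text_vocab + image_vocab)
```

The fused sequence lives in one id space of size text_vocab + image_vocab, so one embedding table and one transformer suffice for both modalities, which is the single-vector, single-encoder design the tweet describes.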
Christian Schlarmann (@chs20_)'s Twitter Profile Photo

🔒 In our new paper we obtain adversarially robust text encoders for CLIP! Using them together with robust image encoders from our previous work yields models that are robust in both domains. Code and models are available!