Serdar Ozsoy (@ozsoyserdar)'s Twitter Profile
Serdar Ozsoy

@ozsoyserdar

PhD student @UniBonn

ID: 1100087592

Joined: 18-01-2013 05:26:01

50 Tweets

46 Followers

161 Following

Serdar Ozsoy (@ozsoyserdar):

While people have been debating the ethics of a driverless car's moral decisions, doctors now have to decide, in reality, who receives treatment when resources are limited.

Serdar Ozsoy (@ozsoyserdar):

Oversampling should be done after the train-test split, and only on the training split. The same applies within cross-validation. Otherwise it causes 1) data leakage due to duplicated samples, and 2) a biased performance estimate due to the mismatch between the test and validation sets. The same goes for SMOTE.
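A minimal numpy sketch of the right ordering, using plain duplication-based oversampling (SMOTE would interpolate synthetic neighbors instead of duplicating; the dataset, seed, and split sizes here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy imbalanced dataset: 90 negatives, 10 positives (illustrative numbers)
X = rng.normal(size=(100, 2))
y = np.array([0] * 90 + [1] * 10)

# 1) Split FIRST (a simple shuffled 80/20 split; in practice use a
#    stratified splitter so both classes appear in each split)
idx = rng.permutation(len(y))
train_idx, test_idx = idx[:80], idx[80:]
X_tr, y_tr = X[train_idx], y[train_idx]
X_te, y_te = X[test_idx], y[test_idx]

# 2) Oversample ONLY the training split: duplicate minority-class rows
#    until the classes are balanced
minority = np.flatnonzero(y_tr == 1)
need = int((y_tr == 0).sum() - (y_tr == 1).sum())
extra = rng.choice(minority, size=need, replace=True)
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

# The test split is untouched: no duplicated training sample can leak
# into it, so the reported metric is computed on unseen data only.
```

Under cross-validation the same rule means resampling inside each fold's training portion only, never on the full dataset before folding.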

Serdar Ozsoy (@ozsoyserdar):

In PyTorch, the recommendation not to use bias=True for linear or conv layers before BatchNorm (BN) does not hold when a ReLU sits between them. The bias is redundant only because BN re-centers the values; that cancellation fails when a ReLU comes before BN (e.g., Linear-ReLU-BN).
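A small numpy sketch of the argument (per-batch standardization stands in for BatchNorm; the batch and bias value are illustrative): mean subtraction cancels a constant bias applied directly before BN, but not one applied before an intervening ReLU.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(256, 1))  # a batch of pre-activations
b = 3.0                        # the bias under discussion

def batchnorm(z):
    # Per-feature standardization over the batch, as BN does at train time
    return (z - z.mean(axis=0)) / z.std(axis=0)

def relu(z):
    return np.maximum(z, 0.0)

# Linear -> BN: mean subtraction cancels the bias exactly,
# so bias=False loses nothing
redundant = np.allclose(batchnorm(x + b), batchnorm(x))        # True

# Linear -> ReLU -> BN: ReLU(x + b) != ReLU(x) + b, so the bias
# changes the post-ReLU distribution and is no longer redundant
still_matters = not np.allclose(batchnorm(relu(x + b)),
                                batchnorm(relu(x)))            # True
```

The first check works because mean(x + b) = mean(x) + b, so the bias vanishes in z - mean(z); the nonlinearity in between breaks that identity.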

Deniz Yuret (@denizyuret):

Self supervised learning is revolutionizing AI using large unlabeled datasets. We show that maximizing mutual information between alternative representations of the same input is a practical method for self supervised learning that is immune to the dreaded collapse problem.

Alper Erdogan (@alper_t_e):

The existing biologically plausible neural network approaches to solve the blind source separation problem typically assume independence/uncorrelatedness of sources. In our article, we propose an alternative framework that is capable of separating correlated sources.

Sercan Arık (@sercanarik):

We’re excited to announce that TabNet is now available in Vertex AI Tabular Workflows: lnkd.in/g7jB5EMe! Tabular Workflows provides fully managed, optimized, and scalable pipelines, making it easier to use TabNet without worrying about implementation details. (1/3)

KUIS AI (@kuisaicenter):

📢We are happy to share that KUIS AI members F. Güney and Kaan Akan presented their paper StretchBEV at #ECCV2022. Congrats again 👏🏽 Check out the #ECCV2022 magazine for their interview! rsipvision.com/ECCV2022-Wedne… #computervision

KUIS AI (@kuisaicenter):

💫We're happy to share that our MS student Serdar Ozsoy and faculty members Alper Erdogan and Deniz Yuret presented posters for two papers at #NeurIPS2022 in the USA, and our MS student Bariscan Bozkurt gave a successful virtual presentation in the #NeurIPS2022 Oral Session. Congrats! 👏🏼

KUIS AI (@kuisaicenter):

Here are the papers: "Biologically-Plausible Determinant Maximization Neural Networks for Blind Separation of Correlated Sources" #NeurIPS2022 (github.com/bariscanbozkur…)

Alper Erdogan (@alper_t_e):

Confused about the #NeurIPS2023 rebuttal policy. Can we submit multiple threads for each reviewer? With the 6000-character limit, it is hard to respond to all reviewer comments/questions. Can we also use "Official Comment" to provide our responses?

Bariscan Bozkurt (@bozkurtbariscan):

🎉Thrilled to announce that our paper for #NeurIPS2023 titled “Correlative Information Maximization: A Biologically Plausible Approach to Supervised Deep Neural Networks without Weight Symmetry” has been accepted! (arxiv.org/abs/2306.04810)

Alper Erdogan (@alper_t_e):

Join us at the poster session for our #Neurips2023 article "Correlative Information Maximization: A Biologically Plausible Approach to Supervised Deep Neural Networks without Weight Symmetry", our joint work with Bariscan Bozkurt & Cengiz Pehlevan. 🗓️ Wed, 5-7 p.m. 📍 Hall B1+B2 #423

Serdar Ozsoy (@ozsoyserdar):

Great compilation of his insights on diffusion models. For a deeper dive, head over to his blog. Highly recommended! -> sander.ai

Serdar Ozsoy (@ozsoyserdar):

My new habit, starting with reasoning models: not going to the final answer without reading the chain of thought. Less efficient, more enjoyable.

Deniz Yuret (@denizyuret):

Have you ever seen a learning curve that looks like a step function? It turns out a few hundred negative examples flip a switch inside an LLM and give a discrete jump in accuracy. "How much do LLMs learn from negative examples?" (arxiv.org/abs/2503.14391) with Shadi Hamdan.
