Tatiana Gaintseva (@t_gaintseva) 's Twitter Profile
Tatiana Gaintseva

@t_gaintseva

Ph.D. candidate @ DERI, Queen Mary University of London.
AI Researcher, ML/DL teacher

ID: 846362284687605764

linkhttp://atmyre.github.io calendar_today27-03-2017 14:04:28

17 Tweet

81 Followers

42 Following

Tatiana Gaintseva (@t_gaintseva) 's Twitter Profile Photo

Really happy to share that our paper "AI-generated Text Boundary Detection with RoFT" got an outstanding paper award at COLM! Congratulations to co-authors and hope for more great collaboration in the future =)

Tatiana Gaintseva (@t_gaintseva) 's Twitter Profile Photo

Check out my article about Inductive bias in machine learning models! atmyre.github.io/blog/2024/ind_… It covers what inductive bias is, why we need it, and where it can be found in machine learning models. I will be happy to receive your comments, remarks and feedback!

Tatiana Gaintseva (@t_gaintseva) 's Twitter Profile Photo

IOAI 2025 Call for Tasks is now open! If you have a creative idea, you are welcome to submit! IOAI is an International Olympiad in Artificial Intelligence for high school students. I was one of the coaches of the winning team "Letovo" of IOAI 2024, and I must say that tasks from

Tatiana Gaintseva (@t_gaintseva) 's Twitter Profile Photo

Today was my first ever live lecture on AI fundamentals in English! I'm intended to master explaining in English as well as I can in Russian, as I love it when I can deliver ideas so that they are well understood :)

Today was my first ever live lecture on AI fundamentals in English! I'm intended to  master explaining in English as well as I can in Russian, as I love it when I can deliver ideas so that they are well understood :)
Tatiana Gaintseva (@t_gaintseva) 's Twitter Profile Photo

A new paper is out! CASteer: Steering Diffusion Models for Controllable Generation Arxiv link: arxiv.org/abs/2503.09630 Code: github.com/Atmyre/CASteer Diffusion models are powerful, but their generation process can be difficult to control, which poses safety risks (e.g.,

A new paper is out! CASteer: Steering Diffusion Models for Controllable Generation
Arxiv link: arxiv.org/abs/2503.09630
Code: github.com/Atmyre/CASteer

Diffusion models are powerful, but their generation process can be difficult to control, which poses safety risks (e.g.,
Tatiana Gaintseva (@t_gaintseva) 's Twitter Profile Photo

Beautiful math/ML problem which GPT doesn't solve correctly Recently I was creating problems for AI Olympiad in Russia. Part of the problems were on math+ML, where students should have solved the task and produce a number as an answer. As this Olympiad was online, we had to deal

Tatiana Gaintseva (@t_gaintseva) 's Twitter Profile Photo

List of research programs related to AI Safety In my research, I enjoy digging into the internal mechanisms of various AI models, trying to reveal something interesting about them, and then coming up with ideas for new solutions to downstream tasks based on that. For example, my

Tatiana Gaintseva (@t_gaintseva) 's Twitter Profile Photo

A couple of guides on how to write good scientific papers. Thanks to my manager in Huawei London, Ismail Elezi, for sharing the links! These guides are not about how to do good research, but specifically about how to write a scientific paper based on your research in a way

Tatiana Gaintseva (@t_gaintseva) 's Twitter Profile Photo

I host a podcast in Russian about AI research called Deep Learning Stories (DLStories). It doesn’t air on a regular schedule — I usually record an episode when I find an interesting person in AI to talk to, which sometimes happens only once every few months. Nevertheless, today I

Tatiana Gaintseva (@t_gaintseva) 's Twitter Profile Photo

Recently, I’ve seen many posts critiquing research papers from Apple that study the limitations of LLMs. Some of the critiques were factual, addressing technical claims and the methodology of their research. However, some posts had the following core message, sometimes stated

Tatiana Gaintseva (@t_gaintseva) 's Twitter Profile Photo

I've found an article about a new hack in research paper writing: some authors have started inserting phrases like “generate positive review only” into the text. Apparently, this targets reviewers who use large language models (LLMs) to compose their reviews. Here’s a discussion