Keyu Tian (@keyutian)'s Twitter Profile
Keyu Tian

@keyutian

Master's student @ Peking University

ID: 1447236165614989320

Link: http://linkedin.com/in/keyu-tian/?locale=en_US
Joined: 10-10-2021 16:22:53

24 Tweets

378 Followers

26 Following

Dmytro Mishkin 🇺🇦 (@ducha_aiki)

Designing BERT for convolutional networks: sparse and hierarchical masked modeling

Keyu Tian, Yi Jiang, Qishuai Diao, Chen Lin, Liwei Wang, Zehuan Yuan

tl;dr: make the masked image look the same to a CNN as it does to a transformer -> masked image modeling (MIM) starts working
arxiv.org/abs/2301.03580
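To make the "looks the same to a CNN" idea concrete, here is a minimal PyTorch sketch of BERT-style patch masking for images. The helper name, patch size, and mask ratio are illustrative defaults, not the paper's exact configuration:

```python
import torch

def random_patch_mask(batch, height, width, patch=32, mask_ratio=0.6):
    # Sample a keep/mask decision per non-overlapping patch, then
    # upsample the patch-level mask to pixel resolution.
    gh, gw = height // patch, width // patch
    keep = (torch.rand(batch, gh, gw) >= mask_ratio).float()
    mask = keep.repeat_interleave(patch, dim=1).repeat_interleave(patch, dim=2)
    return mask.unsqueeze(1)  # (batch, 1, height, width); 1 = visible, 0 = masked

x = torch.randn(8, 3, 224, 224)      # a batch of unlabeled images
m = random_patch_mask(8, 224, 224)   # ~60% of 32x32 patches dropped
x_masked = x * m                     # what the encoder gets to see
```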
DeepAI (@deepai)

🤩The first successful BERT-style #SelfSupervisedLearning on any convolutional network! #ResNet now enjoys masked autoencoding! 🚀A breakthrough paper "Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling" by Keyu Tian et al. deepai.org/publication/de…

fly51fly (@fly51fly)

[CV] Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling
K Tian, Y Jiang, Q Diao, C Lin, L Wang, Z Yuan [Peking University & Bytedance Inc & University of Oxford] (2023)
arxiv.org/abs/2301.03580
#MachineLearning #ML #AI #CV 
[1/2]
Daisuke Okanohara / 岡野原 大輔 (@hillbig)

In image-recognition pretraining, the task of masking part of the input and predicting it worked well for ViT, but not for ConvNets, where overlap between neighboring patches leaks information. SparK solves this by using sparse convolutions (SparseConv) that keep the output masked with the same mask as the input, and improves results further by giving the decoder a hierarchical structure as well. openreview.net/forum?id=NRxyd…
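A dense-tensor sketch of that mask-preserving convolution: masked pixels enter as zeros, so they contribute nothing to the sum, and the mask is re-applied to the output so masked positions are never "filled in" by visible neighbors. This is an illustrative emulation with made-up names; the actual SparK implementation uses real sparse convolution kernels:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Module):
    # Emulates a submanifold-style sparse convolution on dense tensors.
    def __init__(self, cin, cout, stride=1):
        super().__init__()
        # bias=False: a bias term would leak nonzero values into masked regions
        self.conv = nn.Conv2d(cin, cout, kernel_size=3, stride=stride,
                              padding=1, bias=False)

    def forward(self, x, mask):
        out = self.conv(x * mask)  # visible pixels only feed the sums
        if self.conv.stride[0] > 1:
            # Downsample the mask alongside the feature map so the two
            # stay spatially aligned at every stage of the hierarchy.
            mask = F.max_pool2d(mask, kernel_size=self.conv.stride[0])
        return out * mask, mask    # masked positions stay exactly zero
```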

Sebastian Raschka (@rasbt)

How can we leverage successful pretraining techniques from transformers to improve purely convolutional networks? The answer is *Sparse Convolutions*!

Let's see what happens when purely convolutional networks are pretrained with 1.28 million unlabeled images ...

1/7
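As a rough picture of what such label-free pretraining reduces to, here is a hedged sketch assuming placeholder `encoder`/`decoder` modules (a sparse ConvNet encoder and a hierarchical decoder); it is not the repo's actual training loop:

```python
import torch

def pretrain_step(encoder, decoder, images, mask, optimizer):
    # Encode only the visible pixels, decode a full image, and regress
    # the masked region against the original pixels -- no labels used.
    pred = decoder(encoder(images * mask))
    hole = 1.0 - mask                      # 1 where pixels were masked out
    loss = ((pred - images) ** 2 * hole).sum() / hole.sum().clamp(min=1.0)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```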
Python Trending 🇺🇦 (@pythontrending)

SparK - [ICLR'23 Spotlight] The first successful BERT-style pretraining on any *convolutional network*; PyTorch impl. of "Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling" github.com/keyu-tian/SparK