Byungsoo Ko (@byungsooko1) 's Twitter Profile
Byungsoo Ko

@byungsooko1

Research and engineering on deep stuff :)

ID: 1169878424199946240

Link: https://github.com/kobiso
Joined: 06-09-2019 07:42:14

20 Tweets

54 Followers

161 Following

ICLR 2025 (@iclr_conf) 's Twitter Profile Photo

Countdown to #ICLR2020. Today's blog lets you take a visual tour of the conference portal to see what the virtual format looks like and how to navigate our first virtual machine learning conference. Meet you at a poster or social soon 🎉 medium.com/@iclr_conf/cou…

Denny Britz (@dennybritz) 's Twitter Profile Photo

CNN Explainer is an interactive visualization tool for learning purposes. It runs a pre-trained CNN in the browser and lets you explore the layers and operations: poloclub.github.io/cnn-explainer/

Video: youtube.com/watch?v=udVN7f…
Code: github.com/poloclub/cnn-e…
Paper: arxiv.org/abs/2004.15004
AK (@_akhaliq) 's Twitter Profile Photo

Multi-Scale Vision Longformer: A New Vision Transformer for High-Resolution Image Encoding
pdf: arxiv.org/pdf/2103.15358…
abs: arxiv.org/abs/2103.15358
AK (@_akhaliq) 's Twitter Profile Photo

CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification
pdf: arxiv.org/pdf/2103.14899…
abs: arxiv.org/abs/2103.14899
Aran Komatsuzaki (@arankomatsuzaki) 's Twitter Profile Photo

Rethinking Spatial Dimensions of Vision Transformers

Proposes PiT, a self-attention model whose spatial dimension progressively shrinks like a CNN's; it outperforms ViT.

abs: arxiv.org/abs/2103.16302
code: github.com/naver-ai/pit
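The PiT summary above describes shrinking the spatial token grid between transformer stages, CNN-style. A minimal NumPy sketch of that pooling step (an illustration, not the authors' code: simple mean pooling and channel duplication stand in for the paper's learned depthwise-conv pooling, and `pool_tokens` plus all sizes are made up for this example):

```python
import numpy as np

def pool_tokens(tokens, side, pool=2):
    """Shrink a (side*side, C) token grid to (side//pool)**2 tokens with 2*C channels."""
    c = tokens.shape[1]
    grid = tokens.reshape(side, side, c)
    h = side // pool
    # 2x2 mean pooling over the spatial grid
    pooled = grid.reshape(h, pool, h, pool, c).mean(axis=(1, 3))
    # double the channel dim (a crude stand-in for the paper's learned projection)
    pooled = np.concatenate([pooled, pooled], axis=-1)
    return pooled.reshape(h * h, 2 * c), h

side, c = 16, 64               # e.g. a 16x16 grid of patch tokens, 64 channels
tokens = np.random.randn(side * side, c)
for stage in range(2):         # two pooling steps, as in a three-stage pyramid
    tokens, side = pool_tokens(tokens, side)
    print(tokens.shape)        # prints (64, 128) then (16, 256)
```

Each step cuts the token count 4x while doubling channels, which is the CNN-like pyramid shape the tweet refers to; in the actual model, transformer blocks run between these pooling steps.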
Jung-Woo Ha (@jungwooha2) 's Twitter Profile Photo

Happy to share our "Rainbow Memory", which proposes a new method and evaluation protocol for more realistic CIL, #blurryCIL, to appear at @cvpr2021.
Congrats to Jihwan Bang, Heesu Kim, Youngjoon Yoo, Jonghyun Choi
arxiv.org/abs/2103.17230
github.com/clovaai/rainbo…
Jia-Bin Huang (@jbhuang0604) 's Twitter Profile Photo

Sharing ideas on how to disseminate your research.
"I am THRILLED to share that our paper is accepted to ..." Congrats! So what's next?
No one is going to browse through the list of thousands of accepted papers. Ain't nobody got time for that.
Check out 🧵below for examples.

Byungsoo Ko (@byungsooko1) 's Twitter Profile Photo

Our mobile line segment detector (M-LSD) seems to generalize to anime and sketch figures pretty well! Try it out with the Gradio web demo :)
Paper: arxiv.org/abs/2106.00186
Github: github.com/navervision/ml…
Gradio demo: gradio.app/g/AK391/mlsd (thanks to AK, moved to @_akhaliq)

Byungsoo Ko (@byungsooko1) 's Twitter Profile Photo

Happy to announce that our paper got accepted to #ICCV2021! We are preparing to release the code.
- Learning with Memory-based Virtual Classes for Deep Metric Learning
arxiv.org/abs/2103.16940

Byungsoo Ko (@byungsooko1) 's Twitter Profile Photo

Happy to announce that our paper "Towards Light-weight and Real-time Line Segment Detection" has been accepted to #AAAI2022! Try it out in the links below :)
Paper: arxiv.org/abs/2106.00186
Github: github.com/navervision/ml…
Gradio demo: gradio.app/g/AK391/mlsd

Byungsoo Ko (@byungsooko1) 's Twitter Profile Photo

Happy to share our KELIP, a Korean-English bilingual multimodal model. KELIP is trained on 1.1B image-text pairs, about three times more than CLIP's training data. Try out the pre-trained KELIP!
paper: arxiv.org/abs/2203.14463
github: github.com/navervision/KE…
demo: huggingface.co/spaces/navervi…