Takeshi Koshizuka (越塚 毅) (@koshstorm) 's Twitter Profile
Takeshi Koshizuka (越塚 毅)

@koshstorm

PhD student @ UTokyo, Neural Differential Equations, Physics-Informed ML, Machine Learning Theory, Generative Modeling, CS, DC1

ID: 2750806944

Website: https://sites.google.com/view/takeshi-koshizuka/home · Joined: 21-08-2014 00:19:19

230 Tweets

798 Followers

799 Following

Takeshi Koshizuka (越塚 毅) (@koshstorm) 's Twitter Profile Photo

I will be giving a talk titled "Training Diffusion Generative Models Based on the Schrödinger Bridge Problem." It will be a comprehensive overview, from the basics up to the latest research trends. There are many other great talks as well, so please join us!

Takeshi Koshizuka (越塚 毅) (@koshstorm) 's Twitter Profile Photo

I have published the slides from my recent talk. They give a quick overview of research on the Schrödinger Bridge problem in the ML field. speakerdeck.com/takeshi_koshiz…

Stat.ML Papers (@statmlpapers) 's Twitter Profile Photo

A Survey on Statistical Theory of Deep Learning: Approximation, Training Dynamics, and Generative Models. (arXiv:2401.07187v1 [stat.ML]) ift.tt/VHwOmbC

Brandon Amos (@brandondamos) 's Twitter Profile Photo

📢 In our new UAI 2025 paper, we do neural optimal transport with costs defined by a Lagrangian (e.g., for physical knowledge, constraints, and geodesics) Paper: arxiv.org/abs/2406.00288 JAX Code: github.com/facebookresear… (w/ A. Pooladian, C. Domingo-Enrich, Ricky T. Q. Chen)

Brandon Amos (@brandondamos) 's Twitter Profile Photo

Some related papers for our recent Lagrangian OT: 0. On amortizing convex conjugates for OT 1. Neural Lagrangian Schrödinger Bridge 2a. Deep Generalized Schrödinger Bridge 2b. DGSB Matching 3. Wasserstein Lagrangian Flows 4. Metric learning via OT A 🧵 summarizing these ❤️

しんくん (@nobo0409) 's Twitter Profile Photo

We have released a preprint showing that formulating hierarchically structured optimal transport problems with category-theoretic ideas yields an extremely fast algorithm: arxiv.org/abs/2408.08550

enjoy my life (@issei_sato) 's Twitter Profile Photo

Our paper on mean-field analysis of Fourier neural operators has been accepted at NeurIPS 2024! Stay tuned for the updated version. arxiv.org/abs/2310.06379

Takeshi Koshizuka (越塚 毅) (@koshstorm) 's Twitter Profile Photo

Our paper on Neural Operator: 'Understanding the Expressivity and Trainability of Fourier Neural Operator: A Mean-Field Perspective' was accepted at #NeurIPS2024 🎉 🇨🇦 paper link: arxiv.org/abs/2310.06379 Huge thanks to Masahiro Fujisawa, Yusuke TANAKA, enjoy my life

enjoy my life (@issei_sato) 's Twitter Profile Photo

The following paper has been accepted to ICLR 2025! Hasegawa-san has now been accepted to ICLR two years in a row as an M2 student — impressive.

enjoy my life (@issei_sato) 's Twitter Profile Photo

The following paper has been accepted to ICML 2025! Benign Overfitting in Token Selection of Attention Mechanism Keitaro Sakamoto, Issei Sato arxiv.org/abs/2409.17625

enjoy my life (@issei_sato) 's Twitter Profile Photo

The following paper has been accepted to ICML 2025! On Expressive Power of Looped Transformers: Theoretical Analysis and Enhancement via Timestep Encoding Kevin Xu, Issei Sato arxiv.org/abs/2410.01405

Kevin Xu (@kevin671xu) 's Twitter Profile Photo

🎉 Our paper on "the expressive power of Looped Transformers" was accepted at #ICML2025 ! To the best of our knowledge, this is the first study to analyze their function approximation capabilities, including approximation rates and universality. arxiv.org/abs/2410.01405
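(An illustrative sketch of the weight-tying idea behind Looped Transformers, not the paper's actual architecture: depth comes from iterating one fixed block rather than stacking new layers. A residual ReLU MLP stands in for the full attention+MLP block; all names and shapes here are made up for illustration:)

```python
import numpy as np

def looped_block(x, W1, W2, T):
    """Apply one weight-tied residual block T times.

    The same parameters (W1, W2) are reused at every loop iteration,
    so the effective depth T is decoupled from the parameter count.
    """
    for _ in range(T):
        x = x + W2 @ np.maximum(W1 @ x, 0.0)  # ReLU MLP + residual connection
    return x

rng = np.random.default_rng(0)
W1 = 0.1 * rng.standard_normal((8, 4))
W2 = 0.1 * rng.standard_normal((4, 8))
x = rng.standard_normal(4)
y = looped_block(x, W1, W2, T=10)  # 10 iterations, one set of weights
```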

Kirill Neklyudov (@k_neklyudov) 's Twitter Profile Photo

1/ Where do Probabilistic Models, Sampling, Deep Learning, and Natural Sciences meet? 🤔 The workshop we’re organizing at #NeurIPS2025! 📢 FPI@NeurIPS 2025: Frontiers in Probabilistic Inference – Learning meets Sampling Learn more and submit → fpiworkshop.org

Lorenz Richter @ICLR'25 (@lorenz_richter) 's Twitter Profile Photo

Solving control problems can be hard. This is why we introduce trust region methods, approaching them iteratively in a systematic way. In fact, this can be understood as a geometric annealing from prior to target with adaptive steps. More at NeurIPS Conference, arxiv.org/pdf/2508.12511.
