Theoretical Foundations of Foundation Models (@tf2m_workshop): Twitter Profile
Theoretical Foundations of Foundation Models

@tf2m_workshop

Workshop on Theoretical Foundations of Foundation Models @icmlconf 2024.

ID: 1776030152788758528

Link: https://sites.google.com/view/tf2m · Joined: 04-04-2024 23:33:59

40 Tweets

274 Followers

18 Following

Berivan Isik (@berivanisik):

Excited to share the program and list of accepted papers for our ICML Conference workshop Theoretical Foundations of Foundation Models: sites.google.com/view/tf2m/sche… Looking forward to discussing efficiency, responsibility, and principled foundations of foundation models in Vienna soon!

Nirjhar Das (@stochastic_nir):

📢 Excited to share our work "Active Preference Optimization for Sample Efficient RLHF", accepted at the #ICML2024 Theoretical Foundations of Foundation Models Workshop! Joint work with Sayak Ray Chowdhury, Souradip Chakraborty, and Aldo Pacchiano. arxiv.org/pdf/2402.10500 🧵(1/6)

Juno KIM (@junokim_ai):

Also giving a contributed talk on the learning-theoretic complexity and optimality of ICL at the Theoretical Foundations of Foundation Models workshop. Happy to share our results with the ML theory and LLM community!

Ziteng Sun (@sziteng):

Check out our TF2M workshop on Sunday with an amazing list of speakers, panelists, and contributed works: sites.google.com/view/tf2m/sche….

Federico Barbero (@fedzbar):

We will be presenting this work tomorrow at the Theoretical Foundations of Foundation Models workshop in Vienna. Please feel free to come check it out, or to DM/email me if interested. Looking forward to it! Tagging co-authors that are on this app :) Andrea Banino @_joaogui1 Petar Veličković 👓 👓 👓

Petar Veličković (@petarv_93):

"The great ICML poster bingo" makes a triumphant return on the last day of #ICML2024 🎲 Six posters, three workshops (GRaM Workshop at ICML 2024, Workshop on Data-centric Machine Learning Research, Theoretical Foundations of Foundation Models) and this time I don't have to present three at once 😅 Hope to see you there for some of them! More details below! 🧵

Subbarao Kambhampati (కంభంపాటి సుబ్బారావు) (@rao2z):

Missed the speaker dinner, but the panel at the Theoretical Foundations of Foundation Models workshop at #ICML2024 was a blast. Particularly liked the audience question about the implications of benchmark culture in LLM evaluations.

Yuandong Tian (@tydsh):

Thanks for inviting me, Theoretical Foundations of Foundation Models! I am surprised to see such a large audience for an invited talk at a theory workshop. Maybe it is time to understand these models better, in addition to blindly scaling up.

Sanae Lotfi (@lotfisanae):

Excited and honored that our new work on token-level generalization bounds for LLMs won a Best Paper Award at the Theoretical Foundations of Foundation Models workshop at ICML! We investigate generalization in LLMs, e.g., memorization vs. reasoning, through compression bounds at the LLaMA2-70B scale. A 🧵, 1/8
