Liam Bai (@liambai21)'s Twitter Profile
Liam Bai

@liambai21

writing about math, AI, and biology | software @ginkgo, math + CS @brownuniversity

ID: 1283909499162693632

Link: http://liambai.com · Joined: 16-07-2020 23:41:17

262 Tweets

1.1K Followers

348 Following

Yaoyu Yang (@yaoyuyang)'s Twitter Profile Photo

I started Cypher last month after 8 years at Ginkgo Bioworks working at the intersection of software and biology! At Cypher, I am building AI-enabled software tools to supercharge scientists' productivity and capabilities. Check this out! loom.com/share/2e18e219…

John Yang @ ICLR 2025 (@johnyang100)'s Twitter Profile Photo

If you've ever
- thought AI protein folding is magical ✨
- wanted more than a pLDDT score 🔎
- or just think mech interp in bio is cool 🤓
then read the 🧵 👇 on our first paper towards interpretable protein structure prediction, just accepted to workshops at ICLR.

Tony Kulesa (@kulesatony)'s Twitter Profile Photo

🚀 Fellowship applications are OPEN for Encode: AI for Science. What if you could use AI to
- Design shape-shifting robots
- See through solid materials
- Decode the language of the brain
- Create advanced materials

Liam Bai (@liambai21)'s Twitter Profile Photo

Proud that this work was accepted at ICML as a spotlight poster! Reminder that you can just do things, especially in interpretability. Etowah Adams and I did most of this work in Colab notebooks that cost a few dollars a month.

Etowah Adams (@etowah0)'s Twitter Profile Photo

You can take ESM's final-layer projection matrix (used to get logits) and apply it to intermediate-layer embeddings. This roughly tells you what the model "thinks" the sequence is at different layers. x.com/liambai21/stat…
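
The tweet describes a logit-lens-style trick: reusing the language-model head on intermediate hidden states. Here is a minimal sketch of how that might look with the fair-esm package; the small ESM-2 checkpoint, layer choices, and sequence are illustrative assumptions, not from the tweet:

```python
import torch
import esm

# Small ESM-2 checkpoint (6 layers) keeps the sketch cheap to run.
model, alphabet = esm.pretrained.esm2_t6_8M_UR50D()
model.eval()
batch_converter = alphabet.get_batch_converter()

# Any protein sequence works here; this one is arbitrary.
_, _, tokens = batch_converter([("protein1", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")])

layers = [2, 4, 6]  # intermediate layers to inspect
with torch.no_grad():
    out = model(tokens, repr_layers=layers)

for layer in layers:
    h = out["representations"][layer]   # (batch, seq_len, embed_dim)
    logits = model.lm_head(h)           # reuse the final projection on earlier layers
    pred = logits.argmax(-1)            # the model's "current guess" at each residue
    seq = "".join(alphabet.get_tok(t) for t in pred[0, 1:-1])  # strip BOS/EOS
    print(f"layer {layer}: {seq}")
```

Note that intermediate representations haven't passed through the final layer norm, so the readout is approximate, which matches the tweet's "roughly tell you" hedge.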

Etowah Adams (@etowah0)'s Twitter Profile Photo

Update to our ICML paper on interpreting features in protein language models: we asked human raters to assess the interpretability of SAE feature activations and ESM activations. Human raters found SAE features far more interpretable! x.com/etowah0/status…

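To make the comparison concrete: the raters look at how individual SAE features activate along a sequence versus individual raw model activations. Below is a generic sparse-autoencoder sketch (linear encoder/decoder with a ReLU bottleneck), a common SAE recipe rather than the paper's exact setup; all names and dimensions are hypothetical:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Generic SAE: a wide ReLU feature layer over model activations."""
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, x):
        feats = torch.relu(self.encoder(x))  # sparse, per-residue feature activations
        return self.decoder(feats), feats

# Illustrative dimensions: 1280 matches ESM-2 650M; 16384 is a typical expansion.
sae = SparseAutoencoder(d_model=1280, d_features=16384)
h = torch.randn(1, 64, 1280)   # stand-in for one layer's ESM activations
recon, feats = sae(h)

# The human-rater comparison contrasts traces like these: one SAE feature's
# per-residue activations versus one raw activation dimension.
print(feats[0, :, 123])        # a single SAE feature along the sequence
print(h[0, :, 123])            # a single raw ESM dimension, for contrast
```
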
Rishabh Anand 🧬 (@rishabh16_)'s Twitter Profile Photo

MechInterp for pLMs has been considered a "dark art" for some time due to spurious correlations and pseudo-insights into biological mechanisms. Happy to have contributed a bit to this ongoing effort, which contains many stellar, honest results. Plenty of work yet to be done!