Zhouxiang Fang (@focusv857)'s Twitter Profile
Zhouxiang Fang

@focusv857

Incoming PhD at @RiceUniversity
Master's at @JohnsHopkins

ID: 1692703509886443520

Website: https://zhouxiangfang.github.io/ · Joined: 19-08-2023 01:02:36

16 Tweets

44 Followers

51 Following

Dongwei Jiang (@dongwei__jiang)'s Twitter Profile Photo

Process supervision for reasoning is πŸ”₯! While previous approaches often relied on human annotation and struggled to generalize across different reasoning tasks, we're now asking: Can we improve this?

Introducing π‘π€π“πˆπŽππ€π‹π˜π’π“: a new model pre-trained on implicit
Ruidi Chang (@ruidichang)'s Twitter Profile Photo

πŸš€ Thrilled to announce SAFR is here! #NAACL2025
Superposition is powerful β€” but it buries interpretability. We control that!
🧠 Neurons often mix too many features (superposition) β€” making models a black box.
🎯 SAFR strategically redistributes neurons:
 · 🧩 Monosemantic for
Daniel Khashabi πŸ•ŠοΈ (@danielkhashabi) 's Twitter Profile Photo

There have been various efforts on disentangling "task learning" vs. "task recall" in LLMs. We've recently explored a fresh angle by borrowing from cryptography: with substitution ciphers, we transform a given task into an equivalent but cryptic (no pun intended!!) form.
Chunyuan Deng (@chunyuandeng)'s Twitter Profile Photo

I’m at #ICML this week! 🍁🍁🍁 Hanjie Chen and I will present our work at the Wed 4:30 pm poster session (July 16th). Feel free to stop by if you are also interested in steering & controlling! πŸ˜ƒ

Isabel Cachola (@isabelcachola)'s Twitter Profile Photo

Our work on readability evaluation for Plain Language Summarization will appear at #EMNLP2025!! Daniel Khashabi πŸ•ŠοΈ Mark Dredze

Paper: arxiv.org/pdf/2508.19221

TLDR: Traditional readability metrics correlate poorly with human judgements & LMs consider deeper readability features. 1/6