Zhouxiang Fang (@focusv857) 's Twitter Profile
Zhouxiang Fang

@focusv857

Incoming PhD at @RiceUniversity
Master's at @JohnsHopkins

ID: 1692703509886443520

Website: https://zhouxiangfang.github.io/ · Joined: 19-08-2023 01:02:36

16 Tweets

44 Followers

51 Following

Dongwei Jiang (@dongwei__jiang) 's Twitter Profile Photo

Process supervision for reasoning is 🔥! While previous approaches often relied on human annotation and struggled to generalize across different reasoning tasks, we're now asking: Can we improve this?

Introducing 𝐑𝐀𝐓𝐈𝐎𝐍𝐀𝐋𝐘𝐒𝐓: a new model pre-trained on implicit
Ruidi Chang (@ruidichang) 's Twitter Profile Photo

🚀 Thrilled to announce SAFR is here! #NAACL2025
Superposition is powerful — but it buries interpretability. We control that!
🧠 Neurons often mix too many features (superposition) — making models a black box.
🎯 SAFR strategically redistributes neurons:
 · 🧩 Monosemantic for
Daniel Khashabi 🕊️ (@danielkhashabi) 's Twitter Profile Photo

There have been various efforts on disentangling "task learning" vs "task recall" in LLMs. We've recently explored a fresh angle by borrowing from cryptography: with substitution ciphers, we transform a given task into equivalent, but cryptic (no pun intended!!) forms.
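As one concrete illustration of the transform this tweet describes, a substitution cipher maps each letter to another via a fixed permutation, so a task prompt becomes an equivalent but unfamiliar string. The rot-13 permutation and sample prompt below are illustrative choices, not necessarily those used in the paper:

```python
import string

def make_cipher(shift=13):
    # One simple substitution cipher: rotate the alphabet by `shift`.
    # The paper's actual ciphers may be arbitrary permutations.
    letters = string.ascii_lowercase
    shifted = letters[shift:] + letters[:shift]
    return str.maketrans(letters + letters.upper(),
                         shifted + shifted.upper())

def encode(text, table):
    # Apply the letter-to-letter substitution; non-letters pass through.
    return text.translate(table)

table = make_cipher(13)
print(encode("Translate this sentence", table))  # → "Genafyngr guvf fragrapr"
```

Because the mapping is a bijection on letters, the ciphered task carries the same information as the original, which is what lets the transformed task probe recall versus genuine learning.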
Chunyuan Deng (@chunyuandeng) 's Twitter Profile Photo

I'm at #ICML this week! 🍁🍁🍁 Hanjie Chen and I will present our work at the Wednesday 4:30 pm poster session (July 16th). Feel free to stop by if you are also interested in steering & controlling! 😃

Isabel Cachola (@isabelcachola) 's Twitter Profile Photo

Our work on readability evaluation for Plain Language Summarization will appear at #EMNLP2025!! Daniel Khashabi 🕊️ Mark Dredze

Paper: arxiv.org/pdf/2508.19221

TLDR: Traditional readability metrics correlate poorly with human judgements & LMs consider deeper readability features. 1/6