Haw-Shiuan Chang (@haw_shiuan)'s Twitter Profile
Haw-Shiuan Chang

@haw_shiuan

UMass CIIR Postdoc

ID: 914598299260334086

Link: https://ken77921.github.io/ · Joined: 01-10-2017 21:10:01

79 Tweets

167 Followers

238 Following

Chau Minh Pham (@chautmpham)'s Twitter Profile Photo

I'll be presenting Suri 🦙 at #EMNLP2024 on Thursday (10:30am) and #wnu2024 on Friday! Please reach out if you want to talk about:

1️⃣ Long-form text generation/evaluation
2️⃣ Synthetic data/Instruction tuning
or anything else!

Looking forward to meeting old and new friends!
Ameya Godbole (@ameya_godbole1)'s Twitter Profile Photo

Drop by our poster tomorrow!! #EMNLP2024, Nov 12 (Tue) at 11:00-12:30, Session 02, Sub-session: Generation. Looking forward to chatting with everyone!

Yixiao Song (@yixiao_song)'s Twitter Profile Photo

Are you at EMNLP '24, and looking for an accurate metric for factuality evaluation? ✨ Check out our poster presentation of VeriScore tomorrow (Nov 12) from 16:00 to 17:30 in the Riverfront Hall in Miami! 🌴🌴

Haw-Shiuan Chang (@haw_shiuan)'s Twitter Profile Photo

One superpower🦸 of scientists🧑‍🔬 comes from imagination💡. So does contrastive decoding➖! By unleashing the superpower, we might spend billions of dollars for LLM pretraining💸💸💸 more wisely. Come to our talk at #EMNLP2024 Special Theme (Tue. 2:15PM at Terrace Room Monroe)
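The contrastive decoding mentioned above, in its standard formulation (Li et al.), scores each candidate token by the gap between an expert and an amateur LM's log-probabilities, restricted to tokens the expert finds sufficiently plausible. A minimal sketch of that general idea follows; the function name and the alpha value are illustrative, not details from this talk or paper:

```python
import numpy as np

def contrastive_decode(expert_logprobs, amateur_logprobs, alpha=0.1):
    """Pick the token maximizing log p_expert - log p_amateur,
    restricted to a plausibility head of the expert distribution.
    (Illustrative sketch of standard contrastive decoding.)"""
    expert = np.asarray(expert_logprobs, dtype=float)
    amateur = np.asarray(amateur_logprobs, dtype=float)
    # Plausibility constraint: keep only tokens whose expert probability
    # is at least alpha times the expert's top probability.
    cutoff = np.log(alpha) + expert.max()
    scores = np.where(expert >= cutoff, expert - amateur, -np.inf)
    return int(np.argmax(scores))
```

The subtraction rewards tokens the expert prefers but the amateur does not, which is what lets a small "amateur" model steer a large one away from generic continuations.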

Marzena Karpinska (@mar_kar_)'s Twitter Profile Photo

If you are at #EMNLP2024, you should really check out this work on factuality evaluation tomorrow! If you are not, you should still check out the paper! This is great work, and I see many possibilities to build on it!

Mathew Jacob (@mat_jacob1002)'s Twitter Profile Photo

It's time to revisit common assumptions in IR! Embeddings have improved drastically, but mainstream IR evals have stagnated since MSMARCO and BEIR.

We ask: on private or tricky IR tasks, are current rerankers even better? Surely, reranking as many docs as you can afford is best?
Ximing Lu (@gximing)'s Twitter Profile Photo

Are LLMs 🤖 as creative as humans 👩‍🎓? Not quite!

Introducing CREATIVITY INDEX: a metric that quantifies the linguistic creativity of a text by reconstructing it from existing text snippets on the web. Spoiler: professional human writers like Hemingway are still far more creative.
Heng Ji (@hengjinlp)'s Twitter Profile Photo

Vote for Violet Peng! She is one of the most responsible, reliable, dedicated, and creative collaborators I've ever worked with!

Hamed Zamani (@hamedzamani)'s Twitter Profile Photo

📢 An excellent opportunity for PhD students in IR and NLP: The Center for Intelligent Information Retrieval (CIIR) at UMass Amherst is launching an exciting Research Internship program for Summer 2025. See the thread for more info. 👇 #SIGIR #NLProc

Haw-Shiuan Chang (@haw_shiuan)'s Twitter Profile Photo

😃Very glad to see that our Softmax-CPR method, which will improve your recommendation model by ~20%, is formally integrated into RecBole v1.2.1, a very popular recommendation research framework. Try SASRecCPR (recbole.io/docs/user_guid…) and GRU4RecCPR (recbole.io/docs/user_guid…)!

Alex Gurung (@alexaag1234)'s Twitter Profile Photo

Preprint: Can we learn to reason for story generation (~100k tokens), without reward models?

Yes! We introduce an RLVR-inspired reward paradigm VR-CLI that correlates with human judgements of quality on the 'novel' task of Next-Chapter Prediction.

Paper: arxiv.org/abs/2503.22828