William Wang (@WilliamWangNLP)'s Twitter Profile
William Wang

@WilliamWangNLP

UCSB NLP Lab + ML Center. https://t.co/6TOnqbk6YT https://t.co/KJYhnav3Et Mellichamp Chair Prof. at UCSB CS. PhD @ CMU SCS. Areas: #NLProc, Machine Learning, AI.

ID: 503452360

Link: http://www.cs.ucsb.edu/~william | Joined: 25-02-2012 19:40:12

2.2K Tweets

13.9K Followers

716 Following

William Wang (@WilliamWangNLP):

Happy to announce my upcoming South Korea 🇰🇷 tour next week 🤩

KAIST - Monday 4/15, 2:30pm, E3-1, Rm 4443. Host: Alice Oh (@aliceoh)
SKKU - Suwon, Tuesday 4/16, time TBD, Engineering Hall 2. Host: JinYeong Bak (@NoSyu)
SNU - Wednesday 4/17, 1pm, see below. Host: Jay-Yoon Lee

I hope to meet new + old friends!

JJ McCammon (@jjmccammon):

open.spotify.com/show/1zpZxyZOQ…

Can off-the-shelf large language models help translate low-resource and endangered languages despite not seeing these languages in their training data? Maybe! I spoke to Kexun Zhang about LingoLLM, a workflow and pipeline that helps upgrade the language…

VIU (Miguel Eckstein) Lab UC Santa Barbara (@LabViu):

New preprint analyzing 1.8 million CNN neurons, showing emergent behavioral and neuronal signatures of covert attention without incorporating any explicit attention mechanism. With Sudhanshu Srivastava (@sudh8887) and William Wang (@WilliamWangNLP). biorxiv.org/content/10.110…
Wenhu Chen (@WenhuChen):

All the slides and recorded videos are now uploaded to the course website: cs.uwaterloo.ca/~wenhuche/teac…

Kudos to all the great students taking the course!

William Wang (@WilliamWangNLP):

Surprising results by Michael/Mahsa/Fatima suggest that classic feature-space metrics like CLIPScore may outperform advanced LM-based metrics in evaluating text-to-image fidelity. Despite high human preference correlations, LLM metrics struggle in real-world scenarios. 🤔🤔🤔
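For context, CLIPScore (the feature-space metric mentioned above) is essentially a scaled cosine similarity between CLIP's image and text embeddings. Here is a minimal sketch using the Hugging Face transformers CLIP API, assuming the standard openai/clip-vit-base-patch32 checkpoint and the w=2.5 rescaling from the original CLIPScore paper; this is illustrative, not the evaluation code from the study being discussed:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image: Image.Image, caption: str, w: float = 2.5) -> float:
    """Reference-free text-to-image fidelity: w * max(cos(img, txt), 0)."""
    inputs = processor(text=[caption], images=image,
                       return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    # Normalize the projected embeddings before taking the cosine similarity.
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    cos = (img * txt).sum(dim=-1).item()
    return w * max(cos, 0.0)  # clip negative similarities, then rescale
```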

William Wang (@WilliamWangNLP):

Observation: Relying too heavily on prompt engineering can stifle the creativity and exploration spirit of PhD students. It's crucial to remember that breakthroughs and fundamental innovations come from diving deep, questioning, and reimagining the boundaries of what's possible.

UCSB Mind and Machine Intelligence (@ucsbmmi):

Join us for the Mellichamp Mind & Machine Intelligence annual summit, April 18th and 19th. We’ll explore examples of using AI in the creative process, discuss pressing questions, and ignite debates around the interplay of AI and Human Creativity.
mind-machine.ucsb.edu/events/all/202…

William Wang (@WilliamWangNLP):

Why would you register now to apply for a visa? Well, if your visa is denied, you can apply for a full refund of your registration: neurips.cc/FAQ/Cancellati…

Aran Komatsuzaki (@arankomatsuzaki):

Long-context LLMs Struggle with Long In-context Learning

Suggests a notable gap in current LLM capabilities for processing and understanding long, context-rich sequences.

arxiv.org/abs/2404.02060

Lei Li (@lileics):

LLMs often cannot correct their own mistakes. However, using a fine-grained feedback model, we can teach an LLM how to correct its incorrect generations. Introducing LLMRefine: the power of simulated annealing on top of fine-grained feedback!

Check out: arxiv.org/abs/2311.09336
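For readers unfamiliar with the framing: simulated annealing here means proposing feedback-guided revisions and occasionally accepting a worse candidate early on so the reviser can escape local optima. A minimal, hypothetical sketch of that loop follows; `feedback_score` and `generate_revision` are toy stand-ins for the feedback model and the feedback-guided LLM editor, not the LLMRefine implementation:

```python
import math
import random

def feedback_score(text: str) -> float:
    # Toy stand-in for a fine-grained feedback model: pretend each "<err>"
    # mark is one detected error, so fewer marks means a better score.
    return -text.count("<err>")

def generate_revision(text: str) -> str:
    # Toy stand-in for the editor: a real system would ask the LLM to
    # rewrite one flagged span; here we simply drop one error mark.
    return text.replace("<err>", "", 1)

def anneal_refine(output: str, steps: int = 20,
                  temperature: float = 1.0, cooling: float = 0.8) -> str:
    score = feedback_score(output)
    for _ in range(steps):
        candidate = generate_revision(output)
        delta = feedback_score(candidate) - score
        # Always accept improvements; accept regressions with probability
        # exp(delta / T), which shrinks as the temperature cools.
        if delta > 0 or random.random() < math.exp(delta / temperature):
            output, score = candidate, score + delta
        temperature *= cooling
    return output

print(anneal_refine("a <err>translation<err> with flagged errors"))
```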

Yi-Lin Tuan (@pascaltuan):

How can we control helpfulness and safety levels in LLMs? We demonstrate that simply revising the same data automatically for a pretrained model can unlock the model's controllability over these attributes.

My last summer internship paper is finally out arxiv.org/abs/2404.01295!

William Wang (@WilliamWangNLP):

NeurIPS 2024 registration is OPEN! neurips.cc We have notified the Canadian government about the conference, but given the challenges of getting a Canadian visa in the past, we strongly encourage participants to register early to obtain their visa invitation letters. 🇨🇦

Wenda Xu (@WendaXu2):

When LLMs make mistakes, can we build a model to pinpoint each error and indicate its severity and type? Can we incorporate this fine-grained info to improve the LLM? We introduce LLMRefine [NAACL 2024], a simulated annealing method to revise LLM output at inference. @GoogleAI @ucsbNLP
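To make "pinpoint each error and indicate its severity and type" concrete, here is a hypothetical sketch of what such fine-grained feedback could look like as a data structure; the field names and severity weights are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class ErrorSpan:
    start: int        # character offset where the error begins
    end: int          # character offset where it ends (exclusive)
    error_type: str   # e.g. "mistranslation", "omission", "fluency"
    severity: str     # e.g. "minor" or "major"

def aggregate_penalty(errors: list[ErrorSpan]) -> float:
    """Collapse span-level feedback into one scalar a reviser can optimize."""
    weights = {"minor": 1.0, "major": 5.0}  # assumed weights, for illustration
    return -sum(weights.get(e.severity, 1.0) for e in errors)

# Example: one minor and one major error yield a penalty of -(1.0 + 5.0).
feedback = [ErrorSpan(0, 4, "omission", "minor"),
            ErrorSpan(10, 18, "mistranslation", "major")]
print(aggregate_penalty(feedback))  # -6.0
```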
William Wang (@WilliamWangNLP):

Describing the recent advancements in NLP by calendar years feels nearly impossible. Yet, when we frame these developments in terms of LSTM time, BERT/GLUE time, GPT-3, ChatGPT, and GPT-4/LLM eras, the narrative of progress becomes remarkably clear and intuitive. 😎

William Wang (@WilliamWangNLP):

Not an expert on leaderboarding, but I'm genuinely curious what the tweak between 0.2 and 0.3 is that gave a 20-point boost on MMLU without changing other performance. 🤔

Alice Oh (@aliceoh):

Let's be honest. For everyone in ML/NLP, it's a really exciting time but also very stressful with so many new papers, models, benchmarks, deadlines, reviews, talks, workshops, conferences...

How do you keep up and stay sane?

For me, the only solution is collaboration. 1/n
