Lucia Zheng (@lucia__zheng)'s Twitter Profile
Lucia Zheng

@lucia__zheng

CS PhD Student @StanfordNLP @StanfordAILab

ID: 2910202553

Joined: 08-12-2014 01:17:23

11 Tweets

115 Followers

176 Following

Neel Guha (@neelguha)'s Twitter Profile Photo

Just how much does domain-specific pretraining help for legal NLP tasks? We studied this by creating “CaseHOLD” -- a new benchmark for precedential reasoning in law. Paper: arxiv.org/abs/2104.08671 Blog: reglab.stanford.edu/data/casehold-… 1/6

Peter Henderson (@peterhndrsn)'s Twitter Profile Photo

Our models for legal-bert (base), bert-double (base, trained on 1M more timesteps of wiki), and legal-bert (base, with a custom vocab) now have a hosted inference widget on Hugging Face. Check it out! Links: huggingface.co/zlucia/custom-… huggingface.co/zlucia/legalbe… huggingface.co/zlucia/bert-do…

L. Thorne McCarty (@lthornemccarty)'s Twitter Profile Photo

This paper just won the Carole Hafner Best Paper Award at ICAIL 2021. Congratulations! I read the paper and watched the presentation, and both were excellent. Highly recommended for anyone interested in this field.

Neel Guha (@neelguha)'s Twitter Profile Photo

Really chuffed to share that this work won the Carole Hafner Best Paper Award! Thank you so much to everyone who helped us on this, and to the ICAIL organizers for putting on such a great conference.

Peter Henderson (@peterhndrsn)'s Twitter Profile Photo

We wrote about legal applications of foundation models in § 3.2. There are a lot of opportunities, but also challenges for FMs in these domains. Big thanks to co-authors on this section: Lucia Zheng, Jenny Hong, Neel Guha, Mark Krass, Julian Nyarko, @DanHo1 Thread 👇

Peter Henderson (@peterhndrsn)'s Twitter Profile Photo

So thrilled to finally release “Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset” w/ Mark Krass, Lucia Zheng, Neel Guha, Christopher Manning, Dan Jurafsky & @DanHo1. Paper: arxiv.org/abs/2207.00220 Dataset: huggingface.co/datasets/pile-… 🧵👇

Joël Niklaus (@joelniklaus)'s Twitter Profile Photo

I am very happy to announce our new work "FLawN-T5: An Empirical Examination of Effective Instruction-Tuning Data Mixtures for Legal Reasoning"! 📜: arxiv.org/abs/2404.02127 💾: huggingface.co/datasets/lawin… 🧵👇 1/7

Neel Guha (@neelguha)'s Twitter Profile Photo

We’re really excited to share two new benchmark datasets for measuring end-to-end legal RAG systems, forthcoming at CS&Law 2025 (links below). With: Lucia Zheng, Javokhir Arifov, Sarah Zhang, Michal Skreta, Christopher Manning, Peter Henderson, and Daniel E. Ho.