Nan Hu (@sp1derng)'s Twitter Profile
Nan Hu

@sp1derng

Ph.D. Candidate at @SEU1902_NJ, currently a visiting student at @EdinburghNLP. Knowledge Graphs, Question Answering, Natural Language Processing.

ID: 1565571870753075201

Joined: 02-09-2022 05:27:01

8 Tweets

11 Followers

86 Following

Akari Asai (@akariasai)'s Twitter Profile Photo

Don't miss our #ACL2023 tutorial on Retrieval-based LMs and Applications this Sunday! acl2023-retrieval-lm.github.io with Sewon Min, Zexuan Zhong, and Danqi Chen. We'll cover everything from architecture design and training to exploring applications and tackling open challenges! [1/2]

Jiaoyan Chen (@chenjiaoyan1)'s Twitter Profile Photo

Try DeepOnto: a Python package recently implemented for ontology engineering with deep learning and language models (arxiv.org/abs/2307.03067). GitHub: github.com/KRR-Oxford/Dee…… By @lawhy_AI, Jiaoyan Chen, Hang Dong, Ian Horrocks, etc. #ontology #KnowledgeGraphs #LLM #PLM #OWL

CLS (@chengleisi)'s Twitter Profile Photo

How can we humans verify the truthfulness of LLM outputs (or any claims you see on the Internet)? Should we ask ChatGPT (#LLMs)? Search on Google (retrieval)? Are they complementary? TL;DR: LLMs Help Humans Verify Truthfulness - Except When They Are Convincingly Wrong! 1/n

Xinyi Wang @ ICLR (@xinyiwang98)'s Twitter Profile Photo

Happy to share our new preprint on understanding how reasoning emerges from language model pre-training: arxiv.org/abs/2402.03268. We hypothesize that language models can aggregate reasoning paths seen in pre-training data to draw new conclusions at inference time.

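To make that hypothesis concrete, here is a toy sketch (an illustration only, not the paper's actual method): treat facts seen in pre-training as relation triples and compose two-hop paths into conclusions that were never stated directly. All entities, relation names, and the composition rule below are invented for the example.

```python
# Toy illustration only: "aggregating reasoning paths" pictured as composing
# relation triples seen during training into conclusions never stated directly.
# Every fact, relation name, and rule here is made up for the example.
from collections import defaultdict

# Facts a model might have "seen" during pre-training.
seen_facts = [
    ("Alice", "born_in", "Paris"),
    ("Paris", "located_in", "France"),
    ("Bob", "born_in", "Berlin"),
    ("Berlin", "located_in", "Germany"),
]

# A single hand-written composition rule (real models would learn such
# regularities statistically rather than from an explicit rule table).
def compose(r1, r2):
    if (r1, r2) == ("born_in", "located_in"):
        return "born_in_country"
    return None

# Index facts by head entity so we can walk two-hop paths.
by_head = defaultdict(list)
for h, r, t in seen_facts:
    by_head[h].append((r, t))

# Aggregate two-hop reasoning paths into new (unseen) conclusions.
new_conclusions = set()
for h, r1, m in seen_facts:
    for r2, t in by_head[m]:
        rel = compose(r1, r2)
        if rel is not None:
            new_conclusions.add((h, rel, t))

print(new_conclusions)
# e.g. {('Alice', 'born_in_country', 'France'), ('Bob', 'born_in_country', 'Germany')}
```

Running it derives ('Alice', 'born_in_country', 'France') and ('Bob', 'born_in_country', 'Germany'), neither of which appears among the seen facts, which is the kind of path aggregation the preprint hypothesizes.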