yihan (@0xyihan) 's Twitter Profile
yihan

@0xyihan

rational optimist | SCS@CMU
@GreenpillCN @PlanckerDao |
rep @gitcoin GR15 GCC UDC |
AI alignment, longevity
zhihu.com/people/zh3036

ID: 1570074700000432128

Link: http://linktr.ee/zh3036 | Joined: 14-09-2022 15:39:22

264 Tweets

1.1K Followers

718 Following

Andrej Karpathy (@karpathy) 's Twitter Profile Photo

# automating software engineering In my mind, automating software engineering will look similar to automating driving. E.g. in self-driving the progression of increasing autonomy and higher abstraction looks something like: 1. first the human performs all driving actions

方庭 Fangting ☀️ (@fangtingeth) 's Twitter Profile Photo

After publishing this piece I received all kinds of feedback, but also some very good proposals (one was unexpectedly excellent)! Thanks also to the organizers Venkatesh and Tim Beiko for being so supportive. Still, the time left for the Chinese-speaking community is genuinely limited 😶 Come apply; for at least these five days I can offer free support.

noahdgoodman (@noahdgoodman) 's Twitter Profile Photo

When I first saw Tree of Thoughts, I asked myself: If language models can reason better by searching, why don't they do it themselves during Chain of Thought? Some possible answers (and a new paper): 🧵
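Tree of Thoughts, for reference, frames reasoning as explicit search: propose several candidate next thoughts, score them, keep the best few, and expand again. A minimal sketch of that loop, with a dummy proposer and scorer standing in for LLM calls (all names and the toy task are illustrative, not from the paper's code):

```python
from typing import Callable, List

def tree_of_thoughts_search(
    root: str,
    propose: Callable[[str], List[str]],
    score: Callable[[str], float],
    beam_width: int = 3,
    depth: int = 3,
) -> str:
    """Beam-style search over partial 'thoughts': at each depth,
    expand every frontier state, score all candidates, and keep
    only the top `beam_width` to expand further."""
    frontier = [root]
    for _ in range(depth):
        candidates = [c for state in frontier for c in propose(state)]
        if not candidates:
            break
        candidates.sort(key=score, reverse=True)
        frontier = candidates[:beam_width]
    return frontier[0]

# Toy task: build a 3-digit string whose digit sum is as close to 15 as possible.
def propose(state: str) -> List[str]:
    return [state + d for d in "123456789"]

def score(state: str) -> float:
    return -abs(15 - sum(int(ch) for ch in state))

best = tree_of_thoughts_search("", propose, score, beam_width=5, depth=3)
print(best, sum(int(ch) for ch in best))
```

The point of the question above is that in plain Chain of Thought the model emits one greedy path; the search loop here keeps multiple partial paths alive and can abandon weak ones, which a single forward decoding pass cannot.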

Yann LeCun (@ylecun) 's Twitter Profile Photo

As long as AI systems are trained to reproduce human-generated data (e.g. text) and have no search/planning/reasoning capability, performance will saturate below or around human level. Furthermore, the amount of trials needed to reach that level will be far larger than the

yihan (@0xyihan) 's Twitter Profile Photo

I thought I couldn't understand this release event because of cultural differences. Then I looked at the comments, and it seems some feelings are universal.

yihan (@0xyihan) 's Twitter Profile Photo

Evals are emphasized so much at AI Engineer. As foundation models become more powerful, the system design paradigm will shift from imperative to declarative. The ability to clearly declare a measurable, operational goal will become more important.

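One way to read "declarative" here: instead of hand-coding the steps, you declare a measurable target and let an eval harness check whether any candidate system meets it. A minimal sketch of such a declared goal (all names are hypothetical, not any specific eval framework):

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Goal:
    """A declared, measurable goal: a metric plus a threshold,
    rather than an imperative recipe for how to achieve it."""
    name: str
    metric: Callable[[List[Tuple[str, str]]], float]  # (output, expected) pairs -> score
    threshold: float

def exact_match_rate(pairs: List[Tuple[str, str]]) -> float:
    return sum(out == exp for out, exp in pairs) / len(pairs)

def evaluate(goal: Goal, system: Callable[[str], str],
             cases: List[Tuple[str, str]]) -> bool:
    """Run the system over the cases and check the declared goal."""
    pairs = [(system(inp), exp) for inp, exp in cases]
    return goal.metric(pairs) >= goal.threshold

# Declare the goal once; check any candidate system against it.
goal = Goal("uppercase-correctly", exact_match_rate, threshold=0.9)
cases = [("abc", "ABC"), ("hi", "HI"), ("ok", "OK")]
print(evaluate(goal, str.upper, cases))  # → True
```

The imperative part (how `system` works) becomes swappable; the declared goal and its metric are what persist as the model underneath improves.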
yihan (@0xyihan) 's Twitter Profile Photo

We had actually been using the credit-splitting algorithm for our cofounders' yearly cash dividend. Luckily, we can use it again.
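The tweet doesn't say which credit-splitting algorithm this is. As one plausible shape of the problem, here is a purely illustrative sketch (not the algorithm referenced): a proportional split of a cash amount by credit shares, with largest-remainder rounding so the cents always sum back to the total.

```python
def split_dividend(total_cents: int, credits: dict) -> dict:
    """Split `total_cents` proportionally to each person's credit,
    using largest-remainder rounding so shares sum exactly to the total."""
    total_credit = sum(credits.values())
    raw = {name: total_cents * c / total_credit for name, c in credits.items()}
    shares = {name: int(amount) for name, amount in raw.items()}
    leftover = total_cents - sum(shares.values())
    # Hand the remaining cents to the largest fractional remainders.
    by_remainder = sorted(raw, key=lambda n: raw[n] - shares[n], reverse=True)
    for name in by_remainder[:leftover]:
        shares[name] += 1
    return shares

split = split_dividend(10000, {"alice": 2, "bob": 1, "carol": 1})
print(split)  # → {'alice': 5000, 'bob': 2500, 'carol': 2500}
```

Largest-remainder rounding matters for money: naive per-person rounding can leave cents unaccounted for, whereas this scheme conserves the total by construction.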

yihan (@0xyihan) 's Twitter Profile Photo

Observing two trends in recent LLM architecture attempts: 1) models with RNN-style infinite context; 2) models with search and backtracking abilities for reasoning.

yihan (@0xyihan) 's Twitter Profile Photo

I recall that in one AI Engineer session, the speaker mentioned there are more long-keyword searches these days than before. I suspect this is because searches issued by Perplexity tend to have very long keywords.

速溶猪 (@yuxinzhu4736) 's Twitter Profile Photo

I came across a farewell message for the CICC woman who passed away, written by a friend she met on a study-at-sea program. It moved me. The voices online are too harsh: they say that for someone swept up in meritocracy, even a study-at-sea program was just another resume-padding step in a grind-culture life, and some even claim the travel essays she shared on her public account in 2015 and 2016 were full of a sense of superiority.

Nouha Dziri (@nouhadziri) 's Twitter Profile Photo

📢 Super excited that our workshop "System 2 Reasoning At Scale" was accepted to #NeurIPS24, Vancouver! 🎉
🎯 How can we equip LMs with reasoning, moving beyond just scaling parameters and data?
Organized w. Stanford NLP Group, Massachusetts Institute of Technology (MIT), Princeton University, Ai2, UW NLP
🗓️ When? Dec 15, 2024

CLS (@chengleisi) 's Twitter Profile Photo

Automating AI research is exciting! But can LLMs actually produce novel, expert-level research ideas? After a year-long study, we obtained the first statistically significant conclusion: LLM-generated ideas are more novel than ideas written by expert human researchers.
