JohnYue122333 (@JohnYue122333)

Although I don't like Altman's exaggerated marketing, it is better for an organization, even a non-profit, to survive by creating outstanding products. AI is far from posing uncontrollable safety risks, and AGI is still a long way off.


Sharon Zhou Your algorithm is very meaningful; can it achieve this effect? Q: Was Thales the first to systematize formal proofs? Your A: No, he wasn't... our conclusion is based on: en.wikipedia.org/wiki/Thales_of…


Gary Marcus 1/3 OpenAI's greatest sorrow is that they underestimated the complexity of the world. The CEO, doubling as chief marketing officer, said AGI was coming soon, and everyone believed it.


ChatGPT is so amazing!
I've asked it thousands of questions, and it has never once asked me anything in return; it is always ready with answers.
It's more impressive than God.


Judea Pearl (I am someone who loves natural sciences, not a political enthusiast, let alone an extremist. The bottom line is: whoever harms civilians first is in the wrong.)


Sharon Zhou In a nutshell: when an LLM produces a seemingly fact-based, "reasoned" result, it should be able to cite the original form of the training corpus as evidence in its output.
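The idea in this tweet can be sketched in a few lines: alongside an answer, retrieve the closest passage from the corpus and attach it as citable evidence. This is a minimal toy illustration, not any real LLM's API; the corpus, function names, and the bag-of-words cosine scoring are all assumptions.

```python
# Hypothetical sketch: attach the best-matching corpus passage to an
# answer as "evidence". The corpus below is toy data, not real training data.
from collections import Counter
import math

CORPUS = [
    ("wiki/Thales_of_Miletus",
     "Thales of Miletus is often credited as the first Greek philosopher, "
     "but formal deductive proof was systematized later, notably by Euclid."),
    ("wiki/Euclid",
     "Euclid's Elements organized geometry into a formal axiomatic system "
     "of definitions, postulates, and proofs."),
]

def _tf(text):
    # Bag-of-words term frequencies (crude stand-in for a real retriever).
    return Counter(text.lower().split())

def _cosine(a, b):
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def answer_with_evidence(question, answer):
    """Return the answer plus the closest corpus passage as a citation."""
    query = _tf(question + " " + answer)
    source, passage = max(CORPUS, key=lambda sp: _cosine(query, _tf(sp[1])))
    return {"answer": answer, "evidence": passage, "source": source}

result = answer_with_evidence(
    "Was Thales the first to systematize formal proofs?",
    "No, he wasn't.")
```

In a production system the retriever would index the actual training corpus (or an attributable subset of it), but the output shape is the point: the answer ships with the original-form passage and its source.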


Irina Rish 1. Is LLM-like Gen AI enough? //me: NO!
2. Isn't the advanced intelligence of humans, particularly the language/symbol aspect, a good example of top-down processing? Why exclude it?


Linus ●ᴗ● Ekenstam Machine-readable answer cards are too easy. The difficult part is the entire test paper, which includes handwritten text, hand-drawn charts, and even various non-standard writing formats.


MrNeRF Reading papers in such a beautiful setting should double one's efficiency, and the research goals should be correspondingly ambitious, the kind that matter to the history of human civilization.


Gary Marcus Predicting the future of AI, Marcus is correct 99% of the time, except for the one time when AGI is finally achieved. LeCun, on the other hand, is wrong 99% of the time; the only time he is right is when he says, 'Oh, look, I finally achieved AGI.'
