David Poole (@davpoole)'s Twitter Profile
David Poole

@davpoole

Scientist, Educator, Artificial Intelligence Researcher

ID: 1689571992

Website: http://www.cs.ubc.ca/~poole/ · Joined: 22-08-2013 00:43:20

282 Tweets

662 Followers

258 Following

@emilymbender.bsky.social (@emilymbender)'s Twitter Profile Photo

In case anyone doubted, Google is not at all interested in "organizing the world's information" (despite that language still being in their mission statement). Mixing non-information, LLM-extruded sludge in w authentic info is actually the opposite. 404media.co/google-news-is…

Gary Marcus (@garymarcus)'s Twitter Profile Photo

When I started working on AI four decades ago, it simply didn’t occur to me that one of the biggest use cases would be derivative mimicry, transferring value from artists and other creators to megacorporations, using massive amounts of energy. This is not the AI I dreamed of.

Gary Marcus (@garymarcus)'s Twitter Profile Photo

What Yann LeCun got right (and wrong) while under siege the last several days. He was 👉Wrong to be vague about what he thought could not be done; he should have said e.g., “reliably, physically accurate prediction” instead of leaving it open that he merely meant make nice-looking

MMitchell (@mmitchell_ai)'s Twitter Profile Photo

"LLMs generate responses by randomly sampling [previously seen] words based in part on probabilities" is basically what "stochastic parrot" means btw.
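The sampling mechanism the tweet describes can be sketched in a few lines. The toy next-word distribution below is purely illustrative (not a real model's output); a real LLM computes these probabilities over its whole vocabulary at each step:

```python
import random

def sample_next_word(probs):
    """Sample one word from a {word: probability} distribution."""
    words = list(probs)
    weights = [probs[w] for w in words]
    # random.choices draws proportionally to the given weights
    return random.choices(words, weights=weights, k=1)[0]

# Hypothetical distribution after some prompt: "says" is drawn
# about half the time, "flies" ~30%, "sleeps" ~20%
probs = {"says": 0.5, "flies": 0.3, "sleeps": 0.2}
word = sample_next_word(probs)
```

Repeating the call (autoregressively, with the sampled word appended to the context) is what makes outputs vary run to run, which is the "stochastic" half of the phrase.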

@emilymbender.bsky.social (@emilymbender)'s Twitter Profile Photo

It seems like there are just endless bad ideas about how to use "AI". Here are some new ones courtesy of the UK government. ... and a short thread because there is so much awfulness in this one article. /1 ft.com/content/f2ae55…

@emilymbender.bsky.social (@emilymbender)'s Twitter Profile Photo

Wow -- what if we could use those funds to build: alternative energy capacity, improved electricity grid, public transit, modernization of public buildings (schools, hospitals; think ventilation, energy efficiency) instead of data centers for churning out synthetic media?

François Chollet (@fchollet)'s Twitter Profile Photo

That memorization (which ML has solely focused on) is not intelligence. And because any task that does not involve significant novelty and uncertainty can be solved via memorization, *skill* is never a sign of intelligence, no matter the task.

David Myers (@davidgmyers)'s Twitter Profile Photo

A psych science perspective (writing as President Biden’s age-mate) on the challenges and strengths of octogenarian life.... in tomorrow's print NY Times:

Lenka Zdeborova (@zdeborova)'s Twitter Profile Photo

Another class of ML papers that collects good review scores largely overclaims the implications and generality of what is actually proven and sweeps the underlying assumptions under the carpet. See below what I ask the referees for those:

Sriraam Natarajan (@sriraam_utd)'s Twitter Profile Photo

While intellectual independence and scholarly pursuits are the key reasons why many of us choose to be in academia, arguably, the most satisfaction comes from teaching the next generation of great minds!

Yann LeCun (@ylecun)'s Twitter Profile Photo

AI is not some sort of natural phenomenon that will just emerge and become dangerous. *WE* design it and *WE* build it. I can imagine thousands of scenarios where a turbojet goes terribly wrong. Yet we managed to make turbojets insanely reliable before deploying them widely.

Yann LeCun (@ylecun)'s Twitter Profile Photo

LLMs are useful, but they are an off ramp on the road to human-level AI. If you are a PhD student, don't work on LLMs. Try to discover methods that would lift the limitations of LLMs.

MMitchell (@mmitchell_ai)'s Twitter Profile Photo

"I want AI to do my laundry and dishes so that I can do art and writing", not the other way around. Hugging Face: OK! venturebeat.com/ai/hugging-fac…

neil turkewitz (@neilturkewitz)'s Twitter Profile Photo

“The profound anthropomorphisms that characterize today’s AI discourse—conflating predictive analytics w/INTELLIGENCE & massive datasets with KNOWLEDGE & EXPERIENCE—are primarily the result of marketing hype, technological obscurantism & public ignorance.” publicbooks.org/now-the-humani…

@emilymbender.bsky.social (@emilymbender)'s Twitter Profile Photo

PSA: Every time you cite the arXiv version of something instead of the peer reviewed version, you're lending legitimacy to nonsense like this: Source: arxiv.org/html/2404.0722…

Sebastian Thrun (@sebastianthrun)'s Twitter Profile Photo

I really wonder why so many AI companies seem to blindly compete for building the same thing: the best foundational models, as measured by some abstract set of benchmarks. It feels like there is now an entire ecosystem of companies that want to outperform GPT.x. Why?

David Poole (@davpoole)'s Twitter Profile Photo

I wrote this piece on AI and creative writing after reading articles by creative writers lamenting the impact of AI... Enjoy! "Why AI can’t take over creative writing" theconversation.com/why-ai-cant-ta… via The Conversation Canada