Critical AI : first issue out! https://read.dukeup(@CriticalAI) 's Twitter Profile

@CriticalAI

Critical AI's first issue out: https://read.dukeupress ! Follow us here or @mastodon.social. Our editor often tweets from this acct on weekends.

ID:1290793547122302977

https://criticalai.org/ · 04-08-2020 23:35:48

17.9K Tweets

5.0K Followers

1.6K Following

Iris van Rooij 💭(@IrisVanRooij) 's Twitter Profile Photo

Very unwise to use LLMs for this. Use instead a carefully designed algorithm that is developed for this specific purpose, with a transparent specification that is legally validated and provably correct.

Brian Merchant(@bcmerchant) 's Twitter Profile Photo

Wild that the Humane AI pin is just getting destroyed in the reviews with unambiguous recommendations of do not buy—what were they thinking with this thing? That the general wave of AI hype would paper over any deficiencies? This company was in stealth for *years*

MMitchell(@mmitchell_ai) 's Twitter Profile Photo

Interesting legislation: Generative AI Copyright Disclosure Act, recently introduced by California Democratic congressman Adam Schiff.
H/T Bruna Trevelin who helped me understand some key points. 🧵 1/
schiff.house.gov/imo/media/doc/…

Critical AI : first issue out! https://read.dukeup(@CriticalAI) 's Twitter Profile Photo

Though I know what you mean, Ted McCormick, I'm not sure that's a helpful way of meditating on gen AI's 'innovation.'

The tech isn't a 'poser' trying hard to talk the talk to sound smarter or cooler.

It's a bell curve-like function assembling the most probable concepts/words.
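(A toy sketch of that last point, for readers who want it concrete: this is my illustration, not code from the thread and not any model's actual internals. A language model assigns probabilities to candidate next tokens and assembles text by repeatedly picking among the likelier ones. The tokens and probabilities below are made up.)

import random

# Hypothetical next-token probabilities after the prefix "The cat sat on the"
# (illustrative numbers only, not from any real model)
candidates = {"mat": 0.55, "sofa": 0.25, "roof": 0.15, "quantum": 0.05}

def sample_next_token(dist):
    # Pick one token, weighted by its probability
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(candidates))  # usually "mat", occasionally a less likely word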

Noah Giansiracusa(@ProfNoahGian) 's Twitter Profile Photo

(a) this is fascinating

(b) I hate to think how messed up science is going to get as people use LLMs for things they really shouldn’t, which evidently includes any kind of random sampling.

Noah Giansiracusa(@ProfNoahGian) 's Twitter Profile Photo

Maybe this will help with the replication crisis. Results will be a lot easier to replicate if every random number is 42. 😂 🤦‍♂️

Arvind Narayanan(@random_walker) 's Twitter Profile Photo

This is the craziest example of the ongoing AI frenzy I've ever seen. As far as I can tell this means that a single developer can make queries that burn $1,400 worth of compute for free *per day* (e.g. 'reverse the following 1M token string') — 50 * (7+21) = 1,400.
HT Simon Willison
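(A minimal sketch of the back-of-envelope arithmetic above. The breakdown of the $1,400 figure, 50 free requests per day times roughly $7 of input-side and $21 of output-side compute per maximum-length request, is read off the formula 50 * (7+21) and is an assumption rather than something the tweet spells out.)

free_requests_per_day = 50      # assumed free-tier limit per developer
input_cost_per_request = 7.0    # dollars of compute for a ~1M-token input (assumed)
output_cost_per_request = 21.0  # dollars of compute for the generated output (assumed)

daily_burn = free_requests_per_day * (input_cost_per_request + output_cost_per_request)
print(f"compute burned for free per day: ${daily_burn:,.0f}")  # $1,400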

Wyatt Walls(@lefthanddraft) 's Twitter Profile Photo

It remains very funny to me that GPT-4 can't be trusted to count to 5

To unlock that skill, you need a special prompt

Once you think you've found the right prompt, you should undertake extensive testing to ensure your solution is reliable and robust across a range of fruit

Wyatt Walls(@lefthanddraft) 's Twitter Profile Photo

If you are using LLMs for summarizing long docs, you really should read this paper

Over 50% of book summaries (incl by Claude Opus and GPT-4) were identified as containing factual errors and errors of omission

Lesson: don't blindly assume AI summarization tools work. Test them.

Critical AI : first issue out! https://read.dukeup(@CriticalAI) 's Twitter Profile Photo

#CriticalAI editorial collective member Christopher Newfield has a great blog post out on the crisis in creating and disseminating knowledge. Research funding is migrating away from the basics + from much-needed social and cultural knowledge.
isrf.org/2024/04/03/the…

Dave Nelson(@thedavenelson) 's Twitter Profile Photo

Very excited to join Rutgers Food Science for a talk Fri 4/12 at 3pm. I'm hopeful that Critical AI team members might be able to join. Topic is 'Towards Practical, Critical and Cooperative Teaching & Learning with AI.'
