general abstract nonsense (@damienstanton) 's Twitter Profile
general abstract nonsense

@damienstanton

(he/him) sr. research software engineer @pwc & @cuboulder @gradbuffs student, focuses: type theory, programming languages, data science, distributed systems

ID: 134736674

Link: https://github.com/damienstanton
Joined: 19-04-2010 08:56:36

3.3K Tweets

443 Followers

1.1K Following

Gergely Orosz (@gergelyorosz) 's Twitter Profile Photo

Things that make this source code leak dangerous for Twitter: 1. Regulators can inspect, and potentially find cases when Twitter told them A, when the source code says B 2. IP and patent challenges could come from third parties examining it 3. Business logic exploits

Judea Pearl (@yudapearl) 's Twitter Profile Photo

and stories about causal assumptions as assumptions. The conversion, from stories to substance, would be very useful as a user interface in many causal inference tasks. We need to make sure, though, that it can handle the substance properly, assuming a proper conversion. 3/3

Manish (@manishearth) 's Twitter Profile Photo

henry 🌘 Andrew Gallant Rust Foundation there are multiple voices in the project, so ultimately they may still come to this conclusion after getting feedback, or choose to ignore it; project members can still be loud about it and try to get transparency. I don't think it's worth framing this as a foregone conclusion *now*

Jason Wei (@_jasonwei) 's Twitter Profile Photo

Clever Google/Stanford University paper on LLMs from my brother Jerry Wei. Performance boosts are great, but there is a more profound insight in this paper that was not explicitly stated: LLMs are trained on human language, but due to the nature of how language was developed (first

general abstract nonsense (@damienstanton) 's Twitter Profile Photo

I completely agree with Matthew on the need to integrate #ChatGPT style #llms into software systems as verifiable/validated components; “LLM Maximalism” is indeed a folly and — in my own anecdotal use for research — produces notably poor designs and code. explosion.ai/blog/against-l…
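The "verifiable/validated component" idea can be made concrete with a minimal sketch: treat the LLM as an untrusted subsystem whose output must pass validation before anything downstream consumes it. Everything here is hypothetical and not from the linked post; `call_llm` is a stand-in stub, and the required fields are invented for illustration.

```python
import json

# Hypothetical required schema for the LLM's structured output.
REQUIRED_FIELDS = {"summary", "confidence"}

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; returns a canned JSON response here.
    return '{"summary": "ok", "confidence": 0.9}'

def validated_llm_call(prompt: str) -> dict:
    """Parse and validate LLM output instead of trusting it blindly."""
    raw = call_llm(prompt)
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"LLM output was not valid JSON: {exc}") from exc
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"LLM output missing fields: {missing}")
    if not (0.0 <= data["confidence"] <= 1.0):
        raise ValueError("confidence out of range")
    return data
```

The point of the sketch is the boundary: the rest of the system only ever sees output that has already passed the checks, so a hallucinated or malformed response fails loudly instead of propagating.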

general abstract nonsense (@damienstanton) 's Twitter Profile Photo

Once more François hits the nail on the head. A better letter would have been a warning of societal-level concern w.r.t. the *application* of generative AI, especially when it comes to disinformation, but there's simply no evidence for future direct doom scenarios (e.g., Terminator).

LLM Security (@llm_sec) 's Twitter Profile Photo

* People ask LLMs to write code
* LLMs recommend imports that don't actually exist
* Attackers work out what these imports' names are, and create & upload them with malicious payloads
* People using LLM-written code then auto-add malware themselves

vulcan.io/blog/ai-halluc…
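One mitigation implied by this attack chain can be sketched as a pre-install check: extract the imports from LLM-written code and compare them against a vetted allowlist before installing anything. The allowlist and sample source below are hypothetical; this is not from the linked vulcan.io post.

```python
import ast

# Hypothetical vetted dependency set; in practice this would come from
# an internal registry or lockfile review process.
ALLOWLIST = {"json", "math", "requests"}

def imported_modules(source: str) -> set:
    """Top-level module names imported by the given Python source."""
    tree = ast.parse(source)
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    return names

def unvetted_imports(source: str) -> set:
    """Imports not on the allowlist: candidates for hallucinated packages."""
    return imported_modules(source) - ALLOWLIST

generated = "import requests\nimport totally_made_up_pkg\n"
flagged = unvetted_imports(generated)  # only the unknown package is flagged
```

An allowlist (rather than checking whether the name merely exists on PyPI) matters here, because the attack works precisely by registering the hallucinated name on the public index.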

Gonçalo Hall (@gonzohall) 's Twitter Profile Photo

Paul Graham Remote work, when well done, is a superior management model. Most of those companies never implemented key aspects of remote work like:

- async-first communication
- documentation-first approach
- proper feedback loops and performance reviews

AK (@_akhaliq) 's Twitter Profile Photo

Can Large Language Models Infer Causation from Correlation? paper page: huggingface.co/papers/2306.05… dataset: huggingface.co/datasets/causa… Causal inference is one of the hallmarks of human intelligence. While the field of CausalNLP has attracted much interest in the recent years,

general abstract nonsense (@damienstanton) 's Twitter Profile Photo

This is also great advice for any discipline, artistic or otherwise. Real, tangible career growth is as much about connecting with people who help motivate, mentor, and inspire as it is about honing the skills.

(((ل()(ل() 'yoav))))👾 (@yoavgo) 's Twitter Profile Photo

huh? so when gpt4 was thought to be a really really big gpt3 people were like "WOW AMAZING" and now with the rumor of it being 8*220B mixture of experts with a small inference trick they are like "Oh, Mixture of Experts? that's what you do when you are low on ideas"?
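For readers unfamiliar with the term, here is a toy sketch of the mixture-of-experts routing idea referenced above: a gate scores the experts per input and only the top-k run, so a small fraction of total parameters is active at inference time. All experts and gate scores here are invented for illustration; real MoE gates are learned networks operating on tensors.

```python
import math

# Stand-in "expert" networks: in a real model these would be large
# feed-forward blocks, not scalar functions.
EXPERTS = [
    lambda x: x + 1.0,
    lambda x: x * 2.0,
    lambda x: x ** 2,
    lambda x: -x,
]

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, gate_scores, k=2):
    """Route the input to the top-k experts, weighted by gate probability."""
    probs = softmax(gate_scores)
    top = sorted(range(len(EXPERTS)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)  # renormalize over the chosen experts
    return sum(probs[i] / norm * EXPERTS[i](x) for i in top)
```

The snark in the tweet aside, the design choice is a compute/capacity trade-off: with k=2 of 8 experts, each token touches only a quarter of the expert parameters while the full model retains the capacity of all of them.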