Brett Winton (@wintonark)'s Twitter Profile
Brett Winton

@wintonark

Chief Futurist @ARKInvest. ARK Venture IC. Welcome to the Great Acceleration. ark-invest.com/terms/

ID: 760345658

Link: http://ark-invest.com · Joined: 15-08-2012 23:04:16

10K Tweets

191K Followers

562 Following

Brett Winton (@wintonark)

On the false promise of quantum simulation: the power of a quantum compute simulation system, given a number of qubits, distills to [qubits] = [discrete elements being modeled]^(1/[interaction variables between each element]). The major promising commercial buckets for quantum …
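
A minimal sketch that simply evaluates the relation as written above; the function name and the example element counts and interaction-variable counts are illustrative assumptions, not figures from the tweet.

    # Relation as stated in the tweet: qubits = elements ** (1 / interaction_variables)
    def qubits_required(elements: float, interaction_variables: int) -> float:
        return elements ** (1.0 / interaction_variables)

    # Illustrative inputs only (assumptions, not from the tweet)
    for elements in (1e6, 1e9, 1e12):
        for k in (2, 4, 8):
            q = qubits_required(elements, k)
            print(f"{elements:.0e} elements, {k} interaction variables -> {q:,.0f} qubits")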

Brett Winton (@wintonark)

Large language repositories

Meta is feeding LLaMa with 15 trillion tokens but has a proprietary language dataset that is >10x larger, which it is holding in reserve.

Reddit is getting $70m per year for its language data; xAI, via 𝕏, has access to a repository 25x that size.
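
Back-of-the-envelope arithmetic on the figures above. The token counts come straight from the tweet; the linear-pricing assumption used for the last line is mine, not the author's.

    # Figures from the tweet above
    llama_training_tokens = 15e12    # 15 trillion tokens fed to LLaMa
    meta_reserve_multiple = 10       # Meta's reserve is >10x the training set
    reddit_annual_fee_usd = 70e6     # Reddit's $70m/year language-data deal
    x_repository_multiple = 25       # X's repository is 25x Reddit's

    meta_reserve_tokens = llama_training_tokens * meta_reserve_multiple
    print(f"Meta reserve: >{meta_reserve_tokens / 1e12:.0f} trillion tokens")

    # Assumption (not stated in the tweet): data value scales linearly with size
    implied_x_annual_value = reddit_annual_fee_usd * x_repository_multiple
    print(f"Implied annual value of X's corpus under that assumption: ${implied_x_annual_value / 1e9:.2f}B")
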
Brett Winton (@wintonark)

Given our emerging understanding of microplastic infiltration, we should probably put an immediate stop to all plastic recycling. It is an effective subsidy to plastic manufacturing that results in more CO2 emitted per plastic molecule created, and likely puts plastic into more …

Brett Winton (@wintonark)

A well-understood bias in science incentive structures (null results don't merit publication) overcome in this instance. Was debating whether AI-assisted science is likely to lead to less scientific agility (conventional wisdom becomes even more entrenched) or more (the cost of …

Elon Musk (@elonmusk)

This weekend, the @xAI team brought our Colossus 100k H100 training cluster online. From start to finish, it was done in 122 days. Colossus is the most powerful AI training system in the world. Moreover, it will double in size to 200k (50k H200s) in a few months. Excellent …

Brett Winton (@wintonark)

Probably a 4-year build if relying on power/datacenter vendors, contractors, and consultants. xAI gets it done in 4 months. Given the dramatic performance improvement rate in AI, velocity/urgency is the key determinant of success.

Brett Winton (@wintonark)

Size matters

Facebook has ~10x the proprietary language data in its database as was used to train the LLaMa models

In images they have 20x more than that

Instagram and YouTube have 2x more than that in uploaded video

And yet Tesla's data capture-ability dwarfs all (at 20x more again)
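
A rough sketch chaining the multipliers above. The 15-trillion-token baseline is carried over from the earlier LLaMa tweet, and since the steps mix modalities (text, images, video), treat the result as an order-of-magnitude illustration rather than a real token count.

    # Baseline (from the earlier tweet): ~15 trillion tokens used to train LLaMa
    llama_training = 15e12

    # Each multiplier is relative to the line above it, as stated in the tweet
    facebook_language = 10 * llama_training      # ~10x the training set
    facebook_images = 20 * facebook_language     # 20x more than that
    ig_yt_video = 2 * facebook_images            # 2x more than that
    tesla_capture = 20 * ig_yt_video             # 20x more again

    for name, value in [("Facebook language", facebook_language),
                        ("Facebook images", facebook_images),
                        ("Instagram/YouTube video", ig_yt_video),
                        ("Tesla capture-ability", tesla_capture)]:
        print(f"{name}: ~{value / llama_training:,.0f}x the LLaMa training set")
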
Brett Winton (@wintonark)

Microsoft Copilot for Teams/Office:

is there some way to use it that makes it actually useful?

None of the integrations with Word/Excel/PowerPoint seem to do much of anything (other than waste the time I spend trying to query/task it)

Am I just a bad user?

Brett Winton (@wintonark)

Reason to be skeptical about Apple's ability to deliver a useful consumer experience with Apple Intelligence: Microsoft, with access to OpenAI's models all the way along and massive compute resources, hasn't cracked it. Nor has Google, with TPUs and a massive data abundance.

Brett Winton (@wintonark)

big data vs BIG data: GPT-4 class models are trained on 15 trillion tokens. I put the UK Biobank's whole-genome database of 500,000 individuals at 400 trillion tokens. Reasoning follows: 800 million tokens per genome, on a similar order of magnitude token vocabulary size to the …
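
The arithmetic behind the 400-trillion-token figure, using the numbers stated above; the ~3.2-billion-base-pair genome size in the final comment is added general context, not something the tweet gives.

    # Figures from the tweet
    individuals = 500_000          # UK Biobank whole-genome cohort
    tokens_per_genome = 800e6      # 800 million tokens per genome

    total_tokens = individuals * tokens_per_genome
    print(f"{total_tokens / 1e12:.0f} trillion tokens")  # -> 400 trillion

    # Added context: a human genome is ~3.2 billion base pairs,
    # so 800M tokens per genome works out to roughly 4 bases per token
    print(f"~{3.2e9 / tokens_per_genome:.0f} bases per token")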