Valentyn Faychuk (@faychuk)'s Twitter Profile
Valentyn Faychuk

@faychuk

TEEware #secure-cloud

ID: 1547889951936692225

https://faychuk.com · Joined 15-07-2022 10:25:29

5 Tweets

22 Followers

53 Following

Valentyn Faychuk (@faychuk)

did some prototyping with traditional and crypto AI providers, and my conclusion is that today the competition between web2 and web3 is exaggerated - the targeted markets barely overlap

for example, below are the different values that I've received from top representatives of
Valentyn Faychuk (@faychuk)

just had a brutal revelation

put $10,000 into $ETH 5 years ago,
now it is roughly $8,500

put the same into $NVDA 5 years ago,
now it would've become $140,000
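
A quick back-of-the-envelope check of the figures in this tweet; a minimal Python sketch that only uses the tweet's own rough endpoints ($10,000 turning into about $8,500 vs about $140,000 over 5 years), so the annualized rates are illustrative, not market data.

```python
# Implied total multiple and annualized (CAGR) return for the two holdings,
# computed from the rough dollar figures quoted in the tweet above.

def summarize(label, start, end, years):
    multiple = end / start
    cagr = multiple ** (1 / years) - 1
    print(f"{label}: {multiple:.2f}x total, {cagr:+.1%} per year")

summarize("ETH ", 10_000, 8_500, 5)    # ~0.85x total, roughly -3.2% per year
summarize("NVDA", 10_000, 140_000, 5)  # ~14x total, roughly +69% per year
```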
Valentyn Faychuk (@faychuk)

getting fascinated by provenance nowadays, c2pa actively uses zk circuits investigation.rollingstone.com/dj-photo-war-c…

Valentyn Faychuk (@faychuk)

been poking around stwo - 1400 open branches lol
anyone shipping with it yet? curious how close it is to prod for verifiable ai
github.com/starkware-libs
Valentyn Faychuk (@faychuk)

most params in big AI models are basically zero. why are we spending energy multiplying by them

stanford built Onyx, a chip that skips zeros entirely, 70x less energy and 8x faster than CPUs on average

read more here spectrum.ieee.org/sparse-ai
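
To make the zero-skipping idea concrete, here is a minimal Python sketch of a sparse row format and matrix-vector product that only multiplies nonzero weights. It is a toy model of the general concept the tweet points at, not a description of how the Onyx chip itself is built.

```python
# Toy illustration of zero-skipping: store only (column, value) pairs for the
# nonzero weights of each row, so multiplications scale with the number of
# nonzeros rather than with the full layer size.

def to_sparse_rows(weights):
    """Keep (column_index, value) pairs for the nonzero entries of each row."""
    return [[(j, w) for j, w in enumerate(row) if w != 0.0] for row in weights]

def sparse_matvec(sparse_rows, x):
    """Matrix-vector product that never touches zero weights."""
    return [sum((w * x[j] for j, w in row), 0.0) for row in sparse_rows]

# Example: a 3x4 weight matrix where most parameters are zero.
dense = [
    [0.0, 2.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [1.5, 0.0, 0.0, -3.0],
]
x = [1.0, 2.0, 3.0, 4.0]

sparse = to_sparse_rows(dense)
print(sparse_matvec(sparse, x))  # [4.0, 0.0, -10.5] using 3 multiplies instead of 12
```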