Evgeny Kuzyakov (@ekuzyakov)'s Twitter Profile
Evgeny Kuzyakov

@ekuzyakov

Co-founder of @fast_near, founder of @NearSocial_, Ex-@proximityfi, Ex-@NearProtocol, Ex-@google, ex-@facebook

ID: 1706052276

Link: https://github.com/evgenykuzyakov · Joined: 28-08-2013 02:04:02

2.3K Tweets

21.1K Followers

153 Following

Evgeny Kuzyakov (@ekuzyakov)'s Twitter Profile Photo

What’s the rationale behind Lightwave making requests to the same source of funds as NDC?

The funds were supposed to be distributed by the community, represented by the NDC, and I remember that any attempt to spend them pre-V1, but after the election, was fiercely declined. And even

Evgeny Kuzyakov (@ekuzyakov)'s Twitter Profile Photo

In a recent release, Brave Software added 'Pretty-print' to display JSON with indentation. Always appreciated how Firefox displayed JSON.

Just wanted to say: 'Thank you!'

FastNear (@fast_near)'s Twitter Profile Photo

Correction. The link was for the old version of the NearX token. That token was hard-forked after the exploit.

The new deprecated token is: near.social/mob.near/widge…

If you see your account ID there, you'd better sell it or keep it as a memecoin.

FastNear (@fast_near)'s Twitter Profile Photo

The latest FASTNEAR API returns FT balances.

Try it yourself: api.fastnear.com/v1/account/v2.…

Latest balances for all 357 tokens on the Ref Finance contract in under 100ms.
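(Not part of the original tweet: a minimal sketch of how one might query such an FT-balance endpoint. The link in the tweet is truncated, so the exact path and the response fields used below are assumptions.)

```python
# Minimal sketch: fetching fungible-token balances from the FASTNEAR API.
# The endpoint path and response field names are assumptions, since the
# link in the tweet is truncated.
import json
import urllib.request

ACCOUNT_ID = "example.near"  # hypothetical placeholder account
URL = f"https://api.fastnear.com/v1/account/{ACCOUNT_ID}/ft"  # assumed path shape

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# Iterate over the reported token balances (schema assumed).
for token in data.get("tokens", []):
    print(token.get("contract_id"), token.get("balance"))
```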

Haseeb >|< (@hosseeb)'s Twitter Profile Photo

Don’t trust, verify: An Overview of Decentralized AI Inference

Say you want to run a large language model like Llama2-70B. A model this massive requires more than 140GB of memory, which means you can’t run the raw model on your home machine. What are your options? You might jump
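(My own back-of-the-envelope check on the 140GB figure, assuming fp16 weights at 2 bytes per parameter; this is not from the original thread.)

```python
# Rough sanity check of the ~140GB figure (assumes fp16/bf16 weights,
# i.e. 2 bytes per parameter; KV cache and activation overhead excluded).
params = 70e9        # Llama2-70B parameter count
bytes_per_param = 2  # fp16 precision
print(f"{params * bytes_per_param / 1e9:.0f} GB")  # -> 140 GB
```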
