Pratyush Ranjan Tiwari (@pratyushrt) 's Twitter Profile
Pratyush Ranjan Tiwari

@pratyushrt

Building trust infra for an AI-enabled future @eternisai, prev. PhD @JohnsHopkins, 3X EF cryptography grantee, built @ketlxyz

ID: 1064795797114552322

Joined: 20-11-2018 08:21:15

994 Tweets

1.1K Followers

377 Following

Pratyush Ranjan Tiwari (@pratyushrt) 's Twitter Profile Photo

A preliminary version of this work will appear at the NeurIPS '25 Workshop on Efficient Reasoning. Some more experiments + insights to follow in the next arXiv update 👀

srikar (@srikarvaradaraj) 's Twitter Profile Photo

Join us at Eternis if you want to work on continual learning. Check out the workshop and paper below for the flavor of work we're interested in.

Pratyush Ranjan Tiwari (@pratyushrt) 's Twitter Profile Photo

Interestingly, we made many of the same observations in our "hard examples are the best for GRPO" paper from last month x.com/pratyushrt/sta… including the "learnable percentage of the training set" explanation for this phenomenon
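The "learnable" framing above can be made concrete with GRPO's group-relative advantage: prompts the policy always solves or always fails yield identical rewards across the rollout group, so every advantage is zero and they contribute no gradient. Only partially solved, i.e. hard-but-learnable, examples carry signal. A toy sketch with binary rewards (illustrative only, not the paper's code):

```python
def grpo_advantages(rewards):
    """Group-relative advantages for one prompt's rollouts: (r - mean) / std."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    if std == 0:
        # All-correct or all-wrong group: rewards are identical,
        # so there is no learning signal for this prompt.
        return [0.0] * len(rewards)
    return [(r - mean) / std for r in rewards]

# A prompt the policy always solves contributes nothing...
print(grpo_advantages([1, 1, 1, 1]))  # [0.0, 0.0, 0.0, 0.0]
# ...as does one it always fails...
print(grpo_advantages([0, 0, 0, 0]))  # [0.0, 0.0, 0.0, 0.0]
# ...while a prompt solved 2/4 times gives nonzero advantages.
print(grpo_advantages([1, 0, 1, 0]))  # [1.0, -1.0, 1.0, -1.0]
```

This is why the learnable fraction of the training set matters: as training progresses, prompts migrate into the all-correct bucket and stop contributing.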

Pratyush Ranjan Tiwari (@pratyushrt) 's Twitter Profile Photo

Working on a new open source effort on multi-agent coordination from the lens of social deduction games like Among Us. If you've been working on game-playing agents, RL, in-context learning, continual learning etc. comment/DM and I'll add you to our group chat.

Harrison Kinsley (@sentdex) 's Twitter Profile Photo

Reinforcement learning always sounds so fun and like it'll be such a great fit for your problem. Then 92 versions of a reward function later, you're reminded of the pain.

Justin Thaler (@succinctjt) 's Twitter Profile Photo

1/ New post: Jolt now proves RISC-V programs with 64-bit registers (RV64IMAC), at speeds exceeding those we previously reported for 32-bit. 1.5M cycles/sec on a 32-core CPU, 500k cycles/sec on a MacBook. Here’s why this matters 🧵

julian (@julianl093) 's Twitter Profile Photo

This is not a particularly good take and is indicative of a fundamental misunderstanding of what a top-tier technical college education is supposed to offer. Preparing to understand modern AI as a Harvard or Stanford undergrad is not about learning "prompt engineering", vibe

GLADIA Research Lab (@gladialab) 's Twitter Profile Photo

LLMs are injective and invertible. In our new paper, we show that different prompts always map to different embeddings, and this property can be used to recover input tokens from individual embeddings in latent space. (1/6)

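The injectivity claim above is what makes inversion possible in principle: if distinct inputs always produce distinct embeddings, an embedding can be matched back to its input by search. A minimal toy sketch of that inversion-by-search idea, using a made-up vocabulary and random embeddings (the paper concerns transformer latent states, not a simple lookup table):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "mat"]
# Random real-valued embeddings are distinct with probability 1,
# so the token -> embedding map is injective on this toy vocabulary.
emb = rng.standard_normal((len(vocab), 8))

def invert(vector, emb_table, vocab):
    """Recover the token whose embedding is nearest to `vector`."""
    dists = np.linalg.norm(emb_table - vector, axis=1)
    return vocab[int(np.argmin(dists))]

# Exact inversion: each embedding maps back to its own token.
recovered = [invert(emb[i], emb, vocab) for i in range(len(vocab))]
print(recovered)  # ['the', 'cat', 'sat', 'mat']
```

Injectivity is the load-bearing property here: if two tokens shared an embedding, the nearest-neighbor search could not distinguish them.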
Pratyush Ranjan Tiwari (@pratyushrt) 's Twitter Profile Photo

With hardware co-location, TEE breaks are a dime a dozen, but this one takes it to the next level: it lets you generate a fake attestation report using keys extracted from production hardware with a ~$1k setup. Here's a breakdown; it affects secure LLM inference too: