sahil bhatia (@sahilb17)'s Twitter Profile
sahil bhatia

@sahilb17

PhD student at UC Berkeley; Prev: Microsoft Research

ID: 121774429

Joined: 10-03-2010 14:57:10

22 Tweets

102 Followers

243 Following

Shah Rukh Khan (@iamsrk)'s Twitter Profile Photo

Sometimes we don’t land or arrive at the destination we want to. The important thing is we took off and had the Hope and Belief we can. Our current situation is never our final destination. That always comes in time and belief! Proud of #ISRO

Alvin Cheung (@alvinkcheung)'s Twitter Profile Photo

Any suggestions for a parser that parses C, Java, Python, and Ruby into a common intermediate representation (like LLVM IR, but higher level)? We want to implement a code analyzer for verified lifting but are too lazy to write a frontend for each language.

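One commonly suggested answer (not from the thread itself): tree-sitter ships maintained grammars for C, Java, Python, and Ruby behind a single parsing API, which can act as the shared frontend before lowering to a custom IR. A minimal sketch, assuming the py-tree-sitter 0.22+ bindings and the tree-sitter-python grammar package; the tree walk is illustrative.

```python
# Minimal sketch: tree-sitter exposes C, Java, Python, and Ruby grammars
# behind one parsing API, so a single tree walk can feed a common IR.
# Assumes py-tree-sitter >= 0.22 plus the tree-sitter-python grammar package
# (pip install tree-sitter tree-sitter-python).
from tree_sitter import Language, Parser
import tree_sitter_python  # swap in tree_sitter_c / _java / _ruby the same way

parser = Parser(Language(tree_sitter_python.language()))
tree = parser.parse(b"def add(a, b):\n    return a + b\n")

def walk(node, depth=0):
    # node.type and node.children look the same for every grammar,
    # which is the hook for lowering into a language-neutral IR.
    print("  " * depth + node.type)
    for child in node.children:
        walk(child, depth + 1)

walk(tree.root_node)
```
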
Shadaj Laddad (@shadajl)'s Twitter Profile Photo

Katara, our system for automatically synthesizing CRDT designs from sequential data types (arxiv.org/abs/2205.12425), is now open-source at github.com/hydro-project/…! Deep-dive blog posts into the implementation coming soon!

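For context on what is being synthesized: a CRDT pairs replica state with a commutative, associative, idempotent merge, so replicas converge no matter the order in which updates arrive. A minimal hand-written example (a grow-only counter, not Katara's synthesized output):

```python
# Minimal hand-written CRDT for context (a G-Counter), not Katara output:
# state is a per-replica count, merge is element-wise max. Because max is
# commutative, associative, and idempotent, replicas converge in any order.
class GCounter:
    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.counts: dict[str, int] = {}

    def increment(self) -> None:
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + 1

    def merge(self, other: "GCounter") -> None:
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

    def value(self) -> int:
        return sum(self.counts.values())

a, b = GCounter("a"), GCounter("b")
a.increment(); b.increment(); b.increment()
a.merge(b); b.merge(a)
assert a.value() == b.value() == 3  # both replicas converge
```
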
Alvin Cheung (@alvinkcheung)'s Twitter Profile Photo

📢 Please RT! 📢 Our group has a 1-2 yr postdoc opening starting this fall on using ML for code compilation and generation in a number of domains. Please send me your CV, a brief research statement, and names of two references if you are interested!

sahil bhatia (@sahilb17)'s Twitter Profile Photo

We’re excited to present our work on Verified Code Transpilation using LLMs at #NeurIPS2024 today! 🤖🔍 Come visit our poster in the East Exhibit Hall A-C #2904 from 11AM-2PM. neurips.cc/virtual/2024/p…

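The propose-then-verify loop behind verified transpilation, at toy scale: a model proposes a candidate translation and a solver checks it against the source's semantics, so only provably equivalent code is accepted. A minimal sketch using z3; the hard-coded candidate stands in for an LLM proposal, and the paper's actual verifier and languages differ:

```python
# Toy propose-then-verify loop: in a real pipeline the candidate comes
# from an LLM; here it is hard-coded. z3's prove() succeeds only if the
# two expressions agree on every input, i.e., the translation is verified.
from z3 import Int, prove

x = Int("x")
source_semantics = 2 * x + 2       # meaning of the original program
candidate_semantics = 2 * (x + 1)  # proposed translation

prove(source_semantics == candidate_semantics)  # prints "proved"
```
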
Simon Guo 🦝 (@simonguozirui)'s Twitter Profile Photo

LLMs for GPU kernel🌽generation have been getting Pop🍿ular since our preview last Dec; excited to announce 📢 our full paper 📃 for KernelBench! Turns out KernelBench is quite challenging 🧠 — frontier models outperform the PyTorch Eager baseline <20% of the time. More 🧵👇

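To make the evaluation concrete: a KernelBench-style task provides a reference PyTorch module, and a generated kernel counts only if it reproduces the reference numerically before any speedup over eager mode is measured. A rough sketch of that correctness gate; names and tolerance are illustrative, not the benchmark's actual harness:

```python
# Rough sketch of the task shape: a generated kernel must reproduce the
# reference PyTorch (eager) semantics before its speedup matters.
# Names and tolerance here are illustrative, not KernelBench's harness.
import torch

class RefModel(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1.0  # reference eager-mode semantics

def is_correct(candidate_fn, ref=RefModel(), n_trials=3):
    for _ in range(n_trials):
        x = torch.randn(1024, 1024)
        if not torch.allclose(candidate_fn(x), ref(x), atol=1e-4):
            return False
    return True

# A model-generated candidate would be validated (then timed) like this:
print(is_correct(lambda x: torch.clamp(x, min=0) + 1.0))  # True
```
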
Alvin Cheung (@alvinkcheung)'s Twitter Profile Photo

In this work, we show how to use off-the-shelf LLMs to generate code for accelerators. This is interesting because accelerators are often "low-resource," i.e., there isn't much code written for them on which to train custom models. Check out our paper for details!
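
The recipe in miniature: with too little accelerator code to train on, an off-the-shelf LLM is steered with a handful of in-context examples instead. A minimal sketch; the DSL ops and the build_prompt helper are invented placeholders, not the paper's system:

```python
# Minimal sketch of few-shot prompting for a low-resource accelerator DSL:
# instead of training a custom model, pack a few NumPy -> DSL pairs into
# the prompt of an off-the-shelf LLM. All DSL snippets here are invented.
FEW_SHOT_EXAMPLES = """\
numpy: out = a + b
dsl:   vadd(out, a, b)

numpy: out = a * b
dsl:   vmul(out, a, b)
"""

def build_prompt(numpy_line: str) -> str:
    # The completion after "dsl:" is what the LLM is asked to fill in.
    return f"{FEW_SHOT_EXAMPLES}\nnumpy: {numpy_line}\ndsl:"

print(build_prompt("out = a - b"))
```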