Hyunsik Choi (@hyunsik_choi) 's Twitter Profile
Hyunsik Choi

@hyunsik_choi

Head of Platform SW at FuriosaAI (furiosa.ai), Member of Apache Software Foundation

ID: 96338584

Link: http://www.linkedin.com/in/hyunsikchoi | Joined: 12-12-2009 13:00:57

932 Tweets

448 Followers

615 Following

Cho Mar (@chomar85179973) 's Twitter Profile Photo

Such a Brave and Beautiful Soul. May you rest in peace my sister. We will win this fight for you. She said on Feb11- 'Not gonna say much, just one word, Thank you Daddy' before she went out to protest. #Mar3Coup #SaveMyanmar pic.x.com/RNFoVUSafu

박상민 / Sang-Min Park (@sm_park) 's Twitter Profile Photo

youtu.be/kCc8FmEb1nY The most useful thing I've done this week, by far, was watch this YouTube video. It shows how to build GPT by reproducing the Transformer paper directly, in a bit over 1,000 lines of PyTorch code.
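The core piece that video builds is causal self-attention. As a hedged illustration only (the video uses PyTorch; this is the same arithmetic written from scratch in plain Python, not the video's actual code), each token position attends over itself and earlier positions with softmax-normalized, scaled dot-product weights:

```python
# Toy sketch of causal scaled dot-product attention, the heart of a
# GPT block. Plain Python, no frameworks; illustrative only.
import math

def softmax(xs):
    m = max(xs)                       # subtract max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def causal_attention(q, k, v):
    """q, k, v: lists of d-dim vectors, one per token position.
    Position i may only attend to positions <= i (the causal mask)."""
    d = len(q[0])
    out = []
    for i in range(len(q)):
        # scaled dot products against keys at positions 0..i only
        scores = [sum(qi * kj for qi, kj in zip(q[i], k[j])) / math.sqrt(d)
                  for j in range(i + 1)]
        weights = softmax(scores)
        # output is the weight-averaged mix of the visible values
        out.append([sum(w * v[j][t] for j, w in enumerate(weights))
                    for t in range(d)])
    return out
```

With a single token, the output is exactly its value vector (the only attention weight is 1.0); with uniform scores, later positions average the values they can see.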

박상민 / Sang-Min Park (@sm_park) 's Twitter Profile Photo

The Yann LeCun podcast I just retweeted is excellent. He assesses GPT and explains clearly what its limitations are. He won the Turing Award for his deep learning work, so his opinion carries real weight. He also never hides his views and speaks bluntly, which makes him fun to listen to. 1/9

Guillaume Lample @ NeurIPS 2024 (@guillaumelample) 's Twitter Profile Photo

Mistral 7B is out. It outperforms Llama 2 13B on every benchmark we tried. It is also superior to LLaMA 1 34B in code, math, and reasoning, and is released under the Apache 2.0 licence. mistral.ai/news/announcin…

FuriosaAI (@furiosaai) 's Twitter Profile Photo

FuriosaAI's research paper "TCP: A Tensor Contraction Processor for AI Workloads" has been accepted for publication by the International Symposium on Computer Architecture (ISCA), the premier forum for new ideas in silicon design. furiosa.ai/download/Furio… (1/5)

EE Times | Electronic Engineering Times (@eetimes) 's Twitter Profile Photo

South Korean startup Furiosa unveils its tensor contraction processor concept, with initial testing showing throughput of 2,000-3,000 tokens/s. #HotChips eetimes.com/furiosa-target…

Hyunsik Choi (@hyunsik_choi) 's Twitter Profile Photo

FuriosaAI is going to present the TCP architecture and its performance at IEEE #hotchips 2024 today. We will also give a live demo of LLaMA 70B (INT8) at the conference. Here is a news article that ran as the EE Times headline today. eetimes.com/furiosa-target…

FuriosaAI (@furiosaai) 's Twitter Profile Photo

🚀 Today at Hot Chips, we’re unveiling RNGD - a data center AI accelerator for enterprise! furiosa.ai 💡150W TDP, a Tensor Contraction Processor, and 48GB HBM3 make it powerful, efficient, and programmable in a single product - a combination GPUs have long struggled to achieve.

Sally Ward-Foxton (@sallywf) 's Twitter Profile Photo

Stealthy Korean startup FuriosaAI will present its tensor contraction processor at Hot Chips later today. I got the scoop! Please enjoy this delightful interview with CEO June Paik: eetimes.com/furiosa-target…

FuriosaAI (@furiosaai) 's Twitter Profile Photo

Can MLLMs Perform Text-to-Image In-Context Learning? 📄 Read the paper here: arxiv.org/abs/2402.01293 💻 Get the code and novel dataset here: github.com/UW-Madison-Lee… Our engineers Wonjun Kang and Hyung Il Koo collaborated with researchers at UW–Madison to publish a paper on

FuriosaAI (@furiosaai) 's Twitter Profile Photo

It has been a busy week for research paper announcements at FuriosaAI 📚✍️📖💪.

We're proud to announce that our paper, "RNGD: A 5nm Tensor Contraction Processor for Power-Efficient Inference on Large Language Models", has been accepted as a Regular paper at #ISSCC 2025 - often

FuriosaAI (@furiosaai) 's Twitter Profile Photo

In case you missed it, here’s a look back at the RNGD unveil at the 🔥 Hot Chips Symposium in Palo Alto in August. 📽 lnkd.in/drq2-ThF Since then, a lot has happened, and we’re excited to share more RNGD updates soon. Sign up to learn the latest on benchmarking, the RNGD

FuriosaAI (@furiosaai) 's Twitter Profile Photo

🚀 The latest Furiosa SDK releases support important new features for AI #inference in #datacenters, including 32K context lengths, Tensor Parallelism, and more.

Our latest SDK releases bring key features, including:

✅ 32K context lengths support in Furiosa-LLM, optimized for

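Tensor Parallelism, mentioned above, splits a single layer's weights across devices so one large matrix multiply becomes several smaller ones. As a hedged sketch only (not Furiosa's implementation, and ignoring real communication collectives), here is column-parallel matmul in plain Python: each "device" holds a column shard of the weight matrix, computes its partial output, and the shards are concatenated to recover the full result:

```python
# Minimal column-parallel tensor parallelism sketch (hypothetical,
# illustrative only). Assumes the column count divides evenly by
# the number of shards.

def matmul(x, w):
    """x: (m,k) row-major lists, w: (k,n) -> (m,n)."""
    m, k, n = len(x), len(w), len(w[0])
    return [[sum(x[i][t] * w[t][j] for t in range(k)) for j in range(n)]
            for i in range(m)]

def split_columns(w, parts):
    """Split w's columns into `parts` contiguous shards."""
    n = len(w[0])
    step = n // parts
    return [[row[p * step:(p + 1) * step] for row in w] for p in range(parts)]

def column_parallel_matmul(x, w, parts=2):
    shards = split_columns(w, parts)
    partials = [matmul(x, s) for s in shards]   # one matmul per "device"
    # all-gather step: concatenate partial outputs along columns
    return [sum((p[i] for p in partials), []) for i in range(len(x))]

x = [[1.0, 2.0]]
w = [[1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0]]
assert column_parallel_matmul(x, w, parts=2) == matmul(x, w)
```

The design point is that each device only needs 1/`parts` of the weight memory, which is what lets a large model's layers fit across multiple accelerators.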
FuriosaAI (@furiosaai) 's Twitter Profile Photo

We have taken the first steps toward open sourcing our Cloud Native Toolkit. Our team has been working to simplify integrating and managing Furiosa NPUs within Kubernetes and the broader container ecosystem. Several key components are available on GitHub now: libfuriosa-kubernetes:
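In practice, "managing NPUs within Kubernetes" usually means a device plugin advertising the accelerator as an extended resource that pods request like CPU or memory. The snippet below is a hedged illustration only: the resource name `furiosa.ai/npu` and the container image are assumptions for the example, not taken from the toolkit itself.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: npu-inference              # hypothetical example pod
spec:
  containers:
    - name: llm-server
      image: example.com/llm-server:latest   # placeholder image
      resources:
        limits:
          furiosa.ai/npu: 1        # assumed extended-resource name
```

The scheduler then places the pod only on nodes where the device plugin has advertised a free NPU.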

TechCrunch (@techcrunch) 's Twitter Profile Photo

Instead of selling to Meta, AI chip startup FuriosaAI signed a huge customer | TechCrunch techcrunch.com/2025/07/21/ins…

FuriosaAI (@furiosaai) 's Twitter Profile Photo

We are excited to announce that LG AI Research has adopted our RNGD (pronounced “Renegade”) AI accelerator for inference computing with its EXAONE models.

RNGD achieves 2.25x better LLM inference performance per watt vs. GPUs, while also meeting LG’s demanding latency and

FuriosaAI (@furiosaai) 's Twitter Profile Photo

We're proud to announce that we've raised $125M in Series C bridge funding to scale sustainable AI compute and accelerate our roadmap.

This brings our total funding to $246 million and will help us meet growing demand from global enterprise customers for our flagship AI chip,
