FuriosaAI (@furiosaai)'s Twitter Profile
FuriosaAI

@furiosaai

FuriosaAI designs and develops data center accelerators for the most advanced AI models and applications.

ID: 1711782675147522048

Link: https://www.furiosa.ai/
Joined: 10-10-2023 16:36:34

48 Tweets

288 Followers

12 Following

FuriosaAI (@furiosaai):

Thank you to everyone in the AI community we met in NYC this summer and those who came to our happy hour at New York University's Global AI Frontier Lab last week. It was a pleasure to show our AI accelerator, RNGD (pronounced “renegade”), to so many engineers and researchers working…
FuriosaAI (@furiosaai):

At OCP APAC last week and at OCP Korea this week, we presented “From Silicon to AI Serving: Optimizing Inference and Engineering What’s Next”, which covered everything from designing and manufacturing silicon to applying it to real-world AI services.

Our Head of Product, Donggun…
FuriosaAI (@furiosaai):

Last month, we presented four papers at ICML 2025 in Vancouver and two papers at ACL 2025 in Vienna.

These six papers dig into ways to make advanced AI systems more efficient, more capable, and more flexible.

We’re grateful to have collaborated on these projects with the…
FuriosaAI (@furiosaai):

🔥 From our LG AI Research news to our $125 million funding raise, and from new executive hires to presenting six papers at ICML and ACL, it’s been a busy few months.

We also managed to attend SuperAI in Singapore and RAISE Summit in Paris. Plus, we hosted our inaugural happy…
FuriosaAI (@furiosaai):

The annual Hot Chips conference this week reminded us to share our article from the recent special issue of IEEE Micro, which highlights our Hot Chips 2024 presentation as one of the best at the event.

👀 Read the full “FuriosaAI RNGD: A Tensor Contraction Processor for…
FuriosaAI (@furiosaai):

Our engineers tackle groundbreaking challenges head-on. This month, we feature SeungHo Song, a key engineer on our compiler team, and ask him about his experience.

1️⃣ What was your biggest motivation for joining Furiosa?

The opportunity to work on AI development using…
FuriosaAI (@furiosaai):

🛬 Next week, we’re at #AIInfraSummit.

👀 Watch our SVP of Product and Business, Alex Liu, present on the Enterprise AI stage at 4:10 PM on September 9.

🤝 Connect with us in the app, stop by Booth 728 from September 9 to September 11, and book a meeting in advance:
FuriosaAI (@furiosaai):

Day 1 at #AIInfraSummit was energizing. The conversations at our booth make it clear: the future of AI depends not just on smarter models, but on more efficient compute.

That’s why we were excited to have Alex Liu, our SVP of Product and Business, onstage sharing how we are…
FuriosaAI (@furiosaai):

Today we partnered with OpenAI for the grand opening of its new Seoul office, where we showcased its new open-weight gpt-oss 120B model running live on RNGD, our flagship AI accelerator.

We demonstrated a real-time chatbot efficiently running the model on just two RNGD cards,…
FuriosaAI (@furiosaai):

Last week, we demoed OpenAI’s open-weight gpt-oss 120B model running live on RNGD, our flagship AI accelerator. Here’s a recording of the demo in action, showing that cutting-edge models can be deployed well within the existing power budgets of typical data centers:

FuriosaAI (@furiosaai):

Introducing Furiosa NXT RNGD Server 🚀

Engineered for efficient AI inference, the NXT RNGD Server hosts up to 8 RNGD accelerators, delivering:
⚡ 4 PFLOPS (FP8) compute
📦 384GB HBM + 12TB/s bandwidth
🔋 Just 3kW power draw (compared to 10.2kW for H100 SXM servers)

The result:…
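As a quick sanity check on the efficiency figures quoted above, here is a minimal Python sketch that derives compute-per-watt purely from the numbers in the tweet (4 PFLOPS FP8, 3 kW server draw, and the quoted 10.2 kW H100 SXM server power). It assumes nothing beyond those numbers and compares power only, not performance.

```python
# Back-of-the-envelope efficiency figures from the NXT RNGD Server tweet above.
# Only numbers stated in the tweet are used; nothing else is assumed.

NXT_RNGD_FP8_PFLOPS = 4.0        # 4 PFLOPS (FP8) for a server with 8 RNGD cards
NXT_RNGD_POWER_KW = 3.0          # stated server power draw
H100_SXM_SERVER_POWER_KW = 10.2  # comparison power figure quoted in the tweet

# Compute density per kilowatt for the NXT RNGD Server.
pflops_per_kw = NXT_RNGD_FP8_PFLOPS / NXT_RNGD_POWER_KW
print(f"NXT RNGD Server: {pflops_per_kw:.2f} PFLOPS (FP8) per kW")

# Power ratio vs. the quoted H100 SXM server figure (power only, not performance).
power_ratio = H100_SXM_SERVER_POWER_KW / NXT_RNGD_POWER_KW
print(f"Quoted H100 SXM server draws {power_ratio:.1f}x the power of the NXT RNGD Server")
```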
FuriosaAI (@furiosaai):

🛬 We’re headed to #SDEC2025 in Kuala Lumpur next week to explore the future of AI and semiconductors.

👀 Don’t miss our co-founder and CEO, June Paik, onstage October 10 as he shares how Furiosa is building next-gen AI chips for a more sustainable future.

🤝 Visit us at KLCC…
FuriosaAI (@furiosaai):

We’re pleased to announce a new partnership between FuriosaAI and ByteBridge to advance next-generation AI infrastructure across the APAC region.

"Furiosa is committed to delivering cutting-edge processors that power the next wave of AI workloads. By working with ByteBridge, we…
FuriosaAI (@furiosaai):

It’s been an exciting few days at #SDEC2025 ⚡

Thank you to everyone who visited our booth and joined our keynote and panel conversation on Furiosa’s journey to building next-gen AI chips.

Drop by Booth 6307 today and tomorrow as the conference continues to meet the team and…
FuriosaAI (@furiosaai):

Missed our latest SDK 2025.3.0 release? We’re making #furiousprogress unlocking new levels of performance and efficiency for large-scale models and agentic AI on RNGD.

Performance gains vs. 2025.2.0:
✅ Llama 3.1 8B: Up to 4.5% average throughput boost and up to 55% average…
Confidential Computing Consortium (@confidentialc2):

🎉 We're pleased to welcome FuriosaAI as CCC's startup member!

FuriosaAI hopes to contribute its expertise in hardware-accelerated inference while learning from the community’s efforts to standardize & advance confidential computing practices.

Blog: hubs.la/Q03NCPyM0
FuriosaAI (@furiosaai):

The 2025 OCP Global Summit is officially underway 🎉

Join FuriosaAI, hosted·ai, and BlueSky Compute for a happy hour on Wednesday, October 15. Connect with peers across the AI infrastructure, cloud, and systems communities over drinks and light bites. Meet the team and exchange…
FuriosaAI (@furiosaai):

Behind the scenes: Furiosa engineer Sanguk Park and CTO Hanjoon Kim share how our team served gpt-oss-120B at 5.8 ms TPOT using just two RNGD cards while maintaining exceptional power efficiency (under 180 W per card) for OpenAI's South Korean office grand opening.

Read the…
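For context on the latency figure above: a 5.8 ms time per output token (TPOT) corresponds to roughly 170 tokens per second for a single generation stream. A minimal arithmetic sketch, using only the numbers quoted in the tweet (5.8 ms TPOT, two cards, under 180 W per card):

```python
# Decode-speed arithmetic from the figures quoted above (5.8 ms TPOT, two cards,
# under 180 W per card). Purely illustrative; no other numbers are assumed.

tpot_ms = 5.8            # time per output token for gpt-oss-120B on two RNGD cards
power_per_card_w = 180   # stated upper bound per card
num_cards = 2

tokens_per_second = 1000.0 / tpot_ms           # ~172 tokens/s for one generation stream
total_power_w = power_per_card_w * num_cards   # <= 360 W across both accelerator cards

print(f"Single-stream decode rate: {tokens_per_second:.0f} tokens/s")
print(f"Accelerator power budget: under {total_power_w} W total")
```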
FuriosaAI (@furiosaai):

📌 We’re at #PyTorchCon this week!

As a global company, we are excited to see that 61 countries contributed to PyTorch this year, not to mention all the vLLM, DeepSpeed, and Ray updates.

🤝 Connect with us at Booth S13 on October 22 and October 23 to discuss your AI…