Joe (Zhiyong) Xie (@joe_xie)'s Twitter Profile
Joe (Zhiyong) Xie

@joe_xie

AI Infra at Google Core ML/AI. Alum: @X, Twitter, @Amazon, @Facebook, @Microsoft, @UW, and Nanjing Univ. Love tech, food, and investing. Opinions are my own :)

ID: 104582392

Link: https://www.linkedin.com/in/zhiyongxie/ · Joined: 13-01-2010 19:36:02

3.3K Tweets

1.1K Followers

2.2K Following

Joe (Zhiyong) Xie (@joe_xie)

Our Runtime Engines Team (under the Google Core ML Frameworks org) builds highly optimized training and inference stacks to execute purpose-oriented AI/LLM workloads. I'm hiring a Senior ML Software Engineer google.com/about/careers/… in the Bay Area to build and ship together!

Alex Kantrowitz (@kantrowitz)

Full episode: Google DeepMind CEO Demis Hassabis on the path to AGI, AI creativity and deceptiveness, Google's new smart glasses plans, and building a virtual cell. Listen in full on Big Technology Podcast. Chapters 00:00 The Path To AGI 02:46 Current AI Capabilities and

Demis Hassabis (@demishassabis)

Amid the massive demand for the Gemini 2.5 and Veo 3 models, wanted to also give a big shout-out to our world-class infrastructure, chip, and SRE teams, who work tirelessly to keep our wonderful TPUs from melting, and without whose incredible work none of this would be possible.

Josh Woodward (@joshwoodward)

The wait is over. The Google Gemini App is now shipping Veo 3 *globally* for all Pro members! That means India, Indonesia, all of Europe, and more are starting to get access to create videos right now. As a member, you'll get 3 video generations per day, and that credit will replenish

Jacob Austin (@jacobaustin132)

Today we're putting out an update to the JAX TPU book, this time on GPUs. How do GPUs work, especially compared to TPUs? How are they networked? And how does this affect LLM training? 1/n
