steven (@tu7uruu)'s Twitter Profile
steven

@tu7uruu

hearing voices and whispering to neural networks @Huggingface | posting about speech and audio models

ID: 2350034150

Link: https://huggingface.co/Steveeeeeeen · Joined: 18-02-2014 12:21:39

330 Tweets

767 Followers

643 Following

Eustache Le Bihan (@eustachelb)'s Twitter Profile Photo

Cool release by Liquid AI: LFM2-Audio-1.5B

It’s a pretty cool omni-architecture that enables prediction of both text and audio tokens, meaning it can handle multi-turn S2S, ASR, and TTS (with voice description) within a single model.

Great to see, once again this year, a model
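For intuition only, here is a tiny, self-contained sketch of what "one decoder predicting both text and audio tokens" can look like: a single output space split into a text range and an audio-codec range, with one greedy loop deciding which kind of token came out. This is not Liquid AI's code or API, and the vocabulary sizes and stub decoder are made up.

```python
import torch

# Toy illustration (not LFM2-Audio's actual implementation): one autoregressive
# decoder whose output space covers both text tokens and audio-codec tokens,
# so the same loop can emit a transcript (ASR), speech (TTS), or both (S2S).
TEXT_VOCAB = 32_000          # assumed sizes, purely illustrative
AUDIO_VOCAB = 2_048
VOCAB = TEXT_VOCAB + AUDIO_VOCAB

decoder = torch.nn.Linear(64, VOCAB)   # stand-in for the real transformer

def step(hidden: torch.Tensor) -> tuple[int, str]:
    """One greedy decoding step over the joint text+audio vocabulary."""
    logits = decoder(hidden)
    token = int(logits.argmax())
    kind = "text" if token < TEXT_VOCAB else "audio"
    return token, kind

hidden = torch.randn(64)
for _ in range(5):
    token, kind = step(hidden)
    print(kind, token)
```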
steven (@tu7uruu)'s Twitter Profile Photo

Just dropped on HF: Treble10, our collab with Treble Technologies, a full-band hybrid dataset for realistic speech & acoustics research:

• 3000+ RIRs across 10 complex rooms
• Mono, 8th-order Ambisonics & 6-mic array
• LibriSpeech speech via hybrid simulations
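A minimal loading sketch, assuming Treble10 is published as a standard Hugging Face Datasets repo; the repo id, split, and column schema below are placeholders, so check the actual dataset card on the Hub.

```python
from datasets import load_dataset

# Placeholder repo id; look up the real Treble10 path and configs
# (e.g. mono RIRs vs. Ambisonics vs. 6-mic array) on the Hub.
ds = load_dataset("treble-technologies/Treble10", split="train", streaming=True)

# RIR/speech datasets usually expose an audio column plus room and
# receiver metadata; the exact schema may differ here.
example = next(iter(ds))
print(example.keys())
```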
Vaibhav (VB) Srivastav (@reach_vb)'s Twitter Profile Photo

Any git power users in my mutuals/timeline? We have a new and faster git experience coming up on Hugging Face and we'd love to get feedback from you! Comment or DM and I'll hook you up!

Andi Marafioti (@andimarafioti)'s Twitter Profile Photo

You can now train SOTA models without any storage!🌩️ We completely revamped the Hub’s backend to enable streaming at scale. We streamed TBs of data to 100s of H100s to train SOTA VLMs and saw serious speed-ups. But how?
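The thread does not spell out the backend changes, but on the client side, streaming from the Hub looks roughly like this: a sketch using the existing `datasets` streaming API, with an arbitrary large public dataset standing in for the VLM training data.

```python
from datasets import load_dataset

# streaming=True iterates over shards directly from the Hub over HTTP,
# so nothing has to be downloaded to local disk up front.
ds = load_dataset("HuggingFaceFW/fineweb", split="train", streaming=True)

# Shuffle with a bounded in-memory buffer and feed a training loop;
# in multi-node training each rank would read its own subset of shards.
for i, sample in enumerate(ds.shuffle(buffer_size=10_000, seed=0)):
    print(sample["text"][:80])
    if i == 4:
        break
```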