AK (@_akhaliq) 's Twitter Profile
AK

@_akhaliq

AI research paper tweets, ML @Gradio (acq. by @HuggingFace 🤗)

dm for promo

follow on Hugging Face: huggingface.co/akhaliq

ID: 2465283662

Link: https://huggingface.co/akhaliq · Joined: 27-04-2014 00:20:12

35K Tweets

335K Followers

2K Following

Sylvain Filoni (@fffiloni) 's Twitter Profile Photo

I know some of you were waiting for it, now wait no more! SVD Keyframe Interpolation @gradio demo is available on Hugging Face! Space link: huggingface.co/spaces/fffilon…

Niels Rogge (@nielsrogge) 's Twitter Profile Photo


New model alert! 🔥 LLaVa-OneVision is now in the Transformers library

A powerful series (0.5B/7B/72B) for single-image, multi-image, and video scenarios. Successor of LLaVa-NeXT.

SOTA open model on Video-MME: video-mme.github.io/home_page.html…

Definitely worth a look alongside Qwen2-VL

1/2
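
The announcement above maps to a short Transformers sketch. This is a hedged example, not the authors' code: it assumes Transformers >= 4.45 and the 0.5B checkpoint id `llava-hf/llava-onevision-qwen2-0.5b-ov-hf` (verify on the Hub before use); the heavy download sits inside `run_demo()`, which is deliberately not called here.

```python
# Hedged sketch of trying LLaVA-OneVision through the Transformers API.
# Assumptions: Transformers >= 4.45 and the 0.5B checkpoint id
# "llava-hf/llava-onevision-qwen2-0.5b-ov-hf" (verify on the Hub).

def build_conversation(question: str) -> list:
    """Chat-template conversation with one image placeholder plus a question."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": question},
            ],
        }
    ]

def run_demo():
    # Heavy imports and downloads are kept inside this function,
    # which is deliberately not called at import time.
    import torch
    from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration

    model_id = "llava-hf/llava-onevision-qwen2-0.5b-ov-hf"
    processor = AutoProcessor.from_pretrained(model_id)
    model = LlavaOnevisionForConditionalGeneration.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )
    prompt = processor.apply_chat_template(
        build_conversation("What is shown in this image?"),
        add_generation_prompt=True,
    )
    # inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
    # out = model.generate(**inputs, max_new_tokens=64)
    # print(processor.decode(out[0], skip_special_tokens=True))
```
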
cocktail peanut (@cocktailpeanut) 's Twitter Profile Photo

Yes. So many people seem to think writing a pinokio launcher is some super technical thing only I can do, but the whole point of pinokio is that it makes writing launchers super easy. If you can follow a project readme to install and run some project, you can write a launcher

Matt Shumer (@mattshumer_) 's Twitter Profile Photo

The weights of our 70B model are available today on Hugging Face here: huggingface.co/mattshumer/Ref… Hyperbolic API available later today. Next week, we will release the weights of Reflection-405B, along with a short report going into more detail on our process and findings.

Zachary Nado (@zacharynado) 's Twitter Profile Photo


Gemini Flash is now tied with gpt-4o for #2 on the lmsys *vision* leaderboard!

combine that with a 1M context length and you can do some seriously cool multimodal work for super cheap ⚡️⚡️⚡️

chansung (@algo_diver) 's Twitter Profile Photo

It is so much fun to play with Gradio. If you know JS/CSS, there is no limit to what you can do with it. Here, I am trying to create a writing-tool application with LLM-powered analysis. At the end of dev, it will be shared as a Hugging Face Space.
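
For readers curious what such an app looks like, here is a minimal Gradio sketch. It assumes only that custom CSS is injected via `gr.Blocks(css=...)`; the `analyze` word counter is a hypothetical stand-in for the LLM-powered analysis the tweet describes.

```python
# Hedged sketch of a Gradio writing-tool app with custom CSS, in the spirit
# of the tweet. analyze() is a hypothetical placeholder for the LLM-powered
# analysis; the point here is styling via gr.Blocks(css=...).

CUSTOM_CSS = """
#editor textarea { font-family: Georgia, serif; font-size: 18px; }
"""

def analyze(text: str) -> str:
    """Placeholder analysis; a real app would call an LLM here."""
    return f"{len(text.split())} words"

def build_app():
    import gradio as gr  # imported lazily so the sketch reads without gradio installed

    with gr.Blocks(css=CUSTOM_CSS) as demo:
        inp = gr.Textbox(label="Draft", elem_id="editor", lines=10)
        out = gr.Markdown()
        inp.change(analyze, inputs=inp, outputs=out)
    return demo

# build_app().launch()  # uncomment to serve locally
```
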

Adina Yakup (@adeenay8) 's Twitter Profile Photo

It's only 9 AM and the community has already submitted 8 new papers on hf.co/papers Hugging Face 🚀🔥 Time to dive into some cool AI research 💡

Joseph Pollack 🎗️ (@josephpollack) 's Twitter Profile Photo

🙋🏻‍♂️ hey there folks, just released a coding model under 10B parameters with a 125K context window, achieving (very high!) SOTA scores on evals. you can try it out for yourself on Hugging Face 👇🏻📷 models: huggingface.co/collections/01… Gradio demo: huggingface.co/spaces/Tonic/Y…

clem 🤗 (@clementdelangue) 's Twitter Profile Photo

Reflection-Llama-3.1-70B by Matt Shumer, Sahil Chaudhary, and Glaive AI is #1 trending on HF.

I've said it and will say it again: you don't need to be big tech to fine-tune, optimize, and run your own models for your specific constraints, and you will benefit massively from it.

cocktail peanut (@cocktailpeanut) 's Twitter Profile Photo

Fluxgym: Train FLUX LoRAs LOCALLY with LOW VRAM

FLUX LoRAs are super exciting, but there has not been an easy way to train them LOCALLY on LOW-VRAM machines. Fluxgym is a Gradio app for training your own Flux LoRAs on your own 12GB, 16GB, 20GB+ VRAM computer for free.

apolinario 🌐 (@multimodalart) 's Twitter Profile Photo

Video-to-video is now available in the official CogVideoX-5B Space 🔥 Try it out 🎥 ➡️ 🎥 huggingface.co/spaces/THUDM/C…

cocktail peanut (@cocktailpeanut) 's Twitter Profile Photo


Great news--looks like I was wasting 4GB VRAM with the previous code; it wasn't clearing up memory after running Florence.

Just pushed an update that saves 4GB.

This means, theoretically you can run Fluxgym on:

- 16G VRAM
- 12G VRAM
- 8GB VRAM (!!!)

But will need to verify.

Sylvain Filoni (@fffiloni) 's Twitter Profile Photo

I've been working behind the scenes to make this runnable on TPU with Hugging Face, but I'm sorry, I was overly ambitious. We'll have to wait for the optimization steps to make it work with reasonable resources. 🤗

cocktail peanut (@cocktailpeanut) 's Twitter Profile Photo


At the moment, Fluxgym provides Flux-dev based LoRA training only.

Because you can actually take the Flux-dev LoRA and use it with Flux-schnell for 4 step inference.

But maybe I'm missing something. Anyone know if there are any benefits to training on top of Flux-schnell?

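
The dev-LoRA-on-schnell workflow described above can be sketched with diffusers. This is a hedged example, not Fluxgym's code: the LoRA filename is hypothetical, and the model download sits inside `run_demo()`, which is not called here.

```python
# Hedged sketch of the workflow described above: load FLUX.1-schnell, apply a
# LoRA trained on Flux-dev, and sample in 4 steps. The LoRA filename is
# hypothetical; run_demo() is not called because it downloads multi-GB weights.

def schnell_call_kwargs(prompt: str) -> dict:
    """Inference settings suited to FLUX.1-schnell."""
    return {
        "prompt": prompt,
        "num_inference_steps": 4,  # schnell is distilled for ~4 steps
        "guidance_scale": 0.0,     # schnell does not use guidance
    }

def run_demo():
    import torch
    from diffusers import FluxPipeline  # heavy import kept out of module scope

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
    )
    pipe.load_lora_weights("my_flux_dev_lora.safetensors")  # hypothetical file
    image = pipe(**schnell_call_kwargs("a portrait in my LoRA style")).images[0]
    image.save("out.png")
```
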
gabriel (@gabrielchua_) 's Twitter Profile Photo


updates:

1. completely forgot about GitHub pages, so now there's a static site

2. upgraded from Google AI Studio Gemini Flash to Gemini Pro

3. cleaned up the repo

#buildinpublic