John (@johnrachwan)'s Twitter Profile

John

@johnrachwan

Co-Founder & CTO @PrunaAI

ID: 1159379361331630080

http://pruna.ai · Joined 08-08-2019 08:22:34

312 Tweets

279 Followers

85 Following

John (@johnrachwan):

Inspired by fofr's recent sinister robot prompting, I updated the replicate.com/replicate-prun… endpoint on Replicate to optimise prompts by translating them to Chinese to automatically get better results. This is generated in 15s at the price of an image model (2.5c)

> a

fofr (@fofrai):

Qwen Image with a Replicate LoRA runs in ~6s using the super fast Pruna-optimised model.

Pass your trained weights as a `lora_weights` input to replicate.com/qwen/qwen-image

Example:
replicate.com/p/kpqmcywygnrm…
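A minimal sketch of that call from the Replicate Python client. The `qwen/qwen-image` slug and the `lora_weights` input name come from the tweet above; the prompt text and the weights URL are placeholders, not values from the example run.

```python
import replicate  # requires the `replicate` package and REPLICATE_API_TOKEN in the environment

# Sketch: run the Pruna-optimised Qwen Image model with a trained LoRA.
# The prompt and the weights URL below are placeholders.
output = replicate.run(
    "qwen/qwen-image",
    input={
        "prompt": "a portrait photo of TOK in a neon-lit alley",
        "lora_weights": "https://replicate.delivery/your-trained-weights.tar",  # your trained LoRA weights
    },
)
print(output)
```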
fofr (@fofrai):

Qwen Image Edit. 3 seconds, $0.03.
replicate.com/qwen/qwen-imag…

<a href="/PrunaAI/">Pruna AI</a> do the impossible.

&gt; Make the text 3D and floating on a city street
Pruna AI (@prunaai):

🚀 Massive upgrade for Qwen-Image!

We've just dropped new speed, memory, and feature improvements on Replicate that are going to change your AI image generation workflow:

• Speed boost - Now generates in just 3 seconds!
• Memory
BRIA AI (@bria_ai_):

It’s not every day that you cut AI inference time by 50%! Our new blog post tells the story of how we did it in just 2 days, without losing quality, using torch compile, and with the help of our partners at Pruna AI.

Read the full story on our blog
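The tweet credits torch compile for the speedup. As a rough illustration only (not BRIA's actual pipeline), `torch.compile` wraps a model so its forward pass is traced and lowered to optimised kernels; the toy model below stands in for a real diffusion network.

```python
import torch
import torch.nn as nn

# Toy stand-in for a real image model; not BRIA's pipeline.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.GELU(),
    nn.Linear(4096, 1024),
).eval()

# torch.compile (PyTorch 2.x) traces the model and generates optimised kernels.
compiled = torch.compile(model)

with torch.no_grad():
    x = torch.randn(8, 1024)
    # The first call pays a one-time compilation cost; later calls reuse the
    # compiled graph and are typically faster than eager execution.
    _ = compiled(x)
    out = compiled(x)

print(out.shape)
```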