sane (@sane_codes)'s Twitter Profile
sane

@sane_codes

Dev and AI art | bento.me/sane-codes

ID: 1695907249552584704

Joined: 27-08-2023 21:13:14

193 Tweets

157 Followers

332 Following

Luma AI (@lumalabsai)'s Twitter Profile Photo

You node what time it is. Luma Photon and Ray models are now natively available in ComfyUI API nodes. Build your wildest workflows now.

Luma AI (@lumalabsai)'s Twitter Profile Photo

Reframe is here. Outpaint and resize any uploaded video, image, or #DreamMachine creation into every format you need. Customize and reposition freely on the canvas to make every version of your story fit perfectly. Goodbye cropping. Hello creative freedom.

Luma AI (@lumalabsai)'s Twitter Profile Photo

Introducing Modify Video. Reimagine any video. Shoot it in post with director-grade control over style, character, and setting. Restyle expressive performances, swap entire worlds, or redesign the frame to your vision. Shoot once. Shape infinitely.

Luma AI (@lumalabsai)'s Twitter Profile Photo

This is Ray3. The world’s first reasoning video model, and the first to generate studio-grade HDR. Now with an all-new Draft Mode for rapid iteration in creative workflows, and state of the art physics and consistency. Available now for free in Dream Machine.

Luma AI (@lumalabsai)'s Twitter Profile Photo

To scale research and deployment of multimodal AGI, today we are pleased to announce that Luma has raised a 900M Series-C and we are partnering with Humain to build a colossal 2GW compute supercluster – Project Halo.

Luma AI (@lumalabsai)'s Twitter Profile Photo

Stop guessing. Start directing. Ray3 Modify is now in Dream Machine. Edit and reimagine videos with all-new precise keyframe and character reference controls. Your vision, reimagined. Supercharge your production with rapid retouching, precise element swapping, and scene redesign.

Anastasia Fomina (@exienator)'s Twitter Profile Photo

I have a small music project of my own, and yesterday my new track came out. I invite you to have a listen.

Luma AI (@lumalabsai)'s Twitter Profile Photo

“SOUL CODE” - Ep. 1 In the year 2099, humans can upload into machines, but it’s illegal. Where minds are stored, traded, and stolen, one street-level enforcer searches for what was taken from him. A recovered memory shard pulls him deep into a city filled with deadly permanence.

Luma AI (@lumalabsai)'s Twitter Profile Photo

Introducing Luma Agents. Creative agents that make you prolific. You set the direction. They build with you, seeing what you see and helping teams explore further, iterate faster, and watch ideas multiply.

Luma AI (@lumalabsai)'s Twitter Profile Photo

Introducing Uni-1, Luma’s first unified understanding and generation model, our next step on the path towards unified general intelligence. lumalabs.ai/uni-1

Jiaming Song (@baaadas)'s Twitter Profile Photo

Excited to introduce Uni-1, our new *unified* multimodal model that does both understanding and generation: lumalabs.ai/uni-1 TLDR: I think Uni-1 Luma is > GPT Image 1.5 in many cases, and toe-to-toe with Nano Banana Pro/2. (showcase below)

amit (@gravicle)'s Twitter Profile Photo

We launched Luma Agents last week. ServicePlan — the largest independent agency in the world — has switched all their pitching to Luma Agents. A team of 3 at Mazda made a film spanning 40 years of the MX5. Adidas is generating campaigns before a stitch is sewn. Here's what's

William Shen (@shenbokui)'s Twitter Profile Photo

UNI-1 is intelligent, directable, cultured. Incredible range it can do. Incredibly proud of the world-class team building a world-class model. It’s a daunting task to go up against industry giants like Deepmind/OpenAI/Bytedance. More to come! API, technical report, model card…

William Shen (@shenbokui)'s Twitter Profile Photo

Most image models are good at one thing. Uni-1 has been good at everything we've thrown at it. Our team generated thousands of images leading up to Uni-1 launch. We embedded them all into a single map where visual similarity determines proximity. The result speaks for itself.

William Shen (@shenbokui)'s Twitter Profile Photo

🚀 UNI-1 debuts us as the best lab not named OpenAI / Google Gemini. Not bad for our first generation of unified image model! Interestingly, with the current update, GPT Image 2’s ELO score is now 110 points lower than before. not sure what happened… cc Kenji Hata Boyuan Chen

amit (@gravicle)'s Twitter Profile Photo

Uni-1.1 API is now live and creates a new pareto frontier - thinking images at the cost and efficiency of old school diffusion models. Uni-1.1 is a model designed for professional work, which continues to be difficult to capture in public benchmarks. Give it a try in Luma or your
