Alex Gorin (@anklovee)'s Twitter Profile
Alex Gorin

@anklovee

Digital artist. Proud member of CLAN.

foundation.app/anklove

rarible.com/anklovee

ID: 172696954

Instagram: https://www.instagram.com/anklovee/ · Joined: 30-07-2010 10:44:04

1.1K Tweets

713 Followers

368 Following

rob - comfyui (@hellorob)'s Twitter Profile Photo

The quality, cost, and control you can achieve for upscaling + fixing plastic AI skin with open source models still amazes me...

Models used:
→ Z-image-turbo for image gen (~3s)
→ SDXL + LoRA for skin texture (~15s)
→ SeedVR2 for upscaling (~40s)
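
For anyone wiring up a similar flow, here is a minimal sketch of a three-stage pipeline runner. The stage functions (`generate`, `fix_skin`, `upscale`) are placeholder stubs standing in for the real model calls - in practice each step would go through ComfyUI or a diffusers pipeline, not these toy functions.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    name: str
    model: str
    approx_seconds: float            # rough per-stage latency from the tweet
    run: Callable[[str], str]        # placeholder: image handle in, handle out

# Placeholder stage functions - stand-ins for the actual model invocations.
def generate(image: str) -> str: return image + "->gen"
def fix_skin(image: str) -> str: return image + "->skin"
def upscale(image: str) -> str:  return image + "->up"

PIPELINE: List[Stage] = [
    Stage("image gen",    "Z-image-turbo", 3.0,  generate),
    Stage("skin texture", "SDXL + LoRA",  15.0,  fix_skin),
    Stage("upscale",      "SeedVR2",      40.0,  upscale),
]

def run_pipeline(seed: str) -> str:
    """Chain the stages in order, passing each stage's output to the next."""
    image = seed
    for stage in PIPELINE:
        image = stage.run(image)
    return image

total = sum(s.approx_seconds for s in PIPELINE)   # ~58s end to end
```

The chaining is the whole trick: each stage only needs to agree on the image hand-off format with its neighbors, so individual models can be swapped out independently.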

Oliver Prompts (@oliviscusai)'s Twitter Profile Photo

"Local Video Editing" is officially dead 🤯

VideoSOS can run 100+ AI models directly in your browser without cloud processing.

It uses a massive stack including Veo 3.1, FLUX, Gemini 2.5 Flash, and Imagen 4, running locally on your hardware.

100% Open Source.

Mackenzie Mathis, PhD (@trackingactions)'s Twitter Profile Photo

🚨 ✨ New 3D pose estimation method from M- Lab of Adaptive Intelligence @EPFL!
#FMPose3D allows for monocular (i.e. single-camera) 2D ➡️ 3D 🔥
Led by Ti & w/xiaohang
#FMPose3D is SOTA on human & animal 3D benchmarks, & will be integrated into DeepLabCut 🦄 ⬇️👀
📝 arxiv.org/abs/2602.05755

Dev Ed (@developedbyed)'s Twitter Profile Photo

Flux 2 Klein (4B params) just dropped, generating at 2 steps with much higher FPS! I also added a couple of LoRAs I'll mess around with. Such a good diffusion model!

Wildminder (@wildmindai)'s Twitter Profile Photo

Pretty cool unified AI VFX production pipeline.
- Cinema Prompt Engineering with ~110 film and animation presets; 
- an infinite Storyboard Canvas + distributed rendering across multiple ComfyUI nodes; 
- 13+ LLM providers.
github.com/NickPittas/Dir…

Martin Maly (@mountain_mal)'s Twitter Profile Photo

I built an app that converts any space into a digital clone in minutes. As the founder of Teleport - the only iPhone app that can capture high-quality 360° panoramas - I already had the perfect input when World Labs released their 3D reconstruction API. 📍 First test - a

Jerome | Insane UEFN (@insaneuefn)'s Twitter Profile Photo

This is crazy! With LTX 2 in ComfyUI I was able to keep my original camera movement + it did a full lip-sync on the singing haha. All running locally. I made the original video back in 2021 when MetaHumans first came out and we were working in Unreal Engine 4. This music video

Wildminder (@wildmindai)'s Twitter Profile Photo

ComfyUI Action Director - an interactive 3D viewport for ControlNet:
- loads FBX/GLB;
- batch-renders OpenPose, Depth, Canny, and Normal maps;
- manual near/far depth control and infinite UI scaling.
github.com/yedp123/ComfyU…

Hugging Models (@huggingmodels)'s Twitter Profile Photo

NVIDIA just dropped PersonaPlex-7B 🤯 A full-duplex voice model that listens and talks at the same time. No pauses. No turn-taking. Real conversation. 100% open source. Free. Voice AI just leveled up. huggingface.co/nvidia/persona…

80 LEVEL (@80level)'s Twitter Profile Photo

Pooya Moradi M. presented a demo made with Meshcapade, an AI-powered, markerless motion capture software that was recently acquired by Epic Games: 80.lv/articles/ai-mo…

Gleb Alexandrov (@gleb_alexandrov)'s Twitter Profile Photo

While AI-based tools like Sparc3D or Hunyuan 3D can generate genuinely impressive results from a single photo, they often rely on credits, paywalls, and not-so-transparent processing of data. In this tutorial we strip it down to basics: a smartphone, 100% free software,

Wildminder (@wildmindai)'s Twitter Profile Photo

FlowRVS - segmentation as a continuous deformation, mapping video latents directly to masks via an ODE. Built on Wan’s T2V.
- complex semantic understanding with temporal consistency;
- no flickering.
github.com/xmz111/FlowRVS
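
The core idea - transporting a latent onto a mask by integrating a velocity field - can be sketched with plain fixed-step Euler. The velocity field below is a toy closed-form stand-in for FlowRVS's trained network (an assumption for illustration), chosen so the straight-line flow lands exactly on the target at t = 1.

```python
import numpy as np

def integrate_to_mask(latent, velocity_fn, steps=100):
    """Integrate dz/dt = v(z, t) from t=0 (video latent) to t=1 (mask)
    with fixed-step Euler - the generic flow/ODE recipe."""
    z = np.asarray(latent, dtype=float).copy()
    dt = 1.0 / steps
    for k in range(steps):
        t = k * dt                    # t stays strictly below 1
        z = z + dt * velocity_fn(z, t)
    return z

# Toy stand-in for the learned network: a straight-line flow whose exact
# solution is z(t) = (1 - t) * z0 + t * target, so it reaches the "mask"
# exactly at t = 1.
target_mask = np.array([1.0, 0.0, 1.0])

def toy_velocity(z, t):
    return (target_mask - z) / (1.0 - t)

z0 = np.zeros(3)                      # pretend video latent
mask = integrate_to_mask(z0, toy_velocity)
```

Along this toy trajectory the velocity is constant, so Euler integration recovers the target exactly; with a learned field, more steps (or a higher-order solver) trade compute for accuracy.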

Alibaba Tongyi_Lab (@labtongyi96898)'s Twitter Profile Photo

We are impressed by this new Z-Image-Turbo LoRA from the community!
By utilizing Flow-DPO, it effectively eliminates "washed-out" artifacts and brings cinematic, physically accurate lighting to our ultra-fast distilled model.
🔹 The Magic: Stunning photorealism in just 8

Cyanpuppets (@cyanpuppets)'s Twitter Profile Photo

A 1-billion-parameter AI real-time motion model that connects to a 1080P camera or uploaded videos, supports UE/Unity/Blender, and requires 8GB of VRAM for real-time processing.

Alex Patrascu (@maxescu)'s Twitter Profile Photo

Many tried, most failed. But this is the first skin enhancer I've used that actually makes characters look real. Meet Vellum from OpenArt. It's now a staple in my workflow. I won't start a project without it:

Umar Iqbal (@umariqb)'s Twitter Profile Photo

#NVIDIA just released a whole ecosystem for human(oid) motion and robot learning from human data. 🚀🦾 Data, as we all know, is the key to scaling AI models. To accelerate the field of Embodied AI, we have open-sourced a full stack of models and tools to capture, generate,

Jeff Li (@jiefengli_jeff)'s Twitter Profile Photo

There are many human body models - SMPL, MHR, Anny… - so which one should you use? Answer: all of them. At GTC 2026, we release SOMA, a unified body layer that takes any model's shape and pose and gives you one canonical mesh and rig.
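
Conceptually, a unified body layer is an adapter: per-model converters map each source model's native (shape, pose) into one canonical parameter space, and a single canonical function then produces the mesh. Here is a toy sketch of that pattern, with made-up conventions ("modelA" poses in degrees, "modelB" in radians) and a stand-in linear "skinning" function - none of this is SOMA's actual formulation.

```python
import numpy as np
from typing import Callable, Dict, Tuple

Params = Tuple[np.ndarray, np.ndarray]   # (shape, pose) in canonical space

class UnifiedBodyLayer:
    """Adapter sketch: many model conventions in, one canonical mesh out."""

    def __init__(self, template: np.ndarray):
        self.template = template         # canonical rest vertices, shape (V, 3)
        self.converters: Dict[str, Callable[..., Params]] = {}

    def register(self, model_name: str, convert: Callable[..., Params]):
        """Register a converter from one model's convention to canonical."""
        self.converters[model_name] = convert

    def forward(self, model_name: str, shape, pose) -> np.ndarray:
        c_shape, c_pose = self.converters[model_name](shape, pose)
        # Stand-in "skinning": offset the template by the shape vector and a
        # scalar pose term (a real layer would do blend shapes + LBS).
        return self.template + c_shape + np.sin(c_pose).sum()

layer = UnifiedBodyLayer(template=np.zeros((4, 3)))
# Hypothetical conventions: "modelA" poses in degrees, "modelB" in radians.
layer.register("modelA", lambda s, p: (s, np.deg2rad(p)))
layer.register("modelB", lambda s, p: (s, p))

# Same body expressed in two conventions yields the same canonical mesh.
mesh_a = layer.forward("modelA", np.ones(3) * 0.1, np.array([90.0]))
mesh_b = layer.forward("modelB", np.ones(3) * 0.1, np.array([np.pi / 2]))
```

The payoff of the pattern is interoperability: downstream code (rigging, rendering, metrics) only ever sees the canonical mesh, regardless of which body model produced the parameters.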