TrippyTunesTV (@trippytunestv)'s Twitter Profile
TrippyTunesTV

@trippytunestv

Diffusion Model AI Music Visualizer
twitch.tv/TrippyTunesTV

ID: 6460672

Link: https://twitch.tv/TrippyTunesTV · Joined: 31-05-2007 03:27:02

24 Tweets

48 Followers

279 Following

TrippyTunesTV (@trippytunestv):

Experimenting with streamv2v (github.com/Jeff-LiangF/st…). There are definite improvements in temporal consistency, even with my crazy input videos. I managed to apply the SDXL support from github.com/hkn-g/StreamDi…, and it runs at 15-20 fps on a 4090.
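Real-time video diffusion is ultimately a per-frame latency budget problem. A quick sketch of what the reported 15-20 fps implies (the function below is just arithmetic, not part of streamv2v):

```python
# Back-of-the-envelope check: at a given frame rate, how many
# milliseconds does the whole pipeline (VAE encode, denoise steps,
# VAE decode) get per frame?
def frame_budget_ms(fps: float) -> float:
    """Milliseconds available per frame at a given frame rate."""
    return 1000.0 / fps

low = frame_budget_ms(20)   # 50.0 ms at 20 fps
high = frame_budget_ms(15)  # ~66.7 ms at 15 fps
print(f"per-frame budget: {low:.1f}-{high:.1f} ms")
```

So at 15-20 fps everything has to fit in roughly 50-67 ms per frame, which is why single-step or few-step samplers matter so much here.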

TrippyTunesTV (@trippytunestv):

If you're playing with diffusion models, definitely check out res-adapter.github.io and work it into your pipelines if you're using non-512x512 resolutions. It's easy to hack into stuff like #streamdiffusion if you're generating 16:9, and it knocks down the "brain tumor" artifacts on SD 1.5-derived models.
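One practical detail when generating 16:9 with SD 1.5-family models: the output resolution still has to divide cleanly into the VAE's 8x-downsampled latent grid. A minimal helper for picking such a resolution (this snapping function is my own illustration, not part of res-adapter):

```python
# Hypothetical helper: pick a 16:9 resolution near a target height,
# snapped to multiples of 8 so it maps cleanly onto the latent grid.
def snap_16x9(target_height: int, multiple: int = 8) -> tuple[int, int]:
    h = round(target_height / multiple) * multiple
    w = round(h * 16 / 9 / multiple) * multiple
    return w, h

print(snap_16x9(512))  # (912, 512) - near the SD 1.5 training scale
print(snap_16x9(720))  # (1280, 720)
```

Res-adapter's job is then to keep the model coherent at those off-training-distribution sizes; the snapping just keeps the VAE happy.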

TrippyTunesTV (@trippytunestv):

Playing with OneDiff-accelerated AnimateLCM pipelines. This was generated in realtime. Need to try some last-frame feedback with SparseCtrl...

TrippyTunesTV (@trippytunestv):

Wow, OneDiff-accelerated diffusers AnimateLCM + rife_ncnn_vulkan_python makes a pretty decent stack. Any tips for autoregressive flows for AnimateDiff/AnimateLCM, ideally with Diffusers? Especially for alternate resolutions, since SparseCtrl RGB really falls apart at non-512x512.
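The autoregressive flow being asked about reduces to a simple control loop: generate a chunk of frames, then feed the last frame back as conditioning for the next chunk (the role SparseCtrl RGB would play in a real pipeline). A structural sketch with a dummy generator standing in for the AnimateLCM call (none of these function names are real diffusers APIs):

```python
# generate_chunk is a stand-in for a conditioned AnimateLCM call.
# The dummy "model" just increments the conditioning value so the
# feedback structure is visible in the output.
def generate_chunk(cond_frame, n_frames):
    return [cond_frame + i + 1 for i in range(n_frames)]

def autoregressive(n_chunks, chunk_len, first_frame=0):
    frames = [first_frame]
    for _ in range(n_chunks):
        # condition each new chunk on the last frame produced so far
        chunk = generate_chunk(frames[-1], chunk_len)
        frames.extend(chunk)
    return frames

print(autoregressive(n_chunks=3, chunk_len=4))
# [0, 1, 2, ..., 12]: each chunk continues from the previous one
```

In a real pipeline the weak link is exactly the one the tweet names: if the conditioning module degrades at non-512x512, errors compound chunk over chunk.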

Decart (@decartai):

Introducing MirageLSD: the first Live-Stream Diffusion (LSD) AI model. Input any video stream, from a camera or video chat to a computer screen or game, and transform it into any world you desire, in real time (<40ms latency). Here's how it works (with a demo you can use!):
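Worth noting what the <40 ms figure buys: if frames are processed strictly one at a time with no pipelining, per-frame latency also bounds the sustainable frame rate. A one-line sanity check (my arithmetic, not Decart's numbers beyond the stated 40 ms):

```python
# If each frame takes at most latency_ms end to end and frames are
# handled serially, throughput is at least 1000 / latency_ms fps.
def min_fps_from_latency(latency_ms: float) -> float:
    return 1000.0 / latency_ms

print(min_fps_from_latency(40))  # 25.0 fps floor at 40 ms/frame
```

With pipelining the throughput can be higher still; the latency number is the harder constraint for interactive use.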

TrippyTunesTV (@trippytunestv):

StreamDiffusionV2: after a bit of tweaking of noise_scale and the adaptive noise scaler, it's pretty flexible. Can't wait to see what people build on this.
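For readers wondering what an "adaptive noise scaler" might do: one plausible shape is scaling the injected noise by how much the input frame changed, so static scenes stay stable and fast motion gets re-noised more aggressively. A toy sketch of that idea (the parameter names and adaptation rule here are my assumptions, not StreamDiffusionV2's actual implementation):

```python
# Toy adaptive noise scaler: more input-frame change -> more noise,
# clamped to a maximum. frame_delta is mean absolute pixel change
# in [0, 1] between consecutive input frames.
def adaptive_noise_scale(frame_delta: float,
                         base_scale: float = 0.5,
                         gain: float = 1.0,
                         max_scale: float = 1.0) -> float:
    return min(max_scale, base_scale + gain * frame_delta)

print(adaptive_noise_scale(0.0))   # 0.5  (static scene: baseline noise)
print(adaptive_noise_scale(0.25))  # 0.75 (moderate motion)
print(adaptive_noise_scale(0.9))   # 1.0  (fast motion, clamped)
```

The appeal of exposing knobs like noise_scale is exactly what the tweet describes: the same pipeline can be tuned toward stability or toward wilder transformations per source.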