Yining Shi (@yining_shi)'s Twitter Profile
Yining Shi

@yining_shi

Director of Product Eng + Founding Engineer @runwayml | Adjunct professor @ITP_NYU, @ml5js, she/her.

ID: 2459436104

Link: https://linktr.ee/yiningshi · Joined: 23-04-2014 08:57:11

262 Tweets

2.2K Followers

730 Following

Runway (@runwayml):

Introducing Expand Video. This new feature allows you to transform videos into new aspect ratios by generating new areas around your input video. Expand Video has begun gradually rolling out and will soon be available to everyone. See below for more examples and results.

Runway (@runwayml):

Introducing Frames: an image generation model offering unprecedented stylistic control. Frames is our newest foundation model for image generation, marking a big step forward in stylistic control and visual fidelity. With Frames, you can begin to architect worlds that represent…

Yining Shi (@yining_shi):

Excited about the new feature in Act-One: I can upload a video of me standing on the street (not singing), find any singing video, and get myself to sing:

Runway (@runwayml):

NYU is bringing Runway tools to the Martin Scorsese Virtual Production Center. A new course will explore how AI video can be integrated into various parts of the filmmaking process. The course will be taught by Leilanni Todd, an award-winning artist and member of Runway’s…

Runway (@runwayml):

Today we are releasing Frames, our most advanced base model for image generation, offering unprecedented stylistic control and visual fidelity. Learn more below. (1/10)

Anastasis Germanidis (@agermanidis):

Gen-4 Turbo is an amazing feat of research and engineering. To give a sense of the improvement, in our internal evals its outputs were preferred ~90% of the time compared to those of non-Turbo Gen-3 Alpha.

Runway (@runwayml):

Today we are releasing Gen-4 References to all paid plans. Now anyone can generate consistent characters, locations and more. With References, you can use photos, generated images, 3D models or selfies to place yourself or others into any scene you can imagine. More examples…

Runway (@runwayml):

Introducing Runway Aleph, a new way to edit, transform and generate video. Aleph is a state-of-the-art in-context video model, setting a new frontier for multi-task visual generation, with the ability to perform a wide range of edits on an input video such as adding, removing…

Omri Avrahami (@omriavr):

One of my favorite capabilities of Runway Aleph is how it combines new camera angles with next-shot generation 🪄. It lets you build a consistent story from a single shot! It’s yet another small step closer to full-length AI-generated films! 🦿🎦

Anastasis Germanidis (@agermanidis):

I've learned that it takes about a year from initially setting a research direction internally to getting it to finally work. It took a year of investing in scaling laws for video models to releasing Gen-3 Alpha. It took a year of investing in a unified multi-task approach to…

Omer Bar Tal (@omerbartal):

Aleph isn’t just multi-task; it’s task-agnostic by design. No disentanglement, no hacky pipelines. Just a true generalist. So bullish on our approach 👌