Adithya Iyer (@adithy2)'s Twitter Profile
Adithya Iyer

@adithy2

Video @morphic . Applied Math @ NYU. IIT Bombay'21, Ex boring Consultant turned Ghibli Art maker

ID: 536884094

Link: https://adithyaiyer1999.github.io/ | Joined: 26-03-2012 03:26:43

89 Tweets

48 Followers

170 Following

Morphic (@morphic)'s Twitter Profile Photo

Turn your Ghibli-style images into animated videos on Morphic.
1. Paste the image on the Canvas
2. Select the image
3. Switch to the 'Video' option in Focus Mode
4. Write a prompt describing the action and hit Generate
Originally animated by Kunal Bagaria. What will you animate?

Sai Maram (@adesihci)'s Twitter Profile Photo

I discovered Morphic yesterday, and it's incredible. I often draft stories, comics, and game narratives to satisfy my creative itch. I made this using the image model from #GPT4o, animations from Morphic, and voice from ElevenLabs. Morphic is the Figma for animations.

Peter Tong (@tongpetersb)'s Twitter Profile Photo

Vision models have been smaller than language models; what if we scale them up? Introducing Web-SSL: A family of billion-scale SSL vision models (up to 7B parameters) trained on billions of images without language supervision, using VQA to evaluate the learned representation.

Palak Zatakia (@palakzat)'s Twitter Profile Photo

In 1997, Christopher Nolan made a short film called Doodlebug. He made it during his university days on a super low budget, and it is only 3 minutes long, but it is one of the most interesting things I've ever watched. As a personal experiment and side project, I recreated Doodlebug.

Adithya Iyer (@adithy2)'s Twitter Profile Photo

Aside from the improved quality, we managed to get this one to run 3x faster. Sub-3-minute multi-image interpolation really speeds up iteration for animators.

Morphic (@morphic)'s Twitter Profile Photo

This week, we're excited to roll out a new Interpolation update, with faster, smoother, and better results than ever.

Xichen Pan (@xichen_pan)'s Twitter Profile Photo

We find that training unified multimodal understanding and generation models is so easy, you do not need to tune MLLMs at all. The MLLM's knowledge, reasoning, and in-context learning can be transferred from multimodal understanding (text output) to generation (pixel output) even when it is FROZEN!

Variety (@variety)'s Twitter Profile Photo

AI Startup Morphic to Produce Anime Series 'DQN,' Launches $1 Million Creator Fund for Emerging Filmmakers (EXCLUSIVE) variety.com/2025/global/ne…

Adithya Iyer (@adithy2)'s Twitter Profile Photo

morphic.com/blog/video-mod… We wrote about some of our experience doing large-scale data processing for training video generation models. We'll be releasing more tech blogs and open-sourcing some of this work in the future.

Adithya Iyer (@adithy2)'s Twitter Profile Photo

One of those experiments that showed incredible promise very early on. We pushed hard to ensure robustness across dimensions and video types. Do give this a shot.

Andrew Gordon Wilson (@andrewgwils)'s Twitter Profile Photo

AI benchmarking culture is completely out of control. Tables with dozens of methods, datasets, and bold numbers, trying to answer a question that perhaps no one should be asking anymore.