AaltoMediaAI (@aaltomediaai) 's Twitter Profile
AaltoMediaAI

@aaltomediaai

@AaltoUniversity’s AI & ML for media, art & design course. This is a public backlog of material for updating the course.

ID: 1206637723802578944

Link: https://github.com/PerttuHamalainen/MediaAI · Joined: 16-12-2019 18:11:12

1.1K Tweets

249 Followers

170 Following

Benjamin De Kraker (@benjamindekr) 's Twitter Profile Photo

I resigned from xAI tonight. It makes me very sad, but was the right thing to do -- and here's why. xAI told me I either had to delete the post quoted below, or face being fired. After reviewing everything and thinking a lot, I've decided that I'm not going to delete the post

Simo Ryu (@cloneofsimo) 's Twitter Profile Photo

This is really insane. They took all the bet and scaled up discrete diffusion model to llama-7B scale. IIRC nobody dared to do this at this scale but these madlads done it. They even fine-tuned it to be a dialogue model. This is really frontier-level shit that is genuinely new

Pika (@pika_labs) 's Twitter Profile Photo

Today we’re launching Pikaswaps: replace anything in your videos using photos you upload, or scenes you describe. The results are unbelievably believable, and the possibilities are as unlimited as your imagination. Try it at Pika dot art

Freddy Chávez Olmos (@freddychavezo) 's Twitter Profile Photo

Testing Pika’s new Modify Region tool “Pikaswaps”, which allows you to specify what you want to change in video footage and what you want to replace it with, using prompts, a paint brush and image references. This tool clearly shows how rapidly this tech is advancing. I’m

Andrej Karpathy (@karpathy) 's Twitter Profile Photo

This is interesting as a first large diffusion-based LLM. Most of the LLMs you've been seeing are ~clones as far as the core modeling approach goes. They're all trained "autoregressively", i.e. predicting tokens from left to right. Diffusion is different - it doesn't go left to
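The contrast Karpathy draws can be illustrated with a toy sketch (this is my own illustration, not code from the tweet or from any real model): autoregressive generation appends one token at a time, left to right, while a discrete-diffusion-style generator starts from a fully masked sequence and fills positions in over several passes. The `toy_model` function here is a hypothetical stand-in that just picks random tokens; a real LLM would return a learned distribution.

```python
import random

random.seed(0)
VOCAB = ["the", "cat", "sat", "on", "mat"]

def toy_model(context):
    # Stand-in for a real language model: ignores context and
    # returns a pseudo-random token from a tiny vocabulary.
    return random.choice(VOCAB)

def autoregressive_generate(prompt, n_tokens):
    """Left-to-right: each new token is sampled conditioned on
    everything generated so far, one position at a time."""
    tokens = list(prompt)
    for _ in range(n_tokens):
        tokens.append(toy_model(tokens))
    return tokens

def diffusion_generate(length, n_steps=3):
    """Discrete-diffusion style: start fully masked and repeatedly
    'denoise' by unmasking a subset of positions per step, so the
    sequence is refined in parallel rather than strictly left to right."""
    tokens = ["<mask>"] * length
    for _ in range(n_steps):
        masked = [i for i, t in enumerate(tokens) if t == "<mask>"]
        if not masked:
            break
        # Unmask roughly half of the remaining positions each step.
        for i in random.sample(masked, max(1, len(masked) // 2)):
            tokens[i] = toy_model(tokens)
    # Final pass: fill any positions still masked.
    for i, t in enumerate(tokens):
        if t == "<mask>":
            tokens[i] = toy_model(tokens)
    return tokens

print(autoregressive_generate(["the"], 4))  # 5 tokens, built one by one
print(diffusion_generate(5))                # 5 tokens, filled in passes
```

The point of the sketch is the control flow, not the model: the autoregressive loop can only ever extend the sequence rightward, while the diffusion-style loop commits tokens at arbitrary positions across iterations.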

Anthropic (@anthropicai) 's Twitter Profile Photo

New Anthropic research: Tracing the thoughts of a large language model. We built a "microscope" to inspect what happens inside AI models and use it to understand Claude’s (often complex and surprising) internal mechanisms.

Andrej Karpathy (@karpathy) 's Twitter Profile Photo

Noticing myself adopting a certain rhythm in AI-assisted coding (i.e. code I actually and professionally care about, contrast to vibe code). 1. Stuff everything relevant into context (this can take a while in big projects. If the project is small enough just stuff everything

Sundar Pichai (@sundarpichai) 's Twitter Profile Photo

At #GoogleIO, we shared how decades of AI research have now become reality.  From a total reimagining of Search to Agent Mode, Veo 3 and more, Gemini season will be the most exciting era of AI yet.  Some highlights 🧵

Paul Couvert (@itspaulai) 's Twitter Profile Photo

Microsoft has revolutionized the automation game You can automate any task just by recording your screen and explaining it to the AI. Copilot will then analyze the mouse movements, audio... And build the automation flow all by itself! (Way easier than n8n or Make.) 00:00 -

Rohan Paul (@rohanpaul_ai) 's Twitter Profile Photo

It’s a hefty 206-page research paper, and the findings are concerning. "LLM users consistently underperformed at neural, linguistic, and behavioral levels" This study finds LLM dependence weakens the writer’s own neural and linguistic fingerprints. 🤔🤔 Relying only on EEG,
