Alexia Jolicoeur-Martineau (@jm_alexia) 's Twitter Profile
Alexia Jolicoeur-Martineau

@jm_alexia

AI Researcher at the Samsung SAIT AI Lab 🐱‍💻

I build generative models for images, videos, text, tabular data, NN weights, molecules, and video games.

ID: 839820726777544705

Link: http://ajolicoeur.wordpress.com · Joined: 09-03-2017 12:50:39

8.8K Tweets

11.11K Followers

1.1K Following

Peyman Milanfar (@docmilanfar) 's Twitter Profile Photo

Very proud of our team. This feature deploys a model that is both the largest image-2-image model we've ever put in Pixel; and also the first diffusion model we’ve ever run inside the Pixel Camera.

François Chollet (@fchollet) 's Twitter Profile Photo

LLM adoption among US workers is closing in on 50%. Meanwhile labor productivity growth is lower than in 2020. Many counter-arguments can be made here, e.g. "they don't know yet how to be productive with it, they've only been using it for 1-2 years", "50% is still too low to see

François Chollet (@fchollet) 's Twitter Profile Photo

By the way, I don't know if people realize this, but the 2020 work-from-home switch coincided with a major productivity boom, and the late 2021 and 2022 back-to-office reversal coincided with a noticeable productivity drop. It's right there in the statistics. Narrative

Daniel Jeffries (@dan_jeffries1) 's Twitter Profile Photo

People thinking AI will end all the jobs are hallucinating worse than Max Tegmark on an acid trip. And one of the reasons is this: AI does not make people 10x more productive and it is not a magical fix to anything. It is simply another kind of intelligence that shifts the

Timothy Nguyen (@iamtimnguyen) 's Twitter Profile Photo

I'm breaking my silence. For years, I was quiet about how Eric Weinstein, Sabine Hossenfelder, Prof. Brian Keating & Curt Jaimungal suppressed my scientific critique. They preach free inquiry but practice censorship. This is the story of their hypocrisy. 🧵 timothynguyen.org/2025/08/21/phy…

Dynamics Lab (@dynamicslab_ai) 's Twitter Profile Photo

Introducing Mirage 2 — a real-time, general-domain generative world engine you can play online Upload any image—photos, concept art, classic paintings, kids' drawings—and step into it as a live, interactive world. Prompt your worlds with text to create any surreal scenes and

Zichen Liu @ ICLR2025 (@zzlccc) 's Twitter Profile Photo

With just a few lines of code, Feng’s (Feng Yao) suggested fix—applying importance sampling on the behavior policy—resolved the training instability in my case (oat). I believe the result can generalize to other RL frameworks as well. Great work, Feng!

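The fix described above reweights gradients by the ratio between the training policy and the behavior (rollout) policy. The tweet gives no code, so the sketch below is a hypothetical minimal illustration of that importance-sampling correction for a REINFORCE-style loss; the function name, clipping value, and NumPy formulation are all assumptions, not the actual oat implementation.

```python
import numpy as np

def is_corrected_pg_loss(logp_train, logp_behavior, advantages, clip=10.0):
    """Sketch: policy-gradient loss reweighted by the importance ratio
    pi_train / pi_behavior, so gradients account for the mismatch
    between the policy that generated the rollouts and the one
    being trained. Ratio is clipped for numerical stability."""
    ratio = np.exp(logp_train - logp_behavior)  # pi_train / pi_behavior
    ratio = np.clip(ratio, 0.0, clip)
    return -np.mean(ratio * logp_train * advantages)
```

When the two policies agree, the ratio is 1 and this reduces to the ordinary on-policy loss; the instability appears precisely when rollout and trainer numerics diverge and the ratio drifts away from 1.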
Jiawei Zhao (@jiawzhao) 's Twitter Profile Photo

Introducing DeepConf: Deep Think with Confidence 🚀 First method to achieve 99.9% on AIME 2025 with open-source models! Using GPT-OSS-120B even without tools, we reached this almost-perfect accuracy while saving up to 85% generated tokens. It also delivers many strong

signüll (@signulll) 's Twitter Profile Photo

very few people have ever eaten the glass of going from nothing to something. idea → design → build → launch. that full arc. with or without ai, it’s brutal. it breaks your back & your brain. it’s one of the most honest pains there is.

Taelin (@victortaelin) 's Twitter Profile Photo

correct take. yes I'm skeptical LLMs will create new, insightful math because that requires OOD thinking, which they suck at. but LLMs can solve very hard mathematical *problems* (that's different), which is really cool - as long as they don't require "new, insightful

Bing Xu (@bingxu_) 's Twitter Profile Photo

Yesterday once more. I was one of the first people to enable MacBook GPU training, getting it to run at about one-quarter the speed of a P100 in 2016–2017 for fine-tuning models. After that, it was all mediocre politics with no real technical vision. I developed PTSD and took half a

Glen Berseth (@glenberseth) 's Twitter Profile Photo

VLAs offer an avenue for generalist robot policies; however, naively following the action predictions leads to brittle or unsafe behaviours. We introduce VLAPS, which integrates model-based search with pre-trained VLA policies to improve performance without additional training.

Alexia Jolicoeur-Martineau (@jm_alexia) 's Twitter Profile Photo

I finally have a working automatic pipeline for training, generation, and high-precision DFT evaluation of novel molecules. Hopefully, we'll discover new and better types of molecules on the way!

Peyman Milanfar (@docmilanfar) 's Twitter Profile Photo

The debate around whether every pixel in a photo from your phone's camera is "real" misses a fundamental fact about how digital cameras have always worked for the last 20 years. The camera sensor only captures ONE color (red, green, or blue) per pixel. The rest are made up. 1/4

siphyshu // jaiyank (@siphyshu) 's Twitter Profile Photo

i found a very cool algorithm: poisson disc sampling. it lets you place objects in a random but uniform & natural looking way. it's surprising how often this algorithm shows up where you'd expect plain old .random() to work just fine. it's widely used in gamedev for procedural
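The tweet doesn't include code, but the standard way to implement Poisson disc sampling is Bridson's algorithm: grow points from an active list, rejecting any candidate closer than a minimum radius `r` to an existing point, with a background grid making the neighbor check O(1). A minimal self-contained sketch:

```python
import math
import random

def poisson_disc(width, height, r, k=30, seed=0):
    """Bridson-style Poisson disc sampling: points are random but
    evenly spaced, with no two closer than r. k is the number of
    candidate attempts per active point."""
    rng = random.Random(seed)
    cell = r / math.sqrt(2)  # each grid cell holds at most one point
    gw, gh = int(width / cell) + 1, int(height / cell) + 1
    grid = [[None] * gw for _ in range(gh)]

    def far_enough(p):
        gx, gy = int(p[0] / cell), int(p[1] / cell)
        for y in range(max(gy - 2, 0), min(gy + 3, gh)):
            for x in range(max(gx - 2, 0), min(gx + 3, gw)):
                q = grid[y][x]
                if q and (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2 < r * r:
                    return False
        return True

    def place(p):
        grid[int(p[1] / cell)][int(p[0] / cell)] = p

    first = (rng.uniform(0, width), rng.uniform(0, height))
    samples, active = [first], [first]
    place(first)
    while active:
        base = rng.choice(active)
        for _ in range(k):  # try k candidates in the annulus [r, 2r]
            ang = rng.uniform(0, 2 * math.pi)
            d = rng.uniform(r, 2 * r)
            p = (base[0] + d * math.cos(ang), base[1] + d * math.sin(ang))
            if 0 <= p[0] < width and 0 <= p[1] < height and far_enough(p):
                samples.append(p)
                active.append(p)
                place(p)
                break
        else:  # no candidate fit: retire this point
            active.remove(base)
    return samples
```

This is the "random but uniform" property the tweet describes: plain `.random()` produces clumps and gaps, while every pair of points here is guaranteed to be at least `r` apart.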

hardmaru (@hardmaru) 's Twitter Profile Photo

Our new GECCO paper builds on our past work, showing how AI models can be evolved like organisms. By letting models evolve their own merging boundaries, compete to specialize, and find ‘attractive’ partners to merge with, we can create adaptive, robust and scalable AI ecosystems.

机器之心 JIQIZHIXIN (@synced_global) 's Twitter Profile Photo

How small can a BERT get without losing its power? 📱🤯 Meet EI-BERT: an ultra-compact framework for edge NLP that combines token pruning, cross-distillation, and quantization. ✅ Just 1.91 MB — smallest ever for Natural Language Understanding (NLU) tasks! It's already been

Chris Offner (@chrisoffner3d) 's Twitter Profile Photo

This is what you get when there are no lasting and severe negative consequences to high-confidence wrong predictions. People will just say whatever is most beneficial to them in the moment (attentionally or financially) because there is no real cost to lying/bullshitting.

Rudy Gilman (@rgilman33) 's Twitter Profile Photo

DINO-v3 has a single high-magnitude channel on its residual pathway, channel 416. Turning off this single channel affects DINO's entire output by 50-80%. For context, turning off a random channel has an effect of less than one percent. The model builds up channel 416 in its last
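The 50-80% figure above is the kind of number you get by zeroing one channel and comparing outputs before and after. As a hedged toy illustration (not the actual DINO-v3 experiment, which would hook the ViT residual stream), here is the generic measurement on a stand-in linear readout; the function name and setup are assumptions for illustration:

```python
import numpy as np

def ablation_effect(features, readout, channel):
    """Relative change in a model's output when one feature channel is
    zeroed: ||f(x) - f(x_ablated)|| / ||f(x)||. A toy stand-in for
    hooking a real model's residual stream and killing one channel."""
    base = readout(features)
    ablated = features.copy()
    ablated[:, channel] = 0.0
    return np.linalg.norm(base - readout(ablated)) / np.linalg.norm(base)
```

With a feature matrix dominated by one high-magnitude channel, ablating that channel moves the output by a large fraction while ablating any other channel barely registers, mirroring the 50-80% vs. sub-1% contrast in the tweet.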