Midcentury (@midcenturyai)'s Twitter Profile
Midcentury

@midcenturyai

Multimodal data research lab for open AI.

ID: 1995957101097836544

Link: http://midcentury.xyz/ · Joined: 02-12-2025 20:44:01

8 Tweets

135 Followers

2 Following

Midcentury (@midcenturyai):

NeurIPS 2025 was a reminder that AI progress isn’t about louder demos — it’s about better data and solving the hard problems.

Great conversations with researchers pushing the edges of multimodal intelligence.
Midcentury (@midcenturyai):

key learning from recent convos with researchers: voice AI is increasingly circling hardware, wearable devices, and voice-to-action interfaces. Datasets, RL environments, etc. will need to follow this new product paradigm

Midcentury (@midcenturyai):

ICYMI: Our researcher <a href="/B_S_N_Y/">Brandon Samaroo</a> at NeurIPS. Thoroughly enjoyed the robotics showcase.

Key insight from the teams presenting: at enough scale, vision-language-action models stop separating human videos and robot data. They learn a shared latent space.

Once a VLA can control
Midcentury (@midcenturyai):

Had a great time at <a href="/cartesia/">Cartesia</a>’s NeurIPS 2025 mixer, great convos w/ great people:

▪️why LLMs struggle with long back-and-forth w/ other LLMs (and how RL envs help)
▪️multi-speaker voice data demand is booming

The future of data starts with research.
Midcentury (@midcenturyai):

Late post but the team had a great time at the <a href="/cerebras/">Cerebras</a> Cafe Compute event. 

Met tons of smart researchers from UL and @Amazon. Thanks to the amazing host @SarahChieng!
Midcentury (@midcenturyai):

Happy New Year!! To gear up for 2026, some things the team is proud of:
- net-new investments into continual learning
- new product lines: training data to inference data, RL envs to capture it, evals and benchmarks
- finally went multimodal: voice to world models, robotics,

Midcentury (@midcenturyai):

2 days into 2026 and the team is already super excited. In 2025, we >10x'd revenue in the span of 3 months. 2026 will be crazier.

Midcentury (@midcenturyai):

At this point it's pretty clear: world models are replacing VLA backbones. Real-world intelligence requires interactive video data; in the future, models will plan and act via latent 3D space, not copy demos. Bearish on any player without RL loops / future generalization

DG (@dgmonsoon):

When people think about scaling robot learning from human data, they mostly think about just iPhone/glasses ego data. But spatiotemporal world model data improves both general planning and world representation, as well as execution. Spatial intelligence needs gaming

DG (@dgmonsoon):

Come train with our world model and ego data and try out some of our early benchmarks! Super excited for @midcenturyai/ORO AI to be a sponsor for researchers and builders building cool stuff at Founders Inc

DG (@dgmonsoon):

New neo-labs will have to focus on net-new training paradigms to enable recursive self-improvement. May be directionally bearish for RL, but the importance of data will remain

Midcentury (@midcenturyai):

This is a cool demo but ultimately sorely limited. An agent without control over a learning mechanism will always make the same mistakes and leak the same info every time. To be successful, agents need to self-improve and operate their own learning loop from past

DG (@dgmonsoon):

Ego data is starting to see real evidence that it helps scale robotic models! 20,000 hours used in total, one of the largest pre-training sets here... What if I told you that certain players were already scaling to 1 million 👀