Shan (@misterdushan)'s Twitter Profile
Shan

@misterdushan

@boredapeyc 6420 @azuki 3392

ID: 1405977558122893313

Joined: 18-06-2021 19:56:01

17.17K Tweets

10.1K Followers

2.2K Following

Shan (@misterdushan)'s Twitter Profile Photo

We ignored the market when building our 3D AI model. No feature lists. No benchmarks. Just first principles. AI isn’t the product. It’s the compression layer between intent and execution. 3,000+ customers in 28 days. Not hype. Real value.

This is Julian 5.0 by Threedium. Single image → production-usable mesh. Sharper edges. Cleaner topology. Much closer to the original concept. Took it straight into Blender — fully editable. Same asset, same control in Threedium Workspace. No lock-in. Retopo is a solid first

Every new medium starts as an object. Then a tool. Then a system. 3D on the web is at that transition. The shift: tools adapt to intent, not the other way around. You shape outcomes, not rules. 3D stops being placed on the web. It starts living on it.

The model translates intent into a structured 3D state — not a static mesh. It outputs editable geometry, constraints, and appearance logic that remain addressable and reversible over time. In short: it doesn’t generate 3D assets. It compiles intent into a controllable 3D
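Threedium has not published what this structured 3D state looks like internally, so as a purely hypothetical sketch, here is one way "addressable and reversible" editable geometry could be modeled: named parameters instead of baked vertices, constraints kept as data, and an edit log that can be undone. All names (`SceneState`, `applyEdit`, `undo`, the `chair` example) are illustrative assumptions, not Threedium's API.

```typescript
// Hypothetical sketch only -- not Threedium's actual format.
// Illustrates a 3D state that stays addressable (named parameters)
// and reversible (an undo-able edit log), rather than a static mesh.

type Vec3 = [number, number, number];

interface SceneState {
  // Editable geometry parameters, addressable by name.
  params: Record<string, number | Vec3>;
  // Constraints the state must keep satisfying after edits.
  constraints: Array<(p: Record<string, number | Vec3>) => boolean>;
  // Appearance logic kept as data, not baked into textures.
  appearance: Record<string, string>;
}

interface Edit {
  key: string;
  prev: number | Vec3;
  next: number | Vec3;
}

// Apply an edit and record it so it can be reversed later.
function applyEdit(
  state: SceneState,
  log: Edit[],
  key: string,
  next: number | Vec3
): void {
  log.push({ key, prev: state.params[key], next });
  state.params[key] = next;
}

// Reverse the most recent edit.
function undo(state: SceneState, log: Edit[]): void {
  const e = log.pop();
  if (e) state.params[e.key] = e.prev;
}

const chair: SceneState = {
  params: { legHeight: 0.45, seatWidth: 0.5 },
  constraints: [(p) => (p.legHeight as number) > 0],
  appearance: { seat: "walnut" },
};

const log: Edit[] = [];
applyEdit(chair, log, "legHeight", 0.6); // edit stays addressable by name
undo(chair, log); // and reversible: legHeight is back to 0.45
```

The point of the sketch is the contrast the tweet draws: a generated mesh forgets how it was made, while a compiled state like this keeps every decision as a named, revertible value.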

I believe 3D foundation models shouldn’t be “coming soon”. They should already be usable. That’s why we built them at Threedium. Production-grade. Live today. Free to start. Multimodal is coming next: image, video, and agent models, deeply integrated with our 3D core. One

Most AI 3D models fail after generation. They look great until you: – open them in Blender – try to edit topology – push them into Unreal or Unity This one survives that step. That’s the bar.

This video shows our AI model running inside Studio. Not a final result, but something you can actually work with. Edit it, iterate, export to Blender, Unreal or Unity, and keep going. Generating 3D is easy. Keeping control while you iterate is the hard part.

Julian 5.5 is learning to see structure first. Same character. Different versions of Julian. Evolving wireframes. 5.5 is the most promising so far. Something new coming soon.

We just built a full Prop Hunt map in Fortnite using Threedium end to end. All assets were generated with our AI model JulianNXT, refined in Studio, then exported straight into Unreal for UEFN. Real, editable, game-ready production.

Quick video showing our Magic Brush in action. A simple way to edit and shape 3D directly inside Threedium Studio. Fast iteration, no heavy tooling.

A single product image → a usable 3D foundation. From there you can generate: • Any-angle product visuals • Color/material variants • Catalog-scale imagery for eCommerce + marketing All editable inside Threedium Workspace. Product content is shifting from produced →