Robert Tercek (@superplex)'s Twitter Profile
Robert Tercek

@superplex

I'm the author of Vaporized. Spent 20 years inventing the future of games, TV, education, mobile. Now I help companies make the transition to the digital domain.

ID: 26937522

http://roberttercek.com · Joined 27-03-2009 03:20:13

5.5K Tweets

3.3K Followers

692 Following

Sully (@sullyomarr)'s Twitter Profile Photo

Stability just released their new LLM.

It's open source, has 7B parameters, and it's entirely free to use commercially.

And it's a MASSIVE deal that has the potential to change everything in AI.

Here's why:
Rowan Cheung (@rowancheung)'s Twitter Profile Photo

Another huge day in the world of AI with announcements from: Snapchat 'My AI', Synthesis AI, Google Brain and DeepMind, Martin Shkreli. Here's a rundown on everything you need to know:

fofr (@fofrai)'s Twitter Profile Photo

🧵 A big #Midjourney thread on how to write prompts to get good cinematic images.

In this thread I’ll build up a single prompt with cinematic elements, and show their effects.

Each prompt will use a 16:9 aspect ratio, and to minimise variation I've locked in a seed.
Robert Tercek (@superplex)'s Twitter Profile Photo

These examples don’t reveal anything that could plausibly “disrupt Hollywood” any time soon. But the progress is impressive and the trajectory is clear.

Matt Wolfe (@mreflow)'s Twitter Profile Photo

We've seen text-to-image, text-to-3D object, and even text-to-video... Now check out text-to-3D character from Daz 3D. Use natural language to create any character you can imagine in near-AAA game quality, then export that character directly into Blender, Unreal or Unity!

Jim Fan (@drjimfan)'s Twitter Profile Photo

Google is hosting the first "Machine Unlearning" challenge. Yes you heard it right - it's the art of forgetting, an emergent research field. 

GPT-4 lobotomy is a type of machine unlearning. OpenAI tried for months to remove abilities it deems unethical or harmful, sometimes
Andrej Karpathy (@karpathy)'s Twitter Profile Photo

I think this is mostly right.

- LLMs created a whole new layer of abstraction and profession.
- I've so far called this role "Prompt Engineer" but agree it is misleading. It's not just prompting alone, there's a lot of glue code/infra around it. Maybe "AI Engineer" is ~usable,

TomLikesRobots🤖 (@tomlikesrobots)'s Twitter Profile Photo

I'm absolutely blown away by Runway's #Gen2 using image input. The movement is so natural. Using it with Midjourney is a winning combination. If you want your video to stay true to your image, don't use a text prompt. (Thanks to Uncanny Harry AI and Merzmensch Kosmopol🧑‍🎨🤖 for the tip!).

Nathan Benaich (@nathanbenaich)'s Twitter Profile Photo

🪩The State of AI 2023 is now here.

Our 6th installment is one of the most exciting years I can remember. The #stateofai report covers everything you *need* to know across research, industry, safety and politics.

There’s lots in there, so here’s my director’s cut 🧵
Jim Fan (@drjimfan)'s Twitter Profile Photo

If you think OpenAI Sora is a creative toy like DALLE, ... think again. Sora is a data-driven physics engine. It is a simulation of many worlds, real or fantastical. The simulator learns intricate rendering, "intuitive" physics, long-horizon reasoning, and semantic grounding, all

Jim Fan (@drjimfan)'s Twitter Profile Photo

Apparently some folks don't get "data-driven physics engine", so let me clarify. Sora is an end-to-end, diffusion transformer model. It inputs text/image and outputs video pixels directly. Sora learns a physics engine implicitly in the neural parameters by gradient descent
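The "predict noise, then reverse it" mechanics behind a diffusion model like the one described above can be sketched in a few lines. This is my own toy illustration, not Sora's actual code: the data, shapes, and perfect noise prediction are all stand-ins, and a real model would use a learned transformer over spacetime patches rather than the exact noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(x0, noise, alpha_bar):
    """Forward diffusion: mix clean data x0 with Gaussian noise."""
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

def denoise_step(x_t, predicted_noise, alpha_bar):
    """Reverse step: recover an estimate of the clean data from x_t."""
    return (x_t - np.sqrt(1.0 - alpha_bar) * predicted_noise) / np.sqrt(alpha_bar)

# "Video" stand-in: 4 frames of 8x8 pixels, flattened into tokens.
x0 = rng.standard_normal((4, 64))
noise = rng.standard_normal(x0.shape)
alpha_bar = 0.5  # noise-schedule value at some timestep (arbitrary here)

x_t = add_noise(x0, noise, alpha_bar)

# With a perfect noise prediction the reverse step recovers x0 exactly;
# a trained network only approximates the noise, which is where the
# "learned by gradient descent" part of the tweet comes in.
x0_hat = denoise_step(x_t, noise, alpha_bar)
print(np.allclose(x0_hat, x0))
```

The point of the sketch is only the round trip: the network's sole job is noise prediction, and everything else (rendering, "intuitive" physics) must be absorbed into that prediction.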

Jim Fan (@drjimfan)'s Twitter Profile Photo

I see some vocal objections: "Sora is not learning physics, it's just manipulating pixels in 2D". I respectfully disagree with this reductionist view. It's similar to saying "GPT-4 doesn't learn coding, it's just sampling strings". Well, what transformers do is just manipulating

Bilawal Sidhu (@bilawalsidhu)'s Twitter Profile Photo

I’ve used the Apple Vision Pro for 2 weeks now and here are my unfiltered thoughts — you might even call it a hot take 🌶️ 😅

Overall: I'm blown away, absolutely hyped... but also? Frustrated. Why is Apple making it SO HARD to tap into the existing VR media scene? 

There is a