Fabian Franz (@fabianfranz)'s Twitter Profile
Fabian Franz

@fabianfranz

Drupal 7 Core Framework Manager, High Performance, VP of Software Engineering @tag1consulting | ChatGPT: AI and Self-Improvement | Opinions are mine only

ID: 132199217

Joined: 12-04-2010 15:20:08

3.3K Tweets

1.1K Followers

687 Following

Fabian Franz (@fabianfranz):

I am really shocked how many of my OSS friends are not yet using AI for augmenting their development. Let’s change that! Ready for a weekly Open Source Software development stream of an hour?

Fabian Franz (@fabianfranz):

I did the "strawberry" test with humans. Result: most humans, when prompted to answer spontaneously, answer 2! Humans' "fast mind" fails in the same way LLMs do. Here is the "prompt": You'll be given a question. Answer as quickly as possible. Do not think about it, but

Fabian Franz (@fabianfranz):

Neat trick for avoiding jet lag on a flight from the USA to Europe: after takeoff, set your clock to the destination timezone. Then pretend you are already there and start adjusting to the new time while still in the aircraft. Works especially well on red-eyes! No jet lag!

Fabian Franz (@fabianfranz):

“The real challenge now is emotional: learning to accept that it was never just about being the best, but about the joy of the process itself.” Genius!

Fabian Franz (@fabianfranz):

Tiago Forte Not only that, but it is also true for memory (that might be needed to complete a task): if two memory segments are needed, we have to constantly switch between them, and it can get really confusing and take a lot of effort. It's like an LLM context window - if we

Fabian Franz (@fabianfranz):

The @meta Llama 4 Scout model (served by Groq Inc) is REALLY nice for very FAST answers. I know a lot of people are disappointed, but this model has modern knowledge and answers queries nicely and extensively, faster than Google.

Fabian Franz (@fabianfranz):

OMG! I got access to the Grok 3 API via xAI! They have grok-3-mini and grok-3-mini-fast models, too! Excited to test it! Finally - almost AGI on the command line!

Fabian Franz (@fabianfranz):

Andrej Karpathy Possibly "real coding" is a good term. It even works multi-modal, but best on a file-by-file basis (90% of PRs only need to touch one file): 1. Ask Grok for a high-level approach 2. Ask a fast model (e.g. llama 3.3 specdec on Groq Inc) to implement 3. Review changes (or ask

Fabian Franz (@fabianfranz):

A good PR is like a good story. A good PR has the perfect amount of commits and always compiles / works at each step. By squashing it? You lose the story. A story is: I went down the street, turned left, and entered the bakery. A story is not: I tried 1000 other

GREG ISENBERG (@gregisenberg):

I figured out how to get 5x better results from ChatGPT, Grok, Claude etc and it has nothing to do with better prompts and will cost you $0. I just make them jealous of each other. I’ll ask ChatGPT to write something. Maybe landing page copy. It gives me a solid draft, clear,

Fabian Franz (@fabianfranz):

BURKOV You need to use Grok only for the high-level design, then have a fast model implement the changes. Grok is a senior developer and architect, not a coder.

Nico (@nico_jeannen):

My Mac was getting full, and the settings just said "system data"

So I searched a bit and I found a nice app that shows what takes space + visualises it

Result: I had 500GB in Adobe cache editing 💀

The app is Open Source and free, really good to do some cleanup. Name is Grand

Fabian Franz (@fabianfranz):

Dan Mac I didn't at first either, but the difference is that MCP gives you an easy structure to extend the tool capabilities of LLMs. The other advantage is that services that never would have gotten an API are getting MCP integration due to the AI hype. But one of the best things is:

Fabian Franz (@fabianfranz):

.Salesforce Developers The human-readable docs for the Marketing Cloud REST API are fine, but in 2025 they're not enough. Developers rely on LLMs to generate code, and that requires programmatically accessible specs, like a complete OpenAPI file. Removing the OpenAPI spec in 2024 was a step
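To illustrate the point: even a minimal machine-readable spec gives an LLM exact paths, parameters, and schemas to generate code against. The endpoint and field names below are invented for illustration and are not taken from the actual Marketing Cloud API:

```yaml
openapi: "3.0.3"
info:
  title: Example Marketing API   # illustrative, not the real spec
  version: "1.0.0"
paths:
  /messages/email/send:          # invented endpoint for illustration
    post:
      summary: Send a transactional email
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                to:
                  type: string
                  format: email
                subject:
                  type: string
      responses:
        "202":
          description: Accepted for delivery
```

With a file like this, a code generator or LLM can emit a typed client without guessing at URLs or payload shapes, which is exactly what prose-only docs cannot provide.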

Fabian Franz (@fabianfranz):

John Rush Sure! Your main spend is AI, so here are some questions to help you streamline:
- Are you using AI batch APIs already? If your articles can wait up to 24 hours for their images to generate, you can save massively on costs by using the respective batch APIs.
- Are you using context
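As a minimal sketch of the batch-API idea: providers such as OpenAI accept a JSONL file where each line is one request, processed within a 24-hour window at a discount. The article titles, model name, and `custom_id` scheme below are illustrative assumptions, not details from the tweet:

```python
import json

# Hypothetical article prompts (illustrative, not from the tweet).
articles = ["Intro to caching", "Why BigPipe matters"]

def build_batch_lines(prompts, model="gpt-4o-mini"):
    """Build the JSONL body for a batch job: one JSON request per line."""
    lines = []
    for i, prompt in enumerate(prompts):
        request = {
            # custom_id lets you match results back to articles later.
            "custom_id": f"article-{i}",
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
            },
        }
        lines.append(json.dumps(request))
    return "\n".join(lines)

batch_jsonl = build_batch_lines(articles)
# This string would then be uploaded as a file and a batch created with
# completion_window="24h" via the provider's client (sketch only).
```

The trade-off is latency for cost: anything that does not need an interactive answer, like pre-generating article images overnight, is a natural fit.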

Fabian Franz (@fabianfranz):

.Anthropic What would make Claude Code much better: if you could give it more "context" for the current task without having to "interrupt" it or wait for the model to ask for input again. Essentially a check-in that the model makes at certain checkpoints to see if

Fabian Franz (@fabianfranz):

Historic Vids Prompt injection is not solved yet. I love agents, but letting them automate with access to all your data and the internet is a disaster waiting to happen. The three S's are all present here:
- Social Engineering (tricking the LLMs)
- Sniffing (they have access to read

The Canaanite (@mysticaltech):

Catalin Very simple: the other app is not your own recipe. A lot of chefs can cook bolognese, but not all of them taste amazing. If you believe you are a good chef, then make people eat, let them taste, and pay you for it. As for other apps, that's not your business.