Dale Harper (@daleharper) 's Twitter Profile
Dale Harper

@daleharper

Growth-focused tech for over 20 years. 4micro.co, cloudadministrator.com.au and internacious.com

ID: 14338612

Link: https://daleharper.start.page · Joined: 09-04-2008 03:23:17

776 Tweets

307 Followers

530 Following

Subbarao Kambhampati (కంభంపాటి సుబ్బారావు) (@rao2z) 's Twitter Profile Photo

💡We put all our linguistic output on the web, but not the thought that went into generating it. LLMs can thus do some form of transformational analogy, but not derivational analogy. The prompting techniques are essentially ways of making up for the derivational shortfall. 🧵1/

Santiago (@svpino) 's Twitter Profile Photo

LLMs are incredibly powerful memorization machines. They are impressive, but they are not intelligent. They have the ability to memorize extremely large amounts of data and the ability to generalize a tiny bit from it. This is enough in many cases, but insufficient in many

Dale Harper (@daleharper) 's Twitter Profile Photo

Just ask 'em. Anything along the lines of: if you were honest, aren't you more synthesist than generative? Combining your responses in novel ways from your training data is about as generative as you can get.

Dale Harper (@daleharper) 's Twitter Profile Photo

I feel like how machines meet the criteria for reasoning is becoming less important as the dialogue continues. The debate about AI's ability to reason is starting to focus more on outcomes, rather than on how AI gets there versus humans. Kinda the first steps in demoting

Santiago (@svpino) 's Twitter Profile Photo


There are 42,700 self-appointed "AI Engineers" on LinkedIn, and I wonder how many of them will pass this Generative AI test:

It's 25 questions you need to answer in 40 minutes. The questions cover the following topics:

• Foundational knowledge
• Model training and fine-tuning
Rohan Paul (@rohanpaul_ai) 's Twitter Profile Photo


📌 Deductive reasoning presents a greater challenge than inductive reasoning for LLMs. While LLMs can often infer correct mapping functions inductively, they struggle to apply these functions deductively, especially for unfamiliar tasks.

👨‍🔧 Definition: Deductive reasoning is
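To make the inductive/deductive distinction concrete, here is a minimal illustrative sketch (my own toy example, not from the cited work) using a shift cipher: the inductive step recovers a mapping rule from input-output examples, and the deductive step applies a given rule to fresh input. The claim in the tweet is that LLMs tend to manage the first step more reliably than the second.

```python
def infer_shift(pairs):
    """Inductive step: recover the shift amount from (plain, cipher) example pairs."""
    plain, cipher = pairs[0]
    return (ord(cipher[0]) - ord(plain[0])) % 26

def apply_shift(text, shift):
    """Deductive step: apply a stated shift rule to unseen lowercase input."""
    return "".join(
        chr((ord(ch) - ord("a") + shift) % 26 + ord("a")) for ch in text
    )

examples = [("abc", "cde"), ("dog", "fqi")]  # both encoded with shift 2
shift = infer_shift(examples)                # inductive: infer the rule
print(shift)                                 # 2
print(apply_shift("hello", shift))           # deductive: "jgnnq"
```

The function names and the cipher task itself are hypothetical stand-ins for the "mapping functions" the tweet refers to.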
Dale Harper (@daleharper) 's Twitter Profile Photo

I’ll forgive AI for “foster” the day AI can take that word and animate it into 3-dimensional form so I can cyber-smash it

Dale Harper (@daleharper) 's Twitter Profile Photo

And the award for an LLM noticing your pun but finding it irksome goes to Mistral Large 2. Question: What does the feed-forward sublayer do? It seems like the self-attention sublayer in a transformer layer has all the attention (pun intended) Mistral Large 2: ...the
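For readers wondering what the feed-forward sublayer actually does, here is a minimal NumPy sketch (my own illustration, not Mistral's answer): it is a position-wise two-layer MLP that expands each token's vector to a wider hidden dimension, applies a nonlinearity, and projects back. The toy sizes below are assumptions; real models use much larger dimensions.

```python
import numpy as np

def feed_forward(x, W1, b1, W2, b2):
    """Position-wise feed-forward sublayer: two linear maps with a ReLU
    in between, applied independently to every token in the sequence."""
    h = np.maximum(0, x @ W1 + b1)  # expand to the hidden dimension
    return h @ W2 + b2              # project back to the model dimension

rng = np.random.default_rng(0)
d_model, d_ff, seq_len = 8, 32, 4   # toy sizes for illustration
x = rng.standard_normal((seq_len, d_model))
W1 = rng.standard_normal((d_model, d_ff)) * 0.1
b1 = np.zeros(d_ff)
W2 = rng.standard_normal((d_ff, d_model)) * 0.1
b2 = np.zeros(d_model)

out = feed_forward(x, W1, b1, W2, b2)
print(out.shape)  # (4, 8): same shape in as out
```

Unlike the self-attention sublayer, no information moves between positions here, which is roughly why it gets less attention.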

Dale Harper (@daleharper) 's Twitter Profile Photo

We’re shifting through three AI eras in compressed real time - blink and you’ll miss it.
1/ 1-2 years ago: AI. Meh.
2/ The liminal phase where we are right now. Where we don’t know yet what the boundaries are on AI’s potential. AGI etc.
3/ 1-2 years time: Already the

Dale Harper (@daleharper) 's Twitter Profile Photo

We’ve always documented and retained great insights. It’s called writing - books, posts, long form, short form, whatever form it may take. Now, AI’s ephemeral final outputs at the end of all those transformations - arguably, against some to-be-determined measure - should have a merit

Dale Harper (@daleharper) 's Twitter Profile Photo

Microsoft NLWeb. Positioned as “html for the AI web”. And using good old RSS as a first-class citizen along with modern standards including MCP. Seems some standards really REALLY stand the test of time. #nlweb

Latent.Space (@latentspacepod) 's Twitter Profile Photo

Noam Brown from OpenAI just dropped a truth bomb: "Your fancy AI scaffolds will be washed away by scale." Routers, harnesses, complex agentic systems... all getting replaced by models that just work better out of the box. The reasoning models already proved this

K Srinivas Rao (@sriniously) 's Twitter Profile Photo

I study the history of software because most people think code innovation happens in a vacuum. They see React and think Facebook just invented components. They miss the decades of work on MVC patterns, the failed attempts at web components, the slow evolution from server-side

Jaya Gupta (@jayagup10) 's Twitter Profile Photo


Foundation model providers like OpenAI and Anthropic have abandoned the pure infrastructure play. They're vertically integrating at unprecedented speed: OpenAI's Agent Mode, Claude Code, Deep Research.

Your startup's success becomes their product roadmap. Build something
DEJAN (@dejanseo) 's Twitter Profile Photo

My claim that the future of SEO is secured, with AI relying on search engines rather than internal memory, has been both praised and challenged by the community. For those questioning my predictions, I've updated the article, which is now massive in size and contains key citations

Dale Harper (@daleharper) 's Twitter Profile Photo

The current AI forms of interaction - are they genuinely new, or the last form of the old? What if we’re witnessing the last of the old way of doing things, from an exhausted postmodernist era? open.substack.com/pub/a16z/p/pre…? Alex Danco

Aaron Levie (@levie) 's Twitter Profile Photo

This is why the business model of AI agents is just extremely different from traditional software. In software, you’re capped in the $10-50/mo range per user of your tool. With AI agents, your cap is only what the productivity level is that you’re increasing for the user. If