Stop Learning Prompts, Build Expertise Instead

Watch someone use AI for the first time and you'll see one of two reactions. They will either be stunned by what it can do and immediately start outsourcing their thinking to it, or they will get a bad result, call it hype, and write it off entirely.
I've watched smart people make these mistakes in opposite directions. Some expect AI to do their jobs for them. Others refuse to touch it at all. Both are wrong.
The problem isn't the tool. It's the mental model: how people think about AI*.
AI Has Memorized the Lecture
To understand why both camps are wrong, we first need to understand what AI actually has: chauffeur knowledge.
There's a story about Max Planck (probably apocryphal, but useful). After winning the Nobel Prize, he toured Germany giving the same lecture repeatedly. His chauffeur heard it so many times, he memorized it word-for-word. One day the chauffeur said, "Let me give the lecture. You wear my cap and sit in the audience." The chauffeur nailed the lecture. Then someone asked a question. "That's such a simple question," the chauffeur replied, "I'll let my chauffeur answer it."
This is the difference between real knowledge and chauffeur knowledge:
- Real knowledge has depth and nuance. It's flexible, adaptable. It applies to new situations. It understands the "why."
- Chauffeur knowledge is surface-level. Pattern matching. It fails under pressure. Can't explain reasoning.
AI has chauffeur knowledge. It's sophisticated pattern-matching at scale, an advanced autocomplete trained on massive data. This means it sounds smart but lacks depth. It can recite the lecture but can't answer the unexpected question.
The Two Trends (and the Gap)
Understanding chauffeur knowledge reveals what AI actually does well:
- Trend 1: Makes experts faster (and lets them go further): If you're already at 70/100 on something, AI takes you to 100 quickly and sometimes beyond what you could reach alone. Think of a senior developer with a copilot or an experienced writer with an editor.
- Trend 2: Makes beginners competent: If you're at 0/100, AI gets you to 50 fast. Like a non-coder building a simple website or app (you've seen the demos on Twitter/X) or an amateur creating decent first drafts.
- The Gap: Zero to hero: What AI doesn't do is take someone from 0 to 100. You still need foundational knowledge. You still need to understand the domain. You still need expertise.
The all-in crowd expects AI to bridge the gap. It doesn't. They plateau at mediocre, and often don't realize their output is mediocre because they lack the expertise to judge quality. The skeptics dismiss AI because it's unreliable on its own. They miss that it does something valuable if you bring expertise to amplify it.
Your expertise still matters and you need to keep building it. You can't skip the hard part because AI is a multiplier, not a replacement. Without expertise, there's nothing to amplify. I've watched this play out many times with people early in their careers. They think AI lets them skip the hard work: the reading, the experimenting, the grunt work. It doesn't. And it's sad to watch because they don't realize they're trapping themselves in a mediocrity loop. No foundation to build on, no expertise to amplify, just endless 50% outputs.
Five Mental Models for Thinking About AI
If AI amplifies what you already have, you need better frameworks for thinking about it. Here are five mental models that changed how I think about AI.
1. High-IQ Intern
Imagine a brilliant intern just walked in. Smart, eager, capable. But no context. No experience. Doesn't know your domain's quirks or your project's specifics.
That's AI. It needs guidance, clear instructions, and, most importantly, context you take for granted. It'll do what you ask, but only if you're specific. Leave it unsupervised and it wanders into the weeds.
2. Prep Cook, Not Chef
AI can chop onions. Prep ingredients. Start the sauce. It saves time on grunt work.
But it can't design the menu. Can't judge the seasoning. Can't cook a great meal. You're still the chef.
3. Sophisticated Search
Stop thinking you're chatting with AI. Behind the friendly chat interface, it's still token prediction. Next word, then next, then next.
You're not conversing. You're searching through a massive database of patterns. Wrong keywords mean wrong results; you wouldn't blame Google for bad results when you searched with the wrong keywords.
Think of it like navigating to the right region in knowledge space. Your job is to prompt your way there. Give it the right signals so it autocompletes from the right place.
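The "advanced autocomplete" framing is literal. Here's a minimal sketch of the idea, with a toy bigram lookup table standing in for the model (real LLMs predict tokens with a neural network over vastly more data, but the loop is the same: pick a likely next piece, append it, repeat):

```python
# A toy "language model": for each word, the words most often seen
# after it in some tiny corpus, with counts.
bigrams = {
    "the": {"cat": 3, "dog": 2},
    "cat": {"sat": 4, "ran": 1},
    "sat": {"on": 5},
    "on": {"the": 4},
    "dog": {"ran": 3},
    "ran": {"away": 2},
}

def complete(prompt_word, steps=4):
    """Greedily pick the most likely next word, again and again."""
    words = [prompt_word]
    for _ in range(steps):
        options = bigrams.get(words[-1])
        if not options:
            break
        words.append(max(options, key=options.get))  # most frequent next word
    return " ".join(words)

print(complete("the"))  # the cat sat on the
```

Notice what the prompt does here: it selects the starting point, and everything after follows from where you landed. That's the sense in which prompting is navigation, not conversation.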
4. Roguelikes
Each chat session is a fresh run. Sometimes you go down blind alleys and need to restart. Like roguelike games: try, fail, retry. But you keep what you learned between runs.
When you hit a dead end, restart. Take the good parts, edit your previous prompts, start fresh. And save and reload strategically.
5. Context is King
Prompts matter. But context matters more. It's not about the last message you send. It's about managing the entire conversation space.
Your inputs are grounded (based on real data, domain knowledge, actual requirements). AI outputs are ungrounded (pattern-based guesses that need validation). Your job is to keep the context grounded. The quality of AI's output scales with the quality of your input context.
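One practical way to keep context grounded, sketched below with hypothetical facts and a hypothetical `build_prompt` helper: rather than trusting the model to remember earlier turns, restate your verified material explicitly alongside each question.

```python
def build_prompt(question, grounded_facts):
    """Anchor the model by restating verified facts with the question."""
    context = "\n".join(f"- {fact}" for fact in grounded_facts)
    return (
        "Known facts (verified; do not contradict):\n"
        f"{context}\n\n"
        f"Question: {question}"
    )

facts = [
    "Our API returns paginated JSON with a `next_cursor` field.",
    "The rate limit is 100 requests per minute.",
]
print(build_prompt("How should the client handle pagination?", facts))
```

The specifics are illustrative; the point is structural. The facts list is grounded input you control, and everything the model generates from it is an ungrounded guess you still have to check.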
The Wrong Optimization
Both camps make the same error: they don't understand what AI amplifies. The skeptics are right that it's unreliable. The believers are right that it's powerful. But neither sees that the power only works if you bring something to the table.
This means most people are optimizing for the wrong thing. They're learning to prompt better when they should be building expertise. They're stuck at 50% wondering why AI isn't taking them further. The tool works. They just have nothing to amplify.
The skill isn't prompting. It's judgment.
* I don't love calling LLMs "AI", as they're a small part of a much broader field. But that's how most people talk about them now, so for clarity, "AI" means large language models like GPT, Claude, and Gemini in this post.
Much of my thinking on this topic has been shaped by Jon Stokes' excellent writing on LLMs. His mental models of AI interaction as search, chat sessions as roguelike runs, grounded vs. ungrounded context, and the sigmoid curve of context quality have deeply influenced how I approach AI. If you found this post useful, his writing is essential reading.
The chauffeur knowledge concept comes from Sahil Bloom's post, which crystallized something I'd been observing but couldn't name.
Oliver Kel's comprehensive guide on prompting LLMs also informed much of my thinking on prompting.