56 Comments
Karina Korpela

We need these on t-shirts: “Expertise isn’t knowledge.” And read this part in Liam Neeson’s voice: “It’s a set of capabilities that takes years to develop.” Great job outlining how to transfer one’s knowledge onto an LLM. Next move: create a digital-worker version of yourself, put it to work on your behalf so you can enjoy life, which AI can’t do for you! And acquire more experience while doing so! Agentpreneurship.

Ruben Hassid

Transferring knowledge is the key, really.

Axelle Malek

This is the part most people skip. They want the template and the shortcut.

But the shortcut doesn't work if you can't spot when AI is wrong.

That's the whole point.

Ruben Hassid

AI writes v1. You catch the mistakes. That's the formula.

But you can only catch what you've learned to see.

Templates and shortcuts aren’t the value.

Luc Puis

Love this post and agree with you on most of the statements. Thank you.

Are you afraid that, hypothetically, if everyone uses the same strategy, the outcome will collapse back into the 70%? Eventually, 99.99% of what AI trains on will come from AI-assisted content, diluting human content to a very low concentration.

In other words, never stop thinking and producing human-made content...

Ruben Hassid

good question - the answer is no.

think about it: a thousand people can follow the exact same 5-step process I shared. but the outputs will be wildly different because what they upload is different. their standards. their constraints. their audience knowledge.

never outsource the parts that make your work yours.

Fred Brown

Outstanding article. The overall flow is great and the "non-obvious" prompt iteration feedback points were useful as well.

Ruben Hassid

I always aim at the non-obvious

Andy

I couldn't agree with this more. I love AI tools because they push me further in what I already know and to be better at it. Most of the criticism I see of AI is people not understanding how LLMs work and what their strengths and weaknesses are, so they get disappointing results. I think you have to be an expert in (or at least have a good understanding of) the subject you're getting assistance on. It's like a smart intern - it has skills but no experience. This is an article that everyone should read and understand, and it would change the discussion if people really take it on board. Great work!

Ruben Hassid

It's when you can mix YOUR skill & taste with the effortless work of an AI that you do magic

Alexander Wipf

Great take. Speaking for a lot of strategy people there!

Ruben Hassid

are you one of them? :) i’d love to know the position of people reading my newsletter

Ruben Hassid

what does that mean?

Alexander Wipf

I quote: When you ask a great strategist for advice, they don’t give you a clean answer. They say, “It depends.” See what I did there? Happy to discuss for sure!

Ruben Hassid

You definitely know aha

Esha Bhatia

Hi Ruben — I can’t access Slack and likely need a new invite. Thanks!

Ruben Hassid

Sure - sent you a DM here.

Vattan Bali

This framing of AI as a mirror, not a crutch, is probably the most useful mental model I've seen in a while. The 70th percentile trap is real... Most people outsource their gaps to AI and end up with consensus output that sounds confident but lacks edge.

The context file approach is brilliant. It forces you to codify taste, scars, and audience intuition that can't be scraped from the internet. That's where the asymmetry lives.

What I appreciate most is the shift from "AI will fix my weakness" to "AI will amplify my strength." The former gets you mediocrity at scale. The latter gets you leverage on what already works.

The steering wheel metaphor nails it... You're not delegating judgment, you're delegating drafts. Big difference.

Ruben Hassid

Most people treat AI like a shortcut. Give it a task. Get an output. Ship it.

That's the 70th percentile trap. You're getting consensus. Pattern recognition. The average of everything the internet already said.

And average doesn't cut it.

Kalle Kataila

What are the credits for the embedded video?

Ruben Hassid

By 3Blue1Brown on YouTube.

Yonathan Levy

Brilliant article, Ruben

Ruben Hassid

thanks for reading it man :)

Katerina Schmitt

I haven't seen AI explained this way before. It makes sense. Thank you Ruben 😊

Ruben Hassid

thanks for reading it :)

Anisha Jain

The "scars" point is the whole thing.

AI has read a million case studies about failed product launches. But it's never sat in the room when the client went silent. Never felt the panic of realizing you missed something obvious. Never had to send the "we need to talk" email.

Expertise is pattern recognition built from pain.

When you upload your “DON’Ts”, you're giving it the shortcuts you paid for in mistakes. The "never again" rules that took years to learn.

AI without your scars is a consultant who's read every book but shipped nothing. AI with your scars is a junior who finally gets how you think.

Ruben Hassid

your scars are filters. when you read AI output, your gut scans for the patterns that burned you before.

you feel wrongness before you can name it. that's thousands of reps converted into instinct.

this is why the "AI replaces experts" take is backwards.

AI makes your scars more valuable, never less.

Zane

Love the message: stop using AI for things you are bad at.

Simply because you cannot judge the quality and know what “good” actually is.

Taste and expertise are about to become the real differentiator in this AI world.

Ruben Hassid

use AI where you're already great. that's where you can actually steer.

AI gives you the 70th percentile. your taste pushes it to the 95th.

Anouar Haouam

Thank you again for an awesome tutorial Ruben!

Quick question: before this tutorial, you explained how to apply our ‘taste’. Is this file compatible with the ‘taste file’, or will it become too messy if I combine the two?

Thank you in advance!

Ruben Hassid

keep them separate because you'll update them separately.

Lloyd Silver

I think this also extends to tools like NotebookLM, where you can identify who you think the experts are in an area where you lack expertise and curate their insights. Then you can use that in NotebookLM or bring it into Claude, ChatGPT, etc. projects. It's not perfect, but it certainly levels up the quality of responses.

Ruben Hassid

You're describing a workaround. And it works.

The trap is you still can't tell when it's wrong.

Here's when it actually works: you're building expertise.

Using their frameworks as training wheels while you develop your own.

NotebookLM as a bridge to get you there.

Langley

Ruben, this is a great post, but it takes readers through a solution that's only 70% there. Once you have the document that captures your digital writing fingerprint, you create a custom GPT built on that document. You only have to upload it once, and you get a living system to kickstart any project. You now have a sparring partner. Edits you make to your writing style can be added to the GPT, so you don't have to do extra work every time. I'm happy to teach you or others how to do this. It's a game changer.

Ruben Hassid

Custom GPTs work. But here's the problem: A Custom GPT becomes a crutch.

You upload once. You forget. Six months later, your taste evolved but your GPT didn't. You're getting outputs based on who you were.

70% there is intentional.

I want friction. Every time you upload that file, you remember what's in it. You stay connected to your standards.

Langley

I like your approach