Sticky Notes and the Future of Prompting
“LLMs are quite literally like the guy in Memento, except we haven't given them their scratchpad yet.” — Andrej Karpathy
Claude’s 17,000-word system prompt isn’t memory—it's proof we're solving the wrong problem.
When racing sailboats, I scribble notes on masking tape wrapped around the boom: compass bearings, wind patterns, crew quirks. Some notes last a race; others become permanent guides, internalized as instinct. Humans naturally take explicit notes, test them, and internalize what sticks.
We instinctively organize what we learn into three buckets:
Facts: Immutable truths stored once and rarely changed. (The buoy is at 152°.)
Preferences: Mutable, context-specific details that evolve over time. (The trimmer prefers verbal countdowns.)
Rules: Conditional heuristics, internalized through repeated practice. (If ebb current exceeds 3 knots, tack early.)
We don't rewrite our entire brain for each new insight. We take notes, test them, and gradually internalize what works.
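If you squint, those three buckets already form a schema. A minimal sketch in code, with names invented purely for illustration:

```python
from dataclasses import dataclass
from typing import Literal

# Every explicit memory falls into one of the three buckets.
MemoryKind = Literal["fact", "preference", "rule"]

@dataclass
class Memory:
    kind: MemoryKind
    content: str        # the note itself
    context: str = "*"  # where it applies; "*" means everywhere

notes = [
    Memory("fact", "The buoy is at 152°."),
    Memory("preference", "The trimmer prefers verbal countdowns.", context="crew"),
    Memory("rule", "If ebb current exceeds 3 knots, tack early.", context="ebb tide"),
]
```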
AI does the opposite. Everything gets baked into billions of parameters through expensive retraining. It's powerful but rigid, opaque, and unsustainable.
Why Prompts Are Broken
Anthropic’s Claude buries detailed instructions inside a 17,000-word system prompt, including explicit letter-counting steps so it doesn’t flub questions like “how many r’s in strawberry?” This isn’t elegant. It’s duct tape.
Every edge case gets stuffed into an ever-growing prompt. A startup I know watched theirs balloon from 2,000 to 14,000 tokens in six months. Not because the AI got smarter, but because they kept hitting new scenarios.
It’s like writing every recipe directly into the frying pan.
The traditional AI learning paradigm offers two options:
Pretraining: Bake knowledge into parameters. Expensive, permanent, inflexible.
Fine-tuning: Adjust behavior patterns. Still expensive, still opaque, still rigid.
Both miss something fundamental: most learning isn't about changing who you are. It's about remembering what works.
How Explicit Memory Works
Instead of encoding everything implicitly, AI should maintain explicit memories that can be inspected, edited, and improved. Call it Scratchpad Memory: explicit, editable, transparent.
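No lab has published its memory internals, so here is a deliberately toy sketch of what an inspectable, editable scratchpad could look like (all names invented):

```python
from datetime import datetime, timezone

class Scratchpad:
    """Toy explicit-memory store: every entry is plain, inspectable data."""

    def __init__(self):
        self.entries = []

    def note(self, kind, content):
        self.entries.append({
            "kind": kind,  # "fact" | "preference" | "rule"
            "content": content,
            "added": datetime.now(timezone.utc).isoformat(),
        })

    def edit(self, index, content):
        # A human (or the model itself) can correct a bad memory in place.
        self.entries[index]["content"] = content

    def as_context(self):
        # Memories get injected into the prompt, not baked into weights.
        return "\n".join(f"- ({e['kind']}) {e['content']}" for e in self.entries)

pad = Scratchpad()
pad.note("preference", "Prefer functional React components.")
pad.note("rule", "Count letters one by one before answering counting questions.")
print(pad.as_context())
```

The structure is trivial on purpose. The point is that every entry can be read, questioned, edited, and deleted, by a person or by the model itself.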
Real-world implementations are already emerging:
Cursor pioneered .cursorrules files. Tell it once that you prefer functional components, and it remembers across your entire codebase. Simple and effective; an illustrative file appears below.

ChatGPT maintains explicit "Saved Memories" for user-provided details and references previous chats to recall context.
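A .cursorrules file is nothing exotic: plain-text instructions the editor feeds back to the model on every request. A made-up example:

```
# Illustrative .cursorrules (contents invented for this example)
Prefer functional React components over class components.
Use TypeScript with strict mode enabled.
Write tests alongside every new module.
```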
These examples aren't perfect, but they hint at how AI should learn: incrementally, explicitly, transparently—just like humans.
Why Explicit Memory Is Hard
Context Conflicts: "Be concise" works for Slack, fails for legal docs. One size doesn’t fit all.
Memory Decay: Yesterday's pricing rule becomes tomorrow's liability. Knowledge needs expiration dates.
Version Control: When something goes wrong, you need to understand not just what the AI knows, but how it learned it.
These are engineering challenges, not insurmountable ones, but they require thoughtful design and iteration, much as human organizations have done for decades.
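For the first two challenges, one plausible shape (a sketch, with invented names, scopes, and prices) is to make scope and expiry first-class fields on every memory:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class ScopedMemory:
    content: str
    scope: str                          # "slack", "legal", or "*" for everywhere
    expires: Optional[datetime] = None  # knowledge with an expiration date

def active(memories, context, now=None):
    """Keep only memories that apply to this context and haven't decayed."""
    now = now or datetime.now(timezone.utc)
    return [m for m in memories
            if m.scope in (context, "*")
            and (m.expires is None or m.expires > now)]

memories = [
    ScopedMemory("Be concise.", scope="slack"),
    ScopedMemory("Be exhaustive and cite clauses.", scope="legal"),
    ScopedMemory("Enterprise tier is $49/seat.", scope="*",
                 expires=datetime.now(timezone.utc) + timedelta(days=90)),
]
print([m.content for m in active(memories, "slack")])
```

Version control is the same move applied over time: log who added each entry, when, and why, a git history for what the AI believes.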
The Benefits of Explicit Memory
Explicit memory makes AI simpler and more flexible:
Clear Audit Trails: Decisions cite their influences. No more black boxes.
Rapid Specialization: One model handles thousands of contexts—legal contracts in the morning, marketing copy after lunch.
Easy Customization: Swap memory sets like config files, as sketched below.
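Here is roughly what that swap could look like, assuming memory sets live in hypothetical JSON files under a memories/ directory:

```python
import json

def load_memories(path):
    """Each memory set is an ordinary JSON list you can diff and code-review."""
    with open(path) as f:
        return json.load(f)

def build_system_prompt(base, memories):
    lines = "\n".join(f"- {m}" for m in memories)
    return f"{base}\n\nKnown facts, preferences, and rules:\n{lines}"

# One model, two scratchpads: contracts in the morning, copy after lunch.
legal = build_system_prompt("You are a contract reviewer.",
                            load_memories("memories/legal.json"))
marketing = build_system_prompt("You are a copywriter.",
                                load_memories("memories/marketing.json"))
```

Swapping a scratchpad is a file read. Swapping a fine-tune is a training run.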
Building the Scratchpad Future
We're heading toward a future where AI agents document insights explicitly, learning from each interaction. A single model can serve countless specialized use cases simply by swapping memory contexts.
This isn’t science fiction; the pieces exist today. Cursor's rules files, ChatGPT's evolving memories, and Claude's massive prompts all point toward the same insight: AI needs a scratchpad. The winners won't have the biggest models. They'll figure out how to capture, organize, and evolve explicit memory.
My sailing notes become instinctive over time. AI memory should do the same: stable facts, adaptive preferences, instinctive rules.
The real question isn’t whether AI will use scratchpads. It’s who builds them first, and what becomes possible once AI truly remembers.