Misunderstood and Misaligned
What LLMs need isn’t a better model. It’s knowledge alignment.
This ties into a classic idea called common knowledge, first fully analyzed by the American philosopher David Lewis in 1969. Common knowledge is more than just “we both know something.” It is:
I know X.
You know X.
I know that you know X.
You know that I know X.
And so on, infinitely.
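In epistemic logic, this infinite tower has a compact form. As a rough sketch (standard textbook notation, not Lewis’s own wording): write $K_A$ for “Alice knows”, $K_B$ for “Bob knows”, and $E\,X = K_A X \wedge K_B X$ for “everyone knows X”. Then

$$
C\,X \;=\; E\,X \;\wedge\; E\,E\,X \;\wedge\; E\,E\,E\,X \;\wedge\; \dots \;=\; \bigwedge_{n=1}^{\infty} E^{\,n} X,
$$

so common knowledge $C\,X$ requires every level of an infinite conjunction to hold at once.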
When common knowledge exists, people coordinate smoothly. Lewis illustrated the challenge of achieving common knowledge with a sailing example (sailing happens to be my favorite pastime):
Alice and Bob are looking at a toy boat. The mast is 300 cm tall. They both see it. The question is: do they have common knowledge that it’s taller than 100 cm?
Intuitively you’d say yes. But philosophers argue that because human perception is approximate, Alice can imagine that Bob sees it a bit shorter, Bob can imagine that Alice sees it shorter still, and this chain of doubt can continue all the way down. So even with something this obvious, true common knowledge is fragile.
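A rough sketch of why the chain collapses (an illustration, not Lewis’s own argument): suppose each person’s perception of the mast is only accurate to within some margin $\varepsilon$. Then, roughly, each extra level of “knows that the other knows” gives away another margin of certainty:

$$
\text{depth } 1:\; h \ge 300 - \varepsilon, \qquad \text{depth } 2:\; h \ge 300 - 2\varepsilon, \qquad \dots, \qquad \text{depth } n:\; h \ge 300 - n\varepsilon,
$$

with the exact constants depending on how you model perception. For any fixed $\varepsilon > 0$, once $n$ is large enough the nested claim “Alice knows that Bob knows that … the mast is taller than 100 cm” is no longer guaranteed, so the infinite tower that common knowledge demands never gets built.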
Now imagine trying to establish common knowledge not between two people, but between a person and an LLM.
This is essentially what we try to tackle at Isoform.
Machines can’t read minds
An LLM is like a well-rounded new hire. It knows plenty of general things but lacks the task-relevant knowledge specific to your company. (Former Intel CEO Andrew Grove discusses this distinction in his book High Output Management.)
LLMs don’t know much about your specific work yet, at least for now. Vibe coding makes it easy to spin up quick prototypes, but industrial-strength LLM apps rely on context engineering. Without proper specifications, or context, the result is often sloppy product design: software that works in the short term but is difficult to maintain or scale. When the model guesses wrong, you lose trust and stop using it.
Prompt engineering is supposed to bridge that knowledge gap. In a perfect world, you (the human) write a prompt declaring your intent. The LLM reads it, interprets it, and infers what you imply in that context. And you know, with confidence, that the LLM understands you.
The reality is far messier. Most people struggle to write clear instructions even for other humans, let alone for machines. Andrej Karpathy captures this challenge elegantly:
“[C]ontext engineering is the delicate art and science of filling the context window with just the right information for the next step. Science because doing this right involves task descriptions and explanations, few shot examples, RAG, related (possibly multimodal) data, tools, state and history, compacting... Too little or of the wrong form and the LLM doesn’t have the right context for optimal performance. Too much or too irrelevant and the LLM costs might go up and performance might come down. Doing this well is highly non-trivial. And art because of the guiding intuition around LLM psychology of people spirits.”
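To make the balancing act concrete, here is a minimal sketch of packing a context window under a token budget. Every name in it (ContextPiece, build_context, the four-characters-per-token heuristic) is an illustrative assumption, not Karpathy’s pipeline or any real product’s API:

```python
# Minimal, hypothetical sketch of context assembly under a token budget.
# None of this is a real library or product API; it only illustrates the tradeoff:
# include the most relevant pieces, stay under budget, drop the rest.
from dataclasses import dataclass


@dataclass
class ContextPiece:
    label: str        # e.g. "task", "few-shot example", "retrieved doc", "history"
    text: str
    relevance: float  # higher means more important to include


def rough_token_count(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly 4 characters per token.
    return max(1, len(text) // 4)


def build_context(pieces: list[ContextPiece], budget_tokens: int) -> str:
    """Pack the most relevant pieces into the window without blowing the budget."""
    chosen, used = [], 0
    for piece in sorted(pieces, key=lambda p: p.relevance, reverse=True):
        cost = rough_token_count(piece.text)
        if used + cost > budget_tokens:
            continue  # not enough room left: skip rather than truncate mid-thought
        chosen.append(piece)
        used += cost
    # Present pieces to the model in a sensible reading order, not selection order.
    reading_order = {"task": 0, "few-shot example": 1, "retrieved doc": 2, "history": 3}
    chosen.sort(key=lambda p: reading_order.get(p.label, 99))
    return "\n\n".join(f"[{p.label}]\n{p.text}" for p in chosen)


if __name__ == "__main__":
    pieces = [
        ContextPiece("task", "Refactor the billing module to support proration.", 1.0),
        ContextPiece("history", "Yesterday we agreed to keep the public API unchanged.", 0.9),
        ContextPiece("retrieved doc", "Billing schema: invoices(id, amount, period) ...", 0.8),
        ContextPiece("few-shot example", "Input: add a tax field -> Output: migration plus tests", 0.6),
    ]
    print(build_context(pieces, budget_tokens=60))
```

The budget is the whole point: too little or irrelevant context and the model guesses; too much and cost goes up while the signal gets diluted.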
Conversation before code
It’s really hard to create common knowledge between humans and machines. If you think about every single piece of common knowledge you would have to spell out for an LLM, it’s exhausting.
At Isoform, we solve the common knowledge problem by showing the machine what we are doing as we work, instead of explicitly telling it everything or handholding it at every single step.
How do we and machines get on the same page? We know, right off the bat, that we cannot make the LLM our mind reader. Isoform needs to make people enjoy talking to the platform so they share more knowledge for the machine to understand, the way people enjoy talking to Character.ai.
Many coding agents today write code first. Code isn’t our priority; it’s a byproduct of the outcome we deliver by understanding human intent. Based on that intent, we can propose outcomes that fit the specific scenario.
Isoform exists to solve the real weakness of LLMs: their lack of intent understanding. We want humans to stick around and talk to our coding platform. Isoform understands you so well that you can be confidently lazy while focusing on strategic decisions and taste-making.