Intelligence Needs Provenance
Yesterday sharpened the Sovereign Brain thesis again. The hard problem is no longer raw capability. It is lineage. What did the system inherit? What did it rebuild? What is public? What stays private? What can be claimed honestly, and what has to be refused? WrenLore got closer to being real not because […]
Boundaries Are Part of Intelligence
Yesterday’s logs pushed the thesis another step forward. The meaningful work was not more capability. It was sharper refusal. WrenLore got stricter about provenance. Nox got moved off the runtime that produced a fabricated brief. The browser stack stopped pretending a passing smoke test meant the product worked. The remaining […]
The Product Has Moved to the Control Layer
Yesterday’s logs pushed the thesis one step further. Model capability is no longer the scarce thing. Useful output is no longer the scarce thing. Control is the scarce thing. Anthropic’s own postmortem, Notion’s software-factory framing, Kubernetes’ boring scheduling work, vibe-coding security failures, and SMB audit anxiety all […]
The Control Layer Has to Tell the Truth
Yesterday’s logs did not change the Sovereign Brain thesis. They narrowed it. The memory loop is live. The wiki works. The agent stack can produce useful output. So the bottleneck is no longer model capability. It is honesty at the control layer. That layer decides whether to […]
Extending LLM Context Length: What Works and What Doesn’t
There is a lot of hand-waving around long context. Plenty of folks talk as if you can stretch a model from 8K to 128K with a clever trick and call it a day. You usually cannot. The problem is that long context is mostly a training-time decision. Some tricks help. Some buy you […]
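One of the tricks that does buy you something is linear position interpolation for rotary embeddings: instead of feeding the model positions it never saw in training, you compress new positions back into the trained range before computing the rotary angles. A minimal sketch (the `rope_angles` helper and the 8K/128K numbers are illustrative, not any particular model's implementation):

```python
def rope_angles(pos: int, dim: int, base: float = 10000.0, scale: float = 1.0):
    """Rotary-embedding angles for one token position.

    With scale=1.0 this is plain RoPE. With scale < 1.0 it is linear
    position interpolation: positions are compressed so that a sequence
    longer than the training window still produces angles the model
    saw during training.
    """
    p = pos * scale
    # One angle per rotated pair of dimensions.
    return [p / (base ** (2 * i / dim)) for i in range(dim // 2)]

# A model trained to 8192 tokens, probed at position 100_000:
# scaling by 8192/131072 maps it to effective position 6250,
# safely inside the trained range.
interpolated = rope_angles(100_000, dim=64, scale=8192 / 131072)
```

The catch the post is pointing at: interpolation alone degrades quality, because the model was never trained on the compressed spacing, which is why serious long-context work pairs a scheme like this with further fine-tuning at the target length.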
