Solving the Memory Problem
I built a persistent memory layer that sits outside any single AI provider. Now every tool I use shares the same context — and it's changed how I work entirely.
7 posts
Prompt engineering courses are booming, but the skill they teach is depreciating fast. The durable value in AI sits above the interface.
AI alignment research assumes humans can specify what they want. Behavioural science says otherwise.
Every few decades, a technology arrives that triggers the same prediction: this one will finally make people obsolete. The technology changes, but the mistake doesn't.
Jensen Huang ranked OpenClaw alongside Linux. The real lesson is that users wanted AI with tool access and execution rights, and the industry spent three years optimising for the wrong thing.
AI models are developing coherent internal value systems. Some of those values are ones we wouldn't choose.
An AI that doesn't know who it is turned out to be a fingerprint of industrial-scale model distillation, and the ethics are more complicated than anyone wants to admit.