I use Large Language Models (Claude, GPT, etc.) routinely while writing for and building this site. This page discloses how, so you know what you’re reading.
Default: drafts from my notes; I review
For blog posts and knowledge notes — and for the code that runs the site itself — I use AI to:
- Write prose from my notes, outlines, and rough drafts.
- Proofread and correct text I’ve already written.
- Write the code that runs the site.
- Review code and explain unfamiliar libraries.
Every paragraph and every commit is read, edited, and signed off by me before it ships. AI suggestions I disagree with get cut. Treat the text and code on those surfaces as mine — AI does a lot of the typing, I do the judgment.
Exception: spatial-thoughts annotations
On the spatial-thoughts pages, the annotation paragraph below each card — the smaller text under the bold note — is AI-generated. The same goes for the connections the LLM draws between cards and any thesis card it proposes.
I do skim these for harmful content before publishing — that’s the only filter. I do not fact-check the annotations, verify citations, or confirm attributed quotes. LLMs are known to fabricate citations: a card may attribute a claim to Christensen 1997 or HBR 2019 that doesn’t exist or that says something different from what the annotation suggests. Treat every reference as a starting point for your own search, never as a verified source.
This is by design: the point of that surface is to see what an LLM actually says when I drop a fresh note on it. If I rewrote each annotation to match what I already believe, the artifact would lose its purpose.
Concretely, on every spatial-thoughts card:
- The bold text at the top is mine.
- The annotation below it is AI — reviewed only for harm, not for accuracy.
- The connections between cards and any thesis card are AI-suggested.
Upstream tool: NodePad.
Not on this site
No AI-generated images. No AI moderation, ranking, or personalisation. No telemetry — see privacy.md for that side.