SD-195
[context-audit] Context pollution audit ordered. File depth as read-frequency convention. BFS default.

Captain’s independent finding, confirmed by systematic research: context pollution from too many context files degrades both human and LLM performance. The human layer slides from O(1) triage into O(n) scanning and beyond; the LLM equivalent is attention dilution, where breadth-loading every file leaves everything weakly attended and nothing sharply. “Doing research all over your breakfast table; when it comes to food time, life is difficult.”

Critical TODO: audit and triage all docs and context files. Signal = just enough context for operational oversight (index cards, refs to other files). Noise = long lists, exhaustive documents, anything shown on every cycle that doesn’t need to be.

Convention: file hierarchy depth signals read frequency. Shallow (depth 1) = read often by human or agent. Deep (depth 2+) = read only when researching something specific. Search strategy: BFS by default (scan breadth, don’t dive); DFS only when investigating a specific question.

Trade-offs are real and span multiple layers: context that helps one creature may hinder another, and learning what not to pay attention to carries its own risks. The audit must respect these trade-offs.

Cross-reference: SD-180 (Big O for cognitive load), SD-181 (demand digests, not documents), SD-095 (Main Thread protection), SD-138 (deckhand context minimisation).
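The depth convention and BFS default above can be sketched as a small scanner. This is a minimal illustration, not part of the decision itself: the function name, the depth parameter, and the file names in the usage note are all hypothetical.

```python
from collections import deque
from pathlib import Path

def scan_context(root, max_depth=1):
    """BFS over a context-file tree, shallowest files first.

    max_depth=1 is the routine pass: only depth-1 files (the
    read-often layer) are returned, and no directory is entered.
    Raising max_depth is the deliberate deep dive for a specific
    question -- the exception, never the default.
    """
    root = Path(root)
    queue = deque([(root, 0)])  # (directory, its depth)
    found = []
    while queue:
        directory, depth = queue.popleft()
        for entry in sorted(directory.iterdir()):
            if entry.is_file():
                found.append((depth + 1, entry))
            elif entry.is_dir() and depth + 1 < max_depth:
                # only descend when the caller asked for more depth
                queue.append((entry, depth + 1))
    return found
```

A routine cycle calls `scan_context(root)` and sees only the shallow index cards; `scan_context(root, max_depth=3)` is the explicit research dive into deeper material.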