Construct Drift

detected 2026-03-01

trigger

""Your humanness score is 101.7," but it wasn't a humanness score. It was a distance-from-Anthropic-blog-voice score."

what it is

The label on a measurement drifts from what it actually measures. A composite of contraction rate, first-person usage, and nominalisation density got labelled "humanness score." A drunk text message would score high on it. A human lawyer's brief would score low. The numbers were correct but the name was wrong, and the wrong name made the results feel like they meant something they didn't.
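A minimal sketch of what such a mislabelled composite might look like. The features, weights, and regexes here are illustrative assumptions, not the original metric; the point is that the function name describes the computation, not a grander construct:

```python
import re

# Hypothetical reconstruction of the composite described above.
# Named for what it measures, not for "humanness".
def blog_voice_distance(text: str) -> float:
    """Composite of contraction rate, first-person usage, and a crude
    nominalisation-density proxy. Higher = closer to a casual,
    first-person register. It is NOT a humanness score: a drunk text
    scores high, a lawyer's brief scores low."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    if not words:
        return 0.0
    n = len(words)
    contractions = sum(1 for w in words if "'" in w) / n
    first_person = sum(1 for w in words if w in {"i", "me", "my", "we", "our"}) / n
    # Illustrative nominalisation proxy: -tion/-ment/-ness/-ity endings.
    nominalisations = sum(
        1 for w in words if re.search(r"(tion|ment|ness|ity)$", w)
    ) / n
    # Nominalisations pull the score away from the casual register.
    return contractions + first_person - nominalisations
```

Under this sketch, "i can't believe we're doing this" outscores "the implementation of the assessment necessitates consideration of liability", which is exactly the drunk-text-beats-lawyer behaviour the honest name predicts and the dishonest name obscures.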

what it signals

The label is doing interpretive work the measurement can't support. People trust the number because of its name, not because of what was actually computed, so conclusions inherit the name's authority rather than the metric's.

instead

Name the construct honestly. "Voice-distance metric from AI company blog register." The honest name is less satisfying but it's correct.

refs

  • AnotherPair calibration v3 session 2026-03-01
  • Operator: "How do I control for slop inside the analysis?"

← all patterns