Problem
AI coding assistants consume tokens, make tool calls, and modify files — but the only visibility you get is the chat output. There's no dashboard showing context window usage, cost, active tools, or which files are being changed.
Key Insight
The AI assistant writes JSONL session files locally. A lightweight Python script can parse these files every 2 seconds and render a live terminal dashboard — no API calls, no dependencies, ~1.9% CPU:
my-project PROJ-142 14:32:01 ● active
ctx 42,310/200,000 [████░░░░░░░░░░░░░░░░] 21% $0.0182 cache 98%
in 42,310 cr 509,487 cw 57,638 out 8,204
Bash×26 Read×13 Edit×12
⟳ Bash make test-unit-local
recent
14:32:01 Edit services/foo.py
14:31:58 Read services/foo.py
14:31:55 Bash make lint
changes 2
modified services/foo.py
added tests/unit/test_foo.py
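
A minimal sketch of the aggregation step, assuming each JSONL line is an event whose `message.usage` object carries `input_tokens`, `output_tokens`, and cache-token fields (the field names and the 200k context limit are assumptions for illustration, not a confirmed schema):

```python
import json

CONTEXT_LIMIT = 200_000  # assumed context window size

def summarize(jsonl_text):
    """Aggregate token usage from JSONL session lines (schema assumed)."""
    totals = {"in": 0, "out": 0, "cache_read": 0, "cache_write": 0}
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # a line may be partially written while we poll
        usage = event.get("message", {}).get("usage", {})
        totals["in"] += usage.get("input_tokens", 0)
        totals["out"] += usage.get("output_tokens", 0)
        totals["cache_read"] += usage.get("cache_read_input_tokens", 0)
        totals["cache_write"] += usage.get("cache_creation_input_tokens", 0)
    return totals

def context_bar(used, limit=CONTEXT_LIMIT, width=20):
    """Render the [████░░░░] fill bar shown in the dashboard."""
    filled = min(width, round(width * used / limit))
    pct = round(100 * used / limit)
    return f"[{'█' * filled}{'░' * (width - filled)}] {pct}%"
```

Re-reading the whole file on every 2-second tick keeps the script stateless and crash-proof; at typical session sizes the cost is negligible, which is consistent with the low CPU figure above.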
Key signals to surface:
- Context bar — yellow/red when filling up, prompts you to compact
- Error count — a climbing count means the AI is stuck retrying
- In-flight tools — what's running right now
- Git changes — files being modified in real time
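The yellow/red context warning can be a simple threshold check on the fill ratio; a sketch using ANSI colors, where the 70%/90% cutoffs are illustrative assumptions rather than anything prescribed:

```python
# ANSI SGR color codes for terminal output
GREEN, YELLOW, RED, RESET = "\033[32m", "\033[33m", "\033[31m", "\033[0m"

def context_color(used, limit, warn=0.70, crit=0.90):
    """Pick a bar color from the context fill ratio.

    warn/crit thresholds are illustrative; tune to taste.
    """
    ratio = used / limit
    if ratio >= crit:
        return RED    # compact now
    if ratio >= warn:
        return YELLOW # compact soon
    return GREEN
```

Usage: wrap the bar string as `context_color(used, limit) + bar + RESET` before printing.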
Takeaway
Observability for AI coding sessions follows the same principles as production monitoring: surface the signals that help you intervene early. Context window filling up? Compact. Error count climbing? Redirect. Wrong files being touched? Interrupt immediately.
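The "changes" panel above can be fed by `git status --porcelain`, whose two-character status prefix is stable and easy to parse. A sketch under that assumption (the label mapping is mine; porcelain marks untracked files `??`, which the dashboard groups under "added"):

```python
import subprocess

# Map porcelain status codes to the labels the dashboard displays
STATUS_LABELS = {"M": "modified", "A": "added", "D": "deleted",
                 "R": "renamed", "?": "added"}  # "??" = untracked, shown as added

def parse_porcelain(text):
    """Turn `git status --porcelain` output into (label, path) pairs."""
    changes = []
    for line in text.splitlines():
        if len(line) < 4:
            continue  # malformed or empty line
        # Column 0 is the staged status, column 1 the worktree status;
        # prefer whichever is set. Path starts at column 3.
        code = line[0] if line[0] != " " else line[1]
        changes.append((STATUS_LABELS.get(code, code), line[3:]))
    return changes

def git_changes(repo="."):
    """Poll the repo for the live 'changes' panel."""
    out = subprocess.run(["git", "status", "--porcelain"], cwd=repo,
                         capture_output=True, text=True).stdout
    return parse_porcelain(out)
```

Because the dashboard only reads porcelain output, it never touches the working tree, so it is safe to run alongside the assistant.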