Eras Expert domain context:
- Full Heizkostenabrechnung (heating-cost billing) business model (Kunde > Objekte > Nutzeinheiten > Geraete)
- Known PK/FK mappings: kunden.Kundennummer, objekte.KundenID, etc.
- Correct JOIN example in SCHEMA prompt
- PA knows domain hierarchy for better job formulation
Iterative plan-execute in ExpertNode:
- DESCRIBE queries execute first, results injected into re-plan
- Re-plan uses actual column names from DESCRIBE
- Eliminates "Unknown column" errors on first query
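The DESCRIBE-first loop above can be sketched roughly as follows. This is an illustrative sketch, not the actual ExpertNode API: `llm` and `run_sql` are assumed callables, and the exact prompt wiring is invented for the example.

```python
def plan_execute(llm, run_sql, question):
    """Two-pass plan+execute (sketch): DESCRIBE queries run first,
    and their results feed a second planning call so the final SQL
    uses real column names instead of guessed ones."""
    first = llm(question, schema=None)  # pass 1: may emit DESCRIBE statements
    describes = [q for q in first if q.upper().startswith("DESCRIBE")]
    schema = {q: run_sql(q) for q in describes}  # execute DESCRIBEs eagerly
    final = llm(question, schema=schema)         # pass 2: re-plan with real schema
    return [run_sql(q) for q in final if not q.upper().startswith("DESCRIBE")]
```

The point of the two passes is that the second LLM call never has to invent column names, which is what eliminates first-query "Unknown column" errors.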
Frontend:
- Node inspector: per-node cards with model, tokens, progress, last event
- Graph switcher buttons in top bar
- Clear button in top bar
- Nodes panel 300px wide
- WS reconnects on close code 1006 (e.g. after a deploy) without showing the login screen
- Model info emitted on context HUD events
Domain context test: 21/21 (hierarchy, JOINs, FK, PA job quality)
Default graph: v4-eras
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Frame Engine (v3-framed):
- Tick-based deterministic pipeline: frames advance on completion, not timers
- FrameRecord/FrameTrace dataclasses for structured per-message tracing
- /api/frames endpoint: queryable frame trace history (last 20 messages)
- frame_trace HUD event with full pipeline visibility
- Reflex=2F, Director=4F, Director+Interpreter=5F deterministic frame counts
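A minimal sketch of the tracing dataclasses, assuming field names that are illustrative rather than the real FrameRecord/FrameTrace definitions:

```python
from dataclasses import dataclass, field

@dataclass
class FrameRecord:
    """One pipeline stage for one message (field names assumed)."""
    frame: int   # frame index within the message; advances on completion, not timers
    node: str    # e.g. "reflex", "director", "interpreter"
    event: str   # what completed in this frame
    tokens: int = 0

@dataclass
class FrameTrace:
    """Per-message trace; total frame count is deterministic per route."""
    message_id: str
    frames: list = field(default_factory=list)

    def advance(self, node, event, tokens=0):
        """Append the next frame and return the new frame count."""
        self.frames.append(FrameRecord(len(self.frames) + 1, node, event, tokens))
        return len(self.frames)
```

Because frames advance only on completion, a given route always produces the same count (e.g. 2 for a reflex reply), which is what makes the counts assertable in tests.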
Expert Architecture (v4-eras):
- PA node (pa_v1): routes to domain experts, holds user context
- ExpertNode base: stateless executor with plan+execute two-LLM-call pattern
- ErasExpertNode: eras2_production DB specialist with DESCRIBE-first discipline
- Schema caching: DESCRIBE results reused across queries within session
- Progress streaming: PA streams thinking message, expert streams per-tool progress
- PARouting type for structured routing decisions
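The session-scoped schema caching can be sketched like this; `SchemaCache` and its method names are hypothetical, standing in for however the expert actually memoizes DESCRIBE results:

```python
class SchemaCache:
    """Reuse DESCRIBE results across queries within a session (sketch).
    Avoids re-describing the same table on every expert invocation."""

    def __init__(self, run_sql):
        self._run_sql = run_sql
        self._cache = {}
        self.misses = 0  # how many DESCRIBEs actually hit the DB

    def describe(self, table):
        if table not in self._cache:
            self.misses += 1
            self._cache[table] = self._run_sql(f"DESCRIBE {table}")
        return self._cache[table]
```

A stateless ExpertNode plus a per-session cache keeps the executor reusable while still paying the DESCRIBE cost only once per table.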
UI Controls Split:
- Separate thinker_controls from machine controls (current_controls is now a property)
- Machine buttons persist across Thinker responses
- Machine state parser handles both dict and list formats from Director
- Normalized button format with go/payload field mapping
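The dict/list normalization might look like the following sketch; the field names (`label`, `text`, `go`, `payload`) mirror the mapping described above but the exact frontend schema is assumed:

```python
def normalize_controls(raw):
    """Accept Director machine state as either a dict ({label: payload})
    or a list of button dicts, and normalize to [{"label": ..., "go": ...}]."""
    if isinstance(raw, dict):
        items = [{"label": k, "payload": v} for k, v in raw.items()]
    else:
        items = list(raw)
    out = []
    for b in items:
        out.append({
            "label": b.get("label") or b.get("text", "?"),
            # Director sometimes emits "go", sometimes "payload"
            "go": b.get("go") or b.get("payload"),
        })
    return out
```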
WebSocket Architecture:
- /ws/test: dedicated debug socket for test runner progress
- /ws/trace: dedicated debug socket for HUD/frame trace events
- /ws (chat): cleaned up, only deltas/controls/done/cleared
- WS survives graph switch (re-attaches to new runtime)
- Pipeline result reset on clear
Test Infrastructure:
- Live test streaming: on_result callback fires per check during execution
- Frontend polling fallback (500ms) for proxy-buffered WS
- frame_trace-first trace assertion (fixes stale perceived event bug)
- action_match supports "or" patterns and multi-pattern matching
- Trace window increased to 40 events
- Graph-agnostic assertions (has X or Y)
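The "or"-pattern assertion can be sketched as a tiny matcher; the semantics here (any alternative substring matching any trace event) are assumed from the description, not taken from the real test runner:

```python
def action_match(pattern, events):
    """Graph-agnostic trace assertion (sketch): 'x or y' passes if any
    alternative appears as a substring of any event in the trace window."""
    alternatives = [p.strip() for p in pattern.split(" or ")]
    return any(alt in ev for alt in alternatives for ev in events)
```

This is what lets one test file assert "has X or Y" and pass on both v3 and v4 graphs.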
Test Suites:
- smoketest.md: 12 steps covering all categories (~2min)
- fast.md: 10 quick checks (~1min)
- fast_v4.md: 10 v4-eras specific checks
- expert_eras.md: eras domain tests (routing, DB, schema, errors)
- expert_progress.md: progress streaming tests
Other:
- Shared db.py extracted from thinker_v2 (reused by experts)
- InputNode prompt: few-shot examples, history as context summary
- Director prompt: full tool signatures for add_state/reset_machine/destroy_machine
- nginx no-cache headers for static files during development
- Cache-busted static file references
Scores: v3 smoketest 39/40, v4-eras fast 28/28, expert_eras 23/23
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
6-node pipeline: Input -> Thinker -> Output (voice) + UI (screen) in parallel
- Output: text only (markdown, emoji). Never emits HTML or controls.
- UI: dedicated node for labels, buttons, tables. Tracks workspace state.
Replaces entire workspace on each update. Runs in parallel with Output.
- Input: strict one-sentence perception. No more hallucinating responses.
- Thinker: controls removed from prompt, focuses on reasoning + tools.
- Frontend: markdown rendered in chat (bold, italic, code blocks, lists).
Label control type added. UI node meter in top bar.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- ThinkerNode: reasons about perception, decides tool use vs direct answer
- Python tool: subprocess execution with 10s timeout
- Auto-detects python code blocks in LLM output and executes them
- Tool call/result visible in trace + HUD
- Thinker meter in frontend (token budget: 4K)
- Flow: Input (perceive) -> Thinker (reason + tools) -> Output (speak)
- Tested: math (42*137=5754), SQLite (create+query), time, greetings
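The auto-detect-and-execute step can be sketched like this (names are illustrative; the real Thinker tool wiring is not shown). The regex avoids literal fence characters so the sketch stays self-contained:

```python
import re
import subprocess
import sys

# Match a fenced python block: three backticks, "python", body, three backticks
CODE_BLOCK = re.compile(r"`{3}python\n(.*?)`{3}", re.DOTALL)

def run_python_tool(llm_output, timeout=10.0):
    """Extract the first fenced python block from LLM output and run it
    in a subprocess with a hard timeout (raises TimeoutExpired on overrun)."""
    m = CODE_BLOCK.search(llm_output)
    if not m:
        return None
    proc = subprocess.run(
        [sys.executable, "-c", m.group(1)],
        capture_output=True, text=True, timeout=timeout,
    )
    return proc.stdout.strip() or proc.stderr.strip()
```

The subprocess boundary is what makes the 10s timeout enforceable: the child is killed on expiry rather than blocking the node.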
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Per-node context fill bars (input/output/memorizer/sensor)
- Color-coded: green <50%, amber 50-80%, red >80%
- Sensor meter shows tick count + latest deltas
- Token info in trace context events
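The threshold mapping is simple enough to pin down in a few lines (thresholds taken from the bullet above; the function name is illustrative):

```python
def fill_color(used, budget):
    """Map a node's context fill ratio to its bar color:
    green below 50%, amber from 50% through 80%, red above 80%."""
    ratio = used / budget
    if ratio < 0.5:
        return "green"
    if ratio <= 0.8:
        return "amber"
    return "red"
```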
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- SensorNode: 5s tick loop with delta-only emissions (clock, idle, memo changes)
- Input reframed as perceiver (describes what it heard, not commands)
- Output reframed as voice (acts on perception, never echoes it)
- Per-node token budgets: Input 2K, Output 4K, Memorizer 3K
- fit_context() trims oldest messages to stay within budget
- History sliding window: 40 messages max
- Facts capped at 20, trace file rotates at 500KB
- /api/send + /api/clear endpoints for programmatic testing
- test_cog.py test suite
- Listener context: physical/social/security awareness
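The budget-trimming behavior of `fit_context()` can be sketched as follows; the real implementation counts model tokens, whereas this sketch uses a pluggable counter (whitespace words by default) as a stand-in:

```python
def fit_context(messages, budget, count_tokens=lambda m: len(m.split())):
    """Trim the oldest messages until the total token count fits the
    node's budget (e.g. Input 2K, Output 4K, Memorizer 3K)."""
    trimmed = list(messages)
    while trimmed and sum(count_tokens(m) for m in trimmed) > budget:
        trimmed.pop(0)  # drop the oldest message first
    return trimmed
```

Dropping from the front preserves the most recent turns, which pairs naturally with the 40-message sliding history window.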
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>