Card generation moved to response step:
- Response LLM outputs JSON with "text" + optional "card"
- Cards use actual query data, not placeholder templates
- Plan step no longer includes emit_card (avoids {{template}} syntax)
- Fallback: raw text response if JSON parse fails
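The parse-with-fallback behavior above can be sketched as follows. This is a minimal illustration, not the runtime's actual code; the function name and return shape are assumptions:

```python
import json

def parse_response(raw: str) -> dict:
    # Hypothetical helper: the response LLM is asked to emit JSON like
    #   {"text": "...", "card": {...}}   ("card" is optional).
    try:
        data = json.loads(raw)
        if isinstance(data, dict) and "text" in data:
            return {"text": data["text"], "card": data.get("card")}
    except json.JSONDecodeError:
        pass
    # Fallback: if the model didn't produce valid JSON, treat the whole
    # output as plain text and attach no card.
    return {"text": raw, "card": None}
```

Because the card is built here, from the same model call that saw the query results, it carries actual data rather than `{{template}}` placeholders filled in later.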
History restore on reconnect:
- Frontend fetches /api/history on WS connect
- Renders last 20 messages in chat panel
- Only restores if chat is empty (fresh load)
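The restore logic above reduces to two rules: the server returns at most the last 20 messages, and the client only applies them to an empty panel. A minimal Python sketch of that logic (helper names and message shape are assumptions, and the real client-side check runs in JavaScript):

```python
def history_payload(messages: list[dict], limit: int = 20) -> list[dict]:
    # Hypothetical /api/history behavior: return at most the last
    # `limit` messages, oldest first, so the client can render in order.
    return messages[-limit:]

def should_restore(current_chat: list) -> bool:
    # Restore only into an empty chat panel (fresh page load), so a
    # live conversation is never overwritten on a WS reconnect.
    return len(current_chat) == 0
```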
Graph animation:
- Dynamic node name → graph ID mapping from graph definition
- All nodes (including eras_expert) pulse correctly
- 200ms animation queue prevents bulk event overlap
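The two mechanisms above can be sketched like this. The graph-definition shape, class, and method names are assumptions for illustration; the real pulse loop runs in the frontend JavaScript, with a timer draining the queue every 200ms:

```python
from collections import deque

def build_node_map(graph_def: dict) -> dict:
    # Assumed shape: graph_def["nodes"] is a list of {"id", "name"} dicts.
    # Deriving name -> graph ID from the definition itself means newly
    # added nodes (e.g. eras_expert) animate without a hand-kept table.
    return {node["name"]: node["id"] for node in graph_def["nodes"]}

class AnimationQueue:
    """Spaces out pulse events so a bulk burst doesn't overlap visually."""

    def __init__(self, interval_ms: int = 200):
        self.interval_ms = interval_ms  # drain cadence, one pulse per tick
        self.pending = deque()

    def push(self, node_id: str) -> None:
        self.pending.append(node_id)

    def pop(self):
        # Called once per interval by a timer: next node to pulse,
        # or None once the queue has drained.
        return self.pending.popleft() if self.pending else None
```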
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Description: cog, a cognitive agent runtime