Nico b6ca02f864 v0.9.2: dedicated UI node, strict node roles, markdown rendering
6-node pipeline: Input -> Thinker -> Output (voice) + UI (screen) in parallel

- Output: text only (markdown, emoji). Never emits HTML or controls.
- UI: dedicated node for labels, buttons, tables. Tracks workspace state.
  Replaces entire workspace on each update. Runs parallel with Output.
- Input: strict one-sentence perception. No more hallucinating responses.
- Thinker: controls removed from prompt, focuses on reasoning + tools.
- Frontend: markdown rendered in chat (bold, italic, code blocks, lists).
  Label control type added. UI node meter in top bar.
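The parallel Output (voice) + UI (screen) fan-out described above can be sketched with `asyncio.gather`; note that `output_node`, `ui_node`, and `dispatch` are hypothetical illustration names, not the runtime's actual API.

```python
import asyncio

async def output_node(instruction: str) -> str:
    # Voice channel: text only (markdown, emoji) — never HTML or controls.
    return f"Spoken reply to: {instruction}"

async def ui_node(instruction: str) -> dict:
    # Screen channel: replaces the entire workspace on each update.
    return {"workspace": [{"type": "label", "text": instruction}]}

async def dispatch(instruction: str):
    # After the Thinker, Output and UI run concurrently on the same instruction.
    voice, workspace = await asyncio.gather(
        output_node(instruction),
        ui_node(instruction),
    )
    return voice, workspace
```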

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 14:12:15 +01:00

54 lines
2.0 KiB
Python

"""Input Node: perceives what the user said."""
import logging
from .base import Node
from ..llm import llm_call
from ..types import Envelope, Command
log = logging.getLogger("runtime")


class InputNode(Node):
    name = "input"
    model = "google/gemini-2.0-flash-001"
    max_context_tokens = 2000

    SYSTEM = """You are the Input node — the ear of this cognitive runtime.
Listener: {identity} on {channel}
YOUR ONLY JOB: Describe what you heard in ONE short sentence.
- Who spoke, what they want, what tone.
- Example: "Nico asks what time it is, casual tone."
- Example: "Nico wants to create a database with customer data, direct request."
- Example: "Nico reports a UI bug — he can't see a value updating, frustrated tone."
STRICT RULES:
- ONLY output a single perception sentence. Nothing else.
- NEVER generate a response, code, HTML, or suggestions.
- NEVER answer the user's question — that's not your job.
- NEVER write more than one sentence.
{memory_context}"""

    async def process(self, envelope: Envelope, history: list[dict], memory_context: str = "",
                      identity: str = "unknown", channel: str = "unknown") -> Command:
        await self.hud("thinking", detail="deciding how to respond")
        log.info(f"[input] user said: {envelope.text}")
        # Formatted system prompt first, then a sliding window of recent turns.
        messages = [
            {"role": "system", "content": self.SYSTEM.format(
                memory_context=memory_context, identity=identity, channel=channel)},
        ]
        messages.extend(history[-8:])
        messages = self.trim_context(messages)
        await self.hud("context", messages=messages, tokens=self.last_context_tokens,
                       max_tokens=self.max_context_tokens, fill_pct=self.context_fill_pct)
        instruction = await llm_call(self.model, messages)
        log.info(f"[input] -> command: {instruction}")
        await self.hud("perceived", instruction=instruction)
        return Command(instruction=instruction, source_text=envelope.text)
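How `process` assembles its context window can be sketched as a standalone helper (hypothetical, for illustration only): the formatted system prompt comes first, followed by only the last 8 history turns.

```python
def build_messages(system_template: str, history: list[dict],
                   identity: str, channel: str,
                   memory_context: str = "") -> list[dict]:
    # Mirrors InputNode.process: one formatted system message,
    # then a sliding window of the last 8 conversation turns.
    messages = [{"role": "system", "content": system_template.format(
        identity=identity, channel=channel, memory_context=memory_context)}]
    messages.extend(history[-8:])
    return messages

history = [{"role": "user", "content": f"turn {i}"} for i in range(12)]
msgs = build_messages("Listener: {identity} on {channel}\n{memory_context}",
                      history, identity="Nico", channel="voice")
```

With 12 turns of history, only turns 4 through 11 survive the window, so the model's context stays bounded regardless of conversation length.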