
Overview

When a message arrives on a connector, MeepaGateway routes it to the owning agent and runs the agent loop — a cycle of LLM calls and tool executions that continues until the model produces a final text response or hits the iteration limit. Each agent is an independent AgentInstance holding its own LLM provider, tool registry, skill registry, session manager, and memory store.

Event Flow


Step-by-Step

1. Event Routing

All connector streams are merged into a single event stream, and each event is tagged with its agent_id and connector_name. On MessageReceived:
  • The source tag maps to the owning agent via the connector-to-agent routing table
  • If isolation.enabled = true, the loop runs inside a fresh Docker container (meepagateway agent-run)
  • Otherwise it runs in the host process
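The routing step above can be sketched as a simple table lookup. This is an illustrative model, not MeepaGateway's actual API — the `ROUTING_TABLE` name and `MessageReceived` fields are assumptions:

```python
# Hypothetical sketch of the connector-to-agent routing step.
from dataclasses import dataclass

@dataclass
class MessageReceived:
    connector_name: str
    channel_id: str
    text: str

# connector -> owning agent_id (the routing table the docs describe)
ROUTING_TABLE = {
    "slack-main": "meepa",
    "discord-dev": "meepa",
}

def route(event: MessageReceived) -> str:
    """Map the event's source connector to its owning agent."""
    try:
        return ROUTING_TABLE[event.connector_name]
    except KeyError:
        raise LookupError(f"no agent owns connector {event.connector_name!r}")
```

Once the owning agent is known, the loop runs either in-process or inside the isolation container, depending on the agent's config.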

2. Session Load

Each channel gets a persistent session keyed by channel_id. The session manager loads the existing conversation history or creates a new empty session.
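The load-or-create behavior is essentially this (a minimal sketch; the real `SessionManager` persists to disk rather than a dict):

```python
# Illustrative load-or-create session keyed by channel_id.
class SessionManager:
    def __init__(self):
        self._sessions: dict[str, list[dict]] = {}

    def load(self, channel_id: str) -> list[dict]:
        """Return the existing history, or create a new empty session."""
        return self._sessions.setdefault(channel_id, [])

mgr = SessionManager()
session = mgr.load("channel-42")            # first message: new empty session
session.append({"role": "user", "content": "hi"})
assert mgr.load("channel-42") is session    # later turns: same session
```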

3. Skill Matching

The message is checked against the agent’s skill registry:
  • Keyword matching — triggers from skill frontmatter checked against message text
  • Semantic matching — cosine similarity against skill embeddings (when available)
A matched skill’s markdown content is injected into the system prompt.
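The two matching modes can be sketched like this — keyword triggers first, then cosine similarity when an embedding is available. The threshold value and skill dict shape are assumptions for illustration:

```python
# Hedged sketch of the two skill-matching modes.
import math

def keyword_match(triggers: list[str], text: str) -> bool:
    """Case-insensitive substring check of frontmatter triggers."""
    low = text.lower()
    return any(t.lower() in low for t in triggers)

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def match_skill(skill: dict, text: str, text_emb=None, threshold=0.8) -> bool:
    if keyword_match(skill["triggers"], text):
        return True
    if text_emb is not None and "embedding" in skill:
        return cosine(skill["embedding"], text_emb) >= threshold
    return False
```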

4. System Prompt Construction

The system prompt is assembled from:
  1. SOUL.md — agent persona (from the agent workspace)
  2. Matched skill content (appended after the soul)
  3. Auto-generated AGENTS.md — describes available tools, memory layers, and skills
  4. Relevant facts auto-injected from the SQLite memory store
AGENTS.md instructs the agent to read SOUL.md, USER.md, and MEMORY.md at the start of every session before doing anything else.
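The assembly order can be expressed compactly. The file names are the real ones from the list above; the helper itself is an illustrative sketch:

```python
# Sketch of system-prompt assembly in the order described above.
def build_system_prompt(soul_md: str, matched_skills: list[str],
                        agents_md: str, facts: list[str]) -> str:
    parts = [soul_md]          # 1. SOUL.md persona
    parts += matched_skills    # 2. matched skill content, after the soul
    parts.append(agents_md)    # 3. auto-generated AGENTS.md
    if facts:                  # 4. relevant facts from the memory store
        parts.append("Relevant facts:\n" + "\n".join(f"- {f}" for f in facts))
    return "\n\n".join(parts)
```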

5. LLM Call

Messages are sent to the configured provider (Anthropic or OpenAI-compatible). The provider registry handles transient failures — rate limits, 5xx, network errors — with automatic retry and back-off. ContextOverflow is returned immediately without retry; the caller compacts the session and retries.
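The retry policy amounts to: back off exponentially on transient errors, but let a context overflow propagate at once so the caller can compact the session. A minimal sketch (exception names and attempt counts are illustrative):

```python
# Illustrative retry with exponential back-off; ContextOverflow is
# never retried because the fix is compaction, not waiting.
import time

class ContextOverflow(Exception): pass
class TransientError(Exception): pass   # rate limit, 5xx, network error

def call_with_retry(send, max_attempts: int = 4, base_delay: float = 0.5):
    for attempt in range(max_attempts):
        try:
            return send()
        except ContextOverflow:
            raise                                   # caller compacts and retries
        except TransientError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)   # 0.5s, 1s, 2s, ...
```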

6. Tool Execution

When finish_reason = tool_calls, all tools in the response are executed concurrently. Results are appended to the message history and the LLM is called again.
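Concurrent execution of a batch of tool calls can be sketched with `asyncio.gather`; the message shapes here are illustrative, not the gateway's exact wire format:

```python
# Sketch: run every tool call from one response concurrently,
# then append the results before the next LLM call.
import asyncio

async def run_tool(call: dict) -> dict:
    # stand-in for real tool dispatch
    return {"role": "tool", "tool_call_id": call["id"],
            "content": f"ran {call['name']}"}

async def execute_tool_calls(calls: list[dict], history: list[dict]) -> list[dict]:
    results = await asyncio.gather(*(run_tool(c) for c in calls))
    history.extend(results)   # results keep the order of the calls
    return history

history = asyncio.run(execute_tool_calls(
    [{"id": "1", "name": "search"}, {"id": "2", "name": "read_file"}], []))
```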

7. Iteration Limit

If max_iterations (default 10) is reached without a stop response, the loop calls the LLM one final time with a nudge to produce a text answer. Separately, max_tool_failures caps how many consecutive failures of the same tool are tolerated before a corrective nudge is injected.
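Putting steps 5–7 together, the loop skeleton looks roughly like this; `llm` and `execute_tools` are placeholders for the real provider call and tool runner:

```python
# Minimal agent-loop skeleton: LLM call -> tool execution, up to
# max_iterations, then a final nudge to force a text answer.
def agent_loop(llm, execute_tools, messages: list[dict],
               max_iterations: int = 10) -> str:
    for _ in range(max_iterations):
        reply = llm(messages)
        if reply["finish_reason"] != "tool_calls":
            return reply["content"]                # final text response
        messages += execute_tools(reply["tool_calls"])
    # limit hit: one last call with a nudge to answer in text
    messages.append({"role": "user",
                     "content": "Please give your final answer in text."})
    return llm(messages)["content"]
```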

8. Response Delivery

The final text is sent back through the same connector and channel the message arrived on.

9. Session Compaction

After delivery, the session may be compacted if it exceeds the size threshold — older messages are summarized to keep context windows manageable for future turns.
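A compaction pass might look like the following sketch — the thresholds and the summarizer are assumptions; in the gateway the summary would come from an LLM call:

```python
# Hedged sketch: summarize older messages into one entry and keep
# only the recent tail, so future turns fit the context window.
def compact(session: list[dict], summarize,
            threshold: int = 50, keep_recent: int = 10) -> list[dict]:
    if len(session) <= threshold:
        return session                      # under the size threshold: no-op
    old, recent = session[:-keep_recent], session[-keep_recent:]
    summary = {"role": "system",
               "content": "Summary of earlier conversation: " + summarize(old)}
    return [summary] + recent
```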

Configuration

agent_defaults:
  provider: anthropic
  max_iterations: 10
  max_tool_failures: 3

agents:
  - id: meepa
    name: Meepa
    default: true
    model: claude-opus-4-6
    # max_iterations: 15  # per-agent override
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `max_iterations` | integer | `10` | Maximum LLM call → tool execution cycles per message before forcing a final response. |
| `max_tool_failures` | integer | `3` | Maximum consecutive failures of the same tool before injecting a corrective nudge. |
| `provider` | string | `anthropic` | LLM provider key. Must match an entry in `[providers.providers]`. |
| `model` | string | (provider default) | Model override. Falls back to the provider’s `default_model` when absent. |

LLM Providers

Anthropic

providers:
  providers:
    anthropic:
      api_key_env: ANTHROPIC_API_KEY
      model: claude-opus-4-6
      # base_url: https://api.anthropic.com
      # max_tokens: 8192

OpenAI / compatible

providers:
  providers:
    openai:
      api_key_env: OPENAI_API_KEY
      model: gpt-4o
      base_url: https://api.openai.com/v1
      # max_tokens: 8192
Any OpenAI-compatible endpoint works — Ollama, vLLM, etc. — by setting base_url.
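For instance, a local Ollama instance could be configured like this, assuming the same schema as the OpenAI example above (the model name is illustrative; 11434 is Ollama's default port):

```yaml
providers:
  providers:
    ollama:
      api_key_env: OLLAMA_API_KEY   # local servers typically ignore the key
      model: llama3.1
      base_url: http://localhost:11434/v1
```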

Failover

providers:
  primary: anthropic
  fallback:
    - openai
  health_check_interval: 30s
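The selection logic this config implies can be sketched as: try the primary, then each fallback in order, skipping providers the periodic health check has marked unhealthy. The function and its arguments are illustrative:

```python
# Illustrative failover order over the primary/fallback config above.
def pick_provider(primary: str, fallbacks: list[str],
                  healthy: dict[str, bool]) -> str:
    for name in [primary, *fallbacks]:
        if healthy.get(name, False):
            return name
    raise RuntimeError("no healthy provider available")
```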

Agent Components

| Component | Description |
| --- | --- |
| Persona | Agent personality loaded from SOUL.md in the agent workspace |
| Skills | Prompt fragments matched and injected per message |
| Tools | Available tools filtered by allow/deny config |
| Session | Per-channel conversation history |
| Memory | SQLite fact store for long-term memory |
| Connectors | Connected platform connectors |
| Isolation | Docker isolation settings |
Agent workspace lives at ~/.meepagateway/agents/{agent_id}/. See Introduction for the full layout.