Overview
When a message arrives on a connector, MeepaGateway routes it to the owning agent and runs the agent loop — a cycle of LLM calls and tool executions that continues until the model produces a final text response or hits the iteration limit. Each agent is an independent AgentInstance holding its own LLM provider, tool registry, skill registry, session manager, and memory store.
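The per-agent state described above can be sketched as a simple container type. This is an illustrative sketch, not MeepaGateway's actual API; all field names are assumptions.

```python
from dataclasses import dataclass, field

# Illustrative sketch of an independent AgentInstance; field names
# are assumptions, not MeepaGateway's real types.
@dataclass
class AgentInstance:
    agent_id: str
    llm_provider: str                               # key into the provider registry
    tools: dict = field(default_factory=dict)       # tool registry
    skills: list = field(default_factory=list)      # skill registry
    sessions: dict = field(default_factory=dict)    # channel_id -> history
    memory: list = field(default_factory=list)      # fact store (SQLite in practice)

agent = AgentInstance(agent_id="support-bot", llm_provider="anthropic")
```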
Event Flow
Step-by-Step
1. Event Routing
All connector streams are merged, and each event is tagged with agent_id/connector_name. On MessageReceived:
- The source tag maps to the owning agent via the connector-to-agent routing table
- If isolation.enabled = true, the loop runs inside a fresh Docker container (meepagateway agent-run)
- Otherwise it runs in the host process
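The routing step above can be sketched as a table lookup plus a run-mode decision. The table contents and function names here are hypothetical:

```python
# Hypothetical connector-to-agent routing table; entries are made up.
ROUTING_TABLE = {
    "telegram-main": "support-bot",
    "slack-dev": "dev-bot",
}

def route_event(connector_name: str, isolation_enabled: bool) -> tuple[str, str]:
    """Map a MessageReceived event's source tag to an agent and a run mode."""
    agent_id = ROUTING_TABLE[connector_name]
    # isolation.enabled = true -> fresh Docker container, else host process
    mode = "docker" if isolation_enabled else "host"
    return agent_id, mode

assert route_event("telegram-main", True) == ("support-bot", "docker")
```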
2. Session Load
Each channel gets a persistent session keyed by channel_id. The session manager loads the existing conversation history or creates a new empty session.
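The load-or-create behavior is a simple keyed lookup. A minimal sketch (in-memory rather than the real session store):

```python
# In-memory stand-in for the session store; keyed by channel_id.
sessions: dict[str, list] = {}

def load_session(channel_id: str) -> list:
    """Return the existing conversation history or create a new empty session."""
    return sessions.setdefault(channel_id, [])

history = load_session("channel-42")
history.append({"role": "user", "content": "hello"})
# The same channel_id yields the same session on the next turn.
assert load_session("channel-42") == [{"role": "user", "content": "hello"}]
```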
3. Skill Matching
The message is checked against the agent's skill registry:
- Keyword matching — triggers from skill frontmatter checked against message text
- Semantic matching — cosine similarity against skill embeddings (when available)
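Both matching strategies can be sketched in a few lines. The function names are illustrative; only the techniques (trigger substring checks and cosine similarity) come from the text above:

```python
import math

def keyword_match(triggers: list[str], text: str) -> bool:
    """Keyword matching: any frontmatter trigger appears in the message text."""
    lowered = text.lower()
    return any(t.lower() in lowered for t in triggers)

def cosine(a: list[float], b: list[float]) -> float:
    """Semantic matching: cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

assert keyword_match(["deploy", "release"], "Please deploy the new build")
assert abs(cosine([1.0, 0.0], [1.0, 0.0]) - 1.0) < 1e-9
```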
4. System Prompt Construction
The system prompt is assembled from:
- SOUL.md — agent persona (from the agent workspace)
- Matched skill content (appended after the soul)
- Auto-generated AGENTS.md — describes available tools, memory layers, and skills
- Relevant facts auto-injected from the SQLite memory store
AGENTS.md instructs the agent to read SOUL.md, USER.md, and MEMORY.md at the start of every session before doing anything else.
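The assembly order above can be sketched as straightforward concatenation. The function and its parameters are assumptions for illustration:

```python
def build_system_prompt(soul: str, matched_skills: list[str],
                        agents_md: str, facts: list[str]) -> str:
    """Assemble the prompt in the documented order:
    SOUL.md, matched skill content, auto-generated AGENTS.md, injected facts."""
    parts = [soul, *matched_skills, agents_md]
    if facts:
        parts.append("Relevant facts:\n" + "\n".join(f"- {f}" for f in facts))
    return "\n\n".join(parts)

prompt = build_system_prompt("You are Meepa.", ["skill: deploys"],
                             "Tools: shell, web", ["user prefers dark mode"])
assert prompt.startswith("You are Meepa.")
```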
5. LLM Call
Messages are sent to the configured provider (Anthropic or OpenAI-compatible). The provider registry handles transient failures — rate limits, 5xx, network errors — with automatic retry and back-off. ContextOverflow is returned immediately without retry; the caller compacts the session and retries.
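The retry policy splits errors into two classes: transient failures are retried with back-off, while ContextOverflow propagates immediately. A minimal sketch, with made-up exception and function names:

```python
import time

class ContextOverflow(Exception): ...
class TransientError(Exception): ...    # stands in for rate limits, 5xx, network errors

def call_with_retry(call, max_retries=3, base_delay=0.01):
    """Retry transient failures with exponential back-off;
    ContextOverflow is re-raised at once so the caller can compact and retry."""
    for attempt in range(max_retries):
        try:
            return call()
        except ContextOverflow:
            raise                        # no retry: caller compacts the session
        except TransientError:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TransientError()
    return "ok"

assert call_with_retry(flaky) == "ok"
```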
6. Tool Execution
When finish_reason = tool_calls, all tools in the response are executed concurrently. Results are appended to the message history and the LLM is called again.
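Concurrent execution of all tool calls in one response can be sketched with asyncio.gather. The tool stub and message shapes are assumptions:

```python
import asyncio

async def run_tool(name: str, args: dict) -> dict:
    """Stand-in for a real tool; real tools would do I/O here."""
    await asyncio.sleep(0)               # placeholder for actual work
    return {"role": "tool", "name": name, "content": f"ran {name}"}

async def execute_tool_calls(calls: list[tuple[str, dict]], history: list) -> None:
    # All tool calls from one LLM response run concurrently.
    results = await asyncio.gather(*(run_tool(n, a) for n, a in calls))
    history.extend(results)              # appended, then the LLM is called again

history: list = []
asyncio.run(execute_tool_calls([("read_file", {}), ("web_search", {})], history))
assert [m["name"] for m in history] == ["read_file", "web_search"]
```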
7. Iteration Limit
If max_iterations is reached without a stop response, the loop calls the LLM one final time with a nudge to produce a text answer. The default is 10.
max_tool_failures caps how many consecutive failures of the same tool are tolerated before a corrective nudge is injected.
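The iteration limit and final nudge can be sketched as a bounded loop. The llm callable and its nudge parameter are hypothetical:

```python
def agent_loop(llm, max_iterations: int = 10) -> str:
    """Sketch of the iteration limit: after max_iterations tool cycles,
    nudge the model one final time for a plain-text answer."""
    for _ in range(max_iterations):
        reply = llm()
        if reply["finish_reason"] != "tool_calls":
            return reply["text"]         # stop response: done
        # ...execute tool calls, append results to history, loop again...
    final = llm(nudge="Please answer in plain text now.")
    return final["text"]

def always_tools(nudge=None):
    """Fake model that keeps requesting tools until nudged."""
    if nudge:
        return {"finish_reason": "stop", "text": "done"}
    return {"finish_reason": "tool_calls", "text": ""}

assert agent_loop(always_tools, max_iterations=2) == "done"
```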
8. Response Delivery
The final text is sent back through the same connector and channel the message arrived on.
9. Session Compaction
After delivery, the session may be compacted if it exceeds the size threshold — older messages are summarized to keep context windows manageable for future turns.
Configuration
| Setting | Description |
|---|---|
| max_iterations | Maximum LLM call → tool execution cycles per message before forcing a final response. |
| max_tool_failures | Maximum consecutive failures of the same tool before injecting a corrective nudge. |
| provider | LLM provider key. Must match an entry in [providers.providers]. |
| model | Model override. Falls back to the provider's default_model when absent. |

LLM Providers
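A provider registry entry might look like the following TOML. Only [providers.providers] and default_model are named in this document; the table layout, provider keys, and model values are assumptions for illustration:

```toml
# Hypothetical layout; values are examples, not defaults.
[providers.providers.anthropic]
default_model = "claude-sonnet-4"

[providers.providers.local]
default_model = "llama-3"
base_url = "http://localhost:8000/v1"   # OpenAI-compatible endpoint
```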
Anthropic
OpenAI / compatible
Any OpenAI-compatible endpoint can be targeted via base_url.
Failover
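One plausible failover strategy, sketched below as an assumption rather than MeepaGateway's documented behavior, is to try each configured provider in order and fall through on failure:

```python
class ProviderError(Exception): ...

def call_with_failover(providers: list, prompt: str) -> str:
    """Try each provider in order; raise the last error if all fail.
    (A sketch of one failover strategy, not the actual implementation.)"""
    last = None
    for provider in providers:
        try:
            return provider(prompt)
        except ProviderError as exc:
            last = exc                   # remember the failure, try the next one
    raise last

def broken(prompt):
    raise ProviderError("provider down")

def healthy(prompt):
    return f"echo: {prompt}"

assert call_with_failover([broken, healthy], "hi") == "echo: hi"
```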
Agent Components
| Component | Description |
|---|---|
| Persona | Agent personality loaded from SOUL.md in the agent workspace |
| Skills | Prompt fragments matched and injected per message |
| Tools | Available tools filtered by allow/deny config |
| Session | Per-channel conversation history |
| Memory | SQLite fact store for long-term memory |
| Connectors | Connected platform connectors |
| Isolation | Docker isolation settings |
Each agent's workspace lives under ~/.meepagateway/agents/{agent_id}/. See Introduction for the full layout.