Technical Documentation
From SovereignNode to LLM call — this is how AIMOS works under the hood.
Stack Diagram
The complete data flow from user message to response — all layers at a glance.
Inference
Local inference via SGLang. Sequential operation. Intelligent VRAM management.
A 27-billion-parameter model with native tool-calling. Smaller models (<20B) fail at reliable tool control, a production-critical finding from our evaluation.
A high-performance LLM runtime with an OpenAI-compatible API endpoint. RadixAttention shares the prefix cache across agents, so switching agents requires no cache rebuild.
The VRAM Guard ensures that only one agent accesses the GPU at a time. Requests are held in the database queue and processed sequentially — no OOM, no VRAM conflict.
The model stays resident in VRAM, and all agents share the same weights, so switching agents never triggers an unload. VRAM is released only after 30 minutes of inactivity.
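The guard described above can be sketched as a lock plus an idle timer. `VRAMGuard`, `infer_fn`, and the load/unload placeholders are illustrative names for this sketch, not the actual AIMOS implementation:

```python
import threading
import time

class VRAMGuard:
    """Serializes GPU access: one agent at a time, unload after an idle window."""

    def __init__(self, idle_timeout_s: float = 30 * 60):
        self._lock = threading.Lock()       # only one inference runs at a time
        self._idle_timeout_s = idle_timeout_s
        self._last_used = time.monotonic()
        self._model_loaded = False

    def run(self, infer_fn):
        # Requests queue up on the lock and are served sequentially -- no OOM,
        # no VRAM conflict between agents.
        with self._lock:
            if not self._model_loaded:
                self._model_loaded = True   # placeholder for the real load
            self._last_used = time.monotonic()
            return infer_fn()

    def maybe_unload(self):
        # Called periodically; frees VRAM only once the idle window elapses.
        with self._lock:
            idle = time.monotonic() - self._last_used
            if self._model_loaded and idle >= self._idle_timeout_s:
                self._model_loaded = False  # placeholder for the real unload
```

In production the queue would live in the database rather than an in-process lock, as the text above describes, but the serialization principle is the same.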
Context Management
A context window of 14,336 tokens. Each agent's prompt consumes 17–22% of it; the rest remains for memory, conversation history, and tool calls.
Before each LLM call, the token count is checked. If it exceeds the budget, the conversation history is automatically trimmed — oldest messages first. The agent prompt and tool definitions always remain fully preserved.
The available context budget is dynamically calculated: shorter agent prompts leave more room for conversation history and memories. Agents with extensive tool sets compensate with shorter system prompts.
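Under these rules, the budget calculation and trimming reduce to a few lines. This is a minimal sketch: the function name and the 1,024-token reply reserve are assumptions; only the 14,336-token window comes from the text above.

```python
CONTEXT_WINDOW = 14_336  # fixed context window, per the documentation above

def trim_history(system_tokens: int, tool_tokens: int,
                 history: list[tuple[str, int]],
                 reply_reserve: int = 1_024) -> list[tuple[str, int]]:
    """Drop the oldest messages until the conversation fits the budget.

    `history` is a list of (message, token_count) pairs, oldest first.
    The agent prompt and tool definitions are never trimmed.
    """
    # The budget is dynamic: a shorter agent prompt leaves more room
    # for conversation history and memories.
    budget = CONTEXT_WINDOW - system_tokens - tool_tokens - reply_reserve
    trimmed = list(history)
    while trimmed and sum(t for _, t in trimmed) > budget:
        trimmed.pop(0)  # oldest message first
    return trimmed
```

An agent with a 3,300-token prompt and tool set would keep roughly 10,000 tokens for history here, matching the 17–22% prompt share cited above.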
Instead of overloading one agent with a massive prompt, AIMOS distributes work across specialists with short, focused prompts. Each agent masters its domain — less prompt, more room for context.
Infrastructure
A single server. Local GPU. No cloud dependency. The SovereignNode is the heart of every AIMOS installation — a physical or virtual server that hosts all components.
Everything runs on-premise: LLM inference, the databases, the agent processes, and the communication channels. Not a byte leaves your network unless you explicitly configure it to (e.g., outgoing Telegram messages).
| Component | Minimum | Recommended |
|---|---|---|
| GPU | NVIDIA RTX 3090 (24 GB VRAM) | NVIDIA RTX 5090 (32 GB VRAM) |
| RAM | 32 GB DDR4 | 64 GB DDR5 |
| Storage | 256 GB SSD | 1 TB NVMe |
| CPU | 8 Cores | 16+ Cores |
| OS | Ubuntu 24.04 LTS | Ubuntu 26.04 LTS |
Dual-DB
AIMOS uses two database systems with clearly separated responsibilities:
The central message relay between the Shared Listener, the Orchestrator, and the agents. It stores incoming messages, audit logs, PII-vault mappings, and session data, and serves multiple processes through connection pooling.
Each agent has its own SQLite database with semantic, episodic, and procedural memory. Hybrid search via FTS5 + vector embeddings. Portable by simply copying the file.
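A per-agent memory file of this shape can be sketched with Python's bundled SQLite (assuming it was built with FTS5, as modern builds are). Table and function names are illustrative, and the vector-embedding half of the hybrid search is omitted:

```python
import sqlite3

def open_memory(path: str = ":memory:") -> sqlite3.Connection:
    """One SQLite file per agent -- portable by simply copying the file."""
    db = sqlite3.connect(path)
    db.execute("""
        CREATE VIRTUAL TABLE IF NOT EXISTS memory
        USING fts5(kind, content)  -- kind: semantic / episodic / procedural
    """)
    return db

def remember(db: sqlite3.Connection, kind: str, content: str) -> None:
    db.execute("INSERT INTO memory (kind, content) VALUES (?, ?)",
               (kind, content))

def recall(db: sqlite3.Connection, query: str, limit: int = 5) -> list[str]:
    # FTS5 keyword match, best-ranked first; the real system would merge
    # these hits with vector-embedding results (hybrid search).
    rows = db.execute(
        "SELECT content FROM memory WHERE memory MATCH ? ORDER BY rank LIMIT ?",
        (query, limit))
    return [r[0] for r in rows]
```
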
Interoperability
AIMOS agents are portable, compatible, and interoperable through open standards.
The Open Agent Package format enables the complete export of an agent including memory, skills, and configuration as a portable archive.
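Assuming an agent lives in a single directory, an export/import round trip might look like the sketch below; the file names and the `.tar.gz` container are illustrative, not the normative OAP layout:

```python
import tarfile
from pathlib import Path

def export_agent(agent_dir: Path, out_path: Path) -> Path:
    """Pack an agent's directory (memory DB, skills, config) into one
    portable archive. The directory layout is an illustrative assumption."""
    with tarfile.open(out_path, "w:gz") as tar:
        tar.add(agent_dir, arcname=agent_dir.name)
    return out_path

def import_agent(archive: Path, dest: Path) -> None:
    """Unpack an exported agent on another AIMOS node."""
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(dest)  # trusted archives only; validate in production
```

Because each agent's memory is a single SQLite file, the archive carries the agent's full state with no database dump step.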
The Model Context Protocol lets external LLMs (Claude, GPT, etc.) access AIMOS skills; 39 tools are exposed via an MCP server.
Each agent publishes an Agent Card (JSON-LD) per Google A2A specification. External systems can query capabilities, input formats, and trust level.
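A card of this kind could be assembled as below; the field names are illustrative placeholders for this sketch, not the exact A2A schema:

```python
import json

def agent_card(name: str, skills: list[str], trust: str) -> str:
    """Build a minimal JSON-LD Agent Card that external systems can query.
    Field names are illustrative, not the normative A2A vocabulary."""
    card = {
        "@context": "https://schema.org",  # JSON-LD context (assumption)
        "name": name,
        "capabilities": skills,            # what the agent can do
        "inputFormats": ["text/plain"],    # accepted input media types
        "trustLevel": trust,               # advertised trust level
    }
    return json.dumps(card, indent=2)
```
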
Technical Highlights
No text hacks or regex parsing — AIMOS uses the native tool-calling API of the LLM. The agent controls systems directly, instead of merely describing actions.
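Native tool-calling means the model returns a structured call that is executed directly, rather than free text to be parsed. A minimal sketch, with a hypothetical `get_weather` tool declared in the OpenAI-compatible schema:

```python
import json

# Tool schema in the OpenAI-compatible format that the runtime's endpoint
# accepts; `get_weather` is a made-up example tool.
WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def dispatch(tool_call: dict, registry: dict) -> str:
    """Execute a structured tool call returned by the model -- no regex,
    no text hacks, just the name and JSON arguments the API provides."""
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    return registry[name](**args)
```
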
Speech recognition (Whisper STT) and speech synthesis (Piper TTS) in all languages — agents understand voice messages and respond in the user's native language.
Every LLM call is captured: input/output tokens, latency, context utilization. Full cost transparency per agent, per conversation, per month.
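Capturing these metrics per call can be sketched as a small tracker; all names here are illustrative, and the real system additionally records context utilization and aggregates per conversation and per month:

```python
import time
from dataclasses import dataclass, field

@dataclass
class CallRecord:
    agent: str
    input_tokens: int
    output_tokens: int
    latency_s: float

@dataclass
class UsageTracker:
    """Records every LLM call for per-agent cost transparency (sketch)."""
    records: list = field(default_factory=list)

    def track(self, agent: str, infer_fn):
        # infer_fn returns (text, input_tokens, output_tokens).
        start = time.monotonic()
        text, in_tok, out_tok = infer_fn()
        self.records.append(
            CallRecord(agent, in_tok, out_tok, time.monotonic() - start))
        return text

    def tokens_for(self, agent: str) -> int:
        return sum(r.input_tokens + r.output_tokens
                   for r in self.records if r.agent == agent)
```
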
Every agent knows who it is talking to on which channel. Telegram, email, and internal messages are cleanly separated — no confusion between conversation partners.