# Mnexium API Documentation

> **For LLMs:** You are helping build apps that call these Mnexium APIs. When you generate code, always follow the endpoint, header, and mnx parameter rules below. Do not invent parameters or options not listed here.
>
> Companion guide for building production apps with Mnexium: https://www.mnexium.com/skills/mnexium-app-builder/SKILL.md

Mnexium is the memory and state layer for AI applications. Add persistent memory, conversation history, user profiles, and agent state to your LLM apps with a single API call.

## Overview

Mnexium provides:

- **Chat History**: Automatic conversation persistence across sessions
- **Memory**: Semantic long-term memory extraction and recall
- **Profiles**: Structured user data (preferences, facts, relationships)
- **Agent State**: Key-value storage for workflow continuity
- **Summarization**: Rolling conversation compression to reduce token costs by up to 95%

Works with **OpenAI (GPT-4o, GPT-4o-mini, o1)**, **Anthropic (Claude Sonnet 4, Claude 3.5 Sonnet)**, and **Google Gemini (2.0 Flash, 2.5 Flash, 1.5 Pro)**. Bring your own API key.

## Authentication

All requests require a Mnexium API key. You can pass it via `x-mnexium-key` (recommended) or the `Authorization` header:

- `x-mnexium-key`: `mnx_live_...` — Your Mnexium API key (recommended for SDK users)
- `Authorization`: `Bearer mnx_live_...` — Alternative: Mnexium key via Authorization header
- `x-openai-key`: `sk-...` — Your OpenAI API key (required for OpenAI models)
- `x-anthropic-key`: `sk-ant-...` — Anthropic key for `/api/v1/chat/completions` with Claude models
- `x-api-key`: `sk-ant-...` — Anthropic key for `/api/v1/messages` (Anthropic-native route)
- `x-google-key`: `...` — Your Google API key (for Gemini models)
- `x-mnx-memory-policy`: Optional memory policy override for header-based MNX config (`false` disables the policy)

**SDK users:** Use `x-mnexium-key` so the SDK's `apiKey` can be used for your provider key (OpenAI, Anthropic, etc.).
If you override `Authorization` with your Mnexium key, you must explicitly pass the provider key via `x-openai-key` or `x-anthropic-key` (or `x-api-key` on `/api/v1/messages`).

## Native SDKs

Mnexium works with the native SDKs from OpenAI, Anthropic, and Google. Simply point your SDK's base URL at Mnexium.

| Provider | SDK | Base URL |
|----------|-----|----------|
| OpenAI | `openai` | `https://mnexium.com/api/v1` |
| Anthropic | `@anthropic-ai/sdk` | `https://mnexium.com/api` |
| Google | `@google/genai` | `https://mnexium.com` |

### OpenAI SDK

```javascript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_KEY,
  baseURL: "https://mnexium.com/api/v1",
  defaultHeaders: {
    "x-mnexium-key": process.env.MNX_KEY,
  },
});

const response = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
  mnx: {
    subject_id: "user_123",
    learn: true,
    recall: true,
  },
});
```

### Anthropic SDK

```javascript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic({
  apiKey: process.env.CLAUDE_API_KEY,
  baseURL: "https://mnexium.com/api",
  defaultHeaders: {
    "x-mnexium-key": process.env.MNX_KEY,
  },
});

const response = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  messages: [{ role: "user", content: "Hello!" }],
  max_tokens: 1024,
});
```

### Google Gemini SDK

```javascript
import { GoogleGenAI } from "@google/genai";

const client = new GoogleGenAI({
  apiKey: process.env.GEMINI_KEY,
  httpOptions: {
    baseUrl: "https://mnexium.com",
    headers: {
      "x-mnexium-key": process.env.MNX_KEY,
    },
  },
});

const response = await client.models.generateContent({
  model: "gemini-2.0-flash-lite",
  contents: "Hello!",
});
```

## API Endpoints

### Chat Completions

- [POST /api/v1/chat/completions](https://mnexium.com/docs#chat): **Drop-in replacement for OpenAI's chat/completions.** Use the same request format as OpenAI, just change the URL to Mnexium and add the `mnx` object.
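No SDK is required for the drop-in endpoint. As a minimal sketch, the request body is exactly what OpenAI's chat/completions expects plus the `mnx` object, sent with plain `fetch`. The helper name `buildChatRequest` is illustrative, and `MNX_KEY`/`OPENAI_KEY` are placeholder environment variables; headers and `mnx` fields follow the tables in this document.

```javascript
// Sketch: an OpenAI-style chat/completions body, plus the Mnexium
// `mnx` object. "user_123" is a placeholder subject identifier.
function buildChatRequest(userText, subjectId) {
  return {
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: userText }],
    mnx: { subject_id: subjectId, learn: true, recall: true },
  };
}

// Send it with plain fetch; x-mnexium-key carries the Mnexium key
// and x-openai-key carries the provider key.
async function chat(userText, subjectId) {
  const res = await fetch("https://mnexium.com/api/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-mnexium-key": process.env.MNX_KEY,
      "x-openai-key": process.env.OPENAI_KEY,
    },
    body: JSON.stringify(buildChatRequest(userText, subjectId)),
  });
  return res.json();
}
```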
### Responses API

- [POST /api/v1/responses](https://mnexium.com/docs#responses): **Drop-in replacement for OpenAI's responses.create.** Use the same request format as OpenAI, just change the URL to Mnexium and add the `mnx` object.

### Chat History

- [GET /api/v1/chat/history/list](https://mnexium.com/docs#history): List conversations for a user (query param: `subject_id`)
- [GET /api/v1/chat/history/read](https://mnexium.com/docs#history): Get messages for a conversation (query params: `chat_id`, `subject_id`)
- [DELETE /api/v1/chat/history/delete](https://mnexium.com/docs#history): Delete a conversation (query params: `chat_id`, `subject_id`)

### Memories

- [GET /api/v1/memories](https://mnexium.com/docs#memories): List memories for a user (query param: `subject_id`)
- [GET /api/v1/memories/search](https://mnexium.com/docs#memories): Semantic search over memories (query params: `subject_id`, `q`)
- [POST /api/v1/memories](https://mnexium.com/docs#memories): Create a memory (body: `subject_id`, `text`, etc.)
- [GET /api/v1/memories/:id](https://mnexium.com/docs#memories): Get a specific memory by ID
- [GET /api/v1/memories/:id/claims](https://mnexium.com/docs#memories): Get structured claims/assertions extracted from a memory
- [PATCH /api/v1/memories/:id](https://mnexium.com/docs#memories): Update a memory
- [DELETE /api/v1/memories/:id](https://mnexium.com/docs#memories): Delete a memory
- [GET /api/v1/memories/superseded](https://mnexium.com/docs#memories): List superseded memories (query param: `subject_id`)
- [POST /api/v1/memories/:id/restore](https://mnexium.com/docs#memories): Restore a superseded memory
- [GET /api/v1/memories/recalls](https://mnexium.com/docs#memories): Query memory recall events (query params: `chat_id` or `memory_id`)

### Claims

- [GET /api/v1/claims/:id](https://mnexium.com/docs#claims): Get one claim with assertions, edges, and supersession chain
- [GET /api/v1/claims/subject/:subject_id/truth](https://mnexium.com/docs#claims): Get current truth for a subject (all active slot values)
- [GET /api/v1/claims/subject/:subject_id/slot/:slot](https://mnexium.com/docs#claims): Get current value for a specific slot
- [GET /api/v1/claims/subject/:subject_id/slots](https://mnexium.com/docs#claims): List slot states grouped by active/superseded/other
- [GET /api/v1/claims/subject/:subject_id/graph](https://mnexium.com/docs#claims): Get claim graph snapshot (claims + typed edges)
- [GET /api/v1/claims/subject/:subject_id/history](https://mnexium.com/docs#claims): Get claim history for a subject
- [POST /api/v1/claims](https://mnexium.com/docs#claims): Create a claim (body: `subject_id`, `predicate`, `object_value`)
- [POST /api/v1/claims/:id/retract](https://mnexium.com/docs#claims): Retract a claim (restores previous slot winner when available)

### Claim OLTP Workflow

Mnexium uses a two-layer workflow:

- **Layer 1 (Ingestion/Provenance):** memories + observations capture raw evidence and context
- **Layer 2 (Truth/OLTP):** claims + slot state resolve current truth per slot

Write flow:

1. Store/update memory evidence
2. Extract atomic claims
3. Insert claim (+ optional observation/assertion links)
4. Run async graph linking (edges + supersession + slot winner)
5. Read truth from `/api/v1/claims/subject/:subject_id/truth` or `/slot/:slot`

### Profiles

- [GET /api/v1/profiles](https://mnexium.com/docs#profiles): Get user profile (query param: `subject_id`)
- [GET /api/v1/profiles/schema](https://mnexium.com/docs#profiles): Get active profile schema for the project
- [PATCH /api/v1/profiles](https://mnexium.com/docs#profiles): Update profile fields (body: `subject_id`, `updates` array with `field_key` and `value`)
- [DELETE /api/v1/profiles](https://mnexium.com/docs#profiles): Delete a profile field (query params: `subject_id`, `field_key`)

### Agent State

- [GET /api/v1/state/:key](https://mnexium.com/docs#agent-state): Get agent state (header: `X-Subject-ID`)
- [PUT /api/v1/state/:key](https://mnexium.com/docs#agent-state): Set agent state (header: `X-Subject-ID`, body: `value`, `ttl_seconds`)
- [DELETE /api/v1/state/:key](https://mnexium.com/docs#agent-state): Delete agent state (header: `X-Subject-ID`)

### Real-time Events

- [GET /api/v1/events/memories](https://mnexium.com/docs#events): Subscribe to real-time memory events via SSE (query param: `subject_id`)

### System Prompts

- [GET /api/v1/prompts](https://mnexium.com/docs#system-prompts): List system prompts
- [POST /api/v1/prompts](https://mnexium.com/docs#system-prompts): Create system prompt (body: `name`, `prompt_text`, `scope`, `scope_id`, `is_default`, `priority`)
- [GET /api/v1/prompts/:id](https://mnexium.com/docs#system-prompts): Get one system prompt
- [PATCH /api/v1/prompts/:id](https://mnexium.com/docs#system-prompts): Update system prompt
- [DELETE /api/v1/prompts/:id](https://mnexium.com/docs#system-prompts): Delete system prompt
- [GET /api/v1/prompts/resolve](https://mnexium.com/docs#system-prompts): Preview which prompts will be injected (query params: `subject_id`, `chat_id`, `combined`, `default_only`)

### Memory Policies

- [GET /api/v1/memory/policies](https://mnexium.com/docs#additional-apis): List memory policies
- [POST /api/v1/memory/policies](https://mnexium.com/docs#additional-apis): Create memory policy (body: `name`, `policy_text`, `scope`, `scope_id`, `is_default`, `priority`, `config`)
- [GET /api/v1/memory/policies/:id](https://mnexium.com/docs#additional-apis): Get one memory policy
- [PATCH /api/v1/memory/policies/:id](https://mnexium.com/docs#additional-apis): Update memory policy
- [DELETE /api/v1/memory/policies/:id](https://mnexium.com/docs#additional-apis): Delete memory policy
- [GET /api/v1/memory/policies/resolve](https://mnexium.com/docs#additional-apis): Resolve effective memory policies for a context (query params: `subject_id`, `chat_id`, `default_only`, `combined`)

### Additional APIs

- [POST /api/v1/messages](https://mnexium.com/docs#additional-apis): Anthropic Messages-compatible route with Mnexium features
- [GET /api/v1/audit/requests](https://mnexium.com/docs#additional-apis): Query request/response audit logs
- [GET /api/v1/records/schemas](https://mnexium.com/docs#additional-apis): List record schemas
- [POST /api/v1/records/schemas](https://mnexium.com/docs#additional-apis): Create/update record schema
- [GET /api/v1/records/schemas/:type](https://mnexium.com/docs#additional-apis): Get schema by type
- [GET /api/v1/records/:type](https://mnexium.com/docs#additional-apis): List records for a type
- [POST /api/v1/records/:type](https://mnexium.com/docs#additional-apis): Create record
- [GET /api/v1/records/:type/:id](https://mnexium.com/docs#additional-apis): Get record by ID
- [PUT /api/v1/records/:type/:id](https://mnexium.com/docs#additional-apis): Update record
- [DELETE /api/v1/records/:type/:id](https://mnexium.com/docs#additional-apis): Soft-delete record
- [POST /api/v1/records/:type/query](https://mnexium.com/docs#additional-apis): Filter/query records
- [POST /api/v1/records/:type/search](https://mnexium.com/docs#additional-apis): Semantic search records

## Mnexium Parameters (mnx object)

Add the `mnx` object to any chat completion request to enable features:

```json
{
  "model": "gpt-4o-mini",
  "messages": [{"role": "user", "content": "Hello"}],
  "mnx": {
    "subject_id": "user_123",
    "chat_id": "550e8400-e29b-41d4-a716-446655440000",
    "history": true,
    "learn": true,
    "recall": true,
    "summarize": "balanced",
    "state": { "load": true, "key": "workflow_state" },
    "system_prompt": "sp_abc123",
    "memory_policy": "mem_pol_abc123"
  }
}
```

### Parameters & Defaults

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `subject_id` | string | auto-generated | Unique identifier for the user/subject. Auto-generated with `subj_` prefix if omitted. |
| `chat_id` | string | auto-generated UUID | Conversation identifier. If omitted, a new UUID is generated for each request. |
| `log` | boolean | `true` | Save messages to chat history. |
| `learn` | boolean/string | `true` | Extract and store memories. `true` (LLM decides), `"force"` (always), `false` (never). **Extraction is asynchronous — it never blocks the response.** |
| `recall` | boolean | `false` | Inject relevant memories into context. |
| `history` | boolean | `true` | Prepend previous messages from this chat. Set `false` to disable. (Defaults to `false` for the Responses API.) |
| `summarize` | string/object | `false` | Enable summarization. **Valid strings: `"light"`, `"balanced"`, `"aggressive"`** or a custom config object. |
| `state` | object | `null` | Agent state config: `{ "load": true, "key": "optional_key" }` |
| `system_prompt` | string/boolean | `true` | `true` (auto-resolve), `false` (skip), or a prompt ID like `"sp_abc123"`. |
| `memory_policy` | string/boolean | `true` | `true`/omitted (auto-resolve by scope), `false` (disable memory policy), or a policy ID like `"mem_pol_abc123"`. |
| `metadata` | object | `null` | Custom metadata attached to saved logs. |

**Important:** `log`, `learn`, and `history` default to `true`. `recall` defaults to `false` — you must explicitly set it to `true` to enable it. Profile data is automatically included when `recall: true`.

## Summarization Modes

Only these string values are valid for `summarize`:

- **`"light"`**: ~50% compression, starts at 70K tokens, keeps 25 recent messages
- **`"balanced"`**: ~75% compression, starts at 55K tokens, keeps 15 recent messages
- **`"aggressive"`**: ~95% compression, starts at 35K tokens, keeps 8 recent messages

For a custom config, use an object with: `start_at_tokens`, `chunk_size`, `keep_recent_messages`, `summary_target`

## Example Requests

### OpenAI

```bash
curl -X POST "https://www.mnexium.com/api/v1/chat/completions" \
  -H "x-mnexium-key: $MNX_KEY" \
  -H "Content-Type: application/json" \
  -H "x-openai-key: $OPENAI_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{ "role": "user", "content": "Remember that I prefer dark mode" }],
    "mnx": { "subject_id": "user_123", "history": true, "learn": true, "recall": true }
  }'
```

### Claude

```bash
curl -X POST "https://www.mnexium.com/api/v1/chat/completions" \
  -H "x-mnexium-key: $MNX_KEY" \
  -H "Content-Type: application/json" \
  -H "x-anthropic-key: $ANTHROPIC_KEY" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "messages": [{ "role": "user", "content": "What do I prefer?" }],
    "mnx": { "subject_id": "user_123", "recall": true }
  }'
```

### Gemini

```bash
curl -X POST "https://www.mnexium.com/api/v1/chat/completions" \
  -H "x-mnexium-key: $MNX_KEY" \
  -H "Content-Type: application/json" \
  -H "x-google-key: $GOOGLE_KEY" \
  -d '{
    "model": "gemini-2.0-flash-lite",
    "messages": [{ "role": "user", "content": "What do I prefer?" }],
    "mnx": { "subject_id": "user_123", "recall": true }
  }'
```

## Cross-Provider Memory

Memories learned with one provider are available to all others.
Learn a fact with GPT-4o-mini, recall it with Claude or Gemini:

```javascript
// Learn with OpenAI
await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "My favorite color is blue" }],
  mnx: { subject_id: "user_123", learn: "force" },
});

// Recall with Gemini - it knows!
await gemini.chat.completions.create({
  model: "gemini-2.0-flash-lite",
  messages: [{ role: "user", content: "What's my favorite color?" }],
  mnx: { subject_id: "user_123", recall: true },
});
// Gemini responds: "Your favorite color is blue!"
```

## Supported Models

**OpenAI:** gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-4, o1, o1-mini, o1-preview

**Anthropic:** claude-sonnet-4-20250514, claude-3-5-sonnet-20241022, claude-3-opus-20240229, claude-3-sonnet-20240229, claude-3-haiku-20240307

**Google Gemini:** gemini-2.0-flash-lite, gemini-2.5-flash, gemini-1.5-pro, gemini-1.5-flash

## Links

- [Documentation](https://mnexium.com/docs)
- [Blog](https://mnexium.com/blogs)
- [Sign Up](https://mnexium.com)
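As a closing example, the read-only REST endpoints above (Chat History, Memories, and friends) can also be called with plain `fetch`. This is a minimal sketch assuming only the paths and query params listed in the Chat History section; the helper name `historyReadUrl` is illustrative, and `MNX_KEY` is a placeholder environment variable.

```javascript
// Sketch: build the history-read URL from the documented query params.
function historyReadUrl(chatId, subjectId) {
  const url = new URL("https://mnexium.com/api/v1/chat/history/read");
  url.searchParams.set("chat_id", chatId);
  url.searchParams.set("subject_id", subjectId);
  return url.toString();
}

// Fetch the conversation, authenticating with the Mnexium key header:
async function readHistory(chatId, subjectId) {
  const res = await fetch(historyReadUrl(chatId, subjectId), {
    headers: { "x-mnexium-key": process.env.MNX_KEY },
  });
  return res.json();
}
```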