API Documentation

Reference documentation for Mnexium public HTTP APIs. Use these endpoints to run OpenAI-powered requests with project-scoped continuity through chat history, memories, and system prompts.

Concepts & Architecture

Before diving into the API, it helps to understand the core concepts that power Mnexium's memory system.

Mnexium Architecture Diagram

Your agent sends a normal API request to Mnexium, along with a few mnx options. Mnexium automatically retrieves conversation history, relevant long-term memory, and agent state — and builds an enriched prompt for the model.

The model returns a response, and Mnexium optionally learns from the interaction. Every step is visible through logs, traces, and recall events so you can debug exactly what happened.

Who This Is For

Use Mnexium if you're building AI assistants or agents that must remember users across sessions, resume multi-step tasks, and be configurable per project, user, or conversation. It's the memory and state layer so you can focus on your product.

Works with OpenAI (GPT models), Anthropic (Claude), and Google (Gemini) — bring your own API key and Mnexium handles the rest. Support for additional model providers is coming soon.

Chat History, Memory & State

Three distinct but complementary systems for context management:

Chat History

The raw conversation log — every message sent and received within a chat_id. Used for context continuity within a single conversation session. Think of it as short-term, session-scoped memory.

Enabled with history: true

Agent Memory

Extracted facts, preferences, and context about a subject_id (user). Persists across all conversations and sessions. Think of it as long-term, user-scoped memory that the agent "remembers" about someone.

Created with learn: true, recalled with recall: true

Agent State

Short-term, task-scoped working context for agentic workflows. Tracks task progress, pending actions, and session variables. Think of it as the agent's "scratchpad" for multi-step tasks.

Stored with PUT /state/:key, loaded with state.load: true

Message Assembly Order

For chat completions, Mnexium assembles the final messages array in this order:

1. Resolved System Prompt — Project → subject → chat scoped prompt (if system_prompt is not false)
2. Agent State — Current task context as JSON (if state.load: true)
3. Memories — Relevant facts about the user (if recall: true)
4. Chat History — Previous messages from this conversation (if history: true)
5. User Messages — The messages you provide in the request

Items 1-3 are appended to the system message. Item 4 is prepended to the messages array. Item 5 is your original request.
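
As a sketch, the assembly order can be expressed in code. This is illustrative only — every function and variable name here is hypothetical, not Mnexium internals:

```javascript
// Illustrative sketch of the assembly order above — not Mnexium's actual
// implementation; all names are hypothetical.
function assembleMessages({ systemPrompt, state, memories, history, userMessages }) {
  const systemParts = [];
  if (systemPrompt) systemParts.push(systemPrompt);                     // 1. resolved system prompt
  if (state) systemParts.push(`Agent state: ${JSON.stringify(state)}`); // 2. agent state as JSON
  if (memories && memories.length) {
    systemParts.push("Memories:\n" + memories.map((m) => `- ${m}`).join("\n")); // 3. recalled memories
  }
  const messages = [];
  if (systemParts.length) {
    messages.push({ role: "system", content: systemParts.join("\n\n") }); // items 1-3 share one system message
  }
  if (history) messages.push(...history);  // 4. prior messages from this chat_id
  messages.push(...userMessages);          // 5. the messages from your request
  return messages;
}
```
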

Memory Fields

Each memory has metadata that helps with organization, recall, and lifecycle management:

status
string
active (current, will be recalled) or superseded (replaced by newer memory, won't be recalled)
kind
string
Category: fact, preference, context, or note
importance
number
0-100 score affecting recall priority. Higher = more likely to be included in context.
visibility
string
private (subject only), shared (project-wide), or public
seen_count
number
How many times this memory has been recalled in conversations.
last_seen_at
timestamp
When this memory was last recalled.
superseded_by
string
If superseded, the ID of the memory that replaced this one.

Memory Versioning

When new memories are created, the system automatically handles conflicts using semantic similarity. There are only two status values: active and superseded.

Skip
If a new memory is very similar to an existing one (same meaning), the new memory is not created to avoid redundancy.

Example: "User likes coffee" → "User enjoys coffee" (new one skipped)

Supersede
If a new memory conflicts with an existing one (same topic, different value), the old memory's status changes to superseded and the new one is created as active.

Example: "Favorite fruit is blueberry" → "Favorite fruit is apple" (old becomes superseded)

Create
If the memory is about a different topic, it's stored as a new active memory.

Example: "User likes coffee" + "User works remotely" (both remain active)

Superseded memories are preserved for audit purposes and can be restored via the POST /memories/:id/restore endpoint.
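
The three outcomes can be sketched as a simple decision rule. The 0.9 threshold and the `conflicts` flag below are hypothetical; Mnexium's actual similarity logic is not documented here:

```javascript
// Decision sketch for skip / supersede / create. Threshold and flag names
// are hypothetical, for illustration only.
function classifyNewMemory(similarity, conflicts) {
  // similarity: semantic similarity to the closest existing memory (0-1)
  // conflicts: same topic but a different value (e.g., a changed preference)
  if (conflicts) return "supersede"; // old memory becomes superseded, new one is active
  if (similarity >= 0.9) return "skip"; // same meaning: new memory is not created
  return "create"; // different topic: stored as a new active memory
}
```
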

Memory Decay & Reinforcement

Memories naturally decay over time, similar to human memory. Frequently recalled memories become stronger, while unused memories gradually fade in relevance. This ensures the most important and actively-used information surfaces during recall.

Confidence
How certain the AI was when extracting this memory. Higher confidence memories are prioritized during recall.
Reinforcement
Each time a memory is recalled, it gets reinforced — strengthening its relevance and resetting its decay timer.
Temporal
Some memories are time-sensitive (e.g., "User is traveling next week"). These decay faster than permanent facts.
Source
Memories can be explicit (created via API), inferred (extracted from conversation), or corrected (user corrected an inference).
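
As an illustration only — Mnexium does not publish its decay formula — exponential decay with a reinforcement reset might look like this:

```javascript
// Hypothetical recall-priority score: relevance halves every halfLifeDays,
// and each recall "resets the clock" by setting daysSinceLastSeen back to 0.
function recallScore({ importance, daysSinceLastSeen, temporal = false }) {
  const halfLifeDays = temporal ? 7 : 30; // time-sensitive memories fade faster (values illustrative)
  return importance * Math.pow(0.5, daysSinceLastSeen / halfLifeDays);
}
```
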

The Memory Lifecycle

1. Extract — LLM analyzes conversation and identifies memorable facts (learn: true)
2. Store — Memory is saved with embedding for semantic search
3. Recall — Relevant memories are injected into future conversations (recall: true)
4. Reinforce — Recalled memories get stronger; unused memories naturally decay
5. Evolve — Conflicting memories supersede old ones; duplicates are skipped

Getting Started

Mnexium provides a proxy layer for OpenAI APIs with built-in support for conversation persistence, memory management, and system prompt injection.

Quick Example

A request to the Chat Completions API with history, memory extraction, and all Mnexium features enabled:

curl -X POST "https://www.mnexium.com/api/v1/chat/completions" \
  -H "Authorization: Bearer $MNX_KEY" \
  -H "Content-Type: application/json" \
  -H "x-openai-key: $OPENAI_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{ "role": "user", "content": "What IDE should I use?" }],
    "mnx": {
      "subject_id": "user_123",
      "chat_id": "550e8400-e29b-41d4-a716-446655440000",
      "log": true,
      "learn": true,
      "recall": true,
      "history": true
    }
  }'

What happens:

  • log: true — Saves this conversation turn to chat history
  • learn: true — LLM analyzes the message and may extract memories
  • recall: true — Injects relevant stored memories into context (e.g., "User prefers dark mode", "User is learning Rust")
  • history: true — Prepends previous messages from this chat_id for context

Use learn: "force" to always create a memory, or learn: false to skip memory extraction entirely.

Quick Start

Get Started Repository

Clone our starter repo for working examples in Node.js and Python:

github.com/mariusndini/mnexium-get-started

Native SDKs

Use Your Favorite SDK

Mnexium works with native SDKs from OpenAI, Anthropic, and Google. Simply point your SDK's base URL to Mnexium and get persistent memory across all providers.

Provider | SDK package | Base URL
OpenAI | openai | https://mnexium.com/api/v1
Anthropic | @anthropic-ai/sdk | https://mnexium.com/api
Google | @google/genai | https://mnexium.com

OpenAI SDK

Use the native OpenAI SDK with Mnexium by setting the baseURL.

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_KEY,
  baseURL: "https://mnexium.com/api/v1",
  defaultHeaders: {
    "Authorization": `Bearer ${process.env.MNX_KEY}`,
  },
});

const response = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
  mnx: {
    subject_id: "user_123",
    learn: true,
    recall: true,
  },
});

Anthropic SDK

Use the native Anthropic SDK. Note the base URL ends with /api (the SDK adds /v1/messages).

import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic({
  apiKey: process.env.CLAUDE_API_KEY,
  baseURL: "https://mnexium.com/api",
  defaultHeaders: {
    "Authorization": `Bearer ${process.env.MNX_KEY}`,
  },
});

const response = await client.messages.create({
  model: "claude-3-haiku-20240307",
  messages: [{ role: "user", content: "Hello!" }],
  max_tokens: 1024,
});

Google Gemini SDK

Use the native Google Generative AI SDK. The base URL should be the root domain.

import { GoogleGenAI } from "@google/genai";

const client = new GoogleGenAI({
  apiKey: process.env.GEMINI_KEY,
  httpOptions: {
    baseUrl: "https://mnexium.com",
    headers: {
      "Authorization": `Bearer ${process.env.MNX_KEY}`,
    },
  },
});

const response = await client
  .models.generateContent({
    model: "gemini-2.0-flash-lite",
    contents: "Hello!",
  });

Cross-Provider Memory Sharing

Memories learned with one provider are automatically available to all others. Use the same subject_id across providers to share context.

// Learn a fact with OpenAI
await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "My favorite color is purple" }],
  mnx: { subject_id: "user_123", learn: "force" },
});

// Recall with Claude - it knows the color!
const response = await fetch("https://mnexium.com/api/v1/chat/completions", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${MNX_KEY}`,
    "x-anthropic-key": CLAUDE_KEY,
  },
  body: JSON.stringify({
    model: "claude-3-haiku-20240307",
    messages: [{ role: "user", content: "What is my favorite color?" }],
    mnx: { subject_id: "user_123", recall: true },
  }),
});
// Claude responds: "Your favorite color is purple!"

This enables powerful workflows where you can use the best model for each task while maintaining consistent user context across all interactions.

Authentication

API Keys

All requests require a Mnexium API key passed via the Authorization header.

Authorization*
header
Bearer mnx_live_... — Your Mnexium API key
x-openai-key
header
sk-... — Your OpenAI API key (required for OpenAI models)
x-anthropic-key
header
sk-ant-... — Your Anthropic API key (required for Claude models)

Provide the API key for the provider matching your chosen model. For example, use x-openai-key for ChatGPT or x-anthropic-key for Claude.

API Key Permissions

API keys can be scoped to limit access. Available scopes:

Scope | GET | POST/PATCH | DELETE
read | ✓ | — | —
write | — | ✓ | —
delete | — | — | ✓
* | ✓ | ✓ | ✓

The mnx Object

Include the mnx object in your request body to control Mnexium features:

subject_id
string
Identifies the end-user. Auto-generated with subj_ prefix if omitted.
chat_id
string
Conversation identifier (UUID). Auto-generated if omitted.
log
boolean
Save messages to chat history. Default: true
learn
boolean | 'force'
Memory extraction: true (LLM decides), "force" (always), false (never). Default: true
recall
boolean
Inject relevant stored memories into context. Searches memories for this subject and adds matching ones to the system prompt. Default: false
history
boolean
Prepend previous messages from this chat. Default: false
system_prompt
boolean | string
true (auto-resolve, default), false (skip injection), or a prompt ID like "sp_abc" for explicit selection.
metadata
object
Custom metadata attached to saved logs.
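
Putting the fields together, a typical mnx object might look like this (field names are from the list above; the values are illustrative):

```json
{
  "mnx": {
    "subject_id": "user_123",
    "chat_id": "550e8400-e29b-41d4-a716-446655440000",
    "log": true,
    "learn": true,
    "recall": true,
    "history": true,
    "system_prompt": "sp_abc",
    "metadata": { "source": "mobile_app" }
  }
}
```
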
Responses API
POST /api/v1/responses

Proxy for OpenAI and Anthropic APIs with Mnexium extensions for history, persistence, and system prompts. Supports GPT-4, Claude, and other models.

Scope: responses:write
Request
curl -X POST "https://www.mnexium.com/api/v1/responses" \
  -H "Authorization: Bearer $MNX_KEY" \
  -H "Content-Type: application/json" \
  -H "x-openai-key: $OPENAI_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "input": "What is the weather like?",
    "mnx": {
      "subject_id": "user_123",
      "chat_id": "550e8400-e29b-41d4-a716-446655440000",  // Must be a UUID
      "log": true,
      "learn": true
    }
  }'
mnx Parameters
subject_id
string
User/subject identifier for memory and history.
chat_id
string
Conversation ID (UUID recommended) for history grouping.
log
boolean
Save to chat history. Default: true
learn
boolean | 'force'
Memory extraction: false (never), true (LLM decides), "force" (always). Default: true
history
boolean | number
Prepend chat history. Default: false
system_prompt
string | boolean
Prompt ID, true (auto-resolve), or false (skip). Default: true
Response
{
  "id": "resp_abc123",
  "object": "response",
  "created_at": 1702847400,
  "output": [
    {
      "type": "message",
      "role": "assistant",
      "content": [
        { "type": "output_text", "text": "I don't have access to real-time weather data..." }
      ]
    }
  ],
  "usage": { "input_tokens": 12, "output_tokens": 45 }
}
Response headers include X-Mnx-Chat-Id and X-Mnx-Subject-Id
Claude (Anthropic) Example

Use x-anthropic-key header and a Claude model name.

Request
curl -X POST "https://www.mnexium.com/api/v1/responses" \
  -H "Authorization: Bearer $MNX_KEY" \
  -H "Content-Type: application/json" \
  -H "x-anthropic-key: $ANTHROPIC_KEY" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "input": "What is the weather like?",
    "mnx": {
      "subject_id": "user_123",
      "log": true,
      "learn": true
    }
  }'
Streaming Example

Set "stream": true to receive Server-Sent Events (SSE).

Request
curl -X POST "https://www.mnexium.com/api/v1/responses" \
  -H "Authorization: Bearer $MNX_KEY" \
  -H "Content-Type: application/json" \
  -H "x-openai-key: $OPENAI_KEY" \
  -d '{ "model": "gpt-4o-mini", "input": "Hello", "stream": true }'
Response (SSE)
data: {"type":"response.output_text.delta","delta":"Hello"}
data: {"type":"response.output_text.delta","delta":"!"}
data: {"type":"response.output_text.delta","delta":" How"}
data: {"type":"response.output_text.delta","delta":" can"}
data: {"type":"response.output_text.delta","delta":" I"}
data: {"type":"response.output_text.delta","delta":" help?"}
data: {"type":"response.completed","response":{...}}
data: [DONE]

Parse each data: line as JSON. Collect delta values to build the full response.
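
A minimal parser for this stream — a sketch assuming the event shapes shown above ("response.output_text.delta" with a string delta field):

```javascript
// Accumulate the streamed text from a Responses API SSE body.
function collectOutputText(sseBody) {
  let text = "";
  for (const line of sseBody.split("\n")) {
    if (!line.startsWith("data: ")) continue; // ignore non-data lines
    const payload = line.slice(6).trim();
    if (payload === "[DONE]") break; // end-of-stream sentinel
    const event = JSON.parse(payload);
    if (event.type === "response.output_text.delta") text += event.delta;
  }
  return text;
}
```

In real code you would feed chunks from a streaming HTTP reader into this loop rather than a complete string.
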

Chat Completions
POST /api/v1/chat/completions

Proxy for OpenAI and Anthropic Chat APIs with automatic history prepending and system prompt injection. Supports GPT-4, Claude, and other models.

Scope: chat:write
Request
curl -X POST "https://www.mnexium.com/api/v1/chat/completions" \
  -H "Authorization: Bearer $MNX_KEY" \
  -H "Content-Type: application/json" \
  -H "x-openai-key: $OPENAI_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      { "role": "user", "content": "Hello!" }
    ],
    "mnx": {
      "subject_id": "user_123",
      "chat_id": "550e8400-e29b-41d4-a716-446655440000",  // Must be a UUID
      "log": true,
      "learn": true,
      "history": true
    }
  }'
mnx Parameters
subject_id
string
User/subject identifier for memory and history.
chat_id
string
Conversation ID (UUID recommended) for history grouping.
log
boolean
Save to chat history. Default: true
learn
boolean | 'force'
Memory extraction: false (never), true (LLM decides), "force" (always). Default: true
history
boolean | number
Prepend chat history. Default: false
system_prompt
string | boolean
Prompt ID, true (auto-resolve), or false (skip). Default: true
Response
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1702847400,
  "model": "gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": { "prompt_tokens": 10, "completion_tokens": 12, "total_tokens": 22 }
}
Response headers include X-Mnx-Chat-Id and X-Mnx-Subject-Id
Streaming Example

Set "stream": true to receive Server-Sent Events (SSE).

Request
curl -X POST "https://www.mnexium.com/api/v1/chat/completions" \
  -H "Authorization: Bearer $MNX_KEY" \
  -H "Content-Type: application/json" \
  -H "x-openai-key: $OPENAI_KEY" \
  -d '{ "model": "gpt-4o-mini", "messages": [{"role":"user","content":"Hi"}], "stream": true }'
Response (SSE)
data: {"choices":[{"delta":{"role":"assistant"},"index":0}]}
data: {"choices":[{"delta":{"content":"Hello"},"index":0}]}
data: {"choices":[{"delta":{"content":"!"},"index":0}]}
data: {"choices":[{"delta":{"content":" How"},"index":0}]}
data: {"choices":[{"delta":{"content":" can"},"index":0}]}
data: {"choices":[{"delta":{"content":" I"},"index":0}]}
data: {"choices":[{"delta":{"content":" help?"},"index":0}]}
data: {"choices":[{"delta":{},"finish_reason":"stop","index":0}]}
data: [DONE]

Parse each data: line as JSON. Concatenate delta.content values to build the response.
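
A minimal sketch for accumulating the chat-completions stream, assuming the chunk shape shown above:

```javascript
// Accumulate delta.content from a Chat Completions SSE body.
function collectDeltaContent(sseBody) {
  let text = "";
  for (const line of sseBody.split("\n")) {
    if (!line.startsWith("data: ")) continue; // ignore non-data lines
    const payload = line.slice(6).trim();
    if (payload === "[DONE]") break; // end-of-stream sentinel
    const chunk = JSON.parse(payload);
    const delta = chunk.choices?.[0]?.delta;
    if (delta?.content) text += delta.content; // role-only and empty deltas are skipped
  }
  return text;
}
```
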

Chat History
GET /api/v1/chat/history/list

List all chats for a subject. Returns chat summaries with message counts — useful for building chat sidebars.

Scope: history:read
subject_id*
string
The subject to list chats for.
limit
number
Max chats to return. Default: 50, Max: 500
Request
curl -G "https://www.mnexium.com/api/v1/chat/history/list" \
  -H "Authorization: Bearer $MNX_KEY" \
  --data-urlencode "subject_id=user_123" \
  --data-urlencode "limit=50"
Response
{
  "chats": [
    {
      "subject_id": "user_123",
      "chat_id": "550e8400-e29b-41d4-a716-446655440000",
      "last_time": "2024-12-17T19:00:01Z",
      "message_count": 12
    },
    {
      "subject_id": "user_123",
      "chat_id": "660e8400-e29b-41d4-a716-446655440001",
      "last_time": "2024-12-16T14:30:00Z",
      "message_count": 8
    }
  ]
}
GET /api/v1/chat/history/read

Retrieve message history for a specific conversation. Use after listing chats to load full messages.

Scope: history:read
chat_id*
string
The conversation ID to fetch history for.
subject_id
string
Filter by subject (optional).
limit
number
Max messages to return. Default: 200
Request
curl -G "https://www.mnexium.com/api/v1/chat/history/read" \
  -H "Authorization: Bearer $MNX_KEY" \
  --data-urlencode "chat_id=550e8400-e29b-41d4-a716-446655440000" \
  --data-urlencode "subject_id=user_123" \
  --data-urlencode "limit=50"
Response
{
  "messages": [
    {
      "role": "user",
      "message": "Hello!",
      "event_time": "2024-12-17T19:00:00Z"
    },
    {
      "role": "assistant",
      "message": "Hi there! How can I help?",
      "event_time": "2024-12-17T19:00:01Z"
    }
  ]
}
DELETE /api/v1/chat/history/delete

Delete all messages in a chat. This is a soft delete — messages are marked as deleted but retained for audit purposes.

Scope: history:write
chat_id*
string
The conversation ID to delete.
subject_id
string
Filter by subject (optional, for additional safety).
Request
curl -X DELETE "https://www.mnexium.com/api/v1/chat/history/delete?chat_id=550e8400-e29b-41d4-a716-446655440000&subject_id=user_123" \
  -H "Authorization: Bearer $MNX_KEY"
Response
{
  "success": true,
  "chat_id": "550e8400-e29b-41d4-a716-446655440000"
}
Summarization

Long conversations can exceed context window limits and increase costs. Mnexium's Summarization feature automatically compresses older messages into concise summaries while preserving recent messages verbatim.

When enabled, Mnexium uses gpt-4o-mini to generate rolling summaries of your conversation history. Summaries are cached and reused across requests, so you only pay for summarization once per conversation segment.

Use the summarize parameter in your mnx object to enable automatic summarization. Choose a preset mode based on your cost/fidelity tradeoff:

Mode | Start At | Keep Recent | Summary Target | Best For
off | — | All | — | Maximum fidelity (default)
light | 70K tokens | 25 msgs | ~1,800 tokens | Safe compression
balanced | 55K tokens | 15 msgs | ~1,100 tokens | Best cost/performance
aggressive | 35K tokens | 8 msgs | ~700 tokens | Cheapest possible
Using a preset mode
{
  "model": "gpt-4o-mini",
  "messages": [{ "role": "user", "content": "..." }],
  "mnx": {
    "subject_id": "user_123",
    "chat_id": "550e8400-e29b-41d4-a716-446655440000",
    "summarize": "balanced"
  }
}
Using custom config
{
  "model": "gpt-4o-mini",
  "messages": [{ "role": "user", "content": "..." }],
  "mnx": {
    "subject_id": "user_123",
    "chat_id": "550e8400-e29b-41d4-a716-446655440000",
    "summarize_config": {
      "start_at_tokens": 40000,
      "chunk_size": 15000,
      "keep_recent_messages": 10,
      "summary_target": 800
    }
  }
}
start_at_tokens — Token threshold to trigger summarization. History below this is sent verbatim.
chunk_size — How many tokens to summarize at a time when history exceeds the threshold.
keep_recent_messages — Always keep this many recent messages verbatim (not summarized).
summary_target — Target token count for each generated summary.

  1. When a chat request comes in, Mnexium counts tokens in the conversation history using tiktoken.
  2. If history exceeds start_at_tokens, older messages are summarized.
  3. The summary is generated using gpt-4o-mini and cached in the database.
  4. Future requests reuse the cached summary until new messages push past the threshold again.
  5. The final context sent to the LLM is: [Summary] + [Recent Messages] + [New Message]
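
The flow above can be sketched as follows. Token counting stands in for tiktoken and `summary` stands in for the cached rolling summary; all names here are hypothetical:

```javascript
// Sketch of threshold-based summarization: verbatim history below the
// threshold, otherwise [Summary] + [Recent Messages].
function buildContext({ history, summary, startAtTokens, keepRecentMessages, countTokens }) {
  const totalTokens = history.reduce((n, msg) => n + countTokens(msg.content), 0);
  if (totalTokens <= startAtTokens) {
    return history; // below the threshold: send history verbatim
  }
  const recent = history.slice(-keepRecentMessages); // recent turns stay verbatim
  // Final context: [Summary] + [Recent Messages]; the new message is appended by the caller.
  return [{ role: "system", content: `Summary of earlier conversation: ${summary}` }, ...recent];
}
```
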

Mnexium uses a rolling summary by default: we maintain a single condensed memory block for older messages and inject that plus the most recent turns into the model.

This is the most token-efficient strategy and is recommended for almost all workloads.

For specialized use cases that need more detailed historical context inside the prompt (at higher token cost), granular summaries can be enabled in a future release, which keep multiple smaller summary blocks instead of one.

Memories
GET /api/v1/memories

List all memories for a subject. Use this for full memory management.

Scope: memories:read
subject_id*
string
The subject to fetch memories for.
limit
number
Max memories to return. Default: 50
offset
number
Pagination offset. Default: 0
Request
curl -G "https://www.mnexium.com/api/v1/memories" \
  -H "Authorization: Bearer $MNX_KEY" \
  --data-urlencode "subject_id=user_123" \
  --data-urlencode "limit=20"
Response
{
  "data": [
    {
      "id": "mem_abc123",
      "text": "User prefers dark mode interfaces",
      "kind": "preference",
      "importance": 75,
      "created_at": "2024-12-15T10:30:00Z"
    }
  ],
  "count": 1
}
GET /api/v1/memories/search

Semantic search over a subject's memories. Returns the most relevant items by similarity score.

Scope: memories:search
subject_id*
string
The subject to search memories for.
q*
string
Search query.
limit
number
Max results. Default: 10
Request
curl -G "https://www.mnexium.com/api/v1/memories/search" \
  -H "Authorization: Bearer $MNX_KEY" \
  --data-urlencode "subject_id=user_123" \
  --data-urlencode "q=food preferences" \
  --data-urlencode "limit=5"
Response
{
  "data": [
    {
      "id": "mem_xyz789",
      "text": "User is vegetarian and enjoys Italian cuisine",
      "score": 0.92
    },
    {
      "id": "mem_uvw012",
      "text": "User is allergic to peanuts",
      "score": 0.78
    }
  ],
  "query": "food preferences",
  "count": 2
}
POST /api/v1/memories

Manually create a memory. For automatic extraction with LLM-chosen classification, use the Responses or Chat API with learn: true instead.

Scope: memories:write
💡 Tip: When you use the Responses or Chat Completions API with learn: true, the LLM automatically extracts memories and intelligently chooses the kind, importance, and tags based on conversation context. Use learn: "force" to always create a memory. This endpoint is for manual injection when you need direct control.
subject_id*
string
The subject this memory belongs to.
text*
string
The memory content (max 10,000 chars).
kind
string
Optional. Type: fact, preference, context, instruction. Fallback: "fact"
visibility
string
Optional. Visibility: private, shared, public. Fallback: "private"
importance
number
Optional. Priority 0-100. Fallback: 50
tags
array
Optional. Tags for categorization. Fallback: []
metadata
object
Optional. Custom metadata object. Fallback: {}
Note: When using learn: true with the Responses/Chat API, the LLM intelligently chooses kind, visibility, importance, and tags based on context. The fallback values above only apply when manually creating memories via this endpoint.
Request
curl -X POST "https://www.mnexium.com/api/v1/memories" \
  -H "Authorization: Bearer $MNX_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "subject_id": "user_123",
    "text": "User prefers dark mode interfaces",
    "kind": "preference",
    "importance": 75
  }'
Response
{
  "id": "mem_abc123",
  "subject_id": "user_123",
  "text": "User prefers dark mode interfaces",
  "kind": "preference",
  "created": true
}
GET /api/v1/memories/:id

Get a specific memory by ID.

Scope: memories:read
id*
path
The memory ID.
Request
curl "https://www.mnexium.com/api/v1/memories/mem_abc123" \
  -H "Authorization: Bearer $MNX_KEY"
Response
{
  "data": {
    "id": "mem_abc123",
    "subject_id": "user_123",
    "text": "User prefers dark mode interfaces",
    "kind": "preference",
    "importance": 75,
    "created_at": "2024-12-15T10:30:00Z"
  }
}
PATCH /api/v1/memories/:id

Update an existing memory. Embeddings are regenerated if text changes.

Scope: memories:write
id*
path
The memory ID to update.
text
string
New memory content.
kind
string
New type.
importance
number
New importance (0-100).
tags
array
New tags.
Request
curl -X PATCH "https://www.mnexium.com/api/v1/memories/mem_abc123" \
  -H "Authorization: Bearer $MNX_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "User strongly prefers dark mode",
    "importance": 90
  }'
Response
{
  "id": "mem_abc123",
  "updated": true
}
DELETE /api/v1/memories/:id

Soft-delete a memory. The memory is deactivated but retained for audit.

Scope: memories:write
id*
path
The memory ID to delete.
Request
curl -X DELETE "https://www.mnexium.com/api/v1/memories/mem_abc123" \
  -H "Authorization: Bearer $MNX_KEY"
Response
{
  "ok": true,
  "deleted": true
}
GET /api/v1/memories/superseded

List memories that have been superseded (replaced by newer memories). Useful for audit and debugging.

Scope: memories:read
subject_id*
string
The subject to fetch superseded memories for.
limit
number
Max memories to return. Default: 50
offset
number
Pagination offset. Default: 0
Request
curl -G "https://www.mnexium.com/api/v1/memories/superseded" \
  -H "Authorization: Bearer $MNX_KEY" \
  --data-urlencode "subject_id=user_123"
Response
{
  "data": [
    {
      "id": "mem_old123",
      "text": "Favorite fruit is blueberry",
      "status": "superseded",
      "superseded_by": "mem_new456",
      "created_at": "2024-12-10T10:00:00Z"
    }
  ],
  "count": 1
}
POST /api/v1/memories/:id/restore

Restore a superseded memory back to active status. Use this to undo an incorrect supersede.

Scope: memories:write
id*
path
The memory ID to restore.
Request
curl -X POST "https://www.mnexium.com/api/v1/memories/mem_old123/restore" \
  -H "Authorization: Bearer $MNX_KEY"
Response
{
  "ok": true,
  "restored": true,
  "id": "mem_old123",
  "subject_id": "user_123",
  "text": "Favorite fruit is blueberry"
}

Memory Versioning & Conflict Resolution

Mnexium automatically handles conflicting memories. When a user updates a preference or fact, the system detects semantically similar memories and supersedes them.

Example: If a user has the memory "Favorite fruit is blueberry" and later says "my new favorite fruit is strawberry", the system will:

  1. Extract the new memory: "User's favorite fruit is strawberry"
  2. Detect the old "blueberry" memory as a conflict
  3. Mark the old memory as superseded
  4. Only the new "strawberry" memory will be recalled in future conversations

Memory Status

active — Memory is current and will be included in recall searches.
superseded — Memory has been replaced by a newer one. Excluded from recall but retained for audit.

Usage Tracking

When memories are recalled during a chat completion with recall: true, the system automatically tracks:

  • last_seen_at — Timestamp of the most recent recall
  • seen_count — Total number of times the memory has been recalled
GET /api/v1/memories/recalls

Query memory recall events for auditability. Track which memories were used in which conversations.

Scope: memories:read
chat_id
string
Get all memories recalled in a specific chat. Provide either chat_id or memory_id.
memory_id
string
Get all chats where a specific memory was recalled.
stats
boolean
If true with memory_id, returns aggregated stats instead of individual events.
limit
number
Max results. Default: 100, Max: 1000
Query by Chat
curl -G "https://www.mnexium.com/api/v1/memories/recalls" \
  -H "Authorization: Bearer $MNX_KEY" \
  --data-urlencode "chat_id=550e8400-e29b-41d4-a716-446655440000"
Response
{
  "data": [
    {
      "event_id": "evt_abc123",
      "memory_id": "mem_xyz789",
      "memory_text": "User prefers dark mode",
      "similarity_score": 78.5,
      "message_index": 0,
      "recalled_at": "2024-12-15T10:30:00Z"
    }
  ],
  "count": 1,
  "chat_id": "550e8400-e29b-41d4-a716-446655440000"
}
Query by Memory (with stats)
curl -G "https://www.mnexium.com/api/v1/memories/recalls" \
  -H "Authorization: Bearer $MNX_KEY" \
  --data-urlencode "memory_id=mem_xyz789" \
  --data-urlencode "stats=true"
Response
{
  "memory_id": "mem_xyz789",
  "stats": {
    "total_recalls": 15,
    "unique_chats": 8,
    "avg_score": 72.4,
    "first_recalled_at": "2024-12-01T09:00:00Z",
    "last_recalled_at": "2024-12-15T10:30:00Z"
  }
}
Note: The chat_logged field indicates whether the chat was saved to history (log: true). When chat_logged = 0, the recall event is tracked but the chat messages are not stored.
Profiles

Overview

Profiles provide structured, schema-defined data about subjects. Unlike free-form memories, profile fields have defined keys (like name, email, timezone) and are automatically extracted from conversations or can be set via API.

Automatic Extraction

When learn: true, the LLM extracts profile fields from conversation context.

Superseding

New values automatically supersede old ones. Higher confidence or manual edits take priority.

GET /api/v1/profiles

Get the profile for a subject. Returns all profile fields with their values and metadata.

Scope: profiles:read
subject_id*
string
The subject ID to get profile for.
format
string
Response format: "simple" (default) returns key-value pairs, "full" returns detailed metadata including confidence, source, and timestamps.
Request (Simple)
curl -G "https://www.mnexium.com/api/v1/profiles" \
  -H "Authorization: Bearer $MNX_KEY" \
  --data-urlencode "subject_id=user_123"
Response (Simple)
{
  "data": {
    "name": "Sarah Chen",
    "email": "sarah@example.com",
    "timezone": "America/New_York",
    "language": "English"
  }
}
Request (Full)
curl -G "https://www.mnexium.com/api/v1/profiles" \
  -H "Authorization: Bearer $MNX_KEY" \
  --data-urlencode "subject_id=user_123" \
  --data-urlencode "format=full"
Response (Full)
{
  "data": {
    "name": {
      "value": "Sarah Chen",
      "confidence": 0.95,
      "source_type": "chat",
      "updated_at": "2024-12-15T10:30:00Z",
      "memory_id": "mem_abc123"
    },
    "timezone": {
      "value": "America/New_York",
      "confidence": 0.85,
      "source_type": "chat",
      "updated_at": "2024-12-14T09:00:00Z",
      "memory_id": "mem_xyz789"
    }
  }
}
PATCH /api/v1/profiles

Update profile fields for a subject. Supports batch updates with confidence scores.

Scope: profiles:write
subject_id*
string
The subject ID to update profile for.
updates*
array
Array of field updates. Each update must have field_key and value.
Request
curl -X PATCH "https://www.mnexium.com/api/v1/profiles" \
  -H "Authorization: Bearer $MNX_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "subject_id": "user_123",
    "updates": [
      { "field_key": "name", "value": "Sarah Chen", "confidence": 1.0 },
      { "field_key": "timezone", "value": "America/New_York" }
    ]
  }'
Response
{
  "ok": true,
  "updated": 2,
  "results": [
    { "field_key": "name", "success": true },
    { "field_key": "timezone", "success": true }
  ]
}
Note: Updates with confidence: 1.0 are treated as manual edits and will supersede any existing value regardless of its confidence. Lower confidence values may be rejected if a higher-confidence value already exists.
DELETE/api/v1/profiles

Delete a specific profile field for a subject. The underlying memory is soft-deleted.

Scope:profiles:write
subject_id*
string
The subject ID.
field_key*
string
The profile field key to delete (e.g., "timezone").
Request
curl -X DELETE "https://www.mnexium.com/api/v1/profiles" \
  -H "Authorization: Bearer $MNX_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "subject_id": "user_123",
    "field_key": "timezone"
  }'
Response
{
  "ok": true,
  "deleted": true,
  "field_key": "timezone"
}

Profile Schema

Each project has a configurable profile schema that defines which fields are available. The schema includes both system fields (name, email, timezone, language) and custom fields you define.

Default System Fields

nameUser's full name
emailEmail address
timezoneUser's timezone (e.g., "America/New_York")
languagePreferred language

Source Types

chatAutomatically extracted from conversation
manualSet via UI or API with high confidence
apiSet via API
Agent State

Overview

Agent State provides short-term, task-scoped storage for agentic workflows. Unlike memories (long-term facts), state tracks the agent's current working context: task progress, pending actions, and session variables.

Use cases: Multi-step task automation, workflow position tracking, pending tool call results, session variables, and resumable conversations.

PUT/api/v1/state/:key

Create or update agent state for a given key.

X-Subject-ID*
header
Subject/user identifier
X-Session-ID
header
Optional session identifier
value*
object
JSON state to store
ttl_seconds
number
Time-to-live in seconds (optional, omit for no expiration)
curl -X PUT "https://www.mnexium.com/api/v1/state/current_task" \
  -H "Authorization: Bearer $MNX_KEY" \
  -H "Content-Type: application/json" \
  -H "X-Subject-ID: user_123" \
  -d '{
    "value": {
      "status": "in_progress",
      "task": "Plan trip to Tokyo",
      "steps_completed": ["research", "book_flights"],
      "next_step": "book_hotels"
    },
    "ttl_seconds": 3600
  }'
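Assuming expiry is computed server-side as "now plus ttl_seconds" (an assumption for illustration), the absolute `ttl` timestamp returned by GET maps to the request's `ttl_seconds` like this:

```python
from datetime import datetime, timedelta, timezone

# Sketch: how a ttl_seconds value maps to the absolute "ttl" timestamp
# returned when reading state back (assumed behavior; expiry is server-side).
def expiry_for(ttl_seconds: int, now: datetime) -> str:
    return (now + timedelta(seconds=ttl_seconds)).strftime("%Y-%m-%dT%H:%M:%SZ")

now = datetime(2025, 1, 1, 11, 0, 0, tzinfo=timezone.utc)
print(expiry_for(3600, now))  # → 2025-01-01T12:00:00Z
```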

GET/api/v1/state/:key

Retrieve agent state for a given key.

X-Subject-ID*
header
Subject/user identifier
// Response
{
  "key": "current_task",
  "value": {
    "status": "in_progress",
    "task": "Plan trip to Tokyo",
    "next_step": "book_hotels"
  },
  "ttl": "2025-01-01T12:00:00Z",
  "updated_at": "2025-01-01T11:00:00Z"
}

DELETE/api/v1/state/:key

Delete agent state (soft delete via TTL expiration).

X-Subject-ID*
header
Subject/user identifier

State Injection in Proxy

Load and inject agent state into LLM context via the mnx.state config:

curl -X POST "https://www.mnexium.com/api/v1/chat/completions" \
  -H "Authorization: Bearer $MNX_KEY" \
  -H "x-openai-key: $OPENAI_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{ "role": "user", "content": "What should I do next?" }],
    "mnx": {
      "subject_id": "user_123",
      "state": {
        "load": true,
        "key": "current_task"
      }
    }
  }'

When state.load: true, the agent's current state is injected as a system message, allowing the LLM to resume tasks and avoid repeating completed work.

Key Naming Conventions

Recommended patterns for state keys:

current_taskDefault key for general task state
task:onboardingNamed workflow state
tool:weather:tc_123Pending tool call result
flow:checkoutMulti-step flow position
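The naming patterns above can be captured in small client-side helpers. These are hypothetical conveniences, not part of the API; the endpoints accept any string key.

```python
# Hypothetical helpers that build state keys following the conventions
# above. The API itself accepts arbitrary string keys.
def task_key(name=None):
    return f"task:{name}" if name else "current_task"

def tool_key(tool, call_id):
    return f"tool:{tool}:{call_id}"

def flow_key(name):
    return f"flow:{name}"

print(task_key())                     # → current_task
print(task_key("onboarding"))         # → task:onboarding
print(tool_key("weather", "tc_123"))  # → tool:weather:tc_123
print(flow_key("checkout"))           # → flow:checkout
```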
System Prompts

Overview

System prompts are managed instructions automatically injected into LLM requests. They support scoping at project, subject, or chat level.

project
scope
Applies to all requests in the project (default).
subject
scope
Applies only to requests with a matching subject_id.
chat
scope
Applies only to requests with a matching chat_id.

Prompts are layered: project → subject → chat. Multiple prompts are concatenated.
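The layering can be sketched as a client-side model: order by scope (project → subject → chat), then by priority (lower first), and join with blank lines. The exact sort and separator are assumptions for illustration; use GET /api/v1/prompts/resolve for the authoritative result.

```python
# Sketch of prompt layering: scope order project → subject → chat,
# then priority (lower first), concatenated. Ordering details and the
# "\n\n" separator are assumptions, not the documented server logic.
SCOPE_ORDER = {"project": 0, "subject": 1, "chat": 2}

def resolve(prompts):
    active = [p for p in prompts if p.get("is_active", True)]
    active.sort(key=lambda p: (SCOPE_ORDER[p["scope"]], p.get("priority", 100)))
    return "\n\n".join(p["prompt_text"] for p in active)

combined = resolve([
    {"scope": "subject", "prompt_text": "This user prefers concise responses."},
    {"scope": "project", "prompt_text": "You are a helpful assistant."},
])
print(combined)  # project prompt first, then the subject prompt
```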

GET/api/v1/prompts

List all system prompts for your project.

Scope:prompts:read
Request
curl "https://www.mnexium.com/api/v1/prompts" \
  -H "Authorization: Bearer $MNX_KEY"
Response
{
  "data": [
    {
      "id": "sp_abc123",
      "name": "Default Assistant",
      "prompt_text": "You are a helpful assistant.",
      "scope": "project",
      "is_default": true,
      "priority": 100
    }
  ]
}
POST/api/v1/prompts

Create a new system prompt. Set is_default: true for auto-injection.

Scope:prompts:write
name*
string
Display name for the prompt.
prompt_text*
string
The system prompt content.
scope
string
One of: project, subject, chat. Default: project
scope_id
string
Required if scope is subject or chat.
is_default
boolean
Set as default for auto-injection.
priority
number
Lower = injected first. Default: 100
Request
curl -X POST "https://www.mnexium.com/api/v1/prompts" \
  -H "Authorization: Bearer $MNX_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Default Assistant",
    "prompt_text": "You are a helpful assistant.",
    "scope": "project",
    "is_default": true
  }'
Response
{
  "id": "sp_abc123",
  "name": "Default Assistant",
  "scope": "project",
  "created": true
}
PATCH/api/v1/prompts/:id

Update an existing system prompt. Only provided fields are updated.

Scope:prompts:write
id*
path
The prompt ID to update.
name
string
New display name.
prompt_text
string
New prompt content.
is_default
boolean
Set/unset as default.
is_active
boolean
Enable/disable the prompt.
priority
number
New priority value.
Request
curl -X PATCH "https://www.mnexium.com/api/v1/prompts/sp_abc123" \
  -H "Authorization: Bearer $MNX_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt_text": "You are a friendly assistant.",
    "is_default": true
  }'
Response
{
  "id": "sp_abc123",
  "updated": true
}
DELETE/api/v1/prompts/:id

Soft-delete a system prompt. The prompt is deactivated but retained for audit purposes.

Scope:prompts:write
id*
path
The prompt ID to delete.
Request
curl -X DELETE "https://www.mnexium.com/api/v1/prompts/sp_abc123" \
  -H "Authorization: Bearer $MNX_KEY"
Response
{
  "ok": true,
  "deleted": true
}
GET/api/v1/prompts/resolve

Preview which prompts will be injected for a given context.

Scope:prompts:read
subject_id
string
Include subject-scoped prompts.
chat_id
string
Include chat-scoped prompts.
combined
boolean
Return single concatenated string.
Request
curl -G "https://www.mnexium.com/api/v1/prompts/resolve" \
  -H "Authorization: Bearer $MNX_KEY" \
  --data-urlencode "subject_id=user_123" \
  --data-urlencode "combined=true"
Response
{
  "combined": "You are a helpful assistant.\n\nThis user prefers concise responses.",
  "prompts": [
    { "id": "sp_abc123", "scope": "project" },
    { "id": "sp_def456", "scope": "subject" }
  ]
}

Using system_prompt in Requests

Control system prompt injection via the mnx.system_prompt field:

// Auto-resolve based on context (default)
"mnx": { "subject_id": "user_123" }

// Skip system prompt injection
"mnx": { "system_prompt": false }

// Use a specific prompt by ID
"mnx": { "system_prompt": "sp_sales_assistant" }
Governance & Privacy

Overview

Mnexium provides fine-grained access control, data lifecycle management, and privacy-conscious design to help you build enterprise-ready AI applications.

PII Guidelines

Best practices for handling personally identifiable information:

⚠️ Don't store secrets in memory text

Never put passwords, API keys, or tokens in memory text fields. These are searchable and may be included in LLM context.

✓ Use metadata for IDs

Store user IDs, order numbers, and references in metadata. Keep memory text for semantic meaning.

✓ Scope by subject_id

Always use subject_id to isolate user data. Memories are never shared across subjects unless explicitly marked visibility: "shared".
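The guidelines above suggest a payload shape like the following. Field names here are assumptions based on this section, used only to show identifiers living in metadata rather than in the searchable text.

```python
# Illustrative memory payload following the PII guidelines above:
# semantic content in the text field, reference IDs in metadata.
# Field names are assumptions, not a full endpoint reference.
memory = {
    "subject_id": "user_123",
    "text": "Prefers window seats on long-haul flights",
    "metadata": {
        "order_id": "ord_789",  # reference data stays out of the
        "crm_id": "c_456",      # searchable, LLM-visible text
    },
}
print("ord_789" in memory["text"])  # → False
```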

Audit Trail

Every API call is logged with full context. View your activity log at /activity-log.

action
string
API action performed (e.g., memory.create, chat.completion)
subject_id
string
User the action was performed for
status
string
Result: success or failure
timestamp
datetime
When the action occurred
metadata
object
Additional context (model, tokens, etc.)
Errors

Error Response Format

All errors return a JSON object with an error field describing the issue.

{
  "error": "error_code_here"
}

HTTP Status Codes

400
Bad Request — Invalid request body, missing required fields, or malformed input.
401
Unauthorized — Missing or invalid API key, or token has been revoked/expired.
403
Forbidden — API key lacks required scopes for this endpoint.
404
Not Found — Resource does not exist or has been deleted.
429
Too Many Requests — Monthly usage limit exceeded. Please reach out to Mnexium for assistance.
500
Internal Error — Server error. Contact support if persistent.

Common Error Codes

unauthorized
401
API key is missing, invalid, or malformed.
token_revoked
401
API key has been revoked. Generate a new one in the dashboard.
token_expired
401
API key has expired. Generate a new one in the dashboard.
forbidden
403
API key doesn't have the required scope (e.g., prompts:write).
prompt_not_found
404
The specified prompt ID does not exist.
usage_limit_exceeded
429
Monthly usage limit exceeded. The response includes current and limit fields showing your usage.
subject_id_required
400
subject_id is required when history: true.
name_required
400
Missing required name field when creating a prompt.
prompt_text_required
400
Missing required prompt_text field when creating a prompt.
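A client might dispatch on the status code and error field described above like this. The message strings and the `current`/`limit` handling are illustrative; only the error format itself comes from this reference.

```python
# Hypothetical client-side handling of the error format above:
# {"error": "..."} plus, for 429, the documented current/limit fields.
def handle_error(status, body):
    code = body.get("error", "unknown_error")
    if status == 429:
        return f"rate limited: {code} ({body.get('current')}/{body.get('limit')})"
    if status == 401:
        return f"auth failure: {code}"  # re-check or regenerate the API key
    return f"error {status}: {code}"

print(handle_error(429, {"error": "usage_limit_exceeded", "current": 1000, "limit": 1000}))
# → rate limited: usage_limit_exceeded (1000/1000)
```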