API Documentation
Reference documentation for Mnexium public HTTP APIs. Use these endpoints to run OpenAI-powered requests with project-scoped continuity through chat history, memories, and system prompts.
Concepts & Architecture
Before diving into the API, it helps to understand the core concepts that power Mnexium's memory system.
Your agent sends a normal API request to Mnexium, along with a few mnx options. Mnexium automatically retrieves conversation history, relevant long-term memory, and agent state — and builds an enriched prompt for the model.
The model returns a response, and Mnexium optionally learns from the interaction. Every step is visible through logs, traces, and recall events so you can debug exactly what happened.
Who This Is For
Use Mnexium if you're building AI assistants or agents that must remember users across sessions, resume multi-step tasks, and be configurable per project, user, or conversation. It's the memory and state layer so you can focus on your product.
Mnexium works with OpenAI (ChatGPT) and Anthropic (Claude) models — bring your own API key and Mnexium handles the rest. Support for additional model providers is coming soon.
Chat History, Memory & State
Three distinct but complementary systems for context management:
Chat History — The raw conversation log: every message sent and received within a chat_id. Used for context continuity within a single conversation session. Think of it as short-term, session-scoped memory.
Enabled with history: true
Memories — Extracted facts, preferences, and context about a subject_id (user). Persist across all conversations and sessions. Think of it as long-term, user-scoped memory that the agent "remembers" about someone.
Created with learn: true, recalled with recall: true
Agent State — Short-term, task-scoped working context for agentic workflows. Tracks task progress, pending actions, and session variables. Think of it as the agent's "scratchpad" for multi-step tasks.
Stored with PUT /state/:key, loaded with state.load: true
Message Assembly Order
For chat completions, Mnexium assembles the final messages array in this order:
1. System prompt (if system_prompt is not false)
2. Agent state (if state.load: true)
3. Recalled memories (if recall: true)
4. Chat history (if history: true)
5. Your request messages

Items 1-3 are appended to the system message. Item 4 is prepended to the messages array. Item 5 is your original request.
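The assembly order above can be sketched as a plain function. This is an illustration only, not Mnexium's actual implementation; buildMessages and its parameter names are invented for the example.

```javascript
// Sketch of Mnexium's message assembly order (illustrative only).
// Items 1-3 (system prompt, agent state, recalled memories) merge into a single
// system message; item 4 (history) is prepended; item 5 is the caller's request.
function buildMessages({ systemPrompt, state, memories, history, userMessages }) {
  const systemParts = [];
  if (systemPrompt) systemParts.push(systemPrompt);                            // 1. system_prompt
  if (state) systemParts.push(`Agent state: ${JSON.stringify(state)}`);        // 2. state.load
  if (memories?.length) systemParts.push(`Memories:\n${memories.join("\n")}`); // 3. recall
  const messages = [];
  if (systemParts.length) {
    messages.push({ role: "system", content: systemParts.join("\n\n") });
  }
  messages.push(...(history ?? [])); // 4. history prepended
  messages.push(...userMessages);    // 5. original request
  return messages;
}
```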
Memory Fields
Each memory has metadata that helps with organization, recall, and lifecycle management:
- status — active (current, will be recalled) or superseded (replaced by newer memory, won't be recalled)
- kind — fact, preference, context, or note
- importance
- visibility — private (subject only), shared (project-wide), or public
- seen_count
- last_seen_at
- superseded_by

Memory Versioning
When new memories are created, the system automatically handles conflicts using semantic similarity. There are only two status values: active and superseded.
If the new memory is semantically equivalent to an existing one, it is skipped. Example: "User likes coffee" → "User enjoys coffee" (new one skipped)
If the new memory contradicts an existing one, the old memory is marked superseded and the new one is created as active. Example: "Favorite fruit is blueberry" → "Favorite fruit is apple" (old becomes superseded)
If the memories are unrelated, each remains an active memory. Example: "User likes coffee" + "User works remotely" (both remain active)
Superseded memories are preserved for audit purposes and can be restored via the POST /memories/:id/restore endpoint.
Memory Decay & Reinforcement
Memories naturally decay over time, similar to human memory. Frequently recalled memories become stronger, while unused memories gradually fade in relevance. This ensures the most important and actively-used information surfaces during recall.
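Mnexium does not document its decay formula, so as an intuition only, here is one common formulation: exponential decay over time since the last recall, with reinforcement from recall frequency. Every name and constant here is an assumption for illustration.

```javascript
// Illustrative only: NOT Mnexium's documented formula.
// A common pattern: relevance halves every `halfLifeDays` since the last recall,
// while each recall (seen_count) strengthens the memory logarithmically.
function decayedRelevance({ importance, seenCount, daysSinceLastSeen }, halfLifeDays = 30) {
  const decay = Math.pow(0.5, daysSinceLastSeen / halfLifeDays); // time-based fade
  const reinforcement = 1 + Math.log1p(seenCount);               // frequent recall strengthens
  return importance * decay * reinforcement;
}
```

Under this sketch, a frequently recalled memory outscores a stale one of equal importance, which matches the behavior described above.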
Each memory also records how it originated: explicit (created via API), inferred (extracted from conversation), or corrected (user corrected an inference).

The Memory Lifecycle
Memories are created during conversations (learn: true) and retrieved back into context (recall: true).

Mnexium provides a proxy layer for OpenAI APIs with built-in support for conversation persistence, memory management, and system prompt injection.
Quick Example
A request to the Chat Completions API with history, memory extraction, and all Mnexium features enabled:
curl -X POST "https://www.mnexium.com/api/v1/chat/completions" \
  -H "Authorization: Bearer $MNX_KEY" \
  -H "Content-Type: application/json" \
  -H "x-openai-key: $OPENAI_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{ "role": "user", "content": "What IDE should I use?" }],
    "mnx": {
      "subject_id": "user_123",
      "chat_id": "550e8400-e29b-41d4-a716-446655440000",
      "log": true,
      "learn": true,
      "recall": true,
      "history": true
    }
  }'

What happens:
- log: true — Saves this conversation turn to chat history
- learn: true — LLM analyzes the message and may extract memories
- recall: true — Injects relevant stored memories into context (e.g., "User prefers dark mode", "User is learning Rust")
- history: true — Prepends previous messages from this chat_id for context
Use learn: "force" to always create a memory, or learn: false to skip memory extraction entirely.
Get Started Repository
Clone our starter repo for working examples in Node.js and Python:
github.com/mariusndini/mnexium-get-started

Use Your Favorite SDK
Mnexium works with native SDKs from OpenAI, Anthropic, and Google. Simply point your SDK's base URL to Mnexium and get persistent memory across all providers.
| Provider | SDK | Base URL |
|---|---|---|
| OpenAI | openai | https://mnexium.com/api/v1 |
| Anthropic | @anthropic-ai/sdk | https://mnexium.com/api |
| Google | @google/genai | https://mnexium.com |
OpenAI SDK
Use the native OpenAI SDK with Mnexium by setting the baseURL.
import OpenAI from "openai";
const client = new OpenAI({
apiKey: process.env.OPENAI_KEY,
baseURL: "https://mnexium.com/api/v1",
defaultHeaders: {
"Authorization": `Bearer ${process.env.MNX_KEY}`,
},
});
const response = await client.chat.completions.create({
model: "gpt-4o-mini",
messages: [{ role: "user", content: "Hello!" }],
mnx: {
subject_id: "user_123",
learn: true,
recall: true,
},
});

Anthropic SDK
Use the native Anthropic SDK. Note the base URL ends with /api (the SDK adds /v1/messages).
import Anthropic from "@anthropic-ai/sdk";
const client = new Anthropic({
apiKey: process.env.CLAUDE_API_KEY,
baseURL: "https://mnexium.com/api",
defaultHeaders: {
"Authorization": `Bearer ${process.env.MNX_KEY}`,
},
});
const response = await client.messages.create({
model: "claude-3-haiku-20240307",
messages: [{ role: "user", content: "Hello!" }],
max_tokens: 1024,
});

Google Gemini SDK
Use the native Google Generative AI SDK. The base URL should be the root domain.
import { GoogleGenAI } from "@google/genai";
const client = new GoogleGenAI({
apiKey: process.env.GEMINI_KEY,
httpOptions: {
baseUrl: "https://mnexium.com",
headers: {
"Authorization": `Bearer ${process.env.MNX_KEY}`,
},
},
});
const response = await client
.models.generateContent({
model: "gemini-2.0-flash-lite",
contents: "Hello!",
});

Cross-Provider Memory Sharing
Memories learned with one provider are automatically available to all others. Use the same subject_id across providers to share context.
// Learn a fact with OpenAI
await openai.chat.completions.create({
model: "gpt-4o-mini",
messages: [{ role: "user", content: "My favorite color is purple" }],
mnx: { subject_id: "user_123", learn: "force" },
});
// Recall with Claude - it knows the color!
const response = await fetch("https://mnexium.com/api/v1/chat/completions", {
method: "POST",
headers: {
"Authorization": `Bearer ${MNX_KEY}`,
"x-anthropic-key": CLAUDE_KEY,
},
body: JSON.stringify({
model: "claude-3-haiku-20240307",
messages: [{ role: "user", content: "What is my favorite color?" }],
mnx: { subject_id: "user_123", recall: true },
}),
});
// Claude responds: "Your favorite color is purple!"

This enables powerful workflows where you can use the best model for each task while maintaining consistent user context across all interactions.
API Keys
All requests require a Mnexium API key passed via the Authorization header.
- Authorization* — Bearer mnx_live_... — Your Mnexium API key
- x-openai-key — sk-... — Your OpenAI API key (required for OpenAI models)
- x-anthropic-key — sk-ant-... — Your Anthropic API key (required for Claude models)

Provide the API key for the provider matching your chosen model. For example, use x-openai-key for ChatGPT or x-anthropic-key for Claude.
API Key Permissions
API keys can be scoped to limit access. Available scopes:
| Scope | GET | POST/PATCH | DELETE |
|---|---|---|---|
| read | ✓ | ✗ | ✗ |
| write | ✗ | ✓ | ✗ |
| delete | ✗ | ✗ | ✓ |
| * | ✓ | ✓ | ✓ |
The mnx Object
Include the mnx object in your request body to control Mnexium features:
- subject_id — Auto-generated with a subj_ prefix if omitted.
- chat_id — Conversation identifier; must be a UUID.
- log — Default: true
- learn — true (LLM decides), "force" (always), false (never). Default: true
- recall — Default: false
- history — Default: false
- system_prompt — true (auto-resolve, default), false (skip injection), or a prompt ID like "sp_abc" for explicit selection.
- metadata

POST /api/v1/responses

Proxy for OpenAI and Anthropic APIs with Mnexium extensions for history, persistence, and system prompts. Supports GPT-4, Claude, and other models.
Scope: responses:write

curl -X POST "https://www.mnexium.com/api/v1/responses" \
-H "Authorization: Bearer $MNX_KEY" \
-H "Content-Type: application/json" \
-H "x-openai-key: $OPENAI_KEY" \
-d '{
"model": "gpt-4o-mini",
"input": "What is the weather like?",
"mnx": {
"subject_id": "user_123",
"chat_id": "550e8400-e29b-41d4-a716-446655440000",
"log": true,
"learn": true
}
}'

mnx options: subject_id, chat_id (UUID), log, learn, history, system_prompt

{
"id": "resp_abc123",
"object": "response",
"created_at": 1702847400,
"output": [
{
"type": "message",
"role": "assistant",
"content": [
{ "type": "output_text", "text": "I don't have access to real-time weather data..." }
]
}
],
"usage": { "input_tokens": 12, "output_tokens": 45 }
}

The response includes X-Mnx-Chat-Id and X-Mnx-Subject-Id headers.

Claude (Anthropic) example
Use x-anthropic-key header and a Claude model name.
curl -X POST "https://www.mnexium.com/api/v1/responses" \
-H "Authorization: Bearer $MNX_KEY" \
-H "Content-Type: application/json" \
-H "x-anthropic-key: $ANTHROPIC_KEY" \
-d '{
"model": "claude-sonnet-4-20250514",
"input": "What is the weather like?",
"mnx": {
"subject_id": "user_123",
"log": true,
"learn": true
}
}'

Streaming example
Set "stream": true to receive Server-Sent Events (SSE).
curl -X POST "https://www.mnexium.com/api/v1/responses" \
-H "Authorization: Bearer $MNX_KEY" \
-H "Content-Type: application/json" \
-H "x-openai-key: $OPENAI_KEY" \
-d '{ "model": "gpt-4o-mini", "input": "Hello", "stream": true }'

data: {"type":"response.output_text.delta","delta":"Hello"}
data: {"type":"response.output_text.delta","delta":"!"}
data: {"type":"response.output_text.delta","delta":" How"}
data: {"type":"response.output_text.delta","delta":" can"}
data: {"type":"response.output_text.delta","delta":" I"}
data: {"type":"response.output_text.delta","delta":" help?"}
data: {"type":"response.completed","response":{...}}
data: [DONE]

Parse each data: line as JSON. Collect delta values to build the full response.
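That parsing step can be sketched as a small helper; collectOutputText is an illustrative name, and the event shapes are taken from the stream shown above.

```javascript
// Sketch: accumulate output text from the SSE lines shown above.
// Each event is a `data: <json>` line; the literal `data: [DONE]` ends the stream.
function collectOutputText(sseLines) {
  let text = "";
  for (const line of sseLines) {
    if (!line.startsWith("data: ")) continue;  // skip blank lines / comments
    const payload = line.slice("data: ".length);
    if (payload === "[DONE]") break;           // end-of-stream sentinel
    const event = JSON.parse(payload);
    if (event.type === "response.output_text.delta") text += event.delta;
  }
  return text;
}
```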
POST /api/v1/chat/completions

Proxy for OpenAI and Anthropic Chat APIs with automatic history prepending and system prompt injection. Supports GPT-4, Claude, and other models.
Scope: chat:write

curl -X POST "https://www.mnexium.com/api/v1/chat/completions" \
-H "Authorization: Bearer $MNX_KEY" \
-H "Content-Type: application/json" \
-H "x-openai-key: $OPENAI_KEY" \
-d '{
"model": "gpt-4o-mini",
"messages": [
{ "role": "user", "content": "Hello!" }
],
"mnx": {
"subject_id": "user_123",
"chat_id": "550e8400-e29b-41d4-a716-446655440000",
"log": true,
"learn": true,
"history": true
}
}'

mnx options: subject_id, chat_id (UUID), log, learn, history, system_prompt

{
"id": "chatcmpl-abc123",
"object": "chat.completion",
"created": 1702847400,
"model": "gpt-4o-mini",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! How can I help you today?"
},
"finish_reason": "stop"
}
],
"usage": { "prompt_tokens": 10, "completion_tokens": 12, "total_tokens": 22 }
}

The response includes X-Mnx-Chat-Id and X-Mnx-Subject-Id headers.

Streaming example
Set "stream": true to receive Server-Sent Events (SSE).
curl -X POST "https://www.mnexium.com/api/v1/chat/completions" \
-H "Authorization: Bearer $MNX_KEY" \
-H "Content-Type: application/json" \
-H "x-openai-key: $OPENAI_KEY" \
-d '{ "model": "gpt-4o-mini", "messages": [{"role":"user","content":"Hi"}], "stream": true }'

data: {"choices":[{"delta":{"role":"assistant"},"index":0}]}
data: {"choices":[{"delta":{"content":"Hello"},"index":0}]}
data: {"choices":[{"delta":{"content":"!"},"index":0}]}
data: {"choices":[{"delta":{"content":" How"},"index":0}]}
data: {"choices":[{"delta":{"content":" can"},"index":0}]}
data: {"choices":[{"delta":{"content":" I"},"index":0}]}
data: {"choices":[{"delta":{"content":" help?"},"index":0}]}
data: {"choices":[{"delta":{},"finish_reason":"stop","index":0}]}
data: [DONE]

Parse each data: line as JSON. Concatenate delta.content values to build the response.
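For the chat-completions chunk shape, the concatenation looks slightly different from the Responses stream: the text lives under choices[0].delta.content. The helper name is illustrative.

```javascript
// Sketch: rebuild the assistant message from chat-completions streaming chunks.
// Role-only chunks have no `content` and are skipped; `data: [DONE]` ends the stream.
function collectChatContent(sseLines) {
  let content = "";
  for (const line of sseLines) {
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice("data: ".length);
    if (payload === "[DONE]") break;
    const delta = JSON.parse(payload).choices?.[0]?.delta ?? {};
    if (typeof delta.content === "string") content += delta.content;
  }
  return content;
}
```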
}

GET /api/v1/chat/history/list

List all chats for a subject. Returns chat summaries with message counts — useful for building chat sidebars.
Scope: history:read
Params: subject_id*, limit

curl -G "https://www.mnexium.com/api/v1/chat/history/list" \
-H "Authorization: Bearer $MNX_KEY" \
--data-urlencode "subject_id=user_123" \
--data-urlencode "limit=50"

{
"chats": [
{
"subject_id": "user_123",
"chat_id": "550e8400-e29b-41d4-a716-446655440000",
"last_time": "2024-12-17T19:00:01Z",
"message_count": 12
},
{
"subject_id": "user_123",
"chat_id": "660e8400-e29b-41d4-a716-446655440001",
"last_time": "2024-12-16T14:30:00Z",
"message_count": 8
}
]
}

GET /api/v1/chat/history/read

Retrieve message history for a specific conversation. Use after listing chats to load full messages.
Scope: history:read
Params: chat_id*, subject_id, limit

curl -G "https://www.mnexium.com/api/v1/chat/history/read" \
-H "Authorization: Bearer $MNX_KEY" \
--data-urlencode "chat_id=550e8400-e29b-41d4-a716-446655440000" \
--data-urlencode "subject_id=user_123" \
--data-urlencode "limit=50"

{
"messages": [
{
"role": "user",
"message": "Hello!",
"event_time": "2024-12-17T19:00:00Z"
},
{
"role": "assistant",
"message": "Hi there! How can I help?",
"event_time": "2024-12-17T19:00:01Z"
}
]
}

DELETE /api/v1/chat/history/delete

Delete all messages in a chat. This is a soft delete — messages are marked as deleted but retained for audit purposes.
Scope: history:write
Params: chat_id*, subject_id

curl -X DELETE "https://www.mnexium.com/api/v1/chat/history/delete?chat_id=550e8400-e29b-41d4-a716-446655440000&subject_id=user_123" \
-H "Authorization: Bearer $MNX_KEY"

{
"success": true,
"chat_id": "550e8400-e29b-41d4-a716-446655440000"
}

Summarization

Long conversations can exceed context window limits and increase costs. Mnexium's Summarization feature automatically compresses older messages into concise summaries while preserving recent messages verbatim.
When enabled, Mnexium uses gpt-4o-mini to generate rolling summaries of your conversation history. Summaries are cached and reused across requests, so you only pay for summarization once per conversation segment.
Use the summarize parameter in your mnx object to enable automatic summarization. Choose a preset mode based on your cost/fidelity tradeoff:
| Mode | Start At | Keep Recent | Summary Target | Best For |
|---|---|---|---|---|
| off | — | All | — | Maximum fidelity (default) |
| light | 70K tokens | 25 msgs | ~1,800 tokens | Safe compression |
| balanced | 55K tokens | 15 msgs | ~1,100 tokens | Best cost/performance |
| aggressive | 35K tokens | 8 msgs | ~700 tokens | Cheapest possible |
{
"model": "gpt-4o-mini",
"messages": [{ "role": "user", "content": "..." }],
"mnx": {
"subject_id": "user_123",
"chat_id": "550e8400-e29b-41d4-a716-446655440000",
"summarize": "balanced"
}
}

For fine-grained control, pass a summarize_config object instead of a preset mode:

{
"model": "gpt-4o-mini",
"messages": [{ "role": "user", "content": "..." }],
"mnx": {
"subject_id": "user_123",
"chat_id": "550e8400-e29b-41d4-a716-446655440000",
"summarize_config": {
"start_at_tokens": 40000,
"chunk_size": 15000,
"keep_recent_messages": 10,
"summary_target": 800
}
}
}

- start_at_tokens — Token threshold to trigger summarization. History below this is sent verbatim.
- chunk_size — How many tokens to summarize at a time when history exceeds the threshold.
- keep_recent_messages — Always keep this many recent messages verbatim (not summarized).
- summary_target — Target token count for each generated summary.

How it works:

1. When a chat request comes in, Mnexium counts tokens in the conversation history using tiktoken.
2. If history exceeds start_at_tokens, older messages are summarized.
3. The summary is generated using gpt-4o-mini and cached in the database.
4. Future requests reuse the cached summary until new messages push past the threshold again.
5. The final context sent to the LLM is: [Summary] + [Recent Messages] + [New Message]
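The threshold decision can be sketched as a small function. This is illustrative only: planContext and countTokens are invented names, and real token counting uses tiktoken, not string length.

```javascript
// Sketch of the summarization decision using the summarize_config fields above.
// `countTokens` is a stand-in for a real tokenizer (Mnexium uses tiktoken).
function planContext(history, config, countTokens) {
  const total = history.reduce((sum, m) => sum + countTokens(m.content), 0);
  if (total <= config.start_at_tokens) {
    return { summarize: false, verbatim: history }; // below threshold: send as-is
  }
  const keep = config.keep_recent_messages;
  return {
    summarize: true,
    toSummarize: history.slice(0, history.length - keep), // older messages get compressed
    verbatim: history.slice(history.length - keep),       // recent messages stay verbatim
  };
}
```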
Mnexium uses a rolling summary by default: we maintain a single condensed memory block for older messages and inject that plus the most recent turns into the model.
This is the most token-efficient strategy and is recommended for almost all workloads.
For specialized use cases that need more detailed historical context inside the prompt (at higher token cost), a future release will offer granular summaries, which keep multiple smaller summary blocks instead of one.
GET /api/v1/memories

List all memories for a subject. Use this for full memory management.
Scope: memories:read
Params: subject_id*, limit, offset

curl -G "https://www.mnexium.com/api/v1/memories" \
-H "Authorization: Bearer $MNX_KEY" \
--data-urlencode "subject_id=user_123" \
--data-urlencode "limit=20"

{
"data": [
{
"id": "mem_abc123",
"text": "User prefers dark mode interfaces",
"kind": "preference",
"importance": 75,
"created_at": "2024-12-15T10:30:00Z"
}
],
"count": 1
}

GET /api/v1/memories/search

Semantic search over a subject's memories. Returns the most relevant items by similarity score.
Scope: memories:search
Params: subject_id*, q*, limit

curl -G "https://www.mnexium.com/api/v1/memories/search" \
-H "Authorization: Bearer $MNX_KEY" \
--data-urlencode "subject_id=user_123" \
--data-urlencode "q=food preferences" \
--data-urlencode "limit=5"

{
"data": [
{
"id": "mem_xyz789",
"text": "User is vegetarian and enjoys Italian cuisine",
"score": 0.92
},
{
"id": "mem_uvw012",
"text": "User is allergic to peanuts",
"score": 0.78
}
],
"query": "food preferences",
"count": 2
}

POST /api/v1/memories

Manually create a memory. For automatic extraction with LLM-chosen classification, use the Responses or Chat API with learn: true instead.
Scope: memories:write

With learn: true, the LLM automatically extracts memories and intelligently chooses the kind, importance, and tags based on conversation context. Use learn: "force" to always create a memory. This endpoint is for manual injection when you need direct control.

Params: subject_id*, text*, kind, visibility, importance, tags, metadata

With learn: true on the Responses/Chat API, the LLM intelligently chooses kind, visibility, importance, and tags based on context. The fallback values only apply when manually creating memories via this endpoint.

curl -X POST "https://www.mnexium.com/api/v1/memories" \
-H "Authorization: Bearer $MNX_KEY" \
-H "Content-Type: application/json" \
-d '{
"subject_id": "user_123",
"text": "User prefers dark mode interfaces",
"kind": "preference",
"importance": 75
}'

{
"id": "mem_abc123",
"subject_id": "user_123",
"text": "User prefers dark mode interfaces",
"kind": "preference",
"created": true
}

GET /api/v1/memories/:id

Get a specific memory by ID.
Scope: memories:read
Params: id*

curl "https://www.mnexium.com/api/v1/memories/mem_abc123" \
-H "Authorization: Bearer $MNX_KEY"

{
"data": {
"id": "mem_abc123",
"subject_id": "user_123",
"text": "User prefers dark mode interfaces",
"kind": "preference",
"importance": 75,
"created_at": "2024-12-15T10:30:00Z"
}
}

PATCH /api/v1/memories/:id

Update an existing memory. Embeddings are regenerated if text changes.
Scope: memories:write
Params: id*, text, kind, importance, tags

curl -X PATCH "https://www.mnexium.com/api/v1/memories/mem_abc123" \
-H "Authorization: Bearer $MNX_KEY" \
-H "Content-Type: application/json" \
-d '{
"text": "User strongly prefers dark mode",
"importance": 90
}'

{
"id": "mem_abc123",
"updated": true
}

DELETE /api/v1/memories/:id

Soft-delete a memory. The memory is deactivated but retained for audit.
Scope: memories:write
Params: id*

curl -X DELETE "https://www.mnexium.com/api/v1/memories/mem_abc123" \
-H "Authorization: Bearer $MNX_KEY"

{
"ok": true,
"deleted": true
}

GET /api/v1/memories/superseded

List memories that have been superseded (replaced by newer memories). Useful for audit and debugging.
Scope: memories:read
Params: subject_id*, limit, offset

curl -G "https://www.mnexium.com/api/v1/memories/superseded" \
-H "Authorization: Bearer $MNX_KEY" \
--data-urlencode "subject_id=user_123"

{
"data": [
{
"id": "mem_old123",
"text": "Favorite fruit is blueberry",
"status": "superseded",
"superseded_by": "mem_new456",
"created_at": "2024-12-10T10:00:00Z"
}
],
"count": 1
}

POST /api/v1/memories/:id/restore

Restore a superseded memory back to active status. Use this to undo an incorrect supersede.
Scope: memories:write
Params: id*

curl -X POST "https://www.mnexium.com/api/v1/memories/mem_old123/restore" \
-H "Authorization: Bearer $MNX_KEY"

{
"ok": true,
"restored": true,
"id": "mem_old123",
"subject_id": "user_123",
"text": "Favorite fruit is blueberry"
}

Memory Versioning & Conflict Resolution
Mnexium automatically handles conflicting memories. When a user updates a preference or fact, the system detects semantically similar memories and supersedes them.
Example: If a user has the memory "Favorite fruit is blueberry" and later says "my new favorite fruit is strawberry", the system will:
1. Extract the new memory: "User's favorite fruit is strawberry"
2. Detect the old "blueberry" memory as a conflict
3. Mark the old memory as superseded
4. Only recall the new "strawberry" memory in future conversations
Memory Status
- active — Memory is current and will be included in recall searches.
- superseded — Memory has been replaced by a newer one. Excluded from recall but retained for audit.

Usage Tracking
When memories are recalled during a chat completion with recall: true, the system automatically tracks:
- last_seen_at — Timestamp of the most recent recall
- seen_count — Total number of times the memory has been recalled
GET /api/v1/memories/recalls

Query memory recall events for auditability. Track which memories were used in which conversations.
Scope: memories:read
Params: chat_id, memory_id, stats, limit

curl -G "https://www.mnexium.com/api/v1/memories/recalls" \
-H "Authorization: Bearer $MNX_KEY" \
--data-urlencode "chat_id=550e8400-e29b-41d4-a716-446655440000"

{
"data": [
{
"event_id": "evt_abc123",
"memory_id": "mem_xyz789",
"memory_text": "User prefers dark mode",
"similarity_score": 78.5,
"message_index": 0,
"recalled_at": "2024-12-15T10:30:00Z"
}
],
"count": 1,
"chat_id": "550e8400-e29b-41d4-a716-446655440000"
}

curl -G "https://www.mnexium.com/api/v1/memories/recalls" \
-H "Authorization: Bearer $MNX_KEY" \
--data-urlencode "memory_id=mem_xyz789" \
--data-urlencode "stats=true"

{
"memory_id": "mem_xyz789",
"stats": {
"total_recalls": 15,
"unique_chats": 8,
"avg_score": 72.4,
"first_recalled_at": "2024-12-01T09:00:00Z",
"last_recalled_at": "2024-12-15T10:30:00Z"
}
}

The chat_logged field indicates whether the chat was saved to history (log: true). When chat_logged = 0, the recall event is tracked but the chat messages are not stored.

Overview
Profiles provide structured, schema-defined data about subjects. Unlike free-form memories, profile fields have defined keys (like name, email, timezone) and are automatically extracted from conversations or can be set via API.
Automatic Extraction
When learn: true, the LLM extracts profile fields from conversation context.
Superseding
New values automatically supersede old ones. Higher confidence or manual edits take priority.
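A sketch of that precedence rule, based only on the behavior described here and in the PATCH /api/v1/profiles note (confidence 1.0 acts as a manual edit; lower-confidence values may be rejected). The function name and the exact tie-breaking are assumptions.

```javascript
// Illustrative sketch of profile-field precedence, not Mnexium's actual code.
// A manual edit (confidence 1.0) always wins; otherwise the incoming value
// must meet or exceed the existing confidence to supersede it (an assumption).
function shouldSupersede(existing, incoming) {
  if (!existing) return true;                   // no current value: accept
  if (incoming.confidence === 1.0) return true; // manual edit always wins
  return incoming.confidence >= existing.confidence;
}
```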
GET /api/v1/profiles

Get the profile for a subject. Returns all profile fields with their values and metadata.
Scope: profiles:read
Params: subject_id*, format

curl -G "https://www.mnexium.com/api/v1/profiles" \
-H "Authorization: Bearer $MNX_KEY" \
--data-urlencode "subject_id=user_123"

{
"data": {
"name": "Sarah Chen",
"email": "sarah@example.com",
"timezone": "America/New_York",
"language": "English"
}
}

curl -G "https://www.mnexium.com/api/v1/profiles" \
-H "Authorization: Bearer $MNX_KEY" \
--data-urlencode "subject_id=user_123" \
--data-urlencode "format=full"

{
"data": {
"name": {
"value": "Sarah Chen",
"confidence": 0.95,
"source_type": "chat",
"updated_at": "2024-12-15T10:30:00Z",
"memory_id": "mem_abc123"
},
"timezone": {
"value": "America/New_York",
"confidence": 0.85,
"source_type": "chat",
"updated_at": "2024-12-14T09:00:00Z",
"memory_id": "mem_xyz789"
}
}
}

PATCH /api/v1/profiles

Update profile fields for a subject. Supports batch updates with confidence scores.
Scope: profiles:write
Params: subject_id*, updates*

curl -X PATCH "https://www.mnexium.com/api/v1/profiles" \
-H "Authorization: Bearer $MNX_KEY" \
-H "Content-Type: application/json" \
-d '{
"subject_id": "user_123",
"updates": [
{ "field_key": "name", "value": "Sarah Chen", "confidence": 1.0 },
{ "field_key": "timezone", "value": "America/New_York" }
]
}'

{
"ok": true,
"updated": 2,
"results": [
{ "field_key": "name", "success": true },
{ "field_key": "timezone", "success": true }
]
}

Updates with confidence: 1.0 are treated as manual edits and will supersede any existing value regardless of its confidence. Lower confidence values may be rejected if a higher-confidence value already exists.

DELETE /api/v1/profiles

Delete a specific profile field for a subject. The underlying memory is soft-deleted.
Scope: profiles:write
Params: subject_id*, field_key*

curl -X DELETE "https://www.mnexium.com/api/v1/profiles" \
-H "Authorization: Bearer $MNX_KEY" \
-H "Content-Type: application/json" \
-d '{
"subject_id": "user_123",
"field_key": "timezone"
}'

{
"ok": true,
"deleted": true,
"field_key": "timezone"
}

Profile Schema
Each project has a configurable profile schema that defines which fields are available. The schema includes both system fields (name, email, timezone, language) and custom fields you define.
Default System Fields
- name — User's full name
- email — Email address
- timezone — User's timezone (e.g., "America/New_York")
- language — Preferred language

Source Types
- chat — Automatically extracted from conversation
- manual — Set via UI or API with high confidence
- api — Set via API

Overview
Agent State provides short-term, task-scoped storage for agentic workflows. Unlike memories (long-term facts), state tracks the agent's current working context: task progress, pending actions, and session variables.
Use cases: Multi-step task automation, workflow position tracking, pending tool call results, session variables, and resumable conversations.
PUT /state/:key
Create or update agent state for a given key.
Headers: X-Subject-ID*, X-Session-ID
Body: value*, ttl_seconds

curl -X PUT "https://www.mnexium.com/api/v1/state/current_task" \
-H "Authorization: Bearer $MNX_KEY" \
-H "Content-Type: application/json" \
-H "X-Subject-ID: user_123" \
-d '{
"value": {
"status": "in_progress",
"task": "Plan trip to Tokyo",
"steps_completed": ["research", "book_flights"],
"next_step": "book_hotels"
},
"ttl_seconds": 3600
}'

GET /state/:key
Retrieve agent state for a given key.
Headers: X-Subject-ID*

// Response
{
"key": "current_task",
"value": {
"status": "in_progress",
"task": "Plan trip to Tokyo",
"next_step": "book_hotels"
},
"ttl": "2025-01-01T12:00:00Z",
"updated_at": "2025-01-01T11:00:00Z"
}

DELETE /state/:key
Delete agent state (soft delete via TTL expiration).
Headers: X-Subject-ID*

State Injection in Proxy
Load and inject agent state into LLM context via the mnx.state config:
curl -X POST "https://www.mnexium.com/api/v1/chat/completions" \
  -H "Authorization: Bearer $MNX_KEY" \
  -H "Content-Type: application/json" \
  -H "x-openai-key: $OPENAI_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{ "role": "user", "content": "What should I do next?" }],
    "mnx": {
      "subject_id": "user_123",
      "state": { "load": true, "key": "current_task" }
    }
  }'

When state.load: true, the agent's current state is injected as a system message, allowing the LLM to resume tasks and avoid repeating completed work.
Key Naming Conventions
Recommended patterns for state keys:
- current_task — Default key for general task state
- task:onboarding — Named workflow state
- tool:weather:tc_123 — Pending tool call result
- flow:checkout — Multi-step flow position

Overview
System prompts are managed instructions automatically injected into LLM requests. They support scoping at project, subject, or chat level.
- project — Applies project-wide.
- subject — Applies to a specific subject_id.
- chat — Applies to a specific chat_id.

Prompts are layered: project → subject → chat. Multiple prompts are concatenated.
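The layering rule can be sketched as a concatenation by scope order. Sorting within a scope by the priority field is an assumption (based on the priority field shown in prompt objects below), and combinePrompts is an illustrative name.

```javascript
// Sketch: combine resolved prompts in project -> subject -> chat order,
// joined with blank lines. Priority ordering within a scope is an assumption.
function combinePrompts(prompts) {
  const order = { project: 0, subject: 1, chat: 2 };
  return prompts
    .filter((p) => p.is_active !== false) // skip deactivated prompts
    .sort(
      (a, b) =>
        order[a.scope] - order[b.scope] || (b.priority ?? 0) - (a.priority ?? 0)
    )
    .map((p) => p.prompt_text)
    .join("\n\n");
}
```

This mirrors the combined string returned by /api/v1/prompts/resolve.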
GET /api/v1/prompts

List all system prompts for your project.
Scope: prompts:read

curl "https://www.mnexium.com/api/v1/prompts" \
-H "Authorization: Bearer $MNX_KEY"

{
"data": [
{
"id": "sp_abc123",
"name": "Default Assistant",
"prompt_text": "You are a helpful assistant.",
"scope": "project",
"is_default": true,
"priority": 100
}
]
}

POST /api/v1/prompts

Create a new system prompt. Set is_default: true for auto-injection.
Scope: prompts:write
Params: name*, prompt_text*, scope, scope_id, is_default, priority

curl -X POST "https://www.mnexium.com/api/v1/prompts" \
-H "Authorization: Bearer $MNX_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "Default Assistant",
"prompt_text": "You are a helpful assistant.",
"scope": "project",
"is_default": true
}'

{
"id": "sp_abc123",
"name": "Default Assistant",
"scope": "project",
"created": true
}

PATCH /api/v1/prompts/:id

Update an existing system prompt. Only provided fields are updated.
Scope: prompts:write
Params: id*, name, prompt_text, is_default, is_active, priority

curl -X PATCH "https://www.mnexium.com/api/v1/prompts/sp_abc123" \
-H "Authorization: Bearer $MNX_KEY" \
-H "Content-Type: application/json" \
-d '{
"prompt_text": "You are a friendly assistant.",
"is_default": true
}'

{
"id": "sp_abc123",
"updated": true
}

DELETE /api/v1/prompts/:id

Soft-delete a system prompt. The prompt is deactivated but retained for audit purposes.
Scope: prompts:write
Params: id*

curl -X DELETE "https://www.mnexium.com/api/v1/prompts/sp_abc123" \
-H "Authorization: Bearer $MNX_KEY"

{
"ok": true,
"deleted": true
}

GET /api/v1/prompts/resolve

Preview which prompts will be injected for a given context.
Scope: prompts:read
Params: subject_id, chat_id, combined

curl -G "https://www.mnexium.com/api/v1/prompts/resolve" \
-H "Authorization: Bearer $MNX_KEY" \
--data-urlencode "subject_id=user_123" \
--data-urlencode "combined=true"

{
"combined": "You are a helpful assistant.\n\nThis user prefers concise responses.",
"prompts": [
{ "id": "sp_abc123", "scope": "project" },
{ "id": "sp_def456", "scope": "subject" }
]
}

Using system_prompt in Requests
Control system prompt injection via the mnx.system_prompt field:
// Auto-resolve based on context (default)
"mnx": { "subject_id": "user_123" }
// Skip system prompt injection
"mnx": { "system_prompt": false }
// Use a specific prompt by ID
"mnx": { "system_prompt": "sp_sales_assistant" }

Overview
Mnexium provides fine-grained access control, data lifecycle management, and privacy-conscious design to help you build enterprise-ready AI applications.
PII Guidelines
Best practices for handling personally identifiable information:
Never put passwords, API keys, or tokens in memory text fields. These are searchable and may be included in LLM context.
Store user IDs, order numbers, and references in metadata. Keep memory text for semantic meaning.
Always use subject_id to isolate user data. Memories are never shared across subjects unless explicitly marked visibility: "shared".
Audit Trail
Every API call is logged with full context. View your activity log at /activity-log.
- action — e.g., memory.create, chat.completion
- subject_id
- status — success or failure
- timestamp
- metadata

Error Response Format
All errors return a JSON object with an error field describing the issue.
{
"error": "error_code_here"
}

HTTP Status Codes
Common Error Codes
- unauthorized
- token_revoked
- token_expired
- forbidden — the API key lacks the required scope (e.g., prompts:write).
- prompt_not_found
- usage_limit_exceeded — the response includes current and limit fields showing your usage.
- subject_id_required — subject_id is required when history: true.
- name_required — missing name field when creating a prompt.
- prompt_text_required — missing prompt_text field when creating a prompt.
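A client can branch on these codes from the documented { "error": "code" } body. The mapping below is an illustrative sketch of sensible handling, not behavior Mnexium prescribes; describeMnxError is an invented helper name.

```javascript
// Illustrative mapping of documented error codes to client-side handling hints.
// Errors arrive as { "error": "<code>" } per the error response format above.
function describeMnxError(code) {
  switch (code) {
    case "unauthorized":
    case "token_revoked":
    case "token_expired":
      return "auth: check or rotate your Mnexium API key";
    case "forbidden":
      return "scope: the API key lacks the required permission";
    case "usage_limit_exceeded":
      return "quota: inspect the current and limit fields, then back off";
    case "subject_id_required":
    case "name_required":
    case "prompt_text_required":
      return "request: a required field is missing";
    default:
      return `unhandled error code: ${code}`;
  }
}
```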