Product Launch
You already control response behavior with system prompts. Now you can control memory extraction behavior with memory policies, scoped by project, subject, or chat.
Marius Ndini
Founder · Feb 18, 2026
Not every app wants to memorize everything. Some teams need strict extraction rules for compliance, quality, or cost. Others need per-workflow behavior, like high-signal extraction in support chats and minimal extraction in casual chats.
Memory Policies let you define those rules once, then apply them automatically with scope-aware resolution.
Memory Policies are now available on the v1 API surface:
GET /api/v1/memory/policies
POST /api/v1/memory/policies
GET /api/v1/memory/policies/:id
PATCH /api/v1/memory/policies/:id
DELETE /api/v1/memory/policies/:id
GET /api/v1/memory/policies/resolve
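To create a policy up front, POST to the policies endpoint. The sketch below is illustrative only: the base URL, Bearer auth header, and body fields (name, instructions) are assumptions, not part of the documented surface above.

// TypeScript sketch: create a policy via the REST API.
// Assumed for illustration: the base URL, Bearer auth, and the body field names.
const BASE = "https://api.mnexium.com"; // assumed base URL
const headers = {
  Authorization: `Bearer ${process.env.MNX_KEY}`, // assumed auth scheme
  "Content-Type": "application/json",
};

const created = await fetch(`${BASE}/api/v1/memory/policies`, {
  method: "POST",
  headers,
  body: JSON.stringify({
    name: "Support assistant", // assumed field
    instructions: "Extract durable facts and explicit preferences; skip small talk.", // assumed field
  }),
}).then((r) => r.json());

console.log(created.id); // assuming the response includes the new policy ID

Whatever ID the create call returns is what you pass as memory_policy below.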
Like mnx.system_prompt, memory policy can be controlled per request:

{
"model": "gpt-4o-mini",
"messages": [{ "role": "user", "content": "Remember I prefer concise weekly summaries." }],
"mnx": {
"subject_id": "user_123",
"chat_id": "550e8400-e29b-41d4-a716-446655440000",
"learn": true,
"memory_policy": "mem_pol_support_assistant"
}
}

memory_policy accepts a policy ID, false to disable, or omitted for scoped default resolution.
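When memory_policy is omitted, the scoped default wins. To check which policy a given subject or chat would resolve to, you can query the resolve endpoint. A minimal sketch, reusing BASE and headers from the create example above; the query parameter names and response shape are assumptions:

// TypeScript sketch: inspect scoped default resolution.
// Assumed for illustration: the query parameter names and the response shape.
const resolved = await fetch(
  `${BASE}/api/v1/memory/policies/resolve?subject_id=user_123&chat_id=550e8400-e29b-41d4-a716-446655440000`,
  { headers },
).then((r) => r.json());

console.log(resolved); // e.g. { id: "mem_pol_support_assistant", scope: "subject" } (illustrative)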
The same memory_policy override works directly in the Mnexium SDKs:

// npm: @mnexium/sdk
import { Mnexium } from "@mnexium/sdk";
const mnx = new Mnexium({
apiKey: process.env.MNX_KEY,
openai: { apiKey: process.env.OPENAI_API_KEY },
});
const alice = mnx.subject("user_123");
const response = await alice.process({
content: "Remember that I prefer concise weekly summaries.",
model: "gpt-4o-mini",
learn: true,
recall: true,
memory_policy: "mem_pol_support_assistant",
});
console.log(response.content);

# Python: mnexium
import os
from mnexium import Mnexium, ProviderConfig, ProcessOptions
mnx = Mnexium(
api_key=os.environ["MNX_KEY"],
openai=ProviderConfig(api_key=os.environ["OPENAI_API_KEY"]),
)
alice = mnx.subject("user_123")
response = alice.process(ProcessOptions(
content="Remember that I prefer concise weekly summaries.",
model="gpt-4o-mini",
learn=True,
recall=True,
memory_policy="mem_pol_support_assistant",
))
print(response.content)

For provider-native SDK routes that rely on headers, you can also pass the x-mnx-memory-policy header:
x-mnx-memory-policy: mem_pol_support_assistant
# or
x-mnx-memory-policy: false
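For example, with the OpenAI Node SDK pointed at a Mnexium route, the header can be set once on the client. This is a sketch, not the documented setup: the baseURL and the x-mnx-key auth header are assumptions, and passing subject/chat identifiers on these routes is not shown here.

// TypeScript sketch: provider-native OpenAI SDK with a default memory-policy header.
// Assumed for illustration: the Mnexium baseURL and the x-mnx-key auth header.
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://api.mnexium.com/v1", // assumed Mnexium-proxied route
  defaultHeaders: {
    "x-mnx-key": process.env.MNX_KEY!, // assumed auth header name
    "x-mnx-memory-policy": "mem_pol_support_assistant",
  },
});

// Subject/chat identifiers are omitted here; pass them however your Mnexium setup expects.
const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Remember I prefer concise weekly summaries." }],
});
console.log(completion.choices[0].message.content);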
Memory Policies give teams a practical control plane for extraction quality. You can tune behavior once and keep it consistent across chat/completions, responses, messages, and Gemini routes.

The result: cleaner memory, lower noise, and more predictable assistant behavior over time.
Memory Policies are available now in both SDKs and the REST API. Create a policy, set scoped defaults, or pass memory_policy per request. Your assistant will start extracting cleaner, higher-signal memory immediately.
npm install @mnexium/sdk # JavaScript
pip install mnexium # Python

Control extraction quality with scoped defaults and request-level overrides. Keep memory relevant, reduce noise, and make behavior predictable across routes.