2 changes: 1 addition & 1 deletion .claude-plugin/marketplace.json
@@ -12,7 +12,7 @@
"name": "claude-code",
"source": "./plugins/claude-code",
"description": "Persistent semantic memory for Claude Code — user preferences, project context, prior decisions, and codebase facts that survive across sessions.",
"version": "0.1.7",
"version": "0.1.9",
"category": "productivity",
"homepage": "https://docs.atomicmemory.ai/integrations/coding-agents/claude-code",
"license": "Apache-2.0"
6 changes: 5 additions & 1 deletion README.md
@@ -15,10 +15,14 @@ plugins/ # coding-agent wrappers
├── codex/ # Codex plugin (manifest + MCP config + SKILL.md)
└── openclaw/ # OpenClaw plugin (openclaw.plugin.json + skill.yaml)

adapters/ # framework adapters
├── vercel-ai-sdk/ # @atomicmemory/vercel-ai
└── openai-agents-sdk/ # @atomicmemory/openai-agents

examples/ # runnable examples (coming soon)
```

Framework adapters (`adapters/vercel-ai-sdk`, `adapters/langchain-js`, `adapters/mastra`, `adapters/openai-agents`, `adapters/langgraph-js`) are tracked as planned work — see the docs site at https://docs.atomicmemory.ai/integrations/ for status.
Additional framework adapters (`adapters/langchain-js`, `adapters/mastra`, `adapters/langgraph-js`) are tracked as planned work — see the docs site at https://docs.atomicmemory.ai/integrations/ for status.

## Architecture

139 changes: 139 additions & 0 deletions adapters/openai-agents-sdk/README.md
@@ -0,0 +1,139 @@
# AtomicMemory for OpenAI Agents SDK

Source-only adapter for the [OpenAI Agents SDK for TypeScript](https://openai.github.io/openai-agents-js/). It wires AtomicMemory into agent runs without replacing the SDK's own `Session` implementations.

## Install

This package is source-only in this repo for now:

```bash
pnpm --filter @atomicmemory/openai-agents build
```

In a local workspace, once the package is linked via `pnpm-workspace.yaml`, import it directly:

```ts
import { MemoryClient } from '@atomicmemory/atomicmemory-sdk';
import { Agent, run } from '@openai/agents';
import { runWithMemory } from '@atomicmemory/openai-agents';

const memory = new MemoryClient({
providers: {
atomicmemory: {
apiUrl: process.env.ATOMICMEMORY_API_URL!,
apiKey: process.env.ATOMICMEMORY_API_KEY,
},
},
defaultProvider: 'atomicmemory',
});
await memory.initialize();

const agent = new Agent({
name: 'Assistant',
instructions: 'You are a helpful assistant.',
});

const { result, retrieved } = await runWithMemory({
client: memory,
scope: { user: 'user-123', namespace: 'support' },
input: 'What did we decide about billing retries?',
run: (input) => run(agent, input),
});

console.log(result.finalOutput, retrieved.length);
```

## Primitives

### `augmentInputWithMemory(client, options)`

Searches AtomicMemory before an agent run and prepends a `system()` message containing retrieved context when matches exist.

```ts
const { input, retrieved } = await augmentInputWithMemory(memory, {
scope: { user: 'user-123' },
input: 'What should I remember?',
});

const result = await run(agent, input);
```

### `ingestAgentTurn(client, options)`

Persists completed turns after `run()`. System messages are excluded by default; the assistant output is appended as the final assistant message.

```ts
await ingestAgentTurn(memory, {
scope: { user: 'user-123' },
input,
result,
metadata: { source: 'openai-agents', event: 'run_completed' },
});
```

For streamed results, wait for `completed` and pass explicit output text if needed:

```ts
const stream = await run(agent, input, { stream: true });
await stream.completed;

await ingestAgentTurn(memory, {
scope,
input,
output: String(stream.finalOutput ?? ''),
});
```

### `createMemoryTools(client, options)`

Creates two OpenAI Agents SDK function tools:

- `memory_search` - search AtomicMemory during a run.
- `memory_ingest` - store durable preferences, decisions, conventions, or facts.

```ts
const agent = new Agent({
name: 'Assistant',
instructions: 'Use memory tools when prior context or durable learning matters.',
tools: createMemoryTools(memory, {
scope: { user: 'user-123', namespace: 'support' },
metadata: { source: 'openai-agents-tool' },
}),
});
```

## Verify

Run local adapter checks:

```bash
pnpm --filter @atomicmemory/openai-agents test
pnpm --filter @atomicmemory/openai-agents typecheck
pnpm --filter @atomicmemory/openai-agents build
```

Run the backend smoke test without making an OpenAI API call:

```bash
export ATOMICMEMORY_API_URL="http://localhost:3050"
export ATOMICMEMORY_API_KEY="..."
export ATOMICMEMORY_PROVIDER="atomicmemory"
export ATOMICMEMORY_SCOPE_USER="$USER"
export ATOMICMEMORY_SCOPE_NAMESPACE="openai-agents-sdk-smoke"

pnpm --filter @atomicmemory/openai-agents smoke:backend
```

The smoke test writes a unique marker, verifies `augmentInputWithMemory()` retrieves it, then runs `runWithMemory()` with a fake runner and reports the post-run ingest AUDN outcome.

Set `OPENAI_API_KEY` only when you want to test the real `Agent + run()` path from the install example.

## Notes

- AtomicMemory is long-term semantic memory. The OpenAI Agents SDK `Session` surface is still useful for short-term conversation state.
- Retrieved memories are injected as reference context only. The adapter's default prompt explicitly tells the model not to follow instructions embedded in retrieved memories.
- `ingestAgentTurn` requires text output. For structured outputs, it serializes `finalOutput` as JSON unless you pass an explicit `output`.
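The last note above can be sketched as a plain helper. This is illustrative only — `deriveOutputText` is a hypothetical name, not an export of this adapter — but it shows the fallback behavior described: string outputs pass through, and structured `finalOutput` values are serialized as JSON unless an explicit `output` is supplied.

```typescript
// Hedged sketch of the fallback described above. `deriveOutputText` is a
// hypothetical helper for illustration, not part of the adapter's API.
function deriveOutputText(finalOutput: unknown): string {
  // String outputs pass through untouched; anything structured is serialized.
  return typeof finalOutput === 'string'
    ? finalOutput
    : JSON.stringify(finalOutput);
}

// A structured result would be stored as its JSON form unless you pass
// an explicit `output` to ingestAgentTurn.
const structured = { decision: 'retry-billing', maxAttempts: 3 };
console.log(deriveOutputText(structured));
```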

## License

Apache-2.0.
45 changes: 45 additions & 0 deletions adapters/openai-agents-sdk/package.json
@@ -0,0 +1,45 @@
{
"name": "@atomicmemory/openai-agents",
"version": "0.1.0",
"description": "AtomicMemory adapter for the OpenAI Agents SDK — pre-run memory retrieval, post-run ingest, and function tools.",
"type": "module",
"main": "dist/index.js",
"types": "dist/index.d.ts",
"exports": {
".": {
"types": "./dist/index.d.ts",
"import": "./dist/index.js"
}
},
"files": [
"dist",
"README.md"
],
"repository": {
"type": "git",
"url": "git+https://github.com/atomicmemory/atomicmemory-integrations.git",
"directory": "adapters/openai-agents-sdk"
},
"license": "Apache-2.0",
"engines": {
"node": ">=20"
},
"scripts": {
"build": "tsc -p tsconfig.json",
"typecheck": "tsc -p tsconfig.json --noEmit",
"test": "node --test --import tsx 'src/**/*.test.ts'",
"lint": "tsc -p tsconfig.json --noEmit",
"smoke:backend": "pnpm build && node scripts/smoke-backend.mjs",
"prepublishOnly": "node -e \"const v=require('./package.json').dependencies['@atomicmemory/atomicmemory-sdk'];if(v.startsWith('file:')||v.startsWith('link:')){console.error('refusing to publish: @atomicmemory/atomicmemory-sdk is '+v+'. Publish the SDK first, then pin to a registry version here.');process.exit(1)}\""
},
"dependencies": {
"@atomicmemory/atomicmemory-sdk": "file:../../../atomicmemory-sdk",
"@openai/agents": "^0.8.5",
"zod": "^4.3.6"
},
"devDependencies": {
"@types/node": "^20.0.0",
"tsx": "^4.19.0",
"typescript": "^5.6.0"
}
}
116 changes: 116 additions & 0 deletions adapters/openai-agents-sdk/scripts/smoke-backend.mjs
@@ -0,0 +1,116 @@
import { MemoryClient } from '@atomicmemory/atomicmemory-sdk';
import { augmentInputWithMemory, runWithMemory } from '../dist/index.js';

const apiUrl = process.env.ATOMICMEMORY_API_URL;
const apiKey = process.env.ATOMICMEMORY_API_KEY;
const provider = process.env.ATOMICMEMORY_PROVIDER || 'atomicmemory';

if (!apiUrl || !apiKey) {
throw new Error('ATOMICMEMORY_API_URL and ATOMICMEMORY_API_KEY are required');
}
if (provider !== 'atomicmemory' && provider !== 'mem0') {
throw new Error(`Unsupported ATOMICMEMORY_PROVIDER: ${provider}`);
}

const providers =
provider === 'mem0'
? { mem0: { apiUrl, apiKey } }
: { atomicmemory: { apiUrl, apiKey } };

const client = new MemoryClient({ providers, defaultProvider: provider });
await client.initialize();

const scope = {
user: process.env.ATOMICMEMORY_SCOPE_USER || 'openai-agents-smoke-user',
namespace:
process.env.ATOMICMEMORY_SCOPE_NAMESPACE || 'openai-agents-sdk-smoke',
};
const marker = `openai-agents-sdk-smoke-${Date.now()}`;
const content = `AtomicMemory OpenAI Agents SDK smoke fact: marker ${marker}.`;

await client.ingest(
provider === 'atomicmemory'
? {
mode: 'verbatim',
content,
kind: 'fact',
scope,
metadata: { source: 'openai-agents-sdk-smoke', marker },
}
: {
mode: 'text',
content,
scope,
metadata: { source: 'openai-agents-sdk-smoke', marker },
},
);

const augmented = await augmentInputWithMemory(client, {
scope,
query: marker,
input: `What is the smoke marker ${marker}?`,
limit: 5,
});

const found = augmented.retrieved.some((result) =>
result.memory.content.includes(marker),
);

console.log(
JSON.stringify(
{
phase: 'augment',
marker,
retrieved: augmented.retrieved.length,
found,
},
null,
2,
),
);

if (!found) {
console.log(
JSON.stringify(
{
retrievedContents: augmented.retrieved.map((result) => result.memory.content),
},
null,
2,
),
);
process.exit(2);
}

const wrapped = await runWithMemory({
client,
scope,
input: `Confirm marker ${marker}`,
search: { query: marker },
ingest: {
metadata: {
source: 'openai-agents-sdk-smoke',
event: 'fake_run_completed',
marker,
},
},
async run(input) {
return {
finalOutput: `Confirmed marker ${marker}. Input items: ${input.length}`,
};
},
});

console.log(
JSON.stringify(
{
phase: 'runWithMemory',
retrieved: wrapped.retrieved.length,
created: wrapped.ingestResult?.created?.length ?? 0,
updated: wrapped.ingestResult?.updated?.length ?? 0,
unchanged: wrapped.ingestResult?.unchanged?.length ?? 0,
},
null,
2,
),
);
63 changes: 63 additions & 0 deletions adapters/openai-agents-sdk/src/augment.test.ts
@@ -0,0 +1,63 @@
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { user } from '@openai/agents';
import type { AgentInputItem } from '@openai/agents';
import { augmentInputWithMemory } from './augment.js';
import { makeFakeClient, makeMemory } from './test-fixtures.js';

const scope = { user: 'u1' };

test('returns normalized input unchanged when no memories match', async () => {
const { client } = makeFakeClient({ searchResults: [] });
const result = await augmentInputWithMemory(client, {
input: 'hello',
scope,
});
assert.equal(result.retrieved.length, 0);
assert.equal(result.input.length, 1);
assert.equal((result.input[0] as { role?: string }).role, 'user');
});

test('prepends a system item when memories match', async () => {
const { client } = makeFakeClient({
searchResults: [makeMemory('user prefers pnpm')],
});
const result = await augmentInputWithMemory(client, {
input: 'what package manager?',
scope,
});
assert.equal(result.input.length, 2);
assert.equal((result.input[0] as { role?: string }).role, 'system');
assert.match(
String((result.input[0] as { content?: unknown }).content),
/user prefers pnpm/,
);
assert.equal((result.input[1] as { role?: string }).role, 'user');
});

test('derives query from the latest text-bearing user item', async () => {
const { client, searchCalls } = makeFakeClient();
await augmentInputWithMemory(client, {
input: [
user('first'),
{
role: 'assistant',
content: [{ type: 'output_text', text: 'hi' }],
type: 'message',
} as AgentInputItem,
user('second'),
],
scope,
});
assert.equal(searchCalls[0]?.query, 'second');
});

test('prefers explicit query when provided', async () => {
const { client, searchCalls } = makeFakeClient();
await augmentInputWithMemory(client, {
input: 'ignored',
query: 'explicit',
scope,
});
assert.equal(searchCalls[0]?.query, 'explicit');
});