Conversation

@codefromthecrypt
Contributor

codefromthecrypt commented Apr 22, 2025

This adds OpenInference instrumentation for MCP, which connects the trace context between the MCP client and the server it launches as a sub-process.

Technically, this injects trace identifiers into the request.params._meta field on the client side, and extracts them on the server.
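
Concretely, assuming the default W3C TraceContext propagator, an instrumented tools/call request carries something like this on the wire (IDs illustrative):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_latest_elasticsearch_version",
    "arguments": { "majorVersion": 8 },
    "_meta": {
      "traceparent": "00-1470598e56da6475cfb9691ae25e22f6-64a889227ea9bb9e-01"
    }
  }
}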

Screenshot 2025-04-25 at 11 47 39 AM

Signed-off-by: Adrian Cole <[email protected]>
@codefromthecrypt
Contributor Author

To make this easier, I folded telemetry.js into main.js and turned on OTEL_LOG_LEVEL=debug.

Then I see the patch fail with "TypeError: Cannot add property openInferencePatched, object is not extensible":

{"name":"elastic-otel-node","level":20,"msg":"Manually instrumenting @modelcontextprotocol/sdk/client/index","time":"2025-04-22T11:50:54.906Z"}
{"name":"elastic-otel-node","level":20,"msg":"Applying patch for @modelcontextprotocol/sdk/client/index@undefined","time":"2025-04-22T11:50:54.906Z"}
{"name":"elastic-otel-node","level":20,"msg":"Failed to set @modelcontextprotocol/sdk/client/index patched flag on the module TypeError: Cannot add property openInferencePatched, object is not extensible\n    at MCPInstrumentation._patchClientModule (file:///Users/adriancole/oss/observability-examples/genai-function-calling/vercel-ai/node_modules/@arizeai/openinference-instrumentation-mcp/dist/esm/mcp.js:68:41)\n    at MCPInstrumentation.manuallyInstrument (file:///Users/adriancole/oss/observability-examples/genai-function-calling/vercel-ai/node_modules/@arizeai/openinference-instrumentation-mcp/dist/esm/mcp.js:46:18)\n    at file:///Users/adriancole/oss/observability-examples/genai-function-calling/vercel-ai/index.js:8:20\n    at ModuleJob.run (node:internal/modules/esm/module_job:271:25)\n    at async onImport.tracePromise.__proto__ (node:internal/modules/esm/loader:578:26)\n    at async asyncRunEntryPointWithESMLoader (node:internal/modules/run_main:116:5)","time":"2025-04-22T11:50:54.906Z"}
{"name":"elastic-otel-node","level":20,"msg":"Manually instrumenting @modelcontextprotocol/sdk/server/index","time":"2025-04-22T11:50:54.906Z"}
{"name":"elastic-otel-node","level":20,"msg":"Applying patch for @modelcontextprotocol/sdk/server/index@undefined","time":"2025-04-22T11:50:54.906Z"}
{"name":"elastic-otel-node","level":20,"msg":"Failed to set @modelcontextprotocol/sdk/server/index patched flag on the module TypeError: Cannot add property openInferencePatched, object is not extensible\n    at MCPInstrumentation._patchServerModule (file:///Users/adriancole/oss/observability-examples/genai-function-calling/vercel-ai/node_modules/@arizeai/openinference-instrumentation-mcp/dist/esm/mcp.js:104:41)\n    at MCPInstrumentation.manuallyInstrument (file:///Users/adriancole/oss/observability-examples/genai-function-calling/vercel-ai/node_modules/@arizeai/openinference-instrumentation-mcp/dist/esm/mcp.js:50:18)\n    at file:///Users/adriancole/oss/observability-examples/genai-function-calling/vercel-ai/index.js:8:20\n    at ModuleJob.run (node:internal/modules/esm/module_job:271:25)\n    at async onImport.tracePromise.__proto__ (node:internal/modules/esm/loader:578:26)\n    at async asyncRunEntryPointWithESMLoader (node:internal/modules/run_main:116:5)","time":"2025-04-22T11:50:54.906Z"}
The latest version of Elasticsearch 8 is 8.18.0.

Signed-off-by: Adrian Cole <[email protected]>
@codefromthecrypt
Contributor Author

OK, I changed to manually wiring the instrumentation instead of using the EDOT import, and the MCP propagation works now.

Screenshot 2025-04-22 at 9 11 15 PM

@trentm any ideas on this one? Notably the error I mentioned above, which the latest commit works around:

"Cannot add property openInferencePatched, object is not extensible"

https://github.com/Arize-ai/openinference/tree/main/js/packages/openinference-instrumentation-mcp is the source of the experimental propagation instrumentation we're using to support an MCP spec discussion.

@anuraaga
Collaborator

anuraaga left a comment


Just for reference, the example here uses CommonJS and seemed fine.

https://github.com/elastic/genai-instrumentation/blob/main/experiments%2Finstrumentation-mcp-js%2Fexamples%2Ftelemetry.js

But with Node there are so many variables; I can check things out by Thursday at least.

Edit: Ah sorry, I see you got to using manuallyInstrument. It's in the README, fwiw :p

@anuraaga
Collaborator

The debug log is here:

https://github.com/Arize-ai/openinference/blob/main/js%2Fpackages%2Fopeninference-instrumentation-mcp%2Fsrc%2Fmcp.ts#L188

All of the OpenInference instrumentations include it.

@codefromthecrypt
Contributor Author

Right, CJS probably was a red herring; I'll try reverting to ESM now.

Signed-off-by: Adrian Cole <[email protected]>
Signed-off-by: Adrian Cole <[email protected]>
@codefromthecrypt
Contributor Author

codefromthecrypt commented Apr 22, 2025

@anuraaga @trentm here's my summary so far.

To recap: @arizeai/openinference-instrumentation-mcp, when working, injects trace identifiers into the JSON-RPC request.params._meta field on the MCP client side and correspondingly extracts them on the server side. This is how the traces connect when it works.
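
In other words, the mechanics are plain context propagation with _meta as the carrier. A sketch of the idea (not the actual Arize code; function names are hypothetical):

const { context, propagation } = require("@opentelemetry/api");

// Client side: copy the active trace context into params._meta before sending.
function injectTraceContext(params) {
    params._meta = { ...(params._meta || {}) };
    propagation.inject(context.active(), params._meta);
    return params;
}

// Server side: rebuild the context from params._meta before handling, so
// server spans become children of the client span.
function extractTraceContext(params) {
    return propagation.extract(context.active(), params._meta || {});
}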

My test case is running npm run start -- --mcp, which is what actually exercises the MCP subsystem and therefore the instrumentation.

Explicit NodeSDK initialization works

OK, I got it working with explicit NodeSDK initialization and CJS (no split trace between MCP client and server, so @arizeai/openinference-instrumentation-mcp is working).

Details

"Explicit SDK initialization" means running via node --env-file .env -r ./telemetry.js index.js with the following telemetry.js:

const { NodeSDK } = require("@opentelemetry/sdk-node");
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');
const { OTLPTraceExporter } = require("@opentelemetry/exporter-trace-otlp-proto");
const { MCPInstrumentation } = require("@arizeai/openinference-instrumentation-mcp");
const MCPClientModule = require("@modelcontextprotocol/sdk/client/index.js");
const MCPServerModule = require("@modelcontextprotocol/sdk/server/index.js");

const mcpInstrumentation = new MCPInstrumentation();
// MCP must be manually instrumented as it doesn't have a traditional module structure
mcpInstrumentation.manuallyInstrument({
    clientModule: MCPClientModule,
    serverModule: MCPServerModule,
});

const sdk = new NodeSDK({
    traceExporter: new OTLPTraceExporter(),
    instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();

process.on("SIGTERM", async () => {
    try {
        await sdk.shutdown();
    } catch (err) {
        console.warn("warning: error shutting down OTel SDK", err);
    }
    process.exit();
});

process.once("beforeExit", async () => {
    // Flush recent telemetry data if about to shutdown.
    try {
        await sdk.shutdown();
    } catch (err) {
        console.warn("warning: error shutting down OTel SDK", err);
    }
});

Here's the package.json

{
  "name": "genai-function-calling",
  "version": "1.0.0",
  "private": true,
  "type": "commonjs",
  "engines": {
    "node": ">=22"
  },
  "scripts": {
    "start": "node --env-file .env -r ./telemetry.js index.js"
  },
  "dependencies": {
    "ai": "^4.3.9",
    "@ai-sdk/azure": "^1.3.17",
    "@ai-sdk/openai": "^1.3.16",
    "@modelcontextprotocol/sdk": "^1.10.1",
    "@opentelemetry/api": "^1.9.0",
    "@opentelemetry/auto-instrumentations-node": "^0.58.0",
    "@opentelemetry/sdk-node": "^0.57.2",
    "@opentelemetry/sdk-trace-node": "^1.30.1",
    "@arizeai/openinference-instrumentation-mcp": "^0.1.0"
  }
}

EDOT has an initialization-order failure with the Arize instructions

Then I switched back to EDOT (-r @elastic/opentelemetry-node), using the Arize usage instructions:

https://github.com/Arize-ai/openinference/tree/main/js/packages/openinference-instrumentation-mcp#usage

Then I get an error in the OTEL_LOG_LEVEL=debug output:

{"name":"elastic-otel-node","level":40,"msg":"@arizeai/openinference-instrumentation-mcp Module @modelcontextprotocol/sdk/client/index.js has been loaded before @arizeai/openinference-instrumentation-mcp so it might not work, please initialize it before requiring @modelcontextprotocol/sdk/client/index.js","time":"2025-04-22T23:36:02.107Z"}
Details

"Switching back to EDOT" means running via node --env-file .env -r @elastic/opentelemetry-node -r ./telemetry.js index.js with the following telemetry.js:

const { MCPInstrumentation } = require("@arizeai/openinference-instrumentation-mcp");
const MCPClientModule = require("@modelcontextprotocol/sdk/client/index.js");
const MCPServerModule = require("@modelcontextprotocol/sdk/server/index.js");

const mcpInstrumentation = new MCPInstrumentation();
// MCP must be manually instrumented as it doesn't have a traditional module structure
mcpInstrumentation.manuallyInstrument({
    clientModule: MCPClientModule,
    serverModule: MCPServerModule,
});

Here's the package.json

{
  "name": "genai-function-calling",
  "version": "1.0.0",
  "private": true,
  "type": "commonjs",
  "engines": {
    "node": ">=22"
  },
  "scripts": {
    "start": "node --env-file .env -r @elastic/opentelemetry-node -r ./telemetry.js index.js"
  },
  "dependencies": {
    "ai": "^4.3.9",
    "@ai-sdk/azure": "^1.3.17",
    "@ai-sdk/openai": "^1.3.16",
    "@modelcontextprotocol/sdk": "^1.10.1",
    "@elastic/opentelemetry-node": "^1",
    "@arizeai/openinference-instrumentation-mcp": "^0.1.0"
  }
}

Then, I changed to initialize @arizeai/openinference-instrumentation-mcp first, and there are no errors in the OTEL_LOG_LEVEL=debug output, but the MCP client and server are split into different traces (so I think @arizeai/openinference-instrumentation-mcp isn't working even though there are no errors).

OTEL_LOG_LEVEL=debug output:

{"name":"elastic-otel-node","level":20,"msg":"Manually instrumenting @modelcontextprotocol/sdk/client/index","time":"2025-04-22T23:38:42.443Z"}
{"name":"elastic-otel-node","level":20,"msg":"Applying patch for @modelcontextprotocol/sdk/client/index@undefined","time":"2025-04-22T23:38:42.443Z"}
{"name":"elastic-otel-node","level":20,"msg":"Manually instrumenting @modelcontextprotocol/sdk/server/index","time":"2025-04-22T23:38:42.443Z"}
{"name":"elastic-otel-node","level":20,"msg":"Applying patch for @modelcontextprotocol/sdk/server/index@undefined","time":"2025-04-22T23:38:42.443Z"}
The latest version of Elasticsearch 8 is 8.18.0.
Details

"Changing to initialize @arizeai/openinference-instrumentation-mcp first" means running via node --env-file .env -r @elastic/opentelemetry-node -r ./telemetry.js index.js with the following telemetry.js:

// Import and initialize MCPInstrumentation before importing the MCP modules.
const { MCPInstrumentation } = require("@arizeai/openinference-instrumentation-mcp");
const mcpInstrumentation = new MCPInstrumentation();

// Then, manually instrument MCP, as it doesn't have a traditional module structure.
mcpInstrumentation.manuallyInstrument({
    clientModule: require("@modelcontextprotocol/sdk/client/index.js"),
    serverModule: require("@modelcontextprotocol/sdk/server/index.js"),
});

Here's the package.json

{
  "name": "genai-function-calling",
  "version": "1.0.0",
  "private": true,
  "type": "commonjs",
  "engines": {
    "node": ">=22"
  },
  "scripts": {
    "start": "node --env-file .env -r @elastic/opentelemetry-node -r ./telemetry.js index.js"
  },
  "dependencies": {
    "ai": "^4.3.9",
    "@ai-sdk/azure": "^1.3.17",
    "@ai-sdk/openai": "^1.3.16",
    "@modelcontextprotocol/sdk": "^1.10.1",
    "@elastic/opentelemetry-node": "^1",
    "@arizeai/openinference-instrumentation-mcp": "^0.1.0"
  }
}

I'm not sure if this is something I'm doing wrong or a limitation, but I hope the details help! I will raise a PR in a minute to update the Arize docs about initialization order. https://github.com/Arize-ai/openinference/tree/main/js/packages/openinference-instrumentation-mcp/src#usage

Signed-off-by: Adrian Cole <[email protected]>
Signed-off-by: Adrian Cole <[email protected]>
@trentm
Member

trentm commented Apr 23, 2025

Some behaviour is different between:

  1. node --env-file .env -r @elastic/opentelemetry-node -r ./telemetry.js index.js
  2. node --env-file .env --import @elastic/opentelemetry-node -r ./telemetry.js index.js

In the second case (when using --import for EDOT), I don't see this log output:

{"name":"elastic-otel-node","level":20,"msg":"Manually instrumenting @modelcontextprotocol/sdk/client/index","time":"2025-04-23T00:11:53.671Z"}
{"name":"elastic-otel-node","level":20,"msg":"Applying patch for @modelcontextprotocol/sdk/client/index@undefined","time":"2025-04-23T00:11:53.671Z"}
{"name":"elastic-otel-node","level":20,"msg":"Manually instrumenting @modelcontextprotocol/sdk/server/index","time":"2025-04-23T00:11:53.671Z"}
{"name":"elastic-otel-node","level":20,"msg":"Applying patch for @modelcontextprotocol/sdk/server/index@undefined","time":"2025-04-23T00:11:53.671Z"}

A guess is that Node.js runs the --require ... preloads before the --import ... ones, so the order has changed: the mcpInstrumentation.manuallyInstrument(...) call in telemetry.js does its diag.debug(...) logging before EDOT Node.js has installed a DiagLogger.
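
One quick way to check that guess (hypothetical file names, two one-line preload scripts):

// order-require.cjs (preloaded with -r)
console.log("-r preload ran");

// order-import.mjs (preloaded with --import)
console.log("--import preload ran");

node -r ./order-require.cjs --import ./order-import.mjs -e ""

The Node.js docs note that modules preloaded with --require run before modules preloaded with --import, which would explain the changed order.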

Update

Yah. I get the same behaviour (w.r.t. the log output) as (2) above if I run:

node --env-file .env -r ./telemetry.js -r @elastic/opentelemetry-node index.js

@codefromthecrypt
Contributor Author

FYI, this is a pending PR to fix the usage in the OpenInference docs: Arize-ai/openinference#1551

@trentm
Member

trentm commented Apr 23, 2025

Then, I changed to initialize @arizeai/openinference-instrumentation-mcp and there are no errors in the OTEL_LOG_LEVEL=debug output

FWIW, I think you can ignore those errors. I looked at the manuallyInstrument code, and I think it will behave fine even when it hits that debug message.

    try {
      // This can fail if the module is made immutable via the runtime or bundler
      module.openInferencePatched = true;
    } catch (e) {
      diag.debug(
        `Failed to set ${CLIENT_MODULE_NAME} patched flag on the module`,
        e,
      );
    }

I'd expect to get that debug message with ESM usage and perhaps with some bundlers.
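
For example, module namespace objects in ESM are not extensible, so the patched-flag assignment throws there. A minimal standalone repro (not from this PR):

// repro.mjs: ESM namespace objects reject new properties
import * as client from "@modelcontextprotocol/sdk/client/index.js";

console.log(Object.isExtensible(client)); // false
client.openInferencePatched = true; // TypeError: Cannot add property openInferencePatched, object is not extensible

ESM always runs in strict mode, so the failed assignment throws rather than failing silently as it would in sloppy-mode CJS.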

@trentm
Member

trentm commented Apr 23, 2025

Possibly you could change "telemetry.js" to this:

require('@elastic/opentelemetry-node');

// Import and initialize MCPInstrumentation before importing the MCP modules.
const { MCPInstrumentation } = require("@arizeai/openinference-instrumentation-mcp");
const mcpInstrumentation = new MCPInstrumentation();

// Then, manually instrument MCP, as it doesn't have a traditional module structure.
mcpInstrumentation.manuallyInstrument({
    clientModule: require("@modelcontextprotocol/sdk/client/index.js"),
    serverModule: require("@modelcontextprotocol/sdk/server/index.js"),
});

Then you just need the one -r ./telemetry.js argument (instead of both -r @elastic/opentelemetry-node -r ./telemetry.js).

@codefromthecrypt
Contributor Author

thanks for trying, but the last approach didn't join the trace :(

That said, I think I may have crossed wires at some point, because I can't get an older commit to join the trace anymore (regardless of CJS or ESM). I should have captured a screenshot. I'll look at our internal demo and see if I can make a simpler reproduction.

@codefromthecrypt
Contributor Author

OK, this is the closest port I could make from the internal copy, and it isn't joining traces either... hmm

// First, configure and start your OpenTelemetry NodeSDK per documentation.
const {NodeSDK} = require("@opentelemetry/sdk-node");
const {OTLPTraceExporter} = require("@opentelemetry/exporter-trace-otlp-proto");

const {diag, DiagConsoleLogger, DiagLogLevel} = require('@opentelemetry/api');
diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.DEBUG);

// Then, import and initialize MCPInstrumentation before importing the MCP modules.
const {MCPInstrumentation} = require("@arizeai/openinference-instrumentation-mcp");
const mcpInstrumentation = new MCPInstrumentation();

// Now, manually instrument MCP, as it doesn't have a traditional module structure.
mcpInstrumentation.manuallyInstrument({
    clientModule: require("@modelcontextprotocol/sdk/client/index.js"),
    serverModule: require("@modelcontextprotocol/sdk/server/index.js"),
});

// Finally, configure any other instrumentation and start the sdk
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');
const sdk = new NodeSDK({
    traceExporter: new OTLPTraceExporter(),
    instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();

process.on("SIGTERM", async () => {
    try {
        await sdk.shutdown();
    } catch (err) {
        console.warn("warning: error shutting down OTel SDK", err);
    }
    process.exit();
});

process.once("beforeExit", async () => {
    // Flush recent telemetry data if about to shutdown.
    try {
        await sdk.shutdown();
    } catch (err) {
        console.warn("warning: error shutting down OTel SDK", err);
    }
});
Details

logs

> genai-function-calling@1.0.0 start
> node --env-file .env -r ./telemetry.js index.js --mcp

@opentelemetry/api: Registered a global for diag v1.9.0.
Manually instrumenting @modelcontextprotocol/sdk/client/index
Applying patch for @modelcontextprotocol/sdk/client/index@undefined
Manually instrumenting @modelcontextprotocol/sdk/server/index
Applying patch for @modelcontextprotocol/sdk/server/index@undefined
Loading instrumentation for @opentelemetry/instrumentation-amqplib
Loading instrumentation for @opentelemetry/instrumentation-aws-lambda
@opentelemetry/instrumentation-aws-lambda Skipping lambda instrumentation: no _HANDLER/lambdaHandler or LAMBDA_TASK_ROOT. { taskRoot: undefined, handlerDef: undefined }
Loading instrumentation for @opentelemetry/instrumentation-aws-sdk
Loading instrumentation for @opentelemetry/instrumentation-bunyan
Loading instrumentation for @opentelemetry/instrumentation-cassandra-driver
Loading instrumentation for @opentelemetry/instrumentation-connect
Loading instrumentation for @opentelemetry/instrumentation-cucumber
Loading instrumentation for @opentelemetry/instrumentation-dataloader
Loading instrumentation for @opentelemetry/instrumentation-dns
Loading instrumentation for @opentelemetry/instrumentation-express
Disabling instrumentation for @opentelemetry/instrumentation-fastify
Disabling instrumentation for @opentelemetry/instrumentation-fs
Loading instrumentation for @opentelemetry/instrumentation-generic-pool
Loading instrumentation for @opentelemetry/instrumentation-graphql
Loading instrumentation for @opentelemetry/instrumentation-grpc
Loading instrumentation for @opentelemetry/instrumentation-hapi
Loading instrumentation for @opentelemetry/instrumentation-http
Loading instrumentation for @opentelemetry/instrumentation-ioredis
Loading instrumentation for @opentelemetry/instrumentation-kafkajs
Loading instrumentation for @opentelemetry/instrumentation-knex
Loading instrumentation for @opentelemetry/instrumentation-koa
Loading instrumentation for @opentelemetry/instrumentation-lru-memoizer
Loading instrumentation for @opentelemetry/instrumentation-memcached
Loading instrumentation for @opentelemetry/instrumentation-mongodb
Loading instrumentation for @opentelemetry/instrumentation-mongoose
Loading instrumentation for @opentelemetry/instrumentation-mysql2
Loading instrumentation for @opentelemetry/instrumentation-mysql
Loading instrumentation for @opentelemetry/instrumentation-nestjs-core
Loading instrumentation for @opentelemetry/instrumentation-net
Loading instrumentation for @opentelemetry/instrumentation-pg
Loading instrumentation for @opentelemetry/instrumentation-pino
Loading instrumentation for @opentelemetry/instrumentation-redis
Loading instrumentation for @opentelemetry/instrumentation-redis-4
Loading instrumentation for @opentelemetry/instrumentation-restify
Loading instrumentation for @opentelemetry/instrumentation-router
Loading instrumentation for @opentelemetry/instrumentation-runtime-node
Loading instrumentation for @opentelemetry/instrumentation-socket.io
Loading instrumentation for @opentelemetry/instrumentation-tedious
Loading instrumentation for @opentelemetry/instrumentation-undici
Loading instrumentation for @opentelemetry/instrumentation-winston
Current logger will be overwritten from Error
    at DiagAPI.setLogger (/Users/adriancole/oss/observability-examples/genai-function-calling/vercel-ai/node_modules/@opentelemetry/api/build/src/api/diag.js:65:37)
    at new NodeSDK (/Users/adriancole/oss/observability-examples/genai-function-calling/vercel-ai/node_modules/@opentelemetry/sdk-node/build/src/sdk.js:130:24)
    at Object.<anonymous> (/Users/adriancole/oss/observability-examples/genai-function-calling/vercel-ai/telemetry.js:20:13)
    at Module._compile (node:internal/modules/cjs/loader:1554:14)
    at Object..js (node:internal/modules/cjs/loader:1706:10)
    at Module.load (node:internal/modules/cjs/loader:1289:32)
    at Function._load (node:internal/modules/cjs/loader:1108:12)
    at TracingChannel.traceSync (node:diagnostics_channel:322:14)
    at wrapModuleLoad (node:internal/modules/cjs/loader:220:24)
    at Module.<anonymous> (node:internal/modules/cjs/loader:1311:12)
Current logger will overwrite one already registered from Error
    at DiagAPI.setLogger (/Users/adriancole/oss/observability-examples/genai-function-calling/vercel-ai/node_modules/@opentelemetry/api/build/src/api/diag.js:65:37)
    at new NodeSDK (/Users/adriancole/oss/observability-examples/genai-function-calling/vercel-ai/node_modules/@opentelemetry/sdk-node/build/src/sdk.js:130:24)
    at Object.<anonymous> (/Users/adriancole/oss/observability-examples/genai-function-calling/vercel-ai/telemetry.js:20:13)
    at Module._compile (node:internal/modules/cjs/loader:1554:14)
    at Object..js (node:internal/modules/cjs/loader:1706:10)
    at Module.load (node:internal/modules/cjs/loader:1289:32)
    at Function._load (node:internal/modules/cjs/loader:1108:12)
    at TracingChannel.traceSync (node:diagnostics_channel:322:14)
    at wrapModuleLoad (node:internal/modules/cjs/loader:220:24)
    at Module.<anonymous> (node:internal/modules/cjs/loader:1311:12)
@opentelemetry/api: Registered a global for diag v1.9.0.
OTEL_LOGS_EXPORTER is empty. Using default otlp exporter.
@opentelemetry/api: Registered a global for trace v1.9.0.
@opentelemetry/api: Registered a global for context v1.9.0.
@opentelemetry/api: Registered a global for propagation v1.9.0.
EnvDetectorSync found resource. Resource {
  _attributes: { 'service.name': 'genai-function-calling' },
  asyncAttributesPending: false,
  _syncAttributes: { 'service.name': 'genai-function-calling' },
  _asyncAttributesPromise: undefined
}
OSDetectorSync found resource. Resource {
  _attributes: { 'os.type': 'darwin', 'os.version': '24.4.0' },
  asyncAttributesPending: false,
  _syncAttributes: { 'os.type': 'darwin', 'os.version': '24.4.0' },
  _asyncAttributesPromise: undefined
}
ServiceInstanceIdDetectorSync found resource. Resource {
  _attributes: { 'service.instance.id': 'a30bb5e3-4470-405a-ac4d-34103746233f' },
  asyncAttributesPending: false,
  _syncAttributes: { 'service.instance.id': 'a30bb5e3-4470-405a-ac4d-34103746233f' },
  _asyncAttributesPromise: undefined
}
ProcessDetectorSync found resource. Resource {
  _attributes: {
    'process.pid': 72997,
    'process.executable.name': 'node',
    'process.executable.path': '/Users/adriancole/.nvm/versions/node/v22.14.0/bin/node',
    'process.command_args': [
      '/Users/adriancole/.nvm/versions/node/v22.14.0/bin/node',
      '--env-file',
      '.env',
      '-r',
      './telemetry.js',
      '/Users/adriancole/oss/observability-examples/genai-function-calling/vercel-ai/index.js',
      '--mcp'
    ],
    'process.runtime.version': '22.14.0',
    'process.runtime.name': 'nodejs',
    'process.runtime.description': 'Node.js',
    'process.command': '/Users/adriancole/oss/observability-examples/genai-function-calling/vercel-ai/index.js',
    'process.owner': 'adriancole'
  },
  asyncAttributesPending: false,
  _syncAttributes: {
    'process.pid': 72997,
    'process.executable.name': 'node',
    'process.executable.path': '/Users/adriancole/.nvm/versions/node/v22.14.0/bin/node',
    'process.command_args': [
      '/Users/adriancole/.nvm/versions/node/v22.14.0/bin/node',
      '--env-file',
      '.env',
      '-r',
      './telemetry.js',
      '/Users/adriancole/oss/observability-examples/genai-function-calling/vercel-ai/index.js',
      '--mcp'
    ],
    'process.runtime.version': '22.14.0',
    'process.runtime.name': 'nodejs',
    'process.runtime.description': 'Node.js',
    'process.command': '/Users/adriancole/oss/observability-examples/genai-function-calling/vercel-ai/index.js',
    'process.owner': 'adriancole'
  },
  _asyncAttributesPromise: undefined
}
HostDetectorSync found resource. Resource {
  _attributes: {
    'host.name': 'Adrians-MacBook-Pro.local',
    'host.arch': 'arm64',
    'host.id': '0BD5E4F8-C9FD-5300-9F52-36C38F9D1C54'
  },
  asyncAttributesPending: false,
  _syncAttributes: { 'host.name': 'Adrians-MacBook-Pro.local', 'host.arch': 'arm64' },
  _asyncAttributesPromise: Promise { { 'host.id': '0BD5E4F8-C9FD-5300-9F52-36C38F9D1C54' } }
}
Current logger will be overwritten from Error
    at DiagAPI.setLogger (/Users/adriancole/oss/observability-examples/genai-function-calling/vercel-ai/node_modules/@opentelemetry/api/build/src/api/diag.js:65:37)
    at new NodeSDK (/Users/adriancole/oss/observability-examples/genai-function-calling/vercel-ai/node_modules/@opentelemetry/sdk-node/build/src/sdk.js:130:24)
    at Object.<anonymous> (/Users/adriancole/oss/observability-examples/genai-function-calling/vercel-ai/telemetry.js:20:13)
    at Module._compile (node:internal/modules/cjs/loader:1554:14)
    at Object..js (node:internal/modules/cjs/loader:1706:10)
    at Module.load (node:internal/modules/cjs/loader:1289:32)
    at Function._load (node:internal/modules/cjs/loader:1108:12)
    at TracingChannel.traceSync (node:diagnostics_channel:322:14)
    at wrapModuleLoad (node:internal/modules/cjs/loader:220:24)
    at Module.<anonymous> (node:internal/modules/cjs/loader:1311:12)
Current logger will overwrite one already registered from Error
    at DiagAPI.setLogger (/Users/adriancole/oss/observability-examples/genai-function-calling/vercel-ai/node_modules/@opentelemetry/api/build/src/api/diag.js:65:37)
    at new NodeSDK (/Users/adriancole/oss/observability-examples/genai-function-calling/vercel-ai/node_modules/@opentelemetry/sdk-node/build/src/sdk.js:130:24)
    at Object.<anonymous> (/Users/adriancole/oss/observability-examples/genai-function-calling/vercel-ai/telemetry.js:20:13)
    at Module._compile (node:internal/modules/cjs/loader:1554:14)
    at Object..js (node:internal/modules/cjs/loader:1706:10)
    at Module.load (node:internal/modules/cjs/loader:1289:32)
    at Function._load (node:internal/modules/cjs/loader:1108:12)
    at TracingChannel.traceSync (node:diagnostics_channel:322:14)
    at wrapModuleLoad (node:internal/modules/cjs/loader:220:24)
    at Module.<anonymous> (node:internal/modules/cjs/loader:1311:12)
The latest version of Elasticsearch 8 is 8.18.0.
OTLPExportDelegate items to be sent [
  Span {
    attributes: {
      'http.request.method': 'POST',
      'http.request.method_original': 'POST',
      'url.full': 'http://localhost:11434/v1/chat/completions',
      'url.path': '/v1/chat/completions',
      'url.query': '',
      'url.scheme': 'http',
      'server.address': 'localhost',
      'server.port': 11434,
      'user_agent.original': 'node',
      'network.peer.address': '127.0.0.1',
      'network.peer.port': 11434,
      'http.response.status_code': 200,
      'http.response.header.content-length': 446
    },
    links: [],
    events: [],
    _droppedAttributesCount: 0,
    _droppedEventsCount: 0,
    _droppedLinksCount: 0,
    status: { code: 0 },
    endTime: [ 1745371658, 860247667 ],
    _ended: true,
    _duration: [ 1, 21247667 ],
    name: 'POST',
    _spanContext: {
      traceId: '1470598e56da6475cfb9691ae25e22f6',
      spanId: '64a889227ea9bb9e',
      traceFlags: 1,
      traceState: undefined
    },
    parentSpanId: '20cd461ad528be2f',
    kind: 2,
    _performanceStartTime: 1440.817958,
    _performanceOffset: -0.950927734375,
    _startTimeProvided: false,
    startTime: [ 1745371657, 839000000 ],
    resource: Resource {
      _attributes: [Object],
      asyncAttributesPending: false,
      _syncAttributes: [Object],
      _asyncAttributesPromise: [Promise]
    },
    instrumentationLibrary: {
      name: '@opentelemetry/instrumentation-undici',
      version: '0.11.0',
      schemaUrl: undefined
    },
    _spanLimits: {
      attributeValueLengthLimit: Infinity,
      attributeCountLimit: 128,
      linkCountLimit: 128,
      eventCountLimit: 128,
      attributePerEventCountLimit: 128,
      attributePerLinkCountLimit: 128
    },
    _attributeValueLengthLimit: Infinity,
    _spanProcessor: MultiSpanProcessor { _spanProcessors: [Array] }
  },
  Span {
    attributes: {
      'operation.name': 'ai.generateText.doGenerate',
      'ai.operationId': 'ai.generateText.doGenerate',
      'ai.model.provider': 'openai.chat',
      'ai.model.id': 'qwen2.5',
      'ai.settings.temperature': 0,
      'ai.settings.tool_choice': 'auto',
      'ai.settings.maxRetries': 2,
      'ai.prompt.format': 'messages',
      'ai.prompt.messages': '[{"role":"user","content":[{"type":"text","text":"What is the latest version of Elasticsearch 8?"}]}]',
      'ai.prompt.tools': [Array],
      'ai.prompt.toolChoice': '{"type":"auto"}',
      'gen_ai.system': 'openai.chat',
      'gen_ai.request.model': 'qwen2.5',
      'gen_ai.request.temperature': 0,
      'ai.response.finishReason': 'tool-calls',
      'ai.response.text': '',
      'ai.response.toolCalls': '[{"toolCallType":"function","toolCallId":"call_8k9le99q","toolName":"get_latest_elasticsearch_version","args":"{\\"majorVersion\\":8}"}]',
      'ai.response.id': 'chatcmpl-459',
      'ai.response.model': 'qwen2.5',
      'ai.response.timestamp': '2025-04-23T01:27:38.000Z',
      'ai.usage.promptTokens': 185,
      'ai.usage.completionTokens': 24,
      'gen_ai.response.finish_reasons': [Array],
      'gen_ai.response.id': 'chatcmpl-459',
      'gen_ai.response.model': 'qwen2.5',
      'gen_ai.usage.input_tokens': 185,
      'gen_ai.usage.output_tokens': 24
    },
    links: [],
    events: [],
    _droppedAttributesCount: 0,
    _droppedEventsCount: 0,
    _droppedLinksCount: 0,
    status: { code: 0 },
    endTime: [ 1745371658, 870031958 ],
    _ended: true,
    _duration: [ 1, 39031958 ],
    name: 'ai.generateText.doGenerate',
    _spanContext: {
      traceId: '1470598e56da6475cfb9691ae25e22f6',
      spanId: '20cd461ad528be2f',
      traceFlags: 1,
      traceState: undefined
    },
    parentSpanId: 'ab6bec3a78044b10',
    kind: 0,
    _performanceStartTime: 1432.268375,
    _performanceOffset: -0.4013671875,
    _startTimeProvided: false,
    startTime: [ 1745371657, 831000000 ],
    resource: Resource {
      _attributes: [Object],
      asyncAttributesPending: false,
      _syncAttributes: [Object],
      _asyncAttributesPromise: [Promise]
    },
    instrumentationLibrary: { name: 'ai', version: undefined, schemaUrl: undefined },
    _spanLimits: {
      attributeValueLengthLimit: Infinity,
      attributeCountLimit: 128,
      linkCountLimit: 128,
      eventCountLimit: 128,
      attributePerEventCountLimit: 128,
      attributePerLinkCountLimit: 128
    },
    _attributeValueLengthLimit: Infinity,
    _spanProcessor: MultiSpanProcessor { _spanProcessors: [Array] }
  },
  Span {
    attributes: {
      'operation.name': 'ai.toolCall',
      'ai.operationId': 'ai.toolCall',
      'ai.toolCall.name': 'get_latest_elasticsearch_version',
      'ai.toolCall.id': 'call_8k9le99q',
      'ai.toolCall.args': '{"majorVersion":8}',
      'ai.toolCall.result': '{"content":[{"type":"text","text":"8.18.0"}]}'
    },
    links: [],
    events: [],
    _droppedAttributesCount: 0,
    _droppedEventsCount: 0,
    _droppedLinksCount: 0,
    status: { code: 0 },
    endTime: [ 1745371659, 56501250 ],
    _ended: true,
    _duration: [ 0, 185501250 ],
    name: 'ai.toolCall',
    _spanContext: {
      traceId: '1470598e56da6475cfb9691ae25e22f6',
      spanId: 'c0d2bfb535d95117',
      traceFlags: 1,
      traceState: undefined
    },
    parentSpanId: 'ab6bec3a78044b10',
    kind: 0,
    _performanceStartTime: 2472.615375,
    _performanceOffset: -0.74853515625,
    _startTimeProvided: false,
    startTime: [ 1745371658, 871000000 ],
    resource: Resource {
      _attributes: [Object],
      asyncAttributesPending: false,
      _syncAttributes: [Object],
      _asyncAttributesPromise: [Promise]
    },
    instrumentationLibrary: { name: 'ai', version: undefined, schemaUrl: undefined },
    _spanLimits: {
      attributeValueLengthLimit: Infinity,
      attributeCountLimit: 128,
      linkCountLimit: 128,
      eventCountLimit: 128,
      attributePerEventCountLimit: 128,
      attributePerLinkCountLimit: 128
    },
    _attributeValueLengthLimit: Infinity,
    _spanProcessor: MultiSpanProcessor { _spanProcessors: [Array] }
  },
  Span {
    attributes: {
      'http.request.method': 'POST',
      'http.request.method_original': 'POST',
      'url.full': 'http://localhost:11434/v1/chat/completions',
      'url.path': '/v1/chat/completions',
      'url.query': '',
      'url.scheme': 'http',
      'server.address': 'localhost',
      'server.port': 11434,
      'user_agent.original': 'node',
      'network.peer.address': '127.0.0.1',
      'network.peer.port': 11434,
      'http.response.status_code': 200,
      'http.response.header.content-length': 332
    },
    links: [],
    events: [],
    _droppedAttributesCount: 0,
    _droppedEventsCount: 0,
    _droppedLinksCount: 0,
    status: { code: 0 },
    endTime: [ 1745371659, 905519792 ],
    _ended: true,
    _duration: [ 0, 847519792 ],
    name: 'POST',
    _spanContext: {
      traceId: '1470598e56da6475cfb9691ae25e22f6',
      spanId: '5f898f78963862d3',
      traceFlags: 1,
      traceState: undefined
    },
    parentSpanId: '531a50e0b0a3e65c',
    kind: 2,
    _performanceStartTime: 2659.513708,
    _performanceOffset: -0.646728515625,
    _startTimeProvided: false,
    startTime: [ 1745371659, 58000000 ],
    resource: Resource {
      _attributes: [Object],
      asyncAttributesPending: false,
      _syncAttributes: [Object],
      _asyncAttributesPromise: [Promise]
    },
    instrumentationLibrary: {
      name: '@opentelemetry/instrumentation-undici',
      version: '0.11.0',
      schemaUrl: undefined
    },
    _spanLimits: {
      attributeValueLengthLimit: Infinity,
      attributeCountLimit: 128,
      linkCountLimit: 128,
      eventCountLimit: 128,
      attributePerEventCountLimit: 128,
      attributePerLinkCountLimit: 128
    },
    _attributeValueLengthLimit: Infinity,
    _spanProcessor: MultiSpanProcessor { _spanProcessors: [Array] }
  },
  Span {
    attributes: {
      'operation.name': 'ai.generateText.doGenerate',
      'ai.operationId': 'ai.generateText.doGenerate',
      'ai.model.provider': 'openai.chat',
      'ai.model.id': 'qwen2.5',
      'ai.settings.temperature': 0,
      'ai.settings.tool_choice': 'auto',
      'ai.settings.maxRetries': 2,
      'ai.prompt.format': 'messages',
      'ai.prompt.messages': '[{"role":"user","content":[{"type":"text","text":"What is the latest version of Elasticsearch 8?"}]},{"role":"assistant","content":[{"type":"tool-call","toolCallId":"call_8k9le99q","toolName":"get_latest_elasticsearch_version","args":{"majorVersion":8}}]},{"role":"tool","content":[{"type":"tool-result","toolCallId":"call_8k9le99q","toolName":"get_latest_elasticsearch_version","result":{"content":[{"type":"text","text":"8.18.0"}]}}]}]',
      'ai.prompt.tools': [Array],
      'ai.prompt.toolChoice': '{"type":"auto"}',
      'gen_ai.system': 'openai.chat',
      'gen_ai.request.model': 'qwen2.5',
      'gen_ai.request.temperature': 0,
      'ai.response.finishReason': 'stop',
      'ai.response.text': 'The latest version of Elasticsearch 8 is 8.18.0.',
      'ai.response.id': 'chatcmpl-988',
      'ai.response.model': 'qwen2.5',
      'ai.response.timestamp': '2025-04-23T01:27:39.000Z',
      'ai.usage.promptTokens': 220,
      'ai.usage.completionTokens': 17,
      'gen_ai.response.finish_reasons': [Array],
      'gen_ai.response.id': 'chatcmpl-988',
      'gen_ai.response.model': 'qwen2.5',
      'gen_ai.usage.input_tokens': 220,
      'gen_ai.usage.output_tokens': 17
    },
    links: [],
    events: [],
    _droppedAttributesCount: 0,
    _droppedEventsCount: 0,
    _droppedLinksCount: 0,
    status: { code: 0 },
    endTime: [ 1745371659, 906256042 ],
    _ended: true,
    _duration: [ 0, 849256042 ],
    name: 'ai.generateText.doGenerate',
    _spanContext: {
      traceId: '1470598e56da6475cfb9691ae25e22f6',
      spanId: '531a50e0b0a3e65c',
      traceFlags: 1,
      traceState: undefined
    },
    parentSpanId: 'ab6bec3a78044b10',
    kind: 0,
    _performanceStartTime: 2658.837708,
    _performanceOffset: -0.970703125,
    _startTimeProvided: false,
    startTime: [ 1745371659, 57000000 ],
    resource: Resource {
      _attributes: [Object],
      asyncAttributesPending: false,
      _syncAttributes: [Object],
      _asyncAttributesPromise: [Promise]
    },
    instrumentationLibrary: { name: 'ai', version: undefined, schemaUrl: undefined },
    _spanLimits: {
      attributeValueLengthLimit: Infinity,
      attributeCountLimit: 128,
      linkCountLimit: 128,
      eventCountLimit: 128,
      attributePerEventCountLimit: 128,
      attributePerLinkCountLimit: 128
    },
    _attributeValueLengthLimit: Infinity,
    _spanProcessor: MultiSpanProcessor { _spanProcessors: [Array] }
  },
  Span {
    attributes: {
      'operation.name': 'ai.generateText',
      'ai.operationId': 'ai.generateText',
      'ai.model.provider': 'openai.chat',
      'ai.model.id': 'qwen2.5',
      'ai.settings.temperature': 0,
      'ai.settings.tool_choice': 'auto',
      'ai.settings.maxRetries': 2,
      'ai.prompt': '{"messages":[{"role":"user","content":"What is the latest version of Elasticsearch 8?"}]}',
      'ai.settings.maxSteps': 10,
      'ai.response.finishReason': 'stop',
      'ai.response.text': 'The latest version of Elasticsearch 8 is 8.18.0.',
      'ai.usage.promptTokens': 220,
      'ai.usage.completionTokens': 17
    },
    links: [],
    events: [],
    _droppedAttributesCount: 0,
    _droppedEventsCount: 0,
    _droppedLinksCount: 0,
    status: { code: 0 },
    endTime: [ 1745371659, 907026042 ],
    _ended: true,
    _duration: [ 2, 77026042 ],
    name: 'ai.generateText',
    _spanContext: {
      traceId: '1470598e56da6475cfb9691ae25e22f6',
      spanId: 'ab6bec3a78044b10',
      traceFlags: 1,
      traceState: undefined
    },
    parentSpanId: undefined,
    kind: 0,
    _performanceStartTime: 1431.324333,
    _performanceOffset: -0.457275390625,
    _startTimeProvided: false,
    startTime: [ 1745371657, 830000000 ],
    resource: Resource {
      _attributes: [Object],
      asyncAttributesPending: false,
      _syncAttributes: [Object],
      _asyncAttributesPromise: [Promise]
    },
    instrumentationLibrary: { name: 'ai', version: undefined, schemaUrl: undefined },
    _spanLimits: {
      attributeValueLengthLimit: Infinity,
      attributeCountLimit: 128,
      linkCountLimit: 128,
      eventCountLimit: 128,
      attributePerEventCountLimit: 128,
      attributePerLinkCountLimit: 128
    },
    _attributeValueLengthLimit: Infinity,
    _spanProcessor: MultiSpanProcessor { _spanProcessors: [Array] }
  }
]
@opentelemetry/instrumentation-http Applying instrumentation patch for nodejs core module on require hook { module: 'http' }
@opentelemetry/instrumentation-http Applying instrumentation patch for nodejs core module on require hook { module: 'https' }
Instrumentation suppressed, returning Noop Span
@opentelemetry/instrumentation-http http instrumentation outgoingRequest
@opentelemetry/instrumentation-http http.ClientRequest return request
OTLPExportDelegate shutdown started
@opentelemetry/instrumentation-http outgoingRequest on response()
@opentelemetry/instrumentation-http outgoingRequest on end()
@opentelemetry/instrumentation-http outgoingRequest on request close()
OTLPExportDelegate shutdown started

@codefromthecrypt
Contributor Author

For this I'll use Docker from now on (docker compose run --build --rm genai-function-calling -- --mcp); all commits have split traces from my view anyway. I'm using otel-tui version 0.4.10, and Docker controls npm etc. at this point.

Details

ec03ebf
Screenshot 2025-04-23 at 9 52 36 AM

3de0d70 Screenshot 2025-04-23 at 9 49 01 AM

a143b7a Screenshot 2025-04-23 at 9 54 37 AM

a5719d7 Screenshot 2025-04-23 at 9 57 22 AM

@codefromthecrypt
Contributor Author

Update: this seems to be more about the code path Vercel AI uses for MCP being different from the tested paths. @anuraaga will chase it up tomorrow.

Signed-off-by: Adrian Cole <[email protected]>
codefromthecrypt marked this pull request as ready for review April 25, 2025 03:50
@codefromthecrypt
Contributor Author

ok all good with the tiniest diff!

Signed-off-by: Adrian Cole <[email protected]>
codefromthecrypt merged commit 8dcd0e2 into main Apr 25, 2025
2 checks passed
codefromthecrypt deleted the vercel-ai-mcp-propagation branch April 25, 2025 05:17