12 changes: 6 additions & 6 deletions genai-function-calling/README.md
@@ -45,14 +45,14 @@ sequenceDiagram
activate Agent
Note over Agent: invokes get_latest_elasticsearch_version(majorVersion=8)

-Agent -->> LLM: [user, assistant, tool: "8.17.4"]
+Agent -->> LLM: [user, assistant, tool: "8.18.0"]
Note over Agent: LLM is stateless, the tool result is sent back with prior messages
deactivate Agent
activate LLM

-LLM ->> Agent: content: "The latest version of Elasticsearch 8 is 8.17.4"
+LLM ->> Agent: content: "The latest version of Elasticsearch 8 is 8.18.0"
deactivate LLM
-Note over Agent: "The latest version of Elasticsearch 8 is 8.17.4"
+Note over Agent: "The latest version of Elasticsearch 8 is 8.18.0"
```

The GenAI framework not only abstracts the above loop, but also LLM plugability
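For orientation, here is a minimal sketch of the loop these diagrams describe, written directly against the OpenAI Python client rather than any of the frameworks in this repo. The tool name, argument and model name come from the diagram and the recorded cassette below; the stub body and the prompt are illustrative only.

```python
# Minimal sketch of the function-calling loop, assuming the OpenAI Python client.
# The tool body is a stand-in; the real sample queries Elastic's artifacts API.
import json

from openai import OpenAI

client = OpenAI()


def get_latest_elasticsearch_version(majorVersion: int) -> str:
    return "8.18.0"  # placeholder result


tools = [{
    "type": "function",
    "function": {
        "name": "get_latest_elasticsearch_version",
        "description": "Returns the latest released version of Elasticsearch",
        "parameters": {
            "type": "object",
            "properties": {"majorVersion": {"type": "integer"}},
            "required": ["majorVersion"],
        },
    },
}]

messages = [{"role": "user", "content": "What is the latest version of Elasticsearch 8?"}]
while True:
    response = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=tools
    )
    message = response.choices[0].message
    messages.append(message)  # the LLM is stateless, so keep the full history
    if not message.tool_calls:
        print(message.content)
        break
    for call in message.tool_calls:
        args = json.loads(call.function.arguments)
        result = get_latest_elasticsearch_version(**args)
        # the tool result is sent back with the prior messages on the next turn
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
```

Each framework in this directory hides this loop behind its own agent abstraction.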
@@ -152,14 +152,14 @@ sequenceDiagram
activate Agent
Note over Agent: invokes get_latest_elasticsearch_version(majorVersion=8)

-Agent -->> LLM: [user, assistant, tool: "8.17.4"]
+Agent -->> LLM: [user, assistant, tool: "8.18.0"]
Note over Agent: LLM is stateless, the tool result is sent back with prior messages
deactivate Agent
activate LLM

-LLM ->> Agent: content: "The latest version of Elasticsearch 8 is 8.17.4"
+LLM ->> Agent: content: "The latest version of Elasticsearch 8 is 8.18.0"
deactivate LLM
-Note over Agent: "The latest version of Elasticsearch 8 is 8.17.4"
+Note over Agent: "The latest version of Elasticsearch 8 is 8.18.0"

Agent ->> MCP: Close stdin
activate MCP
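The MCP variant of the diagram replaces the in-process tool with a server spoken to over stdio, and the agent closes the server's stdin when it is done. A rough sketch of that handshake, assuming the official `mcp` Python SDK; the server command, file name and tool arguments here are placeholders, not this repo's actual setup.

```python
# Rough sketch of an MCP stdio client, assuming the official `mcp` Python SDK.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # placeholder server command; the real sample has its own MCP server
    server = StdioServerParameters(command="python", args=["mcp_server.py"])
    # stdio_client starts the server subprocess and closes its stdin on exit,
    # which is the "Close stdin" step in the diagram
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "get_latest_elasticsearch_version", {"majorVersion": 8}
            )
            print(result.content)


asyncio.run(main())
```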
9 changes: 2 additions & 7 deletions genai-function-calling/openai-agents/README.md
@@ -14,12 +14,7 @@ Agents support is via [OpenInference][openinference].
Copy [env.example](env.example) to `.env` and update its `OPENAI_API_KEY`.

An OTLP compatible endpoint should be listening for traces, metrics and logs on
-`http://localhost:4317`. If not, update `OTEL_EXPORTER_OTLP_ENDPOINT` as well.
-
-For example, if Elastic APM server is running locally, edit `.env` like this:
-```
-OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:8200
-```
+`http://localhost:4318`. If not, update `OTEL_EXPORTER_OTLP_ENDPOINT` as well.

## Run with Docker

@@ -92,7 +87,7 @@ dotenv -f ../.env run -- pytest
## Notes

The LLM should generate something like "The latest stable version of
-Elasticsearch is 8.17.4", unless it hallucinates. Just run it again, if you
+Elasticsearch is 8.18.0", unless it hallucinates. Just run it again, if you
see something else.

OpenAI Agents SDK's OpenTelemetry instrumentation is via
@@ -834,9 +834,9 @@ interactions:
"manifest": "https://artifacts.elastic.co/downloads/8.17.2.json"
},
{
"version": "8.17.3",
"version": "8.18.0",
"public_release_date": "2025-03-04",
"manifest": "https://artifacts.elastic.co/downloads/8.17.3.json"
"manifest": "https://artifacts.elastic.co/downloads/8.18.0.json"
},
{
"version": "8.2.0",
@@ -1007,7 +1007,7 @@
{
"role": "tool",
"tool_call_id": "call_pT4CJ0D2kmnTP5WoVhv2edrt",
"content": "8.17.3"
"content": "8.18.0"
}
],
"model": "gpt-4o-mini",
@@ -1083,7 +1083,7 @@
"index": 0,
"message": {
"role": "assistant",
"content": "The latest version of Elasticsearch 8 is 8.17.3.",
"content": "The latest version of Elasticsearch 8 is 8.18.0.",
"refusal": null,
"annotations": []
},
9 changes: 5 additions & 4 deletions genai-function-calling/openai-agents/env.example
@@ -27,10 +27,11 @@ OPENAI_API_KEY=

OTEL_SERVICE_NAME=genai-function-calling

-# OTEL_EXPORTER_* variables are not required. If you would like to change your
-# OTLP endpoint to Elastic APM server using HTTP, uncomment the following:
-# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:8200
-# OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
+# Default to send logs, traces and metrics to an OpenTelemetry collector,
+# accessible via localhost. For example, Elastic Distribution of OpenTelemetry
+# (EDOT) Collector.
+OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
+OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf

# Change to 'false' to hide prompt and completion content
OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true
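The samples rely on auto-instrumentation to pick these variables up, but a standalone smoke test can confirm a collector is actually listening on the new default. A sketch assuming the `opentelemetry-sdk` and `opentelemetry-exporter-otlp-proto-http` packages; with `OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf` the exporter appends `/v1/traces` to the base endpoint.

```python
# Standalone check that the OTLP/HTTP endpoint from .env is reachable,
# assuming opentelemetry-sdk and opentelemetry-exporter-otlp-proto-http.
import os

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

os.environ.setdefault("OTEL_EXPORTER_OTLP_ENDPOINT", "http://localhost:4318")

provider = TracerProvider()
# OTLPSpanExporter reads OTEL_EXPORTER_OTLP_ENDPOINT and posts to /v1/traces
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)

with trace.get_tracer("smoke-test").start_as_current_span("otlp-check"):
    pass

provider.shutdown()  # flush the span before exiting
```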
2 changes: 1 addition & 1 deletion genai-function-calling/openai-agents/main_test.py
@@ -14,4 +14,4 @@ async def test_main(default_openai_env, capsys):

reply = capsys.readouterr().out.strip()

-assert reply == "The latest version of Elasticsearch 8 is 8.17.3."
+assert reply == "The latest version of Elasticsearch 8 is 8.18.0."
9 changes: 2 additions & 7 deletions genai-function-calling/semantic-kernel-dotnet/README.md
@@ -11,12 +11,7 @@ of OpenTelemetry (EDOT) .NET, by prepending its command with `instrument.sh`.
Copy [env.example](env.example) to `.env` and update its `OPENAI_API_KEY`.

An OTLP compatible endpoint should be listening for traces, metrics and logs on
-`http://localhost:4317`. If not, update `OTEL_EXPORTER_OTLP_ENDPOINT` as well.
-
-For example, if Elastic APM server is running locally, edit `.env` like this:
-```
-OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:8200
-```
+`http://localhost:4318`. If not, update `OTEL_EXPORTER_OTLP_ENDPOINT` as well.

## Run with Docker

@@ -27,7 +22,7 @@ docker compose run --build --rm genai-function-calling
## Notes

The LLM should generate something like "The latest stable version of
-Elasticsearch is 8.17.4", unless it hallucinates. Just run it again, if you
+Elasticsearch is 8.18.0", unless it hallucinates. Just run it again, if you
see something else.

Semantic Kernel .NET's OpenTelemetry instrumentation uses the following custom
9 changes: 5 additions & 4 deletions genai-function-calling/semantic-kernel-dotnet/env.example
@@ -30,9 +30,10 @@ SEMANTICKERNEL_EXPERIMENTAL_GENAI_ENABLE_OTEL_DIAGNOSTICS=true
SEMANTICKERNEL_EXPERIMENTAL_GENAI_ENABLE_OTEL_DIAGNOSTICS_SENSITIVE=true
OTEL_DOTNET_AUTO_TRACES_ADDITIONAL_SOURCES="Microsoft.SemanticKernel*"

-# OTEL_EXPORTER_* variables are not required. If you would like to change your
-# OTLP endpoint to Elastic APM server using HTTP, uncomment the following:
-# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:8200
-# OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
+# Default to send logs, traces and metrics to an OpenTelemetry collector,
+# accessible via localhost. For example, Elastic Distribution of OpenTelemetry
+# (EDOT) Collector.
+OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
+OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf

OTEL_SERVICE_NAME=genai-function-calling
7 changes: 1 addition & 6 deletions genai-function-calling/spring-ai/README.md
@@ -14,12 +14,7 @@ of Spring AI.
Copy [env.example](env.example) to `.env` and update its `OPENAI_API_KEY`.

An OTLP compatible endpoint should be listening for traces, metrics and logs on
-`http://localhost:4317`. If not, update `OTEL_EXPORTER_OTLP_ENDPOINT` as well.
-
-For example, if Elastic APM server is running locally, edit `.env` like this:
-```
-OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:8200
-```
+`http://localhost:4318`. If not, update `OTEL_EXPORTER_OTLP_ENDPOINT` as well.

## Run with Docker

9 changes: 5 additions & 4 deletions genai-function-calling/spring-ai/env.example
@@ -35,9 +35,10 @@ SPRING_AUTOCONFIGURE_EXCLUDE=org.springframework.boot.actuate.autoconfigure.obse
# SPRING_AUTOCONFIGURE_EXCLUDE=org.springframework.ai.autoconfigure.openai.OpenAiAutoConfiguration
OTEL_INSTRUMENTATION_MICROMETER_ENABLED=true

-# OTEL_EXPORTER_* variables are not required. If you would like to change your
-# OTLP endpoint to Elastic APM server using HTTP, uncomment the following:
-# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:8200
-# OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
+# Default to send logs, traces and metrics to an OpenTelemetry collector,
+# accessible via localhost. For example, Elastic Distribution of OpenTelemetry
+# (EDOT) Collector.
+OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
+OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf

OTEL_SERVICE_NAME=genai-function-calling
7 changes: 1 addition & 6 deletions genai-function-calling/vercel-ai/README.md
@@ -11,12 +11,7 @@ of OpenTelemetry (EDOT) Node.js, by requiring `@elastic/opentelemetry-node`
Copy [env.example](env.example) to `.env` and update its `OPENAI_API_KEY`.

An OTLP compatible endpoint should be listening for traces, metrics and logs on
-`http://localhost:4317`. If not, update `OTEL_EXPORTER_OTLP_ENDPOINT` as well.
-
-For example, if Elastic APM server is running locally, edit `.env` like this:
-```
-OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:8200
-```
+`http://localhost:4318`. If not, update `OTEL_EXPORTER_OTLP_ENDPOINT` as well.

## Run with Docker

9 changes: 5 additions & 4 deletions genai-function-calling/vercel-ai/env.example
@@ -23,10 +23,11 @@ OPENAI_API_KEY=
## "Name" from https://oai.azure.com/resource/deployments
# CHAT_MODEL=YOUR_DEPLOYMENT_NAME

-# OTEL_EXPORTER_* variables are not required. If you would like to change your
-# OTLP endpoint to Elastic APM server using HTTP, uncomment the following:
-# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:8200
-# OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
+# Default to send logs, traces and metrics to an OpenTelemetry collector,
+# accessible via localhost. For example, Elastic Distribution of OpenTelemetry
+# (EDOT) Collector.
+OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
+OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf

OTEL_SERVICE_NAME=genai-function-calling
# Don't print status message on startup