Commit e1abea3

Merge branch 'main' into UpdateGkeAgentDeployPreReqsDocumentation
2 parents 15b18f1 + 48a4abf commit e1abea3

File tree: 6 files changed, +289 −110 lines

docs/agents/models.md

Lines changed: 70 additions & 41 deletions
````diff
@@ -28,17 +28,18 @@ The following sections guide you through using these methods based on your needs
 
 ## Using Google Gemini Models
 
-This is the most direct way to use Google's flagship models within ADK.
+This section covers authenticating with Google's Gemini models, either through Google AI Studio for rapid development or Google Cloud Vertex AI for enterprise applications. This is the most direct way to use Google's flagship models within ADK.
+
+**Integration Method:** Once you are authenticated using one of the below methods, you can pass the model's identifier string directly to the
+`model` parameter of `LlmAgent`.
 
-**Integration Method:** Pass the model's identifier string directly to the
-`model` parameter of `LlmAgent` (or its alias, `Agent`).
 
-**Backend Options & Setup:**
+!!!tip
 
-The `google-genai` library, used internally by ADK for Gemini, can connect
-through either Google AI Studio or Vertex AI.
+    The `google-genai` library, used internally by ADK for Gemini models, can connect
+    through either Google AI Studio or Vertex AI.
 
-!!!note "Model support for voice/video streaming"
+**Model support for voice/video streaming**
 
 In order to use voice/video streaming in ADK, you will need to use Gemini
 models that support the Live API. You can find the **model ID(s)** that
````
````diff
@@ -49,51 +50,76 @@ through either Google AI Studio or Vertex AI.
 
 ### Google AI Studio
 
-* **Use Case:** Google AI Studio is the easiest way to get started with Gemini.
-  All you need is the [API key](https://aistudio.google.com/app/apikey). Best
-  for rapid prototyping and development.
-* **Setup:** Typically requires an API key:
-    * Set as an environment variable or
-    * Passed during the model initialization via the `Client` (see example below)
+This is the simplest method and is recommended for getting started quickly.
 
-    ```shell
-    export GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"
-    export GOOGLE_GENAI_USE_VERTEXAI=FALSE
-    ```
+* **Authentication Method:** API Key
+* **Setup:**
+    1. **Get an API key:** Obtain your key from [Google AI Studio](https://aistudio.google.com/apikey).
+    2. **Set environment variables:** Create a `.env` file (Python) or `.properties` (Java) in your project's root directory and add the following lines. ADK will automatically load this file.
+
+        ```shell
+        export GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"
+        export GOOGLE_GENAI_USE_VERTEXAI=FALSE
+        ```
+
+        (or)
+
+        Pass these variables during the model initialization via the `Client` (see example below).
 
 * **Models:** Find all available models on the
   [Google AI for Developers site](https://ai.google.dev/gemini-api/docs/models).
 
-### Vertex AI
+### Google Cloud Vertex AI
 
-* **Use Case:** Recommended for production applications, leveraging Google Cloud
-  infrastructure. Gemini on Vertex AI supports enterprise-grade features,
-  security, and compliance controls.
-* **Setup:**
-    * Authenticate using Application Default Credentials (ADC):
+For scalable and production-oriented use cases, Vertex AI is the recommended platform. Gemini on Vertex AI supports enterprise-grade features, security, and compliance controls. Based on your development environment and use case, *choose one of the below methods to authenticate*.
 
-        ```shell
-        gcloud auth application-default login
-        ```
+**Pre-requisites:** A Google Cloud Project with [Vertex AI enabled](https://console.cloud.google.com/apis/enableflow;apiid=aiplatform.googleapis.com).
 
-    * Configure these variables either as environment variables or by providing them directly when initializing the Model.
-
-      Set your Google Cloud project and location:
-
-        ```shell
-        export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
-        export GOOGLE_CLOUD_LOCATION="YOUR_VERTEX_AI_LOCATION" # e.g., us-central1
-        ```
-
-      Explicitly tell the library to use Vertex AI:
+### **Method A: User Credentials (for Local Development)**
+
+1. **Install the gcloud CLI:** Follow the official [installation instructions](https://cloud.google.com/sdk/docs/install).
+2. **Log in using ADC:** This command opens a browser to authenticate your user account for local development.
+    ```bash
+    gcloud auth application-default login
+    ```
+3. **Set environment variables:**
+    ```shell
+    export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
+    export GOOGLE_CLOUD_LOCATION="YOUR_VERTEX_AI_LOCATION" # e.g., us-central1
+    ```
 
-        ```shell
-        export GOOGLE_GENAI_USE_VERTEXAI=TRUE
-        ```
+    Explicitly tell the library to use Vertex AI:
 
-* **Models:** Find available model IDs in the
+    ```shell
+    export GOOGLE_GENAI_USE_VERTEXAI=TRUE
+    ```
+
+4. **Models:** Find available model IDs in the
   [Vertex AI documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models).
 
+### **Method B: Vertex AI Express Mode**
+[Vertex AI Express Mode](https://cloud.google.com/vertex-ai/generative-ai/docs/start/express-mode/overview) offers a simplified, API-key-based setup for rapid prototyping.
+
+1. **Sign up for Express Mode** to get your API key.
+2. **Set environment variables:**
+    ```shell
+    export GOOGLE_API_KEY="PASTE_YOUR_EXPRESS_MODE_API_KEY_HERE"
+    export GOOGLE_GENAI_USE_VERTEXAI=TRUE
+    ```
+
+### **Method C: Service Account (for Production & Automation)**
+
+For deployed applications, a service account is the standard method.
+
+1. [**Create a Service Account**](https://cloud.google.com/iam/docs/service-accounts-create#console) and grant it the `Vertex AI User` role.
+2. **Provide credentials to your application:**
+    * **On Google Cloud:** If you are running the agent on Cloud Run, GKE, a VM, or another Google Cloud service, the environment can automatically provide the service account credentials; you don't have to create a key file.
+    * **Elsewhere:** Create a [service account key file](https://cloud.google.com/iam/docs/keys-create-delete#console) and point to it with an environment variable:
+        ```bash
+        export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/keyfile.json"
+        ```
+    Instead of a key file, you can also authenticate the service account using Workload Identity, but that is outside the scope of this guide.
+
 **Example:**
 
 === "Python"
````
````diff
@@ -157,6 +183,9 @@ export GOOGLE_GENAI_USE_VERTEXAI=FALSE
     // different availability or quota limitations.
     ```
 
+!!!warning "Secure Your Credentials"
+    Service account credentials and API keys are powerful secrets. Never expose them publicly. Use a secret manager like [Google Secret Manager](https://cloud.google.com/secret-manager) to store and access them securely in production.
+
 ## Using Anthropic models
 
 ![java_only](https://img.shields.io/badge/Supported_in-Java-orange){ title="This feature is currently available for Java. Python support for direct Anthropic API (non-Vertex) is via LiteLLM."}
````
````diff
@@ -753,4 +782,4 @@ Vertex AI.
 }
 }
 }
-```
+```
````
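As a quick illustration of the environment-variable convention this diff documents, here is a small hypothetical helper (not part of ADK or `google-genai`) that mirrors the documented backend-selection rules: AI Studio needs `GOOGLE_API_KEY`, Vertex AI needs either an Express Mode API key or a project plus location.

```python
def resolve_backend(env: dict) -> str:
    """Sketch: decide which Gemini backend the documented env vars point at."""
    use_vertex = env.get("GOOGLE_GENAI_USE_VERTEXAI", "FALSE").upper() == "TRUE"
    if not use_vertex:
        # Google AI Studio path: only the API key is required.
        if not env.get("GOOGLE_API_KEY"):
            raise ValueError("Google AI Studio requires GOOGLE_API_KEY")
        return "ai-studio"
    # Vertex AI path: Express Mode (API key) or project + location (ADC / service account).
    if env.get("GOOGLE_API_KEY"):
        return "vertex-express"
    if env.get("GOOGLE_CLOUD_PROJECT") and env.get("GOOGLE_CLOUD_LOCATION"):
        return "vertex"
    raise ValueError(
        "Vertex AI requires GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_LOCATION "
        "(or an Express Mode API key)"
    )
```

This is only a sketch of the convention; the `google-genai` client reads these variables itself, so no such helper is needed in real code.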

docs/deploy/agent-engine.md

Lines changed: 10 additions & 4 deletions
````diff
@@ -26,7 +26,7 @@ Agent Engine is part of the Vertex AI SDK for Python. For more information, you
 ### Install the Vertex AI SDK
 
 ```shell
-pip install google-cloud-aiplatform[adk,agent_engines]
+pip install "google-cloud-aiplatform[adk,agent_engines]" cloudpickle
 ```
 
 !!!info
````
````diff
@@ -148,15 +148,19 @@ remote_app = agent_engines.create(
 )
 ```
 
-This step may take several minutes to finish. Each deployed agent has a unique identifier. You can run the following command to get the resource_name identifier for your deployed agent:
+This step may take several minutes to finish.
+
+You can check and monitor the deployment of your ADK agent on the [Agent Engine UI](https://console.cloud.google.com/vertex-ai/agents/agent-engines) on Google Cloud.
+
+Each deployed agent has a unique identifier. You can run the following command to get the resource_name identifier for your deployed agent:
 
 ```python
 remote_app.resource_name
 ```
 
 The response should look like the following string:
 
-```
+```shell
 f"projects/{PROJECT_NUMBER}/locations/{LOCATION}/reasoningEngines/{RESOURCE_ID}"
 ```
 
````
````diff
@@ -218,7 +222,7 @@ Expected output for `stream_query` (remote):
 {'parts': [{'text': 'The weather in New York is sunny with a temperature of 25 degrees Celsius (41 degrees Fahrenheit).'}], 'role': 'model'}
 ```
 
-
+## Using the Agent Engine UI
 
 ## Clean up
 
````
````diff
@@ -231,3 +235,5 @@ remote_app.delete(force=True)
 ```
 
 `force=True` will also delete any child resources that were generated from the deployed agent, such as sessions.
+
+You can also delete your deployed agent via the [Agent Engine UI](https://console.cloud.google.com/vertex-ai/agents/agent-engines) on Google Cloud.
````

docs/get-started/quickstart.md

Lines changed: 20 additions & 12 deletions
````diff
@@ -133,10 +133,14 @@ application entirely on your machine and is recommended for internal development
 
 Your agent's ability to understand user requests and generate responses is
 powered by a Large Language Model (LLM). Your agent needs to make secure calls
-to this external LLM service, which requires authentication credentials. Without
+to this external LLM service, which **requires authentication credentials**. Without
 valid authentication, the LLM service will deny the agent's requests, and the
 agent will be unable to function.
 
+!!!tip "Model Authentication guide"
+    For a detailed guide on authenticating to different models, see the [Authentication guide](../agents/models.md#google-ai-studio).
+    This is a critical step to ensure your agent can make calls to the LLM service.
+
 === "Gemini - Google AI Studio"
     1. Get an API key from [Google AI Studio](https://aistudio.google.com/apikey).
     2. When using Python, open the **`.env`** file located inside (`multi_tool_agent/`)
````
````diff
@@ -157,17 +161,10 @@ agent will be unable to function.
     3. Replace `PASTE_YOUR_ACTUAL_API_KEY_HERE` with your actual `API KEY`.
 
 === "Gemini - Google Cloud Vertex AI"
-    1. You need an existing
-       [Google Cloud](https://cloud.google.com/?e=48754805&hl=en) account and a
-       project.
-        * Set up a
-          [Google Cloud project](https://cloud.google.com/vertex-ai/generative-ai/docs/start/quickstarts/quickstart-multimodal#setup-gcp)
-        * Set up the
-          [gcloud CLI](https://cloud.google.com/vertex-ai/generative-ai/docs/start/quickstarts/quickstart-multimodal#setup-local)
-        * Authenticate to Google Cloud, from the terminal by running
-          `gcloud auth login`.
-        * [Enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).
-    2. When using Python, open the **`.env`** file located inside (`multi_tool_agent/`). Copy-paste
+    1. Set up a [Google Cloud project](https://cloud.google.com/vertex-ai/generative-ai/docs/start/quickstarts/quickstart-multimodal#setup-gcp) and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).
+    2. Set up the [gcloud CLI](https://cloud.google.com/vertex-ai/generative-ai/docs/start/quickstarts/quickstart-multimodal#setup-local).
+    3. Authenticate to Google Cloud from the terminal by running `gcloud auth login`.
+    4. When using Python, open the **`.env`** file located inside (`multi_tool_agent/`). Copy-paste
        the following code and update the project ID and location.
 
     ```env title="multi_tool_agent/.env"
````
````diff
@@ -222,6 +219,17 @@ agent will be unable to function.
 There are multiple ways to interact with your agent:
 
 === "Dev UI (adk web)"
+
+    !!! success "Authentication Setup for Vertex AI Users"
+        If you selected **"Gemini - Google Cloud Vertex AI"** in the previous step, you must authenticate with Google Cloud before launching the dev UI.
+
+        Run this command and follow the prompts:
+        ```bash
+        gcloud auth application-default login
+        ```
+
+        **Note:** Skip this step if you're using "Gemini - Google AI Studio".
+
     Run the following command to launch the **dev UI**.
 
     ```shell
````
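The quickstart's `.env` contents are truncated in this diff, but the variable names it relies on are documented in the models.md changes above. As a hedged sketch, the helpers below (hypothetical, not part of ADK, which loads `.env` itself) parse simple `KEY=VALUE` lines and report which Vertex AI variables are still missing:

```python
def parse_env(text: str) -> dict:
    """Sketch: parse simple KEY=VALUE lines, ignoring blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

def missing_vertex_vars(env: dict) -> list:
    """Return the Vertex AI variables the quickstart expects but .env lacks."""
    required = ["GOOGLE_GENAI_USE_VERTEXAI",
                "GOOGLE_CLOUD_PROJECT",
                "GOOGLE_CLOUD_LOCATION"]
    return [k for k in required if not env.get(k)]
```

Running `missing_vertex_vars(parse_env(open("multi_tool_agent/.env").read()))` before `adk web` is one way to catch a half-filled `.env` early.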

docs/observability/weave.md

Lines changed: 127 additions & 0 deletions
`@@ -0,0 +1,127 @@` — new file; all of the following content is added:

# Agent Observability with Weave by WandB

[Weave by Weights & Biases (WandB)](https://weave-docs.wandb.ai/) provides a powerful platform for logging and visualizing model calls. By integrating Google ADK with Weave, you can track and analyze your agent's performance and behavior using OpenTelemetry (OTEL) traces.

## Prerequisites

1. Sign up for an account at [WandB](https://wandb.ai).

2. Obtain your API key from [WandB Authorize](https://wandb.ai/authorize).

3. Configure your environment with the required API keys:

    ```bash
    export WANDB_API_KEY=<your-wandb-api-key>
    export GOOGLE_API_KEY=<your-google-api-key>
    ```

## Install Dependencies

Ensure you have the necessary packages installed:

```bash
pip install google-adk opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
```

## Sending Traces to Weave

This example demonstrates how to configure OpenTelemetry to send Google ADK traces to Weave.

```python
# math_agent/agent.py

import base64
import os
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry import trace

from google.adk.agents import LlmAgent
from google.adk.tools import FunctionTool

from dotenv import load_dotenv

load_dotenv()

# Configure Weave endpoint and authentication
WANDB_BASE_URL = "https://trace.wandb.ai"
PROJECT_ID = "your-entity/your-project"  # e.g., "teamid/projectid"
OTEL_EXPORTER_OTLP_ENDPOINT = f"{WANDB_BASE_URL}/otel/v1/traces"

# Set up authentication
WANDB_API_KEY = os.getenv("WANDB_API_KEY")
AUTH = base64.b64encode(f"api:{WANDB_API_KEY}".encode()).decode()

OTEL_EXPORTER_OTLP_HEADERS = {
    "Authorization": f"Basic {AUTH}",
    "project_id": PROJECT_ID,
}

# Create the OTLP span exporter with endpoint and headers
exporter = OTLPSpanExporter(
    endpoint=OTEL_EXPORTER_OTLP_ENDPOINT,
    headers=OTEL_EXPORTER_OTLP_HEADERS,
)

# Create a tracer provider and add the exporter
tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(exporter))

# Set the global tracer provider BEFORE importing/using ADK
trace.set_tracer_provider(tracer_provider)

# Define a simple tool for demonstration
def calculator(a: float, b: float) -> str:
    """Add two numbers and return the result.

    Args:
        a: First number
        b: Second number

    Returns:
        The sum of a and b
    """
    return str(a + b)

calculator_tool = FunctionTool(func=calculator)

# Create an LLM agent
root_agent = LlmAgent(
    name="MathAgent",
    model="gemini-2.0-flash-exp",
    instruction=(
        "You are a helpful assistant that can do math. "
        "When asked a math problem, use the calculator tool to solve it."
    ),
    tools=[calculator_tool],
)
```

## View Traces in Weave dashboard

Once the agent runs, all its traces are logged to the corresponding project on [the Weave dashboard](https://wandb.ai/home).

![Traces in Weave](https://wandb.github.io/weave-public-assets/google-adk/traces-overview.png)

You can view a timeline of calls that your ADK agent made during execution:

![Timeline view](https://wandb.github.io/weave-public-assets/google-adk/adk-weave-timeline.gif)

## Notes

- **Environment Variables**: Ensure your environment variables are correctly set for both the WandB and Google API keys.
- **Project Configuration**: Replace `your-entity/your-project` with your actual WandB entity and project name.
- **Entity Name**: You can find your entity name by visiting your [WandB dashboard](https://wandb.ai/home) and checking the **Teams** field in the left sidebar.
- **Tracer Provider**: It's critical to set the global tracer provider before using any ADK components to ensure proper tracing.

By following these steps, you can integrate Google ADK with Weave, enabling comprehensive logging and visualization of your AI agents' model calls, tool invocations, and reasoning processes.

## Resources

- **[Send OpenTelemetry Traces to Weave](https://weave-docs.wandb.ai/guides/tracking/otel)** - Comprehensive guide on configuring OTEL with Weave, including authentication and advanced configuration options.
- **[Navigate the Trace View](https://weave-docs.wandb.ai/guides/tracking/trace-tree)** - Learn how to analyze and debug your traces in the Weave UI, including trace hierarchies and span details.
- **[Weave Integrations](https://weave-docs.wandb.ai/guides/integrations/)** - Explore other framework integrations and see how Weave can work with your entire AI stack.
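The authentication scheme in the weave.md example above is plain HTTP Basic auth: the literal user `api` and the WandB API key, joined with a colon and base64-encoded. A small self-contained sketch of just that header construction (the function name is mine, not a Weave or OTEL API):

```python
import base64

def wandb_otlp_headers(api_key: str, project_id: str) -> dict:
    """Sketch: build the OTLP headers the weave.md example sends to trace.wandb.ai."""
    # Basic auth credential is "api:<key>", base64-encoded per RFC 7617.
    auth = base64.b64encode(f"api:{api_key}".encode()).decode()
    return {"Authorization": f"Basic {auth}", "project_id": project_id}
```

Decoding the header back should always recover `api:<key>`, which is a quick way to sanity-check the value if Weave rejects your traces with an auth error.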
