
Commit 759d258

docs(java): add guide for using local models with LangChain4j
This commit introduces a new section to the agents/models.md documentation, detailing how to integrate the Java ADK with open-source and local models. The guide focuses on leveraging the LangChain4j integration, specifically with the Docker Model Runner. It provides a step-by-step walkthrough that includes:

- Choosing and pulling a compatible model from a registry.
- The necessary Maven dependencies for the ADK wrapper and LangChain4j.
- A complete code example for creating an agent backed by a local model.
- Actionable tips for debugging the integration.
1 parent 32b0be6 commit 759d258

docs/agents/models.md

Lines changed: 89 additions & 3 deletions
@@ -1,8 +1,5 @@
# Using Different Models with ADK

-!!! Note
-    Java ADK currently supports Gemini and Anthropic models. More model support coming soon.

The Agent Development Kit (ADK) is designed for flexibility, allowing you to
integrate various Large Language Models (LLMs) into your agents. While the setup
for Google Gemini models is covered in the
@@ -471,6 +468,95 @@ http://localhost:11434/api/chat \
```shell
  -d '{"model": "mistral-small3.1", "messages": [{"role": "system", "content": ...
```
## Using Open and Local Models via LangChain4j

![java_only](https://img.shields.io/badge/Supported_in-Java-orange){ title="This feature is currently available for Java."}

For Java developers, ADK provides an integration with [LangChain4j](https://github.com/langchain4j/langchain4j), which offers a streamlined way to work with a [variety of model providers](https://docs.langchain4j.dev/integrations/language-models/), including models you can serve locally.

**Integration Method:** Instantiate the `LangChain4j` wrapper class, configured with a model object from the LangChain4j project.
### Example based on Docker Model Runner

[Docker Model Runner](https://docs.docker.com/ai/model-runner/) lets you easily run open-source models locally. You can [enable it in Docker Desktop or in a Docker CE environment](https://docs.docker.com/ai/model-runner/#enable-docker-model-runner) and expose it on the host machine via a TCP port. The default port is 12434, which we use in the examples below.
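Once enabled, you can sanity-check the setup before writing any Java. A quick check, assuming the default TCP port above (`/v1/models` is the standard model-listing route of an OpenAI-compatible API):

```shell
# Confirm that Docker Model Runner is enabled and running
docker model status

# List the models available on the OpenAI-compatible endpoint
curl http://localhost:12434/engines/llama.cpp/v1/models
```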
#### Model Choice

When using LangChain4j, you can choose any model provider and model it supports. For agents that require tool-use capabilities, it is essential to select a model that has been fine-tuned for function calling. You can obtain models from any OCI registry, for example Docker Hub: [https://hub.docker.com/u/ai](https://hub.docker.com/u/ai).

```shell
docker model pull $model_name
```
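For example, to pull the Qwen3 model used in the snippets below:

```shell
docker model pull ai/qwen3:8B-Q4_0
```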
#### Using the LangChain4j wrapper

To connect your agent to a model served via LangChain4j, use the `com.google.adk.models.langchain4j.LangChain4j` class, configured with a `ChatModel` instance from the LangChain4j library. Each provider requires its own LangChain4j dependency; since Docker Model Runner exposes an OpenAI-compatible endpoint, add the LangChain4j OpenAI dependency alongside the LangChain4j wrapper dependency from adk-java:
```xml
<dependency>
    <groupId>com.google.adk</groupId>
    <artifactId>google-adk-contrib-langchain4j</artifactId>
    <version>${adk-java.version}</version>
</dependency>
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-open-ai</artifactId>
    <version>${langchain4j.version}</version>
</dependency>
```
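Here `${adk-java.version}` and `${langchain4j.version}` are Maven properties; define them in your own `pom.xml` and point them at the latest released versions of adk-java and LangChain4j.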
**Example:** Build the `ChatModel` instance, connecting it to the Docker Model Runner on localhost:

```java
OpenAiChatModel chatModel = OpenAiChatModel.builder()
    .baseUrl("http://localhost:12434/engines/llama.cpp/v1")
    .modelName("ai/qwen3:8B-Q4_0")
    .build();
```

This example connects to port 12434 on localhost, the Docker Model Runner default. Then wire the model into the `LlmAgent` via the `LangChain4j` wrapper:
```java
import com.google.adk.agents.LlmAgent;
import com.google.adk.models.langchain4j.LangChain4j;
import dev.langchain4j.model.openai.OpenAiChatModel;

public class LangChain4jExampleAgent {

    private static OpenAiChatModel chatModel = OpenAiChatModel.builder()
        .baseUrl("http://localhost:12434/engines/llama.cpp/v1")
        .modelName("ai/qwen3:8B-Q4_0")
        .build();

    public static LlmAgent createAgent() {
        return LlmAgent.builder()
            .name("tiny-agent")
            .description("tiny agent example")
            .instruction("""
                You are a friendly assistant. You answer questions in a concise manner.
                """)
            .model(new LangChain4j(chatModel))
            .build();
    }
}
```
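To exercise the agent end to end, you can drive it with ADK's in-memory runner. A minimal sketch, assuming the `InMemoryRunner` and session APIs from adk-java and a hypothetical user id `user-1`:

```java
import com.google.adk.agents.LlmAgent;
import com.google.adk.events.Event;
import com.google.adk.runner.InMemoryRunner;
import com.google.adk.sessions.Session;
import com.google.genai.types.Content;
import com.google.genai.types.Part;
import io.reactivex.rxjava3.core.Flowable;

public class LangChain4jExampleRunner {

    public static void main(String[] args) {
        LlmAgent agent = LangChain4jExampleAgent.createAgent();
        InMemoryRunner runner = new InMemoryRunner(agent);

        // Create a session for a hypothetical user id
        Session session = runner
            .sessionService()
            .createSession(runner.appName(), "user-1")
            .blockingGet();

        // Send one message and print the streamed events
        Content message = Content.fromParts(Part.fromText("What is the capital of France?"));
        Flowable<Event> events = runner.runAsync("user-1", session.id(), message);
        events.blockingForEach(event -> System.out.println(event.stringifyContent()));
    }
}
```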
### Debugging

To debug interactions with your LangChain4j-backed model, you can enable logging within your model server or use LangChain4j's built-in logging capabilities:
```java
OpenAiChatModel chatModel = OpenAiChatModel.builder()
    .baseUrl("http://localhost:12434/engines/llama.cpp/v1")
    .modelName("ai/qwen3:8B-Q4_0")
    .logRequests(true)
    .logResponses(true)
    .build();
```
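These builder flags log through SLF4J, so you will typically also need a logging backend such as Logback on the classpath, with `DEBUG` level enabled for LangChain4j's loggers, before the request and response payloads appear.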
Additionally, you can inspect the logs from the Docker environment running the model, to see the direct input it receives and the output it generates, by running:

```shell
docker model logs
```
### Self-Hosted Endpoint (e.g., vLLM)

![python_only](https://img.shields.io/badge/Supported_in-Python-blue)
