
Gemini CLI Observability Hands-On

Korean README

A local development environment that sends Gemini CLI requests to a LiteLLM Proxy, which then forwards traffic to the Google Gemini API. Request/response logs can be viewed in Phoenix.

🚀 Components

  • LiteLLM Proxy: OpenAI-compatible proxy that routes requests to Google Gemini.
  • Phoenix: LLM observability UI.
  • Postgres: Stores proxy metadata.
  • gemini-cli: CLI used by developers.

📁 Key Files

  • docker-compose.yml – Runs Proxy / Phoenix / Postgres
  • litellm_config.yaml – Defines model aliases and the actual Gemini model IDs they map to (a minimal sketch follows after this list)
  • .env – Google API Key + Proxy Master Key
  • env.sh – Environment variables for gemini-cli
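
For orientation, here is a minimal sketch of what litellm_config.yaml typically contains; the exact aliases, parameters, and the LITELLM_MASTER_KEY variable name are assumptions, so check the file in this repo. The os.environ/ prefix is LiteLLM's syntax for reading a value from an environment variable.

  model_list:
    - model_name: gemini-2.5-flash-lite         # alias that gemini-cli asks for
      litellm_params:
        model: gemini/gemini-2.5-flash-lite     # actual Google Gemini model ID
        api_key: os.environ/GEMINI_API_KEY      # Google API key from .env

  general_settings:
    master_key: os.environ/LITELLM_MASTER_KEY   # proxy master key from .env (assumed variable name)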

▶️ How to Run

1) Generate an API Key from Google AI Studio

2) Start Containers

docker compose up -d
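
To confirm everything came up, list the running containers (the service names litellm, phoenix, and postgres are assumptions about this repo's compose file):

  docker compose ps
  # Expect three running services: litellm (port 4000), phoenix (port 6006), postgres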

3) Apply gemini-cli Proxy Environment

source env.sh
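
As a sketch, env.sh plausibly exports something like the following so that gemini-cli talks to the local proxy instead of Google directly; the values are inferred from the proxy port and master key used elsewhere in this README:

  # Point gemini-cli at the LiteLLM proxy instead of the Google endpoint
  export GOOGLE_GEMINI_BASE_URL="http://localhost:4000"
  # Authenticate against the proxy with its master key, not your Google key
  export GEMINI_API_KEY="sk-1234567890"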

4) Test

gemini
# Inside the CLI, run /auth and enter the proxy master key sk-1234567890, then exit
gemini --model="gemini-2.5-flash-lite" -p "hello"

🔎 Observability

Phoenix UI: 👉 http://localhost:6006

🧪 Test Calling the Proxy Directly

curl -X POST "http://localhost:4000/v1beta/models/gemini-2.5-flash-lite:generateContent" \
  -H "Authorization: Bearer $GEMINI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"contents":[{"parts":[{"text":"hi"}],"role":"user"}]}'

❗ Troubleshooting

  • If the model name passed to gemini-cli does not match a model_name entry in litellm_config.yaml, the proxy may return a 404 or 500.

  • Ensure GEMINI_API_KEY in .env is a valid Google API key.

  • Check LiteLLM logs:

    docker logs -f litellm
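
You can also probe the proxy's health endpoint directly (path as documented for LiteLLM; verify against your LiteLLM version):

    curl http://localhost:4000/health/liveliness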
