
[Question]: Error in docker logs for embedding model #1397

@ceinstaller

Description


Your Question

I'm working on setting up OpenViking in a Docker container in my local environment. I'm trying to use OpenRouter for the embedding model and a local model for everything else. Here's my ov.conf:

{
  "server": {
    "host": "0.0.0.0",
    "port": 1933,
    "root_api_key": "viking_for_hermes"
  },
  "storage": {
    "workspace": "./data",
    "agfs": { "backend": "local" },
    "vectordb": { "backend": "local" }
  },
  "embedding": {
    "dense": {
      "provider": "openai",
      "api_base": "https://openrouter.ai/api/v1",
      "api_key": "sk-or-v1-secret-key-goes-here",
      "model": "perplexity/pplx-embed-v1-0.6b",
      "dimension": 1024
    }
  },
  "vlm": {
    "provider": "openai",
    "api_base": "http://10.0.0.65:4000/v1",
    "api_key": "dummy",
    "model": "glm-flash-30B-A3B"
  }
}
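As a sanity check on my end, here's a quick self-contained snippet; the dense block is copied from the config above, but the checks themselves are just my own assumptions about what a valid block should look like, not OpenViking's actual validation rules:

```python
import json

# The "embedding.dense" block from ov.conf above, inlined for a self-contained check
dense = json.loads("""
{
  "provider": "openai",
  "api_base": "https://openrouter.ai/api/v1",
  "api_key": "sk-or-v1-secret-key-goes-here",
  "model": "perplexity/pplx-embed-v1-0.6b",
  "dimension": 1024
}
""")

# Basic shape checks (my own assumptions, not OpenViking's validator)
assert dense["provider"] == "openai"                  # OpenRouter exposes an OpenAI-compatible API
assert dense["api_base"].rstrip("/").endswith("/v1")  # base URL should point at the /v1 root
assert isinstance(dense["dimension"], int) and dense["dimension"] > 0

print(dense["model"], dense["dimension"])
```

These all pass for me, so I don't think the config shape itself is the problem.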

The container starts, the /health and /ready endpoints both look good, but I see this message in my logs:

openviking.models.embedder.openai_embedders - WARNING - OpenAI async embedding slow call provider=unknown model=perplexity/pplx-embed-v1-0.6b wait_ms=0.01 duration_ms=1517.32
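Breaking that line down (just my own parse of the key=value fields), it reads to me like a slow-call latency warning rather than a hard error:

```python
import re

# The warning line from the docker logs, verbatim
log = ("openviking.models.embedder.openai_embedders - WARNING - "
       "OpenAI async embedding slow call provider=unknown "
       "model=perplexity/pplx-embed-v1-0.6b wait_ms=0.01 duration_ms=1517.32")

# Pull the key=value pairs out of the message
fields = dict(re.findall(r"(\w+)=(\S+)", log))

duration_s = float(fields["duration_ms"]) / 1000.0  # 1517.32 ms, i.e. ~1.5 s per call
print(fields["model"], round(duration_s, 2))
```

So each embedding call is taking about 1.5 seconds, and the embedder logs `provider=unknown` even though the call itself seems to go through.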

I'm not sure what's causing this; hoping someone can point me in the right direction.

Thanks!

Context

No response

Code Example (Optional)

Related Area

None

Before Asking

Metadata

Assignees

Labels

question (Further information is requested)

Type

No type

Projects

Status

Done

Milestone

No milestone

Relationships

None yet

Development

No branches or pull requests
