
Conversation

fzowl
Contributor

@fzowl fzowl commented Oct 13, 2025

Description of changes

  • VoyageAI contextual and multimodal model support
  • Adding token counting
  • Adding more tests

Test plan

How are these changes tested?

  • Tests pass locally with pytest for python, yarn test for js, cargo test for rust

Migration plan

Are there any migrations, or any forwards/backwards compatibility changes needed in order to make sure this change deploys reliably?
No

Observability plan

What is the plan to instrument and monitor this change?

Documentation Changes

Are all docstrings for user-facing APIs updated if required? Do we need to make documentation changes in the docs section?


Reviewer Checklist

Please leverage this checklist to ensure your code review is thorough before approving

Testing, Bugs, Errors, Logs, Documentation

  • Can you think of any use case in which the code does not behave as intended? Have they been tested?
  • Can you think of any inputs or external events that could break the code? Is user input validated and safe? Have they been tested?
  • If appropriate, are there adequate property based tests?
  • If appropriate, are there adequate unit tests?
  • Should any logging, debugging, tracing information be added or removed?
  • Are error messages user-friendly?
  • Have all documentation changes needed been made?
  • Have all non-obvious changes been commented?

System Compatibility

  • Are there any potential impacts on other parts of the system or backward compatibility?
  • Does this change intersect with any items on our roadmap, and if so, is there a plan for fitting them together?

Quality

  • Is this code of an unexpectedly high quality (readability, modularity, intuitiveness)?

Contributor

propel-code-bot bot commented Oct 13, 2025

VoyageAI Contextual and Multimodal Model Integration & Token Counting Support

This PR introduces comprehensive support for VoyageAI contextual and multimodal embedding models, adding multimodal (text+image) embeddings, contextual model handling, and a batching/token-counting mechanism designed to operate within VoyageAI model token limits. The embedding function API gains flexible batch sizing, token counting, and support for a wide range of VoyageAI models with model-specific configuration options. Test coverage is substantially expanded across contextual, multimodal, batching, and token-counting scenarios; API compatibility and error handling are improved, and new test suites verify correct behavior for all modes, including integration with Chroma's multimodal collections.

Key Changes

• Expanded VoyageAIEmbeddingFunction to support contextual and multimodal models (voyage-context-*, voyage-multimodal-*), including mixed document/image batches.
• Introduced per-model token limit enforcement and dynamic batching logic to efficiently batch inputs by token count and/or batch size.
• Added count_tokens method and supporting API for tokenization and token limit awareness.
• Revised constructor/config support for new parameters (dimensions, embedding_type, batch_size), now required to specify model_name explicitly.
• Enhanced type checking, validation, and error handling (including better conversion logic for mixed input types).
• Added new and comprehensive end-to-end tests for multimodal and contextual models, batch splitting, token counting, and configuration handling (chromadb/test/ef/test_voyageai_ef.py, chromadb/test/ef/test_voyage_multimodal.py).
• Refactored API compatibility surfaces to handle Embeddable inputs consistently; updated configuration schemas and tests.
• Improved docstrings, type annotations, and internal documentation throughout.
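
The token-limit batching described above can be sketched as follows. This is a minimal illustration, assuming a per-text token counter; the names (`build_batches`, `count_tokens`, `token_limit`) are illustrative and not the PR's actual API:

```python
from typing import Callable, Iterator, List, Optional

def build_batches(
    texts: List[str],
    count_tokens: Callable[[str], int],
    token_limit: int,
    batch_size: Optional[int] = None,
) -> Iterator[List[str]]:
    """Greedily group texts so each batch stays within token_limit
    (and, if given, within batch_size items)."""
    batch: List[str] = []
    batch_tokens = 0
    for text in texts:
        n_tokens = count_tokens(text)
        over_tokens = bool(batch) and batch_tokens + n_tokens > token_limit
        over_size = batch_size is not None and len(batch) >= batch_size
        if over_tokens or over_size:
            yield batch
            batch, batch_tokens = [], 0
        batch.append(text)
        batch_tokens += n_tokens
    if batch:
        yield batch
```

Note that a single text longer than `token_limit` still gets its own batch in this sketch; a production implementation would likely raise or truncate instead.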

Affected Areas

• Embedding functions and abstraction in chromadb/utils/embedding_functions/voyageai_embedding_function.py
• Test specifications for VoyageAI embedding function, including new and updated tests
• Multimodal and contextual model handling
• Batching and token counting logic
• Integration with Chroma's test and collection system

This summary was automatically generated by @propel-code-bot

Adding token counting and flexible batch size
Extending the tests
Comment on lines +156 to +158
```python
        # Tokenize all texts in one API call
        all_token_lists = self._client.tokenize(texts, model=self.model_name)
        token_counts = [len(tokens) for tokens in all_token_lists]
```
Contributor


[BestPractice]

Potential API call failure: The _build_batches method calls self._client.tokenize(texts, model=self.model_name) but there's no error handling if this API call fails. If the tokenize API is unavailable or returns an error, this will cause the entire embedding process to crash.

Add error handling:

Suggested change

```suggestion
        # Tokenize all texts in one API call
        try:
            all_token_lists = self._client.tokenize(texts, model=self.model_name)
            token_counts = [len(tokens) for tokens in all_token_lists]
        except Exception:
            # Fallback to simple batching by batch_size if tokenization fails
            if self.batch_size:
                for i in range(0, len(texts), self.batch_size):
                    yield texts[i:i + self.batch_size]
            else:
                yield texts
            return
```

Committable suggestion

Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.


Contributor Author


self._client.tokenize runs locally. Since tokenization is an important feature here, I feel that if it fails, it is better for the whole process to fail.
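
That fail-fast stance could look like this. This is a hypothetical sketch; the wrapper name and error message are illustrative, not code from the PR:

```python
from typing import Any, List

def count_tokens_or_fail(client: Any, texts: List[str], model: str) -> List[int]:
    # Tokenization runs locally, so a failure usually means a real
    # problem (wrong model name, broken tokenizer install); surface
    # it instead of silently falling back to size-only batching.
    try:
        token_lists = client.tokenize(texts, model=model)
    except Exception as exc:
        raise RuntimeError(
            f"Local tokenization failed for model {model!r}; "
            "refusing to guess batch sizes."
        ) from exc
    return [len(tokens) for tokens in token_lists]
```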

Co-authored-by: propel-code-bot[bot] <203372662+propel-code-bot[bot]@users.noreply.github.com>
