feat(benchmark): Create mock LLM server for use in benchmarks #1403
Status: Open
tgasser-nv wants to merge 15 commits into develop from feat/mock-llm-server
Commits (15)
All commits by tgasser-nv:

1bb4443  Initial scaffold of mock OpenAI-compatible server
d9b73be  Refactor mock LLM, fix tests
9021b81  Added tests to load YAML config. Still debugging dependency-injection…
687e33b  Move FastAPI app import **after** the dependencies are loaded and cached
c0afd8d  Remove debugging print statements
e62f394  Temporary checkin
6ddcaca  Add refusal probability and tests to check it
3b3f49a  Use YAML configs for Nemoguard and app LLMs
f142c0f  Add Mock configs for content-safety and App LLM
a18b514  Add async sleep statements and logging to record request time
6beb888  Change content-safety mock to have latency of 0.5s
c056b3b  Add unit-tests to mock llm
4104a1f  Check for config file
1cca2ff  Rename test files to avoid conflicts with other tests
e87715c  Remove example_usage.py script and type-clean config.py
New file (14 lines, Apache-2.0 license header only):

@@ -0,0 +1,14 @@
# SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
New file (14 lines, Apache-2.0 license header only):

@@ -0,0 +1,14 @@
# SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
New file (240 lines): FastAPI application for the mock LLM server.

@@ -0,0 +1,240 @@
# SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import asyncio
import logging
import time
from typing import Annotated, Optional, Union

from fastapi import Depends, FastAPI, HTTPException, Request, Response

from nemoguardrails.benchmark.mock_llm_server.config import (  # get_config,
    ModelSettings,
    get_settings,
)
from nemoguardrails.benchmark.mock_llm_server.models import (
    ChatCompletionChoice,
    ChatCompletionRequest,
    ChatCompletionResponse,
    CompletionChoice,
    CompletionRequest,
    CompletionResponse,
    Message,
    Model,
    ModelsResponse,
    Usage,
)
from nemoguardrails.benchmark.mock_llm_server.response_data import (
    calculate_tokens,
    generate_id,
    get_latency_seconds,
    get_response,
)

# Create a console logging handler
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)  # TODO Control this from the CLi args

# Create a formatter to define the log message format
formatter = logging.Formatter(
    "%(asctime)s %(levelname)s: %(message)s", datefmt="%Y-%m-%d %H:%M:%S"
)

# Create a console handler to print logs to the console
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)  # DEBUG and higher will go to the console
console_handler.setFormatter(formatter)

# Add console handler to logs
log.addHandler(console_handler)


ModelSettingsDep = Annotated[ModelSettings, Depends(get_settings)]


def _validate_request_model(
    config: ModelSettingsDep,
    request: Union[CompletionRequest, ChatCompletionRequest],
) -> None:
    """Check the Completion or Chat Completion `model` field is in our supported model list"""
    if request.model != config.model:
        raise HTTPException(
            status_code=400,
            detail=f"Model '{request.model}' not found. Available models: {config.model}",
        )


app = FastAPI(
    title="Mock LLM Server",
    description="OpenAI-compatible mock LLM server for testing and benchmarking",
    version="0.0.1",
)


@app.middleware("http")
async def log_http_duration(request: Request, call_next):
    """
    Middleware to log incoming requests and their responses.
    """
    request_time = time.time()
    response = await call_next(request)
    response_time = time.time()

    duration_seconds = response_time - request_time
    log.info(
        "Request finished: %s, took %.3f seconds",
        response.status_code,
        duration_seconds,
    )
    return response


@app.get("/")
async def root(config: ModelSettingsDep):
    """Root endpoint with basic server information."""
    return {
        "message": "Mock LLM Server",
        "version": "0.0.1",
        "description": f"OpenAI-compatible mock LLM server for model: {config.model}",
        "endpoints": ["/v1/models", "/v1/chat/completions", "/v1/completions"],
        "model_configuration": config,
    }


@app.get("/v1/models", response_model=ModelsResponse)
async def list_models(config: ModelSettingsDep):
    """List available models."""
    log.debug("/v1/models request")

    model = Model(
        id=config.model, object="model", created=int(time.time()), owned_by="system"
    )
    response = ModelsResponse(object="list", data=[model])
    log.debug("/v1/models response: %s", response)
    return response


@app.post("/v1/chat/completions", response_model=ChatCompletionResponse)
async def chat_completions(
    request: ChatCompletionRequest, config: ModelSettingsDep
) -> ChatCompletionResponse:
    """Create a chat completion."""

    log.debug("/v1/chat/completions request: %s", request)

    # Validate model exists
    _validate_request_model(config, request)

    # Generate dummy response
    response_content = get_response(config)
    response_latency_seconds = get_latency_seconds(config, seed=12345)

    # Calculate token usage
    prompt_text = " ".join([msg.content for msg in request.messages])
    prompt_tokens = calculate_tokens(prompt_text)
    completion_tokens = calculate_tokens(response_content)

    # Create response
    completion_id = generate_id("chatcmpl")
    created_timestamp = int(time.time())

    choices = []
    for i in range(request.n or 1):
        choice = ChatCompletionChoice(
            index=i,
            message=Message(role="assistant", content=response_content),
            finish_reason="stop",
        )
        choices.append(choice)

    response = ChatCompletionResponse(
        id=completion_id,
        object="chat.completion",
        created=created_timestamp,
        model=request.model,
        choices=choices,
        usage=Usage(
            prompt_tokens=prompt_tokens,
            completion_tokens=completion_tokens,
            total_tokens=prompt_tokens + completion_tokens,
        ),
    )
    await asyncio.sleep(response_latency_seconds)
    log.debug("/v1/chat/completions response: %s", response)
    return response


Review comment on the endpoint below: are you using completions in your benchmarking? If not, I think it is better not to support this legacy interface (https://platform.openai.com/docs/api-reference/completions/create)

@app.post("/v1/completions", response_model=CompletionResponse)
async def completions(
    request: CompletionRequest, config: ModelSettingsDep
) -> CompletionResponse:
    """Create a text completion."""

    log.debug("/v1/completions request: %s", request)

    # Validate model exists
    _validate_request_model(config, request)

    # Handle prompt (can be string or list)
    if isinstance(request.prompt, list):
        prompt_text = " ".join(request.prompt)
    else:
        prompt_text = request.prompt

    # Generate dummy response
    response_text = get_response(config)
    response_latency_seconds = get_latency_seconds(config, seed=12345)

    # Calculate token usage
    prompt_tokens = calculate_tokens(prompt_text)
    completion_tokens = calculate_tokens(response_text)

    # Create response
    completion_id = generate_id("cmpl")
    created_timestamp = int(time.time())

    choices = []
    for i in range(request.n or 1):
        choice = CompletionChoice(
            text=response_text, index=i, logprobs=None, finish_reason="stop"
        )
        choices.append(choice)

    response = CompletionResponse(
        id=completion_id,
        object="text_completion",
        created=created_timestamp,
        model=request.model,
        choices=choices,
        usage=Usage(
            prompt_tokens=prompt_tokens,
            completion_tokens=completion_tokens,
            total_tokens=prompt_tokens + completion_tokens,
        ),
    )

    await asyncio.sleep(response_latency_seconds)
    log.debug("/v1/completions response: %s", response)
    return response


@app.get("/health")
async def health_check():
    """Health check endpoint."""
    log.debug("/health request")
    response = {"status": "healthy", "timestamp": int(time.time())}
    log.debug("/health response: %s", response)
    return response
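For context, here is a minimal client sketch exercising the endpoints defined above. It is not part of the diff: the base URL and port are assumptions (the server's launch command is not shown in this view), and the request shape simply follows the OpenAI-compatible models used by the app.

# Minimal sketch of calling the mock server's OpenAI-compatible endpoints.
# Assumption: the app is running locally on port 8000.
import requests

BASE_URL = "http://localhost:8000"  # assumed address; use wherever the mock server runs

# The mock advertises exactly one model (the "model" field from its settings)
models = requests.get(f"{BASE_URL}/v1/models").json()
model_id = models["data"][0]["id"]

# Chat completion: the mock replies with its canned safe/unsafe text after
# sleeping for a sampled latency, so this call takes roughly that long.
payload = {
    "model": model_id,
    "messages": [{"role": "user", "content": "Hello!"}],
}
resp = requests.post(f"{BASE_URL}/v1/chat/completions", json=payload)
print(resp.json()["choices"][0]["message"]["content"])

# Health check, useful for a benchmark harness to wait for readiness
print(requests.get(f"{BASE_URL}/health").json())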
New file (68 lines): settings for the mock LLM server (the config module imported by the server above).

@@ -0,0 +1,68 @@
# SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
from functools import lru_cache
from pathlib import Path
from typing import Any, Optional, Union

import yaml
from pydantic import BaseModel, Field
from pydantic_settings import (
    BaseSettings,
    PydanticBaseSettingsSource,
    SettingsConfigDict,
)

CONFIG_FILE_ENV_VAR = "MOCK_LLM_CONFIG_FILE"
config_file_path = os.getenv(CONFIG_FILE_ENV_VAR, "model_settings.yml")
CONFIG_FILE = Path(config_file_path)


class ModelSettings(BaseSettings):
    """Pydantic model to configure the Mock LLM Server."""

    # Mandatory fields
    model: str = Field(..., description="Model name served by mock server")
    unsafe_probability: float = Field(
        default=0.1, description="Probability of unsafe response (between 0 and 1)"
    )
    unsafe_text: str = Field(..., description="Refusal response to unsafe prompt")
    safe_text: str = Field(..., description="Safe response")

    # Config with default values
    # Latency sampled from a truncated-normal distribution.
    # Plain Normal distributions have infinite support, and can be negative
    latency_min_seconds: float = Field(
        default=0.1, description="Minimum latency in seconds"
    )
    latency_max_seconds: float = Field(
        default=5, description="Maximum latency in seconds"
    )
    latency_mean_seconds: float = Field(
        default=0.5, description="The average response time in seconds"
    )
    latency_std_seconds: float = Field(
        default=0.1, description="Standard deviation of response time"
    )

    model_config = SettingsConfigDict(env_file=CONFIG_FILE)


def get_settings() -> ModelSettings:
    """Singleton-pattern to get settings once via lru_cache"""
    settings = ModelSettings()  # type: ignore (These are filled in by loading from CONFIG_FILE)
    print("Returning ModelSettings: %s", settings)
    return settings
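To make the settings above concrete, here is an illustrative sketch that constructs ModelSettings directly with the fields the class defines. The field names and defaults come straight from the diff; the values are made up for illustration. In the server itself the settings are instead populated from the file pointed to by MOCK_LLM_CONFIG_FILE via get_settings(), and the exact on-disk format (dotenv-style env_file vs. the YAML mentioned in the commits) should be confirmed against the tests in this PR.

# Illustrative only: field names match ModelSettings, values are invented.
from nemoguardrails.benchmark.mock_llm_server.config import ModelSettings

settings = ModelSettings(
    model="nvidia/llama-3.1-nemoguard-8b-content-safety",  # model name the mock serves
    unsafe_probability=0.1,    # fraction of responses that return unsafe_text
    unsafe_text="I cannot help with that request.",  # illustrative refusal text
    safe_text="This is a safe mock response.",       # illustrative safe text
    latency_min_seconds=0.1,   # truncated-normal latency: lower bound
    latency_max_seconds=5.0,   # upper bound
    latency_mean_seconds=0.5,  # commit 6beb888 sets the content-safety mock to 0.5 s
    latency_std_seconds=0.1,
)
print(settings)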
New file (21 additions): ...ils/benchmark/mock_llm_server/configs/guardrail_configs/content_safety_colang1/config.yml
@@ -0,0 +1,21 @@
models:
  - type: main
    engine: nim
    model: meta/llama-3.3-70b-instruct
    parameters:
      base_url: http://localhost:8000

  - type: content_safety
    engine: nim
    model: nvidia/llama-3.1-nemoguard-8b-content-safety
    parameters:
      base_url: http://localhost:8001


rails:
  input:
    flows:
      - content safety check input $model=content_safety
  output:
    flows:
      - content safety check output $model=content_safety
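The guardrails configuration above points the main model at localhost:8000 and the content-safety model at localhost:8001, so a benchmark run presumably starts two mock server instances, each with its own settings file. Below is a rough sketch of how that could be done; the FastAPI app's module path and the settings file names are assumptions, since neither is visible in this view of the diff.

# Rough sketch: launch two mock LLM instances for the guardrails config above.
# Assumptions (not shown in this diff): the FastAPI app is importable as
# "nemoguardrails.benchmark.mock_llm_server.server:app", and per-model settings
# files named app_llm.yml / content_safety.yml exist in the working directory.
import os
import subprocess
import sys

def start_mock(config_file: str, port: int) -> subprocess.Popen:
    # MOCK_LLM_CONFIG_FILE is the env var defined in the config module above
    env = dict(os.environ, MOCK_LLM_CONFIG_FILE=config_file)
    return subprocess.Popen(
        [sys.executable, "-m", "uvicorn",
         "nemoguardrails.benchmark.mock_llm_server.server:app",  # assumed module path
         "--port", str(port)],
        env=env,
    )

main_llm = start_mock("app_llm.yml", 8000)                # base_url http://localhost:8000
content_safety = start_mock("content_safety.yml", 8001)   # base_url http://localhost:8001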
Review comment: Should we be updating the copyright date on new files?

Reply: glad you pointed this out. We should update our LICENSE.md. I'll open a PR