Our model is deployed with Triton. Environment variables are set before tritonserver starts and kvcached takes effect; however, since multiple vLLM models need to be started, the environment variables must be set before each model script initializes its engine. The relevant file contents are as follows:
The following environment variables are set in model.py, with `"gpu_memory_utilization": 0.2` configured:
```python
import os
os.environ["ENABLE_KVCACHED"] = "1"
os.environ["KVCACHED_AUTOPATCH"] = "1"
```
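For reference, a minimal sketch of the intended layout (the class and method names follow Triton's Python-backend `model.py` convention; the lazy `vllm` import and the engine arguments are illustrative assumptions, not our exact file):

```python
import os

# Must run at module import time, before any `import vllm` executes in
# this process; otherwise kvcached's autopatch has nothing to hook.
os.environ["ENABLE_KVCACHED"] = "1"
os.environ["KVCACHED_AUTOPATCH"] = "1"


class TritonPythonModel:
    """Skeleton following Triton's Python-backend model.py convention."""

    def initialize(self, args):
        # Import vllm only after the environment is prepared.
        from vllm.engine.arg_utils import AsyncEngineArgs

        self.engine_args = AsyncEngineArgs(
            model="/weights/pretrain/hunyuan-ocr",
            gpu_memory_utilization=0.2,  # matches the config above
        )
```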
Testing shows that kvcached does not allocate GPU memory dynamically; memory is still allocated up front according to the `gpu_memory_utilization` parameter.
To rule out a Triton-specific issue, we also tested with a plain Python script and observed the same behavior:
```python
import os
os.environ["ENABLE_KVCACHED"] = "1"
os.environ["KVCACHED_AUTOPATCH"] = "1"

from vllm.engine.arg_utils import AsyncEngineArgs
from vllm.usage.usage_lib import UsageContext
from vllm.v1.engine.async_llm import AsyncLLM

vllm_engine_config = {
    "model": "/weights/pretrain/hunyuan-ocr",
    "trust_remote_code": True,
    "gpu_memory_utilization": 0.7,
    "max_model_len": 4096,
    "enable_prefix_caching": False,
    "max_num_batched_tokens": 8192,
    "mm_processor_cache_gb": 0,
}

engine_args = AsyncEngineArgs(**vllm_engine_config)
vllm_config = engine_args.create_engine_config(usage_context=UsageContext.OPENAI_API_SERVER)
async_llm = AsyncLLM.from_vllm_config(
    vllm_config=vllm_config,
    usage_context=UsageContext.OPENAI_API_SERVER,
    stat_loggers=None,
    enable_log_requests=engine_args.enable_log_requests,
    aggregate_engine_logging=engine_args.aggregate_engine_logging,
    disable_log_stats=engine_args.disable_log_stats,
)
print(1)
```
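One way to sanity-check the import ordering (a minimal sketch using only the standard library; it assumes kvcached hooks vLLM at import time, so the flags must be in the environment before the first `import vllm` anywhere in the process):

```python
import os
import sys

# If vllm was already imported by the hosting process (e.g. an earlier
# Triton model instance), setting the flags now is too late.
assert "vllm" not in sys.modules, "vllm imported before kvcached flags were set"

os.environ["ENABLE_KVCACHED"] = "1"
os.environ["KVCACHED_AUTOPATCH"] = "1"

# Child engine processes spawned by vLLM inherit os.environ, so they
# will see the same values.
print(os.environ["ENABLE_KVCACHED"], os.environ["KVCACHED_AUTOPATCH"])  # prints: 1 1
```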
It didn't take effect: GPU memory was still allocated up front according to `gpu_memory_utilization` rather than on demand. How can this be solved? Thanks!