diff --git a/genai-perf/notebooks/README.md b/genai-perf/notebooks/README.md
new file mode 100644
index 00000000..56e22551
--- /dev/null
+++ b/genai-perf/notebooks/README.md
@@ -0,0 +1,5 @@
+# GenAI-Perf Utility Notebooks
+
+This folder contains various utility notebooks for GenAI-Perf.
+
+1. [TCO_calculator.ipynb](TCO_calculator.ipynb): This notebook allows users to benchmark a NIM LLM deployment, then export the data to the NIM total cost of ownership (TCO) calculator.
\ No newline at end of file
diff --git a/genai-perf/notebooks/TCO_calculator.ipynb b/genai-perf/notebooks/TCO_calculator.ipynb
new file mode 100644
index 00000000..7c098879
--- /dev/null
+++ b/genai-perf/notebooks/TCO_calculator.ipynb
@@ -0,0 +1,573 @@
+{
+ "cells": [
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "id": "f3e4c571-624d-4db6-b4d9-ae912879967b",
+ "metadata": {},
+ "source": [
+ "# GenAI-Perf -> NIM LLM TCO Calculator Data Connector\n",
+ "\n",
+ "This notebook shows you how to run LLM performance benchmarking with the NVIDIA GenAI-Perf tool and then export the data to an Excel spreadsheet, which can be used to transfer the data to the NIM [spreadsheet TCO calculator tool](https://docs.google.com/spreadsheets/d/1UF_sy89kcLIkdnK0dC-6QwcAgVDUV0ANJ22JnC2dW7g/edit?gid=0#gid=0).\n",
+ "\n",
+ "Note: the NIM LLM TCO calculator is implemented as a Google spreadsheet. Please make a private copy for your own use.\n",
+ "\n",
+ "To execute this notebook, you can use the NVIDIA PyTorch container:\n",
+ "```\n",
+ "docker run --gpus=all --ipc=host --net=host --rm -it -v $PWD:/myworkspace nvcr.io/nvidia/pytorch:25.03-py3 bash\n",
+ "```\n",
+ "\n",
+ "Then, from within the interactive Docker session:\n",
+ "```\n",
+ "jupyter lab --ip 0.0.0.0 --port=8888 --allow-root --notebook-dir=/myworkspace\n",
+ "```\n",
+ "\n",
+ "First, we define some metadata fields describing the deployment environment.\n",
+ "\n",
+ "**Notes:**\n",
+ "- The NIM engine ID provides both the backend type (e.g. TensorRT-LLM, vLLM, or SGLang) and the precision. You can find this information when the NIM container starts.\n",
+ "\n",
+ "- This notebook collects data corresponding to a single deployment environment described by the metadata fields. In this tutorial, we will use the `Meta-Llama-3-8B-Instruct` model. Note that NVIDIA NGC and the Hugging Face model hub use slightly different identifiers for this model."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "93c18473-09ea-4a6f-87fa-d67fa3f7daa5",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "meta_field = {\n",
+ "    'Model': \"meta-llama/Meta-Llama-3-8B-Instruct\",\n",
+ "    'GPU Type': \"H100_80GB\",\n",
+ "    'number_of_gpus': 1,\n",
+ "    'Precision': \"BF16\",\n",
+ "    'Execution Mode': \"NIM-TRTLLM\",\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "70b3df53-c103-4de2-81f5-419aa4d65f83",
+ "metadata": {},
+ "source": [
+ "## Prerequisites\n",
+ "\n",
+ "First, we install the GenAI-Perf tool in the PyTorch container.\n",
+ "As a client-side, LLM-focused benchmarking tool, NVIDIA GenAI-Perf provides key metrics such as time to first token (TTFT), inter-token latency (ITL), tokens per second (TPS), requests per second (RPS), and more. GenAI-Perf also supports any LLM inference service conforming to the OpenAI API specification, a widely accepted de facto standard in the industry. For this benchmarking guide, we'll use NVIDIA NIM, a collection of inference microservices that offer high-throughput and low-latency inference for both base and fine-tuned LLMs.
NIM features ease of use and enterprise-grade security and manageability.\n",
+ "\n",
+ "### Install the GenAI-Perf tool"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "ad5de6fe-8547-4259-956a-980aa8b71dce",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%%bash\n",
+ "pip install genai-perf==0.0.12"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9e6351a6-a5a3-4067-831e-abe26ae53969",
+ "metadata": {},
+ "source": [
+ "### Setting up a NIM LLM server (optional)\n",
+ "\n",
+ "If you don't already have a benchmarking target, such as an OpenAI-compatible LLM service, let's set one up.\n",
+ "\n",
+ "NVIDIA NIM provides the easiest and quickest way to put LLMs and other AI foundation models into production. Read [A Simple Guide to Deploying Generative AI with NVIDIA NIM](https://developer.nvidia.com/blog/a-simple-guide-to-deploying-generative-ai-with-nvidia-nim/) or consult the latest [NIM LLM documentation](https://docs.nvidia.com/nim/large-language-models/latest/introduction.html) to get started; these will walk you through hardware requirements and prerequisites, including NVIDIA NGC API keys.\n",
+ "\n",
+ "For convenience, the following commands for deploying NIM and executing inference are reproduced from the [Getting Started Guide](https://docs.nvidia.com/nim/large-language-models/latest/getting-started.html):\n",
+ "\n",
+ "```\n",
+ "export NGC_API_KEY= \n",
+ "\n",
+ "# Choose an LLM NIM image from NGC\n",
+ "export CONTAINER_NAME=meta/llama-3.1-8b-instruct # NGC model name\n",
+ "export IMG_NAME=\"nvcr.io/nim/${CONTAINER_NAME}:latest\"\n",
+ "\n",
+ "# Choose a path on your system to cache the downloaded models\n",
+ "export LOCAL_NIM_CACHE=./cache/nim\n",
+ "mkdir -p \"$LOCAL_NIM_CACHE\"\n",
+ "\n",
+ "# Start the LLM NIM\n",
+ "docker run -it --rm --name=llama-3.1-8b-instruct \\\n",
+ "  --gpus all \\\n",
+ "  --shm-size=16GB \\\n",
+ "  -e NGC_API_KEY \\\n",
+ "  -v \"$LOCAL_NIM_CACHE:/opt/nim/.cache\" \\\n",
+ "  -u $(id -u) \\\n",
+ "  -p 8000:8000 \\\n",
+ "  $IMG_NAME\n",
+ "```\n",
+ "\n",
+ "## Performance benchmarking script\n",
+ "\n",
+ "The next step is to define the use cases (i.e., input/output sequence length scenarios) and carry out the benchmarking.",
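+ "\n",
+ "Before launching the full benchmark sweep defined in the next cell, you can optionally verify that the endpoint is up and serving requests. A minimal sanity check (assuming the NIM server from the previous step is running and listening on `localhost:8000` with the `meta/llama-3.1-8b-instruct` model):\n",
+ "\n",
+ "```\n",
+ "# Send a single OpenAI-compatible chat completion request to the local NIM endpoint\n",
+ "curl -X POST http://localhost:8000/v1/chat/completions \\\n",
+ "  -H \"Content-Type: application/json\" \\\n",
+ "  -d '{\"model\": \"meta/llama-3.1-8b-instruct\", \"messages\": [{\"role\": \"user\", \"content\": \"Hello!\"}], \"max_tokens\": 32}'\n",
+ "```\n",
+ "\n",
+ "If the server returns a JSON completion, the deployment is ready to benchmark."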
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e8395733-ce18-4447-845c-b3579acc2067",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%%writefile benchmark.sh\n",
+ "#!/usr/bin/env bash\n",
+ "\n",
+ "declare -A useCases\n",
+ "\n",
+ "export MODEL=meta/llama-3.1-8b-instruct # NGC model name\n",
+ "export TOKENIZER_PATH=meta-llama/Meta-Llama-3-8B-Instruct # Either a HF model ID or a path to a local folder containing the tokenizer\n",
+ "\n",
+ "# Populate the array with use case descriptions and their specified input/output lengths\n",
+ "useCases[\"Translation\"]=\"200/200\"\n",
+ "useCases[\"Text classification\"]=\"200/5\"\n",
+ "useCases[\"Text summary\"]=\"1000/200\"\n",
+ "useCases[\"Code generation\"]=\"200/1000\"\n",
+ "\n",
+ "# Function to execute GenAI-Perf with the input/output lengths as arguments\n",
+ "runBenchmark() {\n",
+ "    local description=\"$1\"\n",
+ "    local lengths=\"${useCases[$description]}\"\n",
+ "    IFS='/' read -r inputLength outputLength <<< \"$lengths\"\n",
+ "\n",
+ "    echo \"Running GenAI-Perf for $description with input length $inputLength and output length $outputLength\"\n",
+ "    # Runs\n",
+ "    for concurrency in 1 2 5 10 50 100 250; do\n",
+ "\n",
+ "        local INPUT_SEQUENCE_LENGTH=$inputLength\n",
+ "        local INPUT_SEQUENCE_STD=0\n",
+ "        local OUTPUT_SEQUENCE_LENGTH=$outputLength\n",
+ "        local CONCURRENCY=$concurrency\n",
+ "\n",
+ "        genai-perf profile \\\n",
+ "            -m $MODEL \\\n",
+ "            --endpoint-type chat \\\n",
+ "            --service-kind openai \\\n",
+ "            --streaming \\\n",
+ "            -u localhost:8000 \\\n",
+ "            --synthetic-input-tokens-mean $INPUT_SEQUENCE_LENGTH \\\n",
+ "            --synthetic-input-tokens-stddev $INPUT_SEQUENCE_STD \\\n",
+ "            --concurrency $CONCURRENCY \\\n",
+ "            --output-tokens-mean $OUTPUT_SEQUENCE_LENGTH \\\n",
+ "            --extra-inputs max_tokens:$OUTPUT_SEQUENCE_LENGTH \\\n",
+ "            --extra-inputs min_tokens:$OUTPUT_SEQUENCE_LENGTH \\\n",
+ "            --extra-inputs ignore_eos:true \\\n",
+ "            --tokenizer $TOKENIZER_PATH \\\n",
+ "            --measurement-interval 30000 \\\n",
+ "            --profile-export-file ${INPUT_SEQUENCE_LENGTH}_${OUTPUT_SEQUENCE_LENGTH}.json \\\n",
+ "            -- \\\n",
+ "            -v \\\n",
+ "            --max-threads=256\n",
+ "\n",
+ "    done\n",
+ "}\n",
+ "\n",
+ "# Iterate over all defined use cases and run the benchmark script for each\n",
+ "for description in \"${!useCases[@]}\"; do\n",
+ "    runBenchmark \"$description\"\n",
+ "done\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "603f1941-5206-4bca-a547-028e0ea50f21",
+ "metadata": {},
+ "source": [
+ "Next, we execute the bash script, which will carry out the defined benchmarking scenarios and gather the data in a default directory named `artifacts` under the current working directory."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "6cbfacd3-5755-4c0b-ae23-3abffceebbdb",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "%%bash\n",
+ "bash benchmark.sh"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c480b28d-6816-4c84-9124-bdc56fc81f41",
+ "metadata": {},
+ "source": [
+ "## Reading GenAI-Perf data\n",
+ "\n",
+ "Once performance benchmarking is done, we read and collect the results into a single data frame.",
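+ "\n",
+ "For reference, GenAI-Perf writes its results under `./artifacts`, one subdirectory per model/endpoint/concurrency combination. With the settings above, the layout should look roughly like the following (an abridged, illustrative sketch; exact directory and file names depend on the deployed model name and the `--profile-export-file` argument):\n",
+ "\n",
+ "```\n",
+ "artifacts/\n",
+ "├── meta_llama-3.1-8b-instruct-openai-chat-concurrency1/\n",
+ "│   ├── 200_5_genai_perf.json\n",
+ "│   ├── 200_200_genai_perf.json\n",
+ "│   └── ...\n",
+ "├── meta_llama-3.1-8b-instruct-openai-chat-concurrency2/\n",
+ "└── ...\n",
+ "```\n",
+ "\n",
+ "Each `*_genai_perf.json` file contains the metric statistics that the next cell reads, such as `inter_token_latency`, `time_to_first_token`, `request_latency`, `request_throughput`, and `output_token_throughput`, each with fields like `avg`, `p90`, and `p99`."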
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "ff69c986-0c9b-46a9-8f28-800cd61ab24d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "import json\n",
+ "import pandas as pd\n",
+ "\n",
+ "root_dir = \"./artifacts\"\n",
+ "directory_prefix = \"meta_llama-3.1-8b-instruct-openai-chat-concurrency\" # Change this to match the model actually deployed\n",
+ "\n",
+ "ISL_OSL_LIST = [\"200_5\", \"200_200\", \"1000_200\", \"200_1000\"]\n",
+ "CONCURRENCIES = [1, 2, 5, 10, 50, 100, 250]\n",
+ "df = pd.DataFrame()\n",
+ "\n",
+ "for concurrency in CONCURRENCIES:\n",
+ "    for isl_osl in ISL_OSL_LIST:\n",
+ "        filename = os.path.join(root_dir, f\"{directory_prefix}{concurrency}\", f\"{isl_osl}_genai_perf.json\")\n",
+ "\n",
+ "        # Open and read the file\n",
+ "        with open(filename, 'r') as file:\n",
+ "            data = json.load(file)\n",
+ "\n",
+ "        row = {\n",
+ "            'Inter Token 90th Percentile Latency (ms)': data[\"inter_token_latency\"][\"p90\"],\n",
+ "            'Inter Token 99th Percentile Latency (ms)': data[\"inter_token_latency\"][\"p99\"],\n",
+ "            'Inter Token Average Latency (ms)': data[\"inter_token_latency\"][\"avg\"],\n",
+ "            'Time to First Token 90th Percentile Latency (ms)': data[\"time_to_first_token\"][\"p90\"],\n",
+ "            'Time to First Token 99th Percentile Latency (ms)': data[\"time_to_first_token\"][\"p99\"],\n",
+ "            'Time to First Token Average Latency (ms)': data[\"time_to_first_token\"][\"avg\"],\n",
+ "            'Request 90th Percentile Latency (ms)': data[\"request_latency\"][\"p90\"],\n",
+ "            'Request 99th Percentile Latency (ms)': data[\"request_latency\"][\"p99\"],\n",
+ "            'Request Latency (ms)': data[\"request_latency\"][\"avg\"],\n",
+ "            'Requests per Second': data[\"request_throughput\"][\"avg\"],\n",
+ "            'Tokens per Second': data[\"output_token_throughput\"][\"avg\"],\n",
+ "            'Seq Length (ISL/OSL)': isl_osl,\n",
+ "            'Concurrency': concurrency\n",
+ "        }\n",
+ "\n",
+ "        row = meta_field | row\n",
+ "\n",
+ "        df = pd.concat([df, pd.DataFrame([row])], ignore_index=True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3a997b59-3d4e-4877-953c-088563aa8998",
+ "metadata": {},
+ "source": [
+ "## Exporting data to Excel format\n",
+ "\n",
+ "We next export the benchmarking data to a NIM TCO calculator-compatible format, which comprises both the metadata fields and the performance metric fields."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "f39710a9-882c-44aa-b428-d7ed2976eb23",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
\n", + "\n", + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
ModelGPU Typenumber_of_gpusPrecisionExecution ModeInter Token 90th Percentile Latency (ms)Inter Token 99th Percentile Latency (ms)Inter Token Average Latency (ms)Time to First Token 90th Percentile Latency (ms)Time to First Token 99th Percentile Latency (ms)Time to First Token Average Latency (ms)Request 90th Percentile Latency (ms)Request 99th Percentile Latency (ms)Request Latency (ms)Requests per SecondTokens per SecondSeq Length (ISL/OSL)Concurrency
0meta-llama/Meta-Llama-3-8B-InstructH100_80GB1BF16NIM-TRTLLM9.59422510.3844539.04113118.40917219.84372817.39371166.55711171.71656462.59936615.96136095.768158200_51
1meta-llama/Meta-Llama-3-8B-InstructH100_80GB1BF16NIM-TRTLLM10.88788811.26320010.61502718.01117738.89382518.1887442195.4008602265.8675402138.4097000.46759993.865874200_2001
2meta-llama/Meta-Llama-3-8B-InstructH100_80GB1BF16NIM-TRTLLM11.61893311.99843611.21038262.15880579.05302054.1334572390.4210832467.3646412294.2889860.43582987.5275011000_2001
3meta-llama/Meta-Llama-3-8B-InstructH100_80GB1BF16NIM-TRTLLM11.37618411.40223711.15512419.12046519.44114418.44150711367.16659911417.40178611155.8998360.08963489.584068200_10001
4meta-llama/Meta-Llama-3-8B-InstructH100_80GB1BF16NIM-TRTLLM10.99790413.01379210.07681333.62154540.71949830.21019686.358385100.11430480.59426324.799054148.794324200_52
\n", + "
" + ], + "text/plain": [ + " Model GPU Type number_of_gpus Precision \\\n", + "0 meta-llama/Meta-Llama-3-8B-Instruct H100_80GB 1 BF16 \n", + "1 meta-llama/Meta-Llama-3-8B-Instruct H100_80GB 1 BF16 \n", + "2 meta-llama/Meta-Llama-3-8B-Instruct H100_80GB 1 BF16 \n", + "3 meta-llama/Meta-Llama-3-8B-Instruct H100_80GB 1 BF16 \n", + "4 meta-llama/Meta-Llama-3-8B-Instruct H100_80GB 1 BF16 \n", + "\n", + " Execution Mode Inter Token 90th Percentile Latency (ms) \\\n", + "0 NIM-TRTLLM 9.594225 \n", + "1 NIM-TRTLLM 10.887888 \n", + "2 NIM-TRTLLM 11.618933 \n", + "3 NIM-TRTLLM 11.376184 \n", + "4 NIM-TRTLLM 10.997904 \n", + "\n", + " Inter Token 99th Percentile Latency (ms) Inter Token Average Latency (ms) \\\n", + "0 10.384453 9.041131 \n", + "1 11.263200 10.615027 \n", + "2 11.998436 11.210382 \n", + "3 11.402237 11.155124 \n", + "4 13.013792 10.076813 \n", + "\n", + " Time to First Token 90th Percentile Latency (ms) \\\n", + "0 18.409172 \n", + "1 18.011177 \n", + "2 62.158805 \n", + "3 19.120465 \n", + "4 33.621545 \n", + "\n", + " Time to First Token 99th Percentile Latency (ms) \\\n", + "0 19.843728 \n", + "1 38.893825 \n", + "2 79.053020 \n", + "3 19.441144 \n", + "4 40.719498 \n", + "\n", + " Time to First Token Average Latency (ms) \\\n", + "0 17.393711 \n", + "1 18.188744 \n", + "2 54.133457 \n", + "3 18.441507 \n", + "4 30.210196 \n", + "\n", + " Request 90th Percentile Latency (ms) Request 99th Percentile Latency (ms) \\\n", + "0 66.557111 71.716564 \n", + "1 2195.400860 2265.867540 \n", + "2 2390.421083 2467.364641 \n", + "3 11367.166599 11417.401786 \n", + "4 86.358385 100.114304 \n", + "\n", + " Request Latency (ms) Requests per Second Tokens per Second \\\n", + "0 62.599366 15.961360 95.768158 \n", + "1 2138.409700 0.467599 93.865874 \n", + "2 2294.288986 0.435829 87.527501 \n", + "3 11155.899836 0.089634 89.584068 \n", + "4 80.594263 24.799054 148.794324 \n", + "\n", + " Seq Length (ISL/OSL) Concurrency \n", + "0 200_5 1 \n", + "1 200_200 1 \n", + "2 1000_200 1 \n", + "3 200_1000 1 \n", + "4 200_5 2 " + ] + }, + "execution_count": 4, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "df.head()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5baf8e86-c8d1-42fc-94d3-15b592a5adc9", + "metadata": {}, + "outputs": [], + "source": [ + "!pip install openpyxl" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "125f78e6-cc51-4091-bb16-9a1d8403d6cf", + "metadata": {}, + "outputs": [], + "source": [ + "columns = [\n", + " 'Model',\n", + " 'GPU Type',\n", + " 'Seq Length (ISL/OSL)',\n", + " 'number_of_gpus',\n", + " 'Concurrency',\n", + " 'Precision',\n", + " 'Execution Mode',\n", + " 'Inter Token 90th Percentile Latency (ms)',\n", + " 'Inter Token 99th Percentile Latency (ms)',\n", + " 'Inter Token Average Latency (ms)',\n", + " 'Time to First Token 90th Percentile Latency (ms)',\n", + " 'Time to First Token 99th Percentile Latency (ms)',\n", + " 'Time to First Token Average Latency (ms)',\n", + " 'Request 90th Percentile Latency (ms)',\n", + " 'Request 99th Percentile Latency (ms)',\n", + " 'Request Latency (ms)',\n", + " 'Requests per Second',\n", + " 'Tokens per Second'\n", + " ]\n", + "df[columns].to_excel('data.xlsx', index=False)\n" + ] + }, + { + "cell_type": "markdown", + "id": "becc138b-6d92-49aa-a9a6-3ad31ad75c87", + "metadata": {}, + "source": [ + "## Importing the data to the TCO calculator\n", + "\n", + "The [NIM TCO calculator 
tool](https://docs.google.com/spreadsheets/d/1UF_sy89kcLIkdnK0dC-6QwcAgVDUV0ANJ22JnC2dW7g/edit?gid=0#gid=0) is implemented as a Google spreadsheet. You can open the Excel file above with Google Sheets, then copy the data rows into the \"data\" sheet of the TCO calculator. This completes the import and makes the new data available in the TCO calculator."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.3"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}