|
| 1 | +{ |
| 2 | + "cells": [ |
| 3 | + { |
| 4 | + "cell_type": "markdown", |
| 5 | + "id": "7a765629", |
| 6 | + "metadata": {}, |
| 7 | + "source": [ |
| 8 | + "# Configuring Chunking Settings For Inference Endpoints\n", |
| 9 | + "\n", |
| 10 | +    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/elastic/elasticsearch-labs/blob/main/notebooks/document-chunking/configuring-chunking-settings-for-inference-endpoints.ipynb)\n", |
| 11 | + "\n", |
| 12 | + "\n", |
| 13 | + "Learn how to configure [chunking settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/inference-apis.html#infer-chunking-config) for [Inference API](https://www.elastic.co/guide/en/elasticsearch/reference/current/inference-apis.html) endpoints." |
| 14 | + ] |
| 15 | + }, |
| 16 | + { |
| 17 | + "cell_type": "markdown", |
| 18 | + "id": "f9101eb9", |
| 19 | + "metadata": {}, |
| 20 | + "source": [ |
| 21 | + "# 🧰 Requirements\n", |
| 22 | + "\n", |
| 23 | + "For this example, you will need:\n", |
| 24 | + "\n", |
| 25 | + "- An Elastic deployment:\n", |
| 26 | + " - We'll be using [Elastic Cloud](https://www.elastic.co/guide/en/cloud/current/ec-getting-started.html) for this example (available with a [free trial](https://cloud.elastic.co/registration?onboarding_token=vectorsearch&utm_source=github&utm_content=elasticsearch-labs-notebook))\n", |
| 27 | + "\n", |
| 28 | + "- Elasticsearch 8.16 or above.\n", |
| 29 | + "\n", |
| 30 | + "- Python 3.7 or above." |
| 31 | + ] |
| 32 | + }, |
| 33 | + { |
| 34 | + "cell_type": "markdown", |
| 35 | + "id": "4cd69cc0", |
| 36 | + "metadata": {}, |
| 37 | + "source": [ |
| 38 | + "# Create Elastic Cloud deployment or serverless project\n", |
| 39 | + "\n", |
| 40 | + "If you don't have an Elastic Cloud deployment, sign up [here](https://cloud.elastic.co/registration?utm_source=github&utm_content=elasticsearch-labs-notebook) for a free trial." |
| 41 | + ] |
| 42 | + }, |
| 43 | + { |
| 44 | + "cell_type": "markdown", |
| 45 | + "id": "f27dffbf", |
| 46 | + "metadata": {}, |
| 47 | + "source": [ |
| 48 | + "# Install packages and connect with Elasticsearch Client\n", |
| 49 | + "\n", |
| 50 | + "To get started, we'll need to connect to our Elastic deployment using the Python client (version 8.12.0 or above).\n", |
| 51 | + "Because we're using an Elastic Cloud deployment, we'll use the **Cloud ID** to identify our deployment.\n", |
| 52 | + "\n", |
| 53 | + "First we need to `pip` install the following packages:\n", |
| 54 | + "\n", |
| 55 | + "- `elasticsearch`" |
| 56 | + ] |
| 57 | + }, |
| 58 | + { |
| 59 | + "cell_type": "code", |
| 60 | + "execution_count": null, |
| 61 | + "id": "8c4b16bc", |
| 62 | + "metadata": {}, |
| 63 | + "outputs": [], |
| 64 | + "source": [ |
| 65 | + "!pip install elasticsearch" |
| 66 | + ] |
| 67 | + }, |
| 68 | + { |
| 69 | + "cell_type": "markdown", |
| 70 | + "id": "41ef96b3", |
| 71 | + "metadata": {}, |
| 72 | + "source": [ |
| 73 | +    "Next, we need to import the modules we'll use. 🔐 NOTE: `getpass` enables us to securely prompt the user for credentials without echoing them to the terminal." |
| 74 | + ] |
| 75 | + }, |
| 76 | + { |
| 77 | + "cell_type": "code", |
| 78 | + "execution_count": 13, |
| 79 | + "id": "690ff9af", |
| 80 | + "metadata": {}, |
| 81 | + "outputs": [], |
| 82 | + "source": [ |
| 83 | + "from elasticsearch import Elasticsearch\n", |
| 84 | + "from getpass import getpass" |
| 85 | + ] |
| 86 | + }, |
| 87 | + { |
| 88 | + "cell_type": "markdown", |
| 89 | + "id": "23fa2b6c", |
| 90 | + "metadata": {}, |
| 91 | + "source": [ |
| 92 | + "Now we can instantiate the Python Elasticsearch client.\n", |
| 93 | + "\n", |
| 94 | + "First we prompt the user for their password and Cloud ID.\n", |
| 95 | + "Then we create a `client` object that instantiates an instance of the `Elasticsearch` class." |
| 96 | + ] |
| 97 | + }, |
| 98 | + { |
| 99 | + "cell_type": "code", |
| 100 | + "execution_count": null, |
| 101 | + "id": "195cc597", |
| 102 | + "metadata": {}, |
| 103 | + "outputs": [], |
| 104 | + "source": [ |
| 105 | + "# https://www.elastic.co/search-labs/tutorials/install-elasticsearch/elastic-cloud#finding-your-cloud-id\n", |
| 106 | + "ELASTIC_CLOUD_ID = getpass(\"Elastic Cloud ID: \")\n", |
| 107 | + "\n", |
| 108 | + "# https://www.elastic.co/search-labs/tutorials/install-elasticsearch/elastic-cloud#creating-an-api-key\n", |
| 109 | + "ELASTIC_API_KEY = getpass(\"Elastic Api Key: \")\n", |
| 110 | + "\n", |
| 111 | + "# Create the client instance\n", |
| 112 | + "client = Elasticsearch(\n", |
| 113 | + " # For local development\n", |
| 114 | + " # hosts=[\"http://localhost:9200\"],\n", |
| 115 | + " cloud_id=ELASTIC_CLOUD_ID,\n", |
| 116 | + " api_key=ELASTIC_API_KEY,\n", |
| 117 | + " request_timeout=120,\n", |
| 118 | + " max_retries=10,\n", |
| 119 | + " retry_on_timeout=True,\n", |
| 120 | + ")" |
| 121 | + ] |
| 122 | + }, |
| 123 | + { |
| 124 | + "cell_type": "markdown", |
| 125 | + "id": "b1115ffb", |
| 126 | + "metadata": {}, |
| 127 | + "source": [ |
| 128 | + "### Test the Client\n", |
| 129 | + "Before you continue, confirm that the client has connected with this test." |
| 130 | + ] |
| 131 | + }, |
| 132 | + { |
| 133 | + "cell_type": "code", |
| 134 | + "execution_count": null, |
| 135 | + "id": "cc0de5ea", |
| 136 | + "metadata": {}, |
| 137 | + "outputs": [], |
| 138 | + "source": [ |
| 139 | + "print(client.info())" |
| 140 | + ] |
| 141 | + }, |
| 142 | + { |
| 143 | + "cell_type": "markdown", |
| 144 | + "id": "659c5890", |
| 145 | + "metadata": {}, |
| 146 | + "source": [ |
| 147 | +    "Refer to [the documentation](https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/connecting.html#connect-self-managed-new) to learn how to connect to a self-managed deployment, including how to authenticate with an API key." |
| 150 | + ] |
| 151 | + }, |
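|  | +  { |
|  | +   "cell_type": "markdown", |
|  | +   "id": "3b8e2f1a", |
|  | +   "metadata": {}, |
|  | +   "source": [ |
|  | +    "For reference, here is a minimal sketch of connecting to a self-managed deployment instead of Elastic Cloud. It's commented out because this notebook targets Elastic Cloud, and the host URL and CA certificate path are placeholders rather than values from this guide." |
|  | +   ] |
|  | +  }, |
|  | +  { |
|  | +   "cell_type": "code", |
|  | +   "execution_count": null, |
|  | +   "id": "9f4c7d2e", |
|  | +   "metadata": {}, |
|  | +   "outputs": [], |
|  | +   "source": [ |
|  | +    "# A sketch for a self-managed cluster; the host URL and CA certificate\n", |
|  | +    "# path below are placeholders. Uncomment and adjust to use it.\n", |
|  | +    "# self_managed_client = Elasticsearch(\n", |
|  | +    "#     hosts=[\"https://localhost:9200\"],\n", |
|  | +    "#     ca_certs=\"/path/to/http_ca.crt\",\n", |
|  | +    "#     api_key=ELASTIC_API_KEY,\n", |
|  | +    "# )" |
|  | +   ] |
|  | +  }, |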
| 152 | + { |
| 153 | + "cell_type": "markdown", |
| 154 | + "id": "840d92f0", |
| 155 | + "metadata": {}, |
| 156 | + "source": [ |
| 157 | + "<a name=\"create-the-inference-endpoint\"></a>\n", |
| 158 | + "## Create the inference endpoint object\n", |
| 159 | + "\n", |
| 160 | + "Let's create the inference endpoint by using the [Create Inference API](https://www.elastic.co/guide/en/elasticsearch/reference/current/put-inference-api.html#put-inference-api-desc).\n", |
| 161 | + "\n", |
| 162 | + "In this example, you'll be creating an inference endpoint for the [ELSER integration](https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-service-elser.html) which will deploy Elastic's [ELSER model](https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-elser.html) within your cluster. Chunking settings are configurable for any inference endpoint with an embedding task type. A full list of available integrations can be found in the [Create Inference API](https://www.elastic.co/guide/en/elasticsearch/reference/current/put-inference-api.html#put-inference-api-desc) documentation.\n", |
| 163 | + "\n", |
| 164 | + "To configure chunking settings, the request body must contain a `chunking_settings` map with a `strategy` value along with any required values for the selected chunking strategy. For this example, you'll be configuring chunking settings for a `sentence` strategy with a maximum chunk size of 25 words and 1 sentence overlap between chunks. For more information on available chunking strategies and their configurable values, see the [chunking strategies documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/inference-apis.html#_chunking_strategies)." |
| 165 | + ] |
| 166 | + }, |
| 167 | + { |
| 168 | + "cell_type": "code", |
| 169 | + "execution_count": null, |
| 170 | + "id": "0d007737", |
| 171 | + "metadata": {}, |
| 172 | + "outputs": [], |
| 173 | + "source": [ |
| 174 | + "client.inference.put(\n", |
| 175 | + " task_type=\"sparse_embedding\",\n", |
| 176 | + " inference_id=\"my_elser_endpoint\",\n", |
| 177 | + " body={\n", |
| 178 | + " \"service\": \"elasticsearch\",\n", |
| 179 | + " \"service_settings\": {\n", |
| 180 | + " \"num_allocations\": 1,\n", |
| 181 | + " \"num_threads\": 1,\n", |
| 182 | + " \"model_id\": \".elser_model_2\",\n", |
| 183 | + " },\n", |
| 184 | + " \"chunking_settings\": {\n", |
| 185 | + " \"strategy\": \"sentence\",\n", |
| 186 | + " \"max_chunk_size\": 25,\n", |
| 187 | + " \"sentence_overlap\": 1,\n", |
| 188 | + " },\n", |
| 189 | + " },\n", |
| 190 | + ")" |
| 191 | + ] |
| 192 | + }, |
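|  | +  { |
|  | +   "cell_type": "markdown", |
|  | +   "id": "5a6b7c8d", |
|  | +   "metadata": {}, |
|  | +   "source": [ |
|  | +    "As a point of comparison, the cell below sketches the same endpoint configured with the `word` chunking strategy, which takes `max_chunk_size` and `overlap` (the number of words repeated between consecutive chunks) instead of `sentence_overlap`. The endpoint name `my_elser_endpoint_word` and the specific values are illustrative choices, not part of the example above." |
|  | +   ] |
|  | +  }, |
|  | +  { |
|  | +   "cell_type": "code", |
|  | +   "execution_count": null, |
|  | +   "id": "1e2f3a4b", |
|  | +   "metadata": {}, |
|  | +   "outputs": [], |
|  | +   "source": [ |
|  | +    "# A sketch of the word-based chunking strategy; the endpoint name and\n", |
|  | +    "# values are illustrative. \"overlap\" can be at most half of \"max_chunk_size\".\n", |
|  | +    "client.inference.put(\n", |
|  | +    "    task_type=\"sparse_embedding\",\n", |
|  | +    "    inference_id=\"my_elser_endpoint_word\",\n", |
|  | +    "    body={\n", |
|  | +    "        \"service\": \"elasticsearch\",\n", |
|  | +    "        \"service_settings\": {\n", |
|  | +    "            \"num_allocations\": 1,\n", |
|  | +    "            \"num_threads\": 1,\n", |
|  | +    "            \"model_id\": \".elser_model_2\",\n", |
|  | +    "        },\n", |
|  | +    "        \"chunking_settings\": {\n", |
|  | +    "            \"strategy\": \"word\",\n", |
|  | +    "            \"max_chunk_size\": 50,  # maximum words per chunk\n", |
|  | +    "            \"overlap\": 10,  # words shared between consecutive chunks\n", |
|  | +    "        },\n", |
|  | +    "    },\n", |
|  | +    ")" |
|  | +   ] |
|  | +  }, |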
| 193 | + { |
| 194 | + "cell_type": "markdown", |
| 195 | + "id": "f01de885", |
| 196 | + "metadata": {}, |
| 197 | + "source": [ |
| 198 | + "<a name=\"create-the-index\"></a>\n", |
| 199 | + "## Create the index\n", |
| 200 | + "\n", |
| 201 | + "To see the chunking settings you've configured in action, you'll need to ingest a document into a semantic text field of an index. Let's create an index with a semantic text field linked to the inference endpoint created in the previous step." |
| 202 | + ] |
| 203 | + }, |
| 204 | + { |
| 205 | + "cell_type": "code", |
| 206 | + "execution_count": null, |
| 207 | + "id": "0eed3e3b", |
| 208 | + "metadata": {}, |
| 209 | + "outputs": [], |
| 210 | + "source": [ |
| 211 | + "client.indices.create(\n", |
| 212 | + " index=\"my_index\",\n", |
| 213 | + " mappings={\n", |
| 214 | + " \"properties\": {\n", |
| 215 | + " \"infer_field\": {\n", |
| 216 | + " \"type\": \"semantic_text\",\n", |
| 217 | + " \"inference_id\": \"my_elser_endpoint\",\n", |
| 218 | + " }\n", |
| 219 | + " }\n", |
| 220 | + " },\n", |
| 221 | + ")" |
| 222 | + ] |
| 223 | + }, |
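|  | +  { |
|  | +   "cell_type": "markdown", |
|  | +   "id": "7c9d0e1f", |
|  | +   "metadata": {}, |
|  | +   "source": [ |
|  | +    "Optionally, you can verify that `infer_field` was mapped as a `semantic_text` field linked to the inference endpoint:" |
|  | +   ] |
|  | +  }, |
|  | +  { |
|  | +   "cell_type": "code", |
|  | +   "execution_count": null, |
|  | +   "id": "2a3b4c5d", |
|  | +   "metadata": {}, |
|  | +   "outputs": [], |
|  | +   "source": [ |
|  | +    "# Retrieve the index mapping to confirm the semantic_text field\n", |
|  | +    "client.indices.get_mapping(index=\"my_index\")" |
|  | +   ] |
|  | +  }, |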
| 224 | + { |
| 225 | + "cell_type": "markdown", |
| 226 | + "id": "51ae72e4", |
| 227 | + "metadata": {}, |
| 228 | + "source": [ |
| 229 | + "<a name=\"ingest-a-document\"></a>\n", |
| 230 | + "## Ingest a document\n", |
| 231 | + "\n", |
| 232 | + "Now let's ingest a document into the index created in the previous step.\n", |
| 233 | + "\n", |
| 234 | +    "Note: It may take some time for Elasticsearch to allocate nodes to the ELSER model deployment that starts when you create the inference endpoint. You will need to wait until the deployment is allocated to a node before the ingest request can succeed; the optional cell below sketches one way to wait." |
| 235 | + ] |
| 236 | + }, |
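|  | +  { |
|  | +   "cell_type": "markdown", |
|  | +   "id": "6d7e8f90", |
|  | +   "metadata": {}, |
|  | +   "source": [ |
|  | +    "This optional cell polls the trained model stats until the ELSER deployment reports a `started` state. The response keys used here are an assumption about the stats format, so adjust them if your Elasticsearch version reports deployment state differently." |
|  | +   ] |
|  | +  }, |
|  | +  { |
|  | +   "cell_type": "code", |
|  | +   "execution_count": null, |
|  | +   "id": "0a1b2c3d", |
|  | +   "metadata": {}, |
|  | +   "outputs": [], |
|  | +   "source": [ |
|  | +    "import time\n", |
|  | +    "\n", |
|  | +    "# Optional: poll until the ELSER deployment reports a \"started\" state.\n", |
|  | +    "# The response keys below are an assumption about the stats format.\n", |
|  | +    "while True:\n", |
|  | +    "    stats = client.ml.get_trained_models_stats(model_id=\".elser_model_2\")\n", |
|  | +    "    deployment = stats[\"trained_model_stats\"][0].get(\"deployment_stats\", {})\n", |
|  | +    "    if deployment.get(\"state\") == \"started\":\n", |
|  | +    "        print(\"ELSER deployment started\")\n", |
|  | +    "        break\n", |
|  | +    "    print(\"Waiting for the ELSER deployment to start...\")\n", |
|  | +    "    time.sleep(5)" |
|  | +   ] |
|  | +  }, |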
| 237 | + { |
| 238 | + "cell_type": "code", |
| 239 | + "execution_count": null, |
| 240 | + "id": "b8ecaec0", |
| 241 | + "metadata": {}, |
| 242 | + "outputs": [], |
| 243 | + "source": [ |
| 244 | + "client.index(\n", |
| 245 | + " index=\"my_index\",\n", |
| 246 | + " document={\n", |
| 247 | + " \"infer_field\": \"This is some sample document data. The data is being used to demonstrate the configurable chunking settings feature. The configured chunking settings will determine how this text is broken down into chunks to help increase inference accuracy.\"\n", |
| 248 | + " },\n", |
| 249 | + ")" |
| 250 | + ] |
| 251 | + }, |
| 252 | + { |
| 253 | + "cell_type": "markdown", |
| 254 | + "id": "ccc7ca3a", |
| 255 | + "metadata": {}, |
| 256 | + "source": [ |
| 257 | + "<a name=\"view-the-chunks\"></a>\n", |
| 258 | + "## View the chunks\n", |
| 259 | + "\n", |
| 260 | +    "The generated chunks and their corresponding inference results are stored in the indexed document under the key `chunks` within the `_inference_fields` metafield. The chunks are stored as a list of character offset values. Let's look at the chunks generated when ingesting the document in the previous step." |
| 261 | + ] |
| 262 | + }, |
| 263 | + { |
| 264 | + "cell_type": "code", |
| 265 | + "execution_count": null, |
| 266 | + "id": "58dc9019", |
| 267 | + "metadata": {}, |
| 268 | + "outputs": [], |
| 269 | + "source": [ |
| 270 | + "client.search(\n", |
| 271 | + " index=\"my_index\",\n", |
| 272 | + " body={\"size\": 100, \"query\": {\"match_all\": {}}, \"fields\": [\"_inference_fields\"]},\n", |
| 273 | + ")" |
| 274 | + ] |
| 275 | + }, |
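|  | +  { |
|  | +   "cell_type": "markdown", |
|  | +   "id": "4e5f6a7b", |
|  | +   "metadata": {}, |
|  | +   "source": [ |
|  | +    "To inspect the results more directly, the cell below captures the search response and prints the `_inference_fields` value for each hit. Where the metafield appears in a hit can vary, so the lookup hedges by checking both `fields` and `_source` (an assumption, not behavior confirmed by this guide)." |
|  | +   ] |
|  | +  }, |
|  | +  { |
|  | +   "cell_type": "code", |
|  | +   "execution_count": null, |
|  | +   "id": "8b9c0d1e", |
|  | +   "metadata": {}, |
|  | +   "outputs": [], |
|  | +   "source": [ |
|  | +    "response = client.search(\n", |
|  | +    "    index=\"my_index\",\n", |
|  | +    "    body={\"size\": 100, \"query\": {\"match_all\": {}}, \"fields\": [\"_inference_fields\"]},\n", |
|  | +    ")\n", |
|  | +    "\n", |
|  | +    "for hit in response[\"hits\"][\"hits\"]:\n", |
|  | +    "    # Where the metafield lands in the hit is an assumption; check both\n", |
|  | +    "    # \"fields\" and \"_source\" to be safe.\n", |
|  | +    "    chunk_info = hit.get(\"fields\", {}).get(\"_inference_fields\") or hit.get(\n", |
|  | +    "        \"_source\", {}\n", |
|  | +    "    ).get(\"_inference_fields\")\n", |
|  | +    "    print(hit[\"_id\"], chunk_info)" |
|  | +   ] |
|  | +  }, |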
| 276 | + { |
| 277 | + "cell_type": "markdown", |
| 278 | + "id": "193f5b8d", |
| 279 | + "metadata": {}, |
| 280 | + "source": [ |
| 281 | + "<a name=\"conclusion\"></a>\n", |
| 282 | + "## Conclusion\n", |
| 283 | + "\n", |
| 284 | + "You've now learned how to configure chunking settings for an inference endpoint! For more information about configurable chunking, see the [configuring chunking](https://www.elastic.co/guide/en/elasticsearch/reference/current/inference-apis.html#infer-chunking-config) documentation." |
| 285 | + ] |
| 286 | + } |
| 287 | + ], |
| 288 | + "metadata": { |
| 289 | + "kernelspec": { |
| 290 | + "display_name": ".venv", |
| 291 | + "language": "python", |
| 292 | + "name": "python3" |
| 293 | + }, |
| 294 | + "language_info": { |
| 295 | + "codemirror_mode": { |
| 296 | + "name": "ipython", |
| 297 | + "version": 3 |
| 298 | + }, |
| 299 | + "file_extension": ".py", |
| 300 | + "mimetype": "text/x-python", |
| 301 | + "name": "python", |
| 302 | + "nbconvert_exporter": "python", |
| 303 | + "pygments_lexer": "ipython3", |
| 304 | + "version": "3.13.0" |
| 305 | + } |
| 306 | + }, |
| 307 | + "nbformat": 4, |
| 308 | + "nbformat_minor": 5 |
| 309 | +} |