Update section on model config with auto precision #103

Merged: 1 commit, merged on May 1, 2025
27 changes: 21 additions & 6 deletions part2_advanced_config.ipynb
@@ -73,16 +73,31 @@
    "metadata": {},
    "source": [
     "## Make an hls4ml config & model\n",
-    "This time, we'll create a config with finer granularity. When we print the config dictionary, you'll notice that an entry is created for each named Layer of the model. See for the first layer, for example:\n",
-    "```LayerName:\n",
+    "\n",
+    "When the parameter `granularity` is set to `'name'` in the `config_from_keras_model` function, hls4ml automatically chooses the fixed-point precision of the output variables and accumulators for each layer. The accumulators are internal variables that accumulate values during matrix multiplications.\n",
+    "\n",
+    "This precision choice is **conservative**: it avoids overflow and truncation based solely on the input bitwidths, without considering the actual input values. It can therefore be overly conservative, especially when post-training quantization is employed or when the initial input bitwidth settings are relatively loose. In such cases, it is advisable to edit the configuration manually and set explicit widths, potentially iterating after profiling the data.\n",
+    "\n",
+    "In this notebook, we'll create a configuration with the finer granularity (`'name'`). When we print the config dictionary, you'll notice that an entry is created for each named layer of the model and the types are set to `auto`. For example, for the first layer we have:\n",
+    "```\n",
+    "LayerName:\n",
     "  ...\n",
     "  fc1:\n",
     "    Trace: False\n",
     "    Precision:\n",
-    "      weight: ap_fixed<16,6>\n",
-    "      bias: ap_fixed<16,6>\n",
-    "      result: ap_fixed<16,6>\n",
+    "      weight: auto\n",
+    "      bias: auto\n",
+    "      result: auto\n",
+    "      accum: auto\n",
     "    ReuseFactor: 1\n",
     "  ...\n",
     "```\n",
-    "Taken 'out of the box' this config will set all the parameters to the same settings as in part 1, but we can use it as a template to start modifying things."
+    "\n",
+    "In Part 1, all the parameters were set to the same default model precision. In this notebook, because of `granularity='name'` and the resulting `'auto'` precision selection:\n",
+    "- `weight` and `bias` are set to the default model precision;\n",
+    "- `result` and `accum` are set to conservative bit-widths that avoid overflow and truncation.\n",
+    "\n",
+    "Later on, you will see that you can use this configuration as a template to start modifying things."
    ]
   },
   {
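The workflow the notebook cell describes — generate a per-layer config, then override selected `auto` entries with explicit widths — can be sketched in plain Python. This is a minimal sketch: in the real flow the nested dict comes from `hls4ml.utils.config_from_keras_model(model, granularity='name')`, but here it is written out by hand to mirror the printed config above, and the override values `ap_fixed<8,2>` / `ap_fixed<16,6>` are illustrative assumptions, not recommendations.

```python
# Hand-written stand-in for the dict returned by
# hls4ml.utils.config_from_keras_model(model, granularity='name'),
# mirroring the per-layer entries shown in the printed config above.
config = {
    "LayerName": {
        "fc1": {
            "Trace": False,
            "Precision": {
                "weight": "auto",  # resolves to the default model precision
                "bias": "auto",    # resolves to the default model precision
                "result": "auto",  # conservative width chosen by hls4ml
                "accum": "auto",   # conservative width chosen by hls4ml
            },
            "ReuseFactor": 1,
        }
    }
}

# After profiling the data, pin explicit widths for a layer instead of 'auto'.
# These particular widths are illustrative, not tuned values.
prec = config["LayerName"]["fc1"]["Precision"]
prec["weight"] = "ap_fixed<8,2>"
prec["result"] = "ap_fixed<16,6>"

print(prec["weight"], prec["accum"])  # ap_fixed<8,2> auto
```

Entries left as `auto` are still resolved by hls4ml at compile time, so you only need to touch the layers you actually want to constrain.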