proposals/compilation-hints/Overview.md (10 additions, 3 deletions)
@@ -20,7 +20,7 @@ Based on the [branch hinting proposal](https://github.com/WebAssembly/branch-hinting)
Each family of hints is bundled in a respective custom section following the example of branch hints. These sections all have the naming convention `metadata.code.*` and follow the structure
* *function index* |U32|
- * a vector of hints with entries
+ * a vector of hints with entries (starting with the number of entries as |U32|)
  * *byte offset* |U32| of the hinted instruction from the beginning of the function body (0 for function level hints) (the function body begins at the first byte of its local declarations, the same as branch hints),
  * *hint length* |U32| indicating the number of bytes each hint requires,
  * *values* |U32| with the actual hint information
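As a non-normative illustration of this layout, the sketch below encodes one such per-function entry. It assumes that |U32| denotes an unsigned LEB128 value, as elsewhere in the Wasm binary format, and the helper names are hypothetical:

```python
def encode_u32(value: int) -> bytes:
    """Unsigned LEB128 encoding, as assumed for the |U32| fields above."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def encode_function_hints(func_index: int, hints: list[tuple[int, list[int]]]) -> bytes:
    """Encode one per-function entry of a metadata.code.* section.

    hints is a list of (byte_offset, values) pairs; byte_offset is 0 for
    function level hints.
    """
    payload = bytearray(encode_u32(func_index))      # *function index*
    payload += encode_u32(len(hints))                # number of hint entries
    for byte_offset, values in hints:
        encoded_values = b"".join(encode_u32(v) for v in values)
        payload += encode_u32(byte_offset)           # *byte offset*
        payload += encode_u32(len(encoded_values))   # *hint length* in bytes
        payload += encoded_values                    # *values*
    return bytes(payload)
```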
@@ -42,7 +42,7 @@ The section `metadata.code.compilation_priority` contains the priority in which
If a length larger than required to store the 2 values is present, only the first two values of the following hint data are evaluated while the rest is ignored. This leaves space for future extensions, e.g. grouping functions. Similarly, the *optimization priority* can be dropped if a length corresponding to only 1 value is given.
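A minimal sketch of a consumer that follows this rule, assuming the first value is the *compilation priority* (as the section name suggests) and the optional second one the *optimization priority*; the helper name is made up:

```python
def interpret_compilation_priority_hint(values: list[int]) -> tuple[int, int | None]:
    """Interpret the decoded values of one metadata.code.compilation_priority hint.

    Only the first two values are evaluated; anything beyond that is reserved
    for future extensions and ignored. The *optimization priority* may be absent.
    """
    compilation_priority = values[0]
    optimization_priority = values[1] if len(values) >= 2 else None
    return compilation_priority, optimization_priority
```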
- The *optimization priority* has no clear implication on whether a function is tiered up using a more optimized compiler. The smaller the value, to more often a function is expected to be running. So an engine can simply order the functions by priority and tier up the ones with the smallest *optimization priority* until the compilation budget is exceeded. The compilation budget might depend on the engine, compiler, available resources, how long the program has been running, etc. Using a threshold might look easier but relies heavily on the accuracy of the estimation, making it potentially less reliable.
+ The *optimization priority* has no clear implication on whether a function is tiered up using a more optimized compiler. The smaller the value, the more often a function is expected to be running. So an engine can simply order the functions by priority and tier up the ones with the smallest *optimization priority* until the compilation budget is exceeded. The compilation budget might depend on the engine, compiler, available resources, how long the program has been running, etc. Using a threshold might look easier but relies heavily on the accuracy of the estimation, making it potentially less reliable.
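One possible way an engine could act on this, sketched under the assumption of a simple per-function compilation cost estimate and a fixed budget (both engine-specific and not part of the proposal):

```python
def select_functions_to_tier_up(
    optimization_priority: dict[int, int],   # func index -> hinted priority (smaller = hotter)
    estimated_cost: dict[int, int],          # func index -> estimated compilation cost
    budget: int,                             # engine-specific compilation budget
) -> list[int]:
    """Order functions by *optimization priority* and tier up until the budget runs out."""
    selected = []
    for func in sorted(optimization_priority, key=optimization_priority.get):
        cost = estimated_cost.get(func, 1)
        if cost > budget:
            break                            # compilation budget exceeded
        budget -= cost
        selected.append(func)
    return selected
```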
During profiling runs, engines can easily generate the *optimization priority*, either from sampling-based profiling, by simply counting the number of samples in which the function is on top of the stack, or through explicit instrumentation with counters. The latter is a little more contrived: function call counters alone might not be sufficient, as the total time spent within a function also depends heavily on its size and the jumps within it. It is therefore a good idea to at least estimate the time spent by adding counters to loops; multiplied by the loop sizes, these counters can be a sufficiently accurate estimator of the number of instructions executed.
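A rough sketch of the counter-based estimate described above; the data layout and names are hypothetical, and the loop body size stands in for a real instruction count:

```python
def optimization_priorities_from_loop_counters(
    loop_counters: dict[int, list[tuple[int, int]]],  # func index -> [(iterations, loop body size)]
) -> dict[int, int]:
    """Rank functions by estimated instructions executed; hotter functions get smaller values."""
    estimated_instructions = {
        func: sum(iterations * body_size for iterations, body_size in loops)
        for func, loops in loop_counters.items()
    }
    hottest_first = sorted(estimated_instructions, key=estimated_instructions.get, reverse=True)
    return {func: rank for rank, func in enumerate(hottest_first)}
```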
@@ -65,7 +65,14 @@ The above example is equivalent to
and tools can produce one or the other.
- To produce the special value of 127 for the optimization value, one can pass `run_once` without any number instead of the `optimization` annotation.
+ To produce the special value of 127 for the optimization value, one can pass `run_once` without any number instead of the `optimization` annotation, e.g.
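As a non-normative sketch of how a tool could lower these annotations into the encoded value (the helper name and calling convention are made up; only the special value 127 and the `run_once`/`optimization` names come from the proposal text):

```python
def encoded_optimization_value(annotation: str, number: int | None = None) -> int:
    """Map a text-format annotation to the *optimization value* to encode."""
    if annotation == "run_once":
        return 127                   # special value: function expected to run only once
    if annotation == "optimization":
        if number is None:
            raise ValueError("`optimization` requires an explicit number")
        return number
    raise ValueError(f"unknown annotation: {annotation}")
```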