We currently start with the smallest permutation size, try every permutation at that size, then progressively increase the size. This guarantees that we find the permutation with the smallest size.
However, if the error decreases monotonically as the size increases (more bits lead to better precision), we could use a binary search instead of a linear one, dramatically speeding up the optimization pass. To that end, the permutations could be grouped into buckets by size, and the binary search could run over the buckets.
If the error is sometimes smaller with fewer bits, the assumption above will not hold and a binary search may yield a sub-optimal result. Depending on how close an approximation it yields, it might still be good enough. The full linear search could then be reserved for the highest compression level (or high and above).
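The bucketed binary search described above could be sketched as follows. This is a minimal illustration, not the actual implementation: `buckets`, `error_fn`, `threshold`, and the helper names are all hypothetical, and it assumes the best error per bucket really does decrease monotonically as the size increases.

```python
def best_in_bucket(bucket, error_fn):
    # Within a bucket (all permutations of the same size), we still
    # scan linearly to find the permutation with the lowest error.
    return min(bucket, key=error_fn)

def smallest_passing_size(buckets, error_fn, threshold):
    """Binary search over buckets ordered by ascending size.

    Assumes the best error per bucket decreases monotonically as
    size increases; if that assumption fails, the result may be
    sub-optimal, as noted above.
    """
    lo, hi = 0, len(buckets) - 1
    result = None
    while lo <= hi:
        mid = (lo + hi) // 2
        candidate = best_in_bucket(buckets[mid], error_fn)
        if error_fn(candidate) <= threshold:
            result = candidate  # meets the error budget; try smaller sizes
            hi = mid - 1
        else:
            lo = mid + 1        # too much error; need more bits
    return result
```

Each bucket still pays a linear scan internally, but the number of buckets visited drops from O(n) to O(log n), which is where the speed-up would come from.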