Improve Robustness and Performance #130
Conversation
Maybe StaticArrays.jl, rather than tuples?
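For illustration, a minimal sketch of what that could look like (not code from this PR; the grid and coefficients below are simply the standard 4-point first-derivative stencil): storing the stencil in an `SVector` keeps the length in the type, so broadcasting over it stays allocation-free.

```julia
using StaticArrays

# Hypothetical alternative to tuples: fixed-size, stack-allocated vectors.
grid  = SVector(-2, -1, 1, 2)
coefs = SVector(1/12, -2/3, 2/3, -1/12)   # 4-point central coefficients for f'

h = 1e-3
estimate = sum(coefs .* sin.(1 .+ h .* grid)) / h   # ≈ cos(1)
```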
With the current spaghetti code, allocations are down to zero, which achieves a 4x speed-up (first benchmark below) compared to the implementation before this PR (second benchmark below):

```julia
julia> using FiniteDifferences, BenchmarkTools

julia> m = central_fdm(10, 2, adapt=2)
FiniteDifferenceMethod:
  order of method:     10
  order of derivative: 2
  grid:                [-5, -4, -3, -2, -1, 1, 2, 3, 4, 5]
  coefficients:        [-0.011298500881834215, 0.11119929453262786, -0.4830357142857143, 1.1558201058201059, -0.7726851851851851, -0.7726851851851851, 1.1558201058201059, -0.4830357142857143, 0.11119929453262786, -0.011298500881834215]

julia> m(sin, 1) + sin(1)
3.8624659026709196e-13

julia> @benchmark $m(sin, 1)
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     916.167 ns (0.00% GC)
  median time:      926.521 ns (0.00% GC)
  mean time:        1.027 μs (0.00% GC)
  maximum time:     3.898 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     48
```

For comparison, the same benchmark before the changes in this PR:

```julia
julia> m(sin, 1) + sin(1)
2.97983859809392e-13

julia> @benchmark $m(sin, 1)
BenchmarkTools.Trial:
  memory estimate:  1.80 KiB
  allocs estimate:  92
  --------------
  minimum time:     3.824 μs (0.00% GC)
  median time:      3.942 μs (0.00% GC)
  mean time:        4.196 μs (1.39% GC)
  maximum time:     207.103 μs (97.42% GC)
  --------------
  samples:          10000
  evals/sample:     8
```
We need to drop Julia-Nightly from the CI (out of scope for this PR).
```julia
factor::Float64
max_range::Float64
∇f_magnitude_mult::Float64
f_error_mult::Float64
```
Would it make sense if an `AdaptedFiniteDifferenceMethod` had just two fields: a `bounds_estimator` and an `inner` that is an `UnadaptedFiniteDifferenceMethod`? Then a bunch of methods for the `AdaptedFiniteDifferenceMethod` would just delegate down to the `inner`, sometimes passing in a bound estimate.
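A minimal sketch of that layout (the type, field, and method names here are illustrative assumptions, not the code in this PR):

```julia
# Hypothetical two-field design: the adapted method wraps an unadapted one.
struct UnadaptedFiniteDifferenceMethod
    grid::Vector{Int}
    coefs::Vector{Float64}
    factor::Float64
    max_range::Float64
end

struct AdaptedFiniteDifferenceMethod{B}
    bound_estimator::B                       # estimates a bound on the relevant derivative
    inner::UnadaptedFiniteDifferenceMethod   # performs the actual finite differencing
end

# Shared behaviour is implemented once for the unadapted type...
max_range(m::UnadaptedFiniteDifferenceMethod) = m.max_range

# ...and the adapted type simply delegates to its `inner` field.
max_range(m::AdaptedFiniteDifferenceMethod) = max_range(m.inner)
```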
Certainly a good alternative. The downside is that this would introduce a bunch of accessor methods like

```julia
_get_factor(m::UnadaptedFiniteDifferenceMethod) = m.factor
_get_factor(m::AdaptedFiniteDifferenceMethod) = _get_factor(m.inner)
```

which are not necessary right now. I also considered this, but decided to copy the fields, because that seemed like the simpler option and the duplication appears fairly harmless.

Happy to go with whichever option you find better.
Fair enough, let's merge as is.
We can always change it later.
src/methods.jl
```julia
coefs = T.(coefs)
return sum(fs .* coefs) ./ T(step)^Q
```
I have an MWE: JuliaLang/julia#39151
```julia
# For high precision on the `\`, we use `Rational`, and to prevent overflows we use
# `BigInt`. At the end we go to `Float64` for fast floating point math, rather than
# rational math.
C = [Rational{BigInt}(g^i) for i in 0:(p - 1), g in grid]
```
Was `Int128` not doing it for us?
No, `Int128` unfortunately hits overflow for orders bigger than 12.
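For context, a small self-contained sketch of the kind of computation involved (not the package's actual function; the right-hand side below is an assumption based on the standard derivation of finite-difference coefficients). The matrix entries themselves are small, but the exact rational solve produces large intermediate numerators and denominators, which is where fixed-width integer types such as `Int128` run out of room for high orders, whereas `BigInt` never overflows.

```julia
# Coefficients of a finite-difference method: solve the Vandermonde-type system
#   sum_g coefs[g] * g^i == i! * (i == q)   for i = 0, ..., p - 1,
# exactly in rational arithmetic, converting to Float64 only at the very end.
p = 10                                    # order of the method (number of grid points)
q = 2                                     # order of the derivative
grid = [-5, -4, -3, -2, -1, 1, 2, 3, 4, 5]

C   = [Rational{BigInt}(g^i) for i in 0:(p - 1), g in grid]
rhs = [Rational{BigInt}(i == q ? factorial(i) : 0) for i in 0:(p - 1)]

coefs = Float64.(C \ rhs)                 # fast floating-point math from here on
```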
```julia
step_max = m.max_range / maximum(abs.(m.grid))
if step > step_max
    step = step_max
    acc = NaN
```
Is it documented somewhere what this means?
Also, should we just stop computing the accuracy? It's never used nor exposed to the user AFAICT.
The function `estimate_step` is exposed to the user, and it promises to also estimate the accuracy. You're right that `NaN` isn't documented, which I'll add. (Edit: Added.)
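For illustration only, a hedged sketch of how this might look from the user's side; the exact signature and return value of `estimate_step` are assumptions based on this discussion and have not been verified against the package:

```julia
using FiniteDifferences

m = central_fdm(5, 1; max_range=1e-3)

# Assumed interface: returns the adapted step size together with an accuracy estimate.
step, acc = FiniteDifferences.estimate_step(m, sin, 1.0)

# When `max_range` forces the step to be clamped (as in the excerpt above),
# the accuracy estimate is reported as NaN rather than a misleading number.
isnan(acc)
```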
Co-authored-by: Lyndon White <[email protected]>
@oxinabox The ChainRules integration test fails because ChainRules and ChainRulesUtils both have a …
There is not.
Fair enough, I guess we can just check this PR locally. Running the ChainRules tests locally, it appears that some tests fail due to singularities, but those failures are resolved by changing `const _fdm = central(5, 1)` to `const _fdm = central(5, 1; max_range=1e-2)` in …

Are we happy to merge this?
This reverts commit 9be39b8.
Yes.
The current implementation is suboptimal in several ways:

- The function `f` is evaluated at `x` and possibly again around `x` to estimate the round-off error. This can be avoided by reusing the function evaluations of the bound estimator.
- The bound estimator may be run twice if it returns zero identically, which requires even more function evaluations. This can be avoided by increasing the order of the bound estimator by one so that it can estimate the derivative in a neighbourhood of `x` instead of just at `x`.
- The functions `estimate_magnitude` and `estimate_roundoff_error`, whilst fine in principle, feel a little ad hoc and complicate the logic.

This PR gets rid of `estimate_magnitude` and `estimate_roundoff_error` entirely by more cleverly using the function evaluations of the bound estimator (`_estimate_magnitudes`). Moreover, the PR makes sure that unnecessary allocations are avoided where possible.

Improvements:

- `extrapolate_fdm`.
- `README.md` to `docs/src/index.md` in `docs/make.jl`.

Additions:

- `max_range` keyword that specifies how far `f` is allowed to be evaluated from `x`; see the usage sketch below.
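A hedged sketch of that keyword in use (the constructor call mirrors ones appearing earlier in this thread; the specific function and numbers are illustrative):

```julia
using FiniteDifferences

# First derivative, 5th-order central method, but never evaluate `f` further
# than 1e-2 away from the point of interest. This keeps evaluations away from
# the singularity of `log` at zero.
m = central_fdm(5, 1; max_range=1e-2)

m(log, 0.05)   # ≈ 1 / 0.05 = 20; all evaluations stay within 0.05 ± 1e-2
```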
Breaking changes:

- `max_step` keyword argument.