WIP: Remove add_tiny and add estimate_magitude #125
Conversation
Thanks for sorting this out promptly!

- Would it make sense to construct some test cases where the conditions present in #124 (Accuracy is a bit brittle) are nearly satisfied, for the sake of robustness testing? I.e., what if `M` is only very slightly greater than 0, or something?
- Some unit testing for `estimate_magnitude` would be a good idea, particularly around input types that are not `Float64`s (a sketch follows this list).
- Integration testing for `Float32`s would be a good idea if you think we know how to construct them properly.
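A minimal sketch of what such unit tests might look like. The helper's name, its reachability as `FiniteDifferences.estimate_magnitude` (the code excerpts later in this thread spell it `estimate_magitude`), and the type-preservation behaviour are assumptions for illustration, not the PR's actual test suite:

```julia
using Test, FiniteDifferences

@testset "estimate_magnitude" begin
    # An ordinary input: the magnitude of sin near 1 should be of order one.
    @test 0.1 < FiniteDifferences.estimate_magnitude(sin, 1.0) < 10

    # A non-Float64 input; assumes the estimate preserves the input's precision.
    @test FiniteDifferences.estimate_magnitude(sin, 1f0) isa Float32

    # sinpi is exactly zero at 1.0, so the perturbation fallback has to kick in
    # and should still return something strictly positive.
    @test FiniteDifferences.estimate_magnitude(sinpi, 1.0) > 0
end
```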
To answer your questions,
Approved subject to patch-version bump.
edit: we need CI back up and running. This cannot be merged until that happens.
Yes, ugh, that's annoying. Moreover, @oxinabox will check tomorrow if these fixes help with some other accuracy issues that he was encountering, so perhaps good to also wait for that.
Ok, I can test this later today. There were indeed a couple of other functions that were failing to achieve the same accuracy as
Another similar case, where the expected derivative at 0 is 0:

```julia
julia> Calculus.derivative(sinc, 0)
0.0
# v0.11.3
julia> (p -> FiniteDifferences.central_fdm(p, 1)(sinc, 0)).(2:10)
9-element Array{Float64,1}:
0.0
0.0
5.444645613907266e-13
2.7755575615628914e-16
-9.538516174483089e-15
2.42861286636753e-16
-3.7404985543720125e-15
-4.822111254688986e-16
2.809173270009664e-15
# this PR
julia> (p -> FiniteDifferences.central_fdm(p, 1)(sinc, 0)).(2:10)
9-element Array{Float64,1}:
0.0
0.0
5.444645613907266e-13
4.312032877555794e-14
-9.538516174483089e-15
2.3967209127499836e-15
-3.7404985543720125e-15
-4.822111254688986e-16
2.809173270009664e-15
```

I wouldn't say the accuracy in this case is bad; it's just that comparing with 0 is complicated. Also, the derivative of this function:

```julia
julia> f(t) = (x -> QuadGK.quadgk(sin, -x, x)[1])(t)
f (generic function with 1 method)
julia> Calculus.derivative(f, 4)
7.065611032273702e-11
# v0.11.3
julia> (p -> FiniteDifferences.central_fdm(p, 1)(f, 4)).(2:10)
9-element Array{Float64,1}:
-0.015243579281754268
0.0
4.131607608911521e-9
-1.7704270075193033e-9
2.7545394016191937e-9
-8.188893009030013e-12
-6.983545129351427e-11
-3.0697586011503785e-11
-2.164732008407795e-11
# this PR
julia> (p -> FiniteDifferences.central_fdm(p, 1)(f, 4)).(2:10)
9-element Array{Float64,1}:
0.01641638262171357
-1.5410326104441793e-5
2.1935477946857812e-8
-2.7022157015905038e-8
9.047605793628396e-11
-1.3604658271651211e-9
-3.154690991530035e-11
4.959996079211705e-11
7.027698887063365e-13
```
In the end I think I'll default to 8 grid points in
Ah, there was one other edge case that wasn't taken care of. The results now look as follows:

```julia
julia> (p -> FiniteDifferences.central_fdm(p, 1)(cosc, 0)).(2:10) .+ (pi ^ 2) / 3
9-element Array{Float64,1}:
-2.487109898532124
-3.1522345644852123e-6
-4.327381120106111e-9
-8.283298491562618e-10
1.3939516207983615e-11
3.93636234718997e-11
9.558132063602898e-12
-2.1227464230832993e-13
1.213251721310371e-12
julia> (p -> FiniteDifferences.central_fdm(p, 1)(sinc, 0)).(2:10)
9-element Array{Float64,1}:
0.0
0.0
5.444645613907266e-13
4.312032877555794e-14
-9.538516174483089e-15
2.3967209127499836e-15
-3.7404985543720125e-15
-4.822111254688986e-16
2.809173270009664e-15
julia> (p -> FiniteDifferences.central_fdm(p, 1)(f, 4)).(2:10)
9-element Array{Float64,1}:
-2.3988103001624555e-13
1.430803955495777e-12
-2.4073864725037792e-14
-2.2694686733052566e-15
1.52180247233912e-13
-1.1700430263506674e-15
2.5255867408048164e-13
7.01087210768824e-13
-1.0381492304012043e-14
```

The latter two are near machine epsilon; I don't think you can get much better. The hardest case is still:

```julia
julia> FiniteDifferences.central_fdm(10, 1, adapt=3)(cosc, 0) + (pi ^ 2) / 3
-3.019806626980426e-14
```

You can also push Richardson extrapolation:

```julia
julia> FiniteDifferences.extrapolate_fdm(FiniteDifferences.central_fdm(2, 1), cosc, 0.0, contract=0.5)[1] + (pi ^ 2) / 3
-5.702105454474804e-13
```
With the latest version all current tests pass already with
Nice! I'm very happy to hear that. :)
```julia
# pathological input for `f`. Perturb `x`. Assume that the perturbed value for `x` is
# highly unlikely to also be a pathological value for `f`.
Δ = convert(T, 0.1) * max(abs(x), one(x))
return float(maximum(abs, f(x + Δ)))
```
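For context, a sketch of how the full helper might read, with the excerpt above as its fallback branch. The signature, the name (`estimate_magnitude`; the suggestion below spells it `estimate_magitude`), and the `M > 0` early return are assumptions based on the discussion of `M` in #124, not a verbatim copy of the PR:

```julia
# Assumed shape: estimate the typical magnitude of `f` near `x`.
function estimate_magnitude(f, x::T) where T<:AbstractFloat
    M = float(maximum(abs, f(x)))
    M > 0 && return M
    # `f(x)` is exactly zero, so `x` is likely a pathological input for `f`.
    # Perturb `x` and assume the perturbed value is highly unlikely to also be
    # pathological for `f`.
    Δ = convert(T, 0.1) * max(abs(x), one(x))
    return float(maximum(abs, f(x + Δ)))
end
```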
should we make this recursive?
```diff
- return float(maximum(abs, f(x + Δ)))
+ return estimate_magitude(f, x + Δ)
```
That would recurse infinitely for the function `x -> 0.0`. Moreover, a function might just actually be 0 in a neighbourhood around `x`.
src/methods.jl:
```julia
# Estimate the round-off error. It can happen that the function is zero around `x`, in
# which case we cannot take `eps(f(x))`. Therefore, we assume a lower bound that is
# equal to `eps(T) / 1000`, which gives `f` four orders of magnitude wiggle room.
ε = max(eps(estimate_magitude(f, x)), eps(T) / 1000) * factor
```
should we move this to a little function, e.g. `estimate_roundoff_error`?
Totally! Pushed the change.
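A sketch of what the extracted helper could look like, assuming it simply wraps the expression from the diff above without the `factor` multiplier and reuses the magnitude helper from earlier; the actual pushed change is not shown in this thread:

```julia
# Sketch only: lower-bounded round-off error estimate for `f` near `x`.
function estimate_roundoff_error(f, x::T) where T<:AbstractFloat
    # The function may be zero around `x`, in which case `eps(f(x))` is useless,
    # so impose a lower bound of `eps(T) / 1000`, which gives `f` roughly four
    # orders of magnitude of wiggle room.
    return max(eps(estimate_magnitude(f, x)), eps(T) / 1000)
end
```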
Yep, I can confirm that this fixes the problems we were having with ChainRules.
I think this needs rebasing so CI will run.
@oxinabox The tests fail on nightly. Are we okay with that?
yes this is fine.
The function `add_tiny` was a bit of a hack to deal with zeros. The new addition `estimate_magnitude` deals with this in a better way, at the cost of more function evaluations. This should address #124 (Accuracy is a bit brittle).
The package can be pushed to estimate at high accuracies:
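For example (reusing the `cosc` experiment from the conversation above; the exact derivative of `cosc` at 0 is `-π^2 / 3`, so the printed value is the error):

```julia
julia> FiniteDifferences.central_fdm(10, 1, adapt=3)(cosc, 0) + (pi ^ 2) / 3
-3.019806626980426e-14
```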
This PR also adds a test that checks the above two cases.