Type error for primal feasibility report in non-Float64 precision #3912
Comments
Do you have a motivation for nonlinear models with non-Float64 precision?
I was using this to gauge how working in a lower precision [...]. More details: consider a constraint [...]. This brought me to JuMP's generic model interface, which allows me to build the same model in different precisions, then evaluate the same solution via [...].
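The workflow described above (build the model, then evaluate the same candidate point under different precisions and compare the reported violations) can be sketched outside of JuMP. Here is a minimal NumPy illustration; the function name and the constraint data are hypothetical, and this is not JuMP's API:

```python
import numpy as np

def violation(a, x, b, dtype):
    """Evaluate the residual a . x - b with every input cast to dtype."""
    a = np.asarray(a, dtype=dtype)
    x = np.asarray(x, dtype=dtype)
    acc = dtype(0)
    for ai, xi in zip(a, x):
        # Accumulate strictly in the target precision.
        acc = dtype(acc + ai * xi)
    return acc - dtype(b)

# Hypothetical constraint data; the point x is feasible up to roundoff.
a, x, b = [0.1, 0.2, 0.3], [1.0, 1.0, 1.0], 0.6
print(violation(a, x, b, np.float64))  # tiny roundoff residual
print(violation(a, x, b, np.float32))  # a different roundoff pattern
```

The point is that the "violation" reported for the same point depends on the precision in which it is evaluated, which is what makes the comparison interesting.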
Don't solver tolerances make much more of a difference?
Depends on the use case, I guess. If we're trying to run Ipopt in [...].

The setting I'm considering is one where a machine learning model is trained to predict good solutions. This typically involves evaluating the feasibility of said solutions, and everything is in 32-bit precision. So I was exploring whether I could trust the violation levels when evaluated in 32-bit precision... which led me here.

One possible example: consider the constraint [...].

(Obviously we're now past the original issue 🙃)
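The tolerance concern can be made concrete: Ipopt's default convergence tolerance is 1e-8, which sits below Float32's machine epsilon of roughly 1.2e-7, so a violation at that scale on a quantity of order one is not even representable in 32-bit arithmetic. A quick NumPy check (illustrative only):

```python
import numpy as np

# Machine epsilon: the gap between 1.0 and the next larger representable value.
print(np.finfo(np.float32).eps)  # ~1.19e-07
print(np.finfo(np.float64).eps)  # ~2.22e-16

# A violation of 1e-8 on top of a value of order 1 vanishes in Float32:
tol = 1e-8
print(np.float32(1.0) + np.float32(tol) == np.float32(1.0))  # True
print(np.float64(1.0) + np.float64(tol) == np.float64(1.0))  # False
```

So any violation report computed in Float32 is only meaningful down to about 1e-7 relative to the magnitude of the quantities involved.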
Isn't this example a counterexample to [...]?

If so, the answer is "no".
The stack trace below suggests that something got converted.
Notes:

- [...] Float64 arithmetic (see snippet below)
- Same code as above, but [...] Float64