[OpenVINO Backend] : add support for numpy.nan_to_num #21186


Merged
merged 6 commits into from Apr 28, 2025

32 changes: 32 additions & 0 deletions keras/src/backend/openvino/core.py
@@ -41,6 +41,38 @@
"string": ov.Type.string,
}

DTYPES_MAX = {
    ov.Type.bf16: 3.38953139e38,
    ov.Type.f16: np.finfo(np.float16).max,
    ov.Type.f32: np.finfo(np.float32).max,
    ov.Type.f64: np.finfo(np.float64).max,
    ov.Type.u8: np.iinfo(np.uint8).max,
    ov.Type.u16: np.iinfo(np.uint16).max,
    ov.Type.u32: np.iinfo(np.uint32).max,
    ov.Type.u64: np.iinfo(np.uint64).max,
    ov.Type.i8: np.iinfo(np.int8).max,
    ov.Type.i16: np.iinfo(np.int16).max,
    ov.Type.i32: np.iinfo(np.int32).max,
    ov.Type.i64: np.iinfo(np.int64).max,
    ov.Type.boolean: 1,
}

DTYPES_MIN = {
    ov.Type.bf16: -3.38953139e38,
    ov.Type.f16: np.finfo(np.float16).min,
    ov.Type.f32: np.finfo(np.float32).min,
    ov.Type.f64: np.finfo(np.float64).min,
    ov.Type.u8: np.iinfo(np.uint8).min,
    ov.Type.u16: np.iinfo(np.uint16).min,
    ov.Type.u32: np.iinfo(np.uint32).min,
    ov.Type.u64: np.iinfo(np.uint64).min,
    ov.Type.i8: np.iinfo(np.int8).min,
    ov.Type.i16: np.iinfo(np.int16).min,
    ov.Type.i32: np.iinfo(np.int32).min,
    ov.Type.i64: np.iinfo(np.int64).min,
    ov.Type.boolean: 0,
}
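These tables mirror NumPy's defaults for `np.nan_to_num`, which substitutes 0.0 for nan and the dtype's largest/smallest finite values for +inf/-inf when `posinf`/`neginf` are not given. A quick sketch of that reference behavior (plain NumPy, for illustration only):

```python
import numpy as np

# np.nan_to_num's defaults: nan -> 0.0, +inf -> dtype max, -inf -> dtype min.
# These are the same values the DTYPES_MAX / DTYPES_MIN tables supply
# for the OpenVINO backend.
x = np.array([1.0, np.nan, np.inf, -np.inf], dtype=np.float32)
out = np.nan_to_num(x)
assert out[1] == 0.0
assert out[2] == np.finfo(np.float32).max
assert out[3] == np.finfo(np.float32).min
```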


def align_operand_types(x1, x2, op_name):
x1_type = x1.element_type
2 changes: 0 additions & 2 deletions keras/src/backend/openvino/excluded_concrete_tests.txt
@@ -38,7 +38,6 @@ NumpyDtypeTest::test_meshgrid
NumpyDtypeTest::test_min
NumpyDtypeTest::test_moveaxis
NumpyDtypeTest::test_multiply
NumpyDtypeTest::test_nan
NumpyDtypeTest::test_outer_
NumpyDtypeTest::test_power
NumpyDtypeTest::test_prod
@@ -96,7 +95,6 @@ NumpyOneInputOpsCorrectnessTest::test_median
NumpyOneInputOpsCorrectnessTest::test_meshgrid
Contributor:

please also remove NumpyDtypeTest::test_nan

Contributor Author:

done

NumpyOneInputOpsCorrectnessTest::test_min
NumpyOneInputOpsCorrectnessTest::test_moveaxis
NumpyOneInputOpsCorrectnessTest::test_nan_to_num
NumpyOneInputOpsCorrectnessTest::test_pad_float16_constant_2
NumpyOneInputOpsCorrectnessTest::test_pad_float32_constant_2
NumpyOneInputOpsCorrectnessTest::test_pad_float64_constant_2
35 changes: 32 additions & 3 deletions keras/src/backend/openvino/numpy.py
@@ -5,6 +5,8 @@
from keras.src.backend import config
from keras.src.backend.common import dtypes
from keras.src.backend.common.variables import standardize_dtype
from keras.src.backend.openvino.core import DTYPES_MAX
from keras.src.backend.openvino.core import DTYPES_MIN
from keras.src.backend.openvino.core import OPENVINO_DTYPES
from keras.src.backend.openvino.core import OpenVINOKerasTensor
from keras.src.backend.openvino.core import (
@@ -1030,9 +1032,36 @@ def moveaxis(x, source, destination):


def nan_to_num(x, nan=0.0, posinf=None, neginf=None):
-    raise NotImplementedError(
-        "`nan_to_num` is not supported with openvino backend"
-    )
    x = get_ov_output(x)
    dtype = x.get_element_type()
    if dtype.is_integral():
        return OpenVINOKerasTensor(x)
    isfloat64 = dtype == Type.f64
    if isfloat64:
        # Conversion to f32 due to
        # https://github.com/openvinotoolkit/openvino/issues/30264
        x = ov_opset.convert(x, Type.f32).output(0)
        dtype = Type.f32
    nan_val = ov_opset.constant(nan, dtype).output(0)
    posinf_val = ov_opset.constant(
Contributor:

@11happy, let us not call is_inf and is_nan for integer types. You can return the input as is, as I understand it.
For floating-point types you do need to call them, but the conversion to fp32 is not needed.
Can you fix this in this PR?

Contributor Author (@11happy, Apr 23, 2025):

I added a check for integer types with an early return, and removed the dtype conversion to float:

if dtype.is_integral():
    return OpenVINOKerasTensor(x)

I also removed the ov_opset.convert(x, Type.f32) conversion from is_inf. With these changes the test is failing: it is not able to detect the infs.

Contributor:

@11happy, please clarify exactly which cases are failing. For int types it should pass, because there are no inf or nan values.

Contributor Author (@11happy, Apr 25, 2025):

The failing test case is the previous one:

>       self.assertAllClose(
            knp.nan_to_num(x, nan=2, posinf=3, neginf=4), [1.0, 2.0, 3.0, 4.0]
        )

keras/src/ops/numpy_test.py:4806: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
keras/src/testing/test_case.py:48: in assertAllClose
    np.testing.assert_allclose(x1, x2, atol=atol, rtol=rtol, err_msg=msg)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

args = (<function assert_allclose.<locals>.compare at 0x722684c8ed40>, array([ 1.00000000e+00,  2.00000000e+00,  3.40282347e+38, -3.40282347e+38]), array([1., 2., 3., 4.]))
kwds = {'equal_nan': True, 'err_msg': 'None', 'header': 'Not equal to tolerance rtol=1e-06, atol=1e-06', 'verbose': True}

    @wraps(func)
    def inner(*args, **kwds):
        with self._recreate_cm():
>           return func(*args, **kwds)
E           AssertionError: 
E           Not equal to tolerance rtol=1e-06, atol=1e-06
E           None
E           Mismatched elements: 2 / 4 (50%)
E           Max absolute difference: 3.40282347e+38
E           Max relative difference: 1.13427449e+38
E            x: array([ 1.000000e+00,  2.000000e+00,  3.402823e+38, -3.402823e+38])
E            y: array([1., 2., 3., 4.])

/usr/lib/python3.12/contextlib.py:81: AssertionError

I tried to debug it; my observations: this fails for the f64 dtype. Also, in the conversion logic, if I intentionally convert to f64 instead of f32, the above test fails as well. Specifically, it passes with f16 and f32 but fails with f64.

I am also attaching a screenshot of the PDB session: Screenshot from 2025-04-25 15-36-00

Contributor Author (@11happy, Apr 25, 2025):

I now have code reproducing the same error on the f64 dtype; I have posted it on the previously mentioned ticket, openvinotoolkit/openvino#30264.

Contributor:

Well, can you please do the implementation as follows:

  1. Return the input as is for integer types.
  2. Use is_inf and is_nan masks for floating-point types. In addition, use the workaround with convert for fp64; for the other types it is not needed.

Best regards,
Roman

Contributor Author:

done

Contributor Author:

I have also added a comment referencing the issue.

        posinf if posinf is not None else DTYPES_MAX[dtype], dtype
    ).output(0)
    neginf_val = ov_opset.constant(
        neginf if neginf is not None else DTYPES_MIN[dtype], dtype
    ).output(0)
    posinf_mask = ov_opset.is_inf(
        x,
        {"detect_positive": True, "detect_negative": False},
    ).output(0)
    neginf_mask = ov_opset.is_inf(
        x,
        {"detect_positive": False, "detect_negative": True},
    ).output(0)
    nan_mask = ov_opset.is_nan(x).output(0)
    x = ov_opset.select(nan_mask, nan_val, x).output(0)
    x = ov_opset.select(posinf_mask, posinf_val, x).output(0)
    x = ov_opset.select(neginf_mask, neginf_val, x).output(0)
    if isfloat64:
        x = ov_opset.convert(x, Type.f64).output(0)
    return OpenVINOKerasTensor(x)
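For reference, the mask-and-select graph built above can be sketched in plain NumPy (a minimal illustration under f32, not backend code; `nan_to_num_ref` is a hypothetical helper name):

```python
import numpy as np

def nan_to_num_ref(x, nan=0.0, posinf=None, neginf=None):
    # Mirror of the OpenVINO graph: compute nan/+inf/-inf masks on the
    # original input, then select the replacement value per element.
    x = np.asarray(x, dtype=np.float32)
    posinf_val = posinf if posinf is not None else np.finfo(np.float32).max
    neginf_val = neginf if neginf is not None else np.finfo(np.float32).min
    out = np.where(np.isnan(x), np.float32(nan), x)
    out = np.where(np.isposinf(x), np.float32(posinf_val), out)
    out = np.where(np.isneginf(x), np.float32(neginf_val), out)
    return out

# The test case discussed in the thread above:
print(nan_to_num_ref([1.0, np.nan, np.inf, -np.inf], nan=2, posinf=3, neginf=4))
# -> [1. 2. 3. 4.]
```

Note that the masks are all computed on the original input before any `select` is applied, so a value replaced by the nan branch can never be re-matched by the inf branches.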


def ndim(x):