
Conversation

meinie0826
Collaborator

PR Category

OP Test

Type of Change

Refactor

Description

Issue

Progress

  • Change is properly reviewed (1 reviewer required, 2 recommended).
  • Change responds to an issue.
  • Change is fully covered by a UT.

Performance

)
@pytest.mark.addmm
@pytest.mark.parametrize("M, N, K", MNK_SHAPES)
@pytest.mark.parametrize("scalar", SCALARS)
Collaborator

This file has an error in its unit tests: the output dtype is not the same as the expected dtype.

________________ test_accuracy_dot_tensor_tensor[dtype2-shape6] ________________

shape = (81,), dtype = torch.bfloat16

    @pytest.mark.dot
    @pytest.mark.parametrize("shape", UT_SHAPES_1D)
    @pytest.mark.parametrize("dtype", FLOAT_DTYPES)
    def test_accuracy_dot_tensor_tensor(shape, dtype):
        if flag_gems.vendor_name == "kunlunxin":
            torch.manual_seed(0)
            torch.cuda.manual_seed_all(0)
    
        inp1 = torch.randn(shape, dtype=dtype, device="cpu")
        inp2 = torch.randn(shape, dtype=dtype, device="cpu")
        ref_inp1 = to_reference(inp1, True)
        ref_inp2 = to_reference(inp2, True)
        inp1 = to_reference(inp1, True)
        inp2 = to_reference(inp2, True)
        ref_out = torch.dot(ref_inp1, ref_inp2)
        with flag_gems.use_gems():
            res_out = torch.dot(inp1, inp2)
>       gems_assert_close(res_out, ref_out, dtype, equal_nan=True)

tests/test_blas_ops.py:196: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/accuracy_utils.py:200: in gems_assert_close
    flag_gems.testing.assert_close(
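
A likely cause, judging from the test body in the trace: the device inputs inp1 and inp2 are overwritten with their upcast reference copies before the flag_gems call, so torch.dot under use_gems() runs on the upcast tensors and res_out no longer carries the parametrized dtype that gems_assert_close checks. A minimal sketch of a possible fix, assuming to_reference(x, True) returns an upcast reference copy and that gems_assert_close validates res_out against dtype (all names are taken from the trace; the fix itself is an assumption, not the author's confirmed patch):

    @pytest.mark.dot
    @pytest.mark.parametrize("shape", UT_SHAPES_1D)
    @pytest.mark.parametrize("dtype", FLOAT_DTYPES)
    def test_accuracy_dot_tensor_tensor(shape, dtype):
        if flag_gems.vendor_name == "kunlunxin":
            torch.manual_seed(0)
            torch.cuda.manual_seed_all(0)

        inp1 = torch.randn(shape, dtype=dtype, device="cpu")
        inp2 = torch.randn(shape, dtype=dtype, device="cpu")
        # Assumed fix: keep the reference copies separate and do NOT
        # overwrite inp1/inp2, so the gems call below sees the original
        # dtype and res_out.dtype matches the parametrized `dtype`.
        ref_inp1 = to_reference(inp1, True)
        ref_inp2 = to_reference(inp2, True)

        ref_out = torch.dot(ref_inp1, ref_inp2)
        with flag_gems.use_gems():
            res_out = torch.dot(inp1, inp2)

        gems_assert_close(res_out, ref_out, dtype, equal_nan=True)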
