Casting error in simple model with a constant tensor<f32> during --EmitLib #335
Comments
Looking further into the problem, I found that compilation works if we replace the type declaration tensor<f32> with tensor<1xf32>, i.e. if the MLIR generated from the ONNX model were this:

```mlir
module {
  func @main_graph(%arg0: tensor<4x5xf32>, %arg1: tensor<4x5xf32>) -> tensor<4x5xf32> attributes {input_names = ["input_y:0", "input_x:0"], output_names = ["output:0"]} {
    %0 = "onnx.Add"(%arg1, %arg0) {onnx_node_name = "added"} : (tensor<4x5xf32>, tensor<4x5xf32>) -> tensor<4x5xf32>
    %1 = "onnx.Constant"() {value = dense<4.200000e+01> : tensor<1xf32>} : () -> tensor<1xf32>
    %2 = "onnx.Add"(%0, %1) {onnx_node_name = "add"} : (tensor<4x5xf32>, tensor<1xf32>) -> tensor<4x5xf32>
    return %2 : tensor<4x5xf32>
  }
  "onnx.EntryPoint"() {func = @main_graph, numInputs = 2 : i32, numOutputs = 1 : i32} : () -> ()
}
```

However, onnx-mlir emits incorrect IR for this model. I will try to add the model in the next comment.
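For reference, the two MLIR types above differ only in tensor rank. A stand-alone NumPy sketch (illustrative only, not onnx-mlir code) shows the distinction: a rank-0 value corresponds to tensor<f32>, while a one-element rank-1 value corresponds to tensor<1xf32>:

```python
import numpy as np

# Rank-0 (scalar) value: corresponds to the MLIR type tensor<f32>
scalar = np.array(42.0, dtype=np.float32)
# Rank-1 value with one element: corresponds to tensor<1xf32>
one_elt = np.array([42.0], dtype=np.float32)

print(scalar.shape, scalar.ndim)    # () 0
print(one_elt.shape, one_elt.ndim)  # (1,) 1
```

The manual fix above is the rank-1 form: the constant's data is identical, only its declared shape changes.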
I could not add the model.onnx file, but this is how the onnx file was generated:

```python
#!/bin/python
# To execute this, you need to have the following installed:
#   pip install tensorflow onnx tf2onnx
import tensorflow as tf
import tf2onnx

# Declare a custom function that represents the graph with all its inputs
def add_plus_cte(x, y, cte):
    added = tf.math.add(x, y, name='added')
    added_plus_cte = added + cte
    return added_plus_cte

# Create a `Function` object that contains a graph
fun_obj = tf.function(add_plus_cte)

# Make some tensors to test it
x1 = tf.constant([[1.0, 2.0]])
y1 = tf.constant([[2.0, 3.0]])
b1 = tf.constant(4.0)

# It works!
print(fun_obj(x1, y1, b1).numpy())

# Wrap everything in a session to be read by the tf2onnx tool
with tf.compat.v1.Session() as sess:
    # Declare graph input arguments (they have bigger dimensions)
    x = tf.compat.v1.placeholder(tf.float32, [4, 5], name="input_x")
    y = tf.compat.v1.placeholder(tf.float32, [4, 5], name="input_y")
    cte = tf.constant(42.0)  # The constant is not a variable; it is embedded in the graph.
    result = add_plus_cte(x, y, cte)
    _ = tf.identity(result, name="output")

    # Create the onnx graph
    onnx_graph = tf2onnx.tfonnx.process_tf_graph(
        sess.graph,
        input_names=["input_x:0", "input_y:0"],
        output_names=["output:0"])
    model_proto = onnx_graph.make_model("custom_add_plus_cte")

    # Save to file
    with open("model.onnx", "wb") as f:
        f.write(model_proto.SerializeToString())
```
Thanks @agostini01. Could you send the model (.onnx) file to me by email ([email protected])?
@chentong319, just sent you an email. The manual fix allows LLVM IR to be generated, but breaks downstream during the compilation of the
@agostini01 I did not see your email yet. Is the model file too large?
@chentong319 my email must have been sent to a spam folder. I have created a GitHub repo and included the example: https://github.com/agostini01/failing-onnx-models/tree/main/custom_tensor_add_plus_cte
I downloaded the model and tried it. So far I have found that the onnx::TensorProto node for this DenseElementsAttr has dims().size() == 0, when it should be 1. That is why the importer generated tensor<f32> rather than tensor<1xf32>. Further work is needed to identify the source.
It is good that you found one of the sources of the problem. I was playing with the onnx-mlir source code, but could not figure out how to debug this error.
I set a break point in onnx-mlir/src/Builder/FrontendDialectHelper.cpp, in mlir::DenseElementsAttr onnxTensorProtoToDenseElmAttr. This is the procedure that constructs the attribute for ConstantOp. I dumped the type and also printed out initializer.dims().size().
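To make that check concrete, here is a hypothetical Python rendering of the shape-to-type logic involved (the real implementation is C++ in FrontendDialectHelper.cpp; the function below is an illustrative assumption, not onnx-mlir code):

```python
# Sketch of how an MLIR tensor type string is derived from a
# TensorProto's dims: an empty dims list (dims().size() == 0)
# yields tensor<f32> instead of the expected tensor<1xf32>.
def mlir_tensor_type(dims, elem="f32"):
    if not dims:  # dims().size() == 0: the buggy case observed above
        return f"tensor<{elem}>"
    shape = "x".join(str(d) for d in dims)
    return f"tensor<{shape}x{elem}>"

print(mlir_tensor_type([]))      # tensor<f32>
print(mlir_tensor_type([1]))     # tensor<1xf32>
print(mlir_tensor_type([4, 5]))  # tensor<4x5xf32>
```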
@chentong319 any updates on whether this error has already been fixed? |
Got an error in a (Tensor+Tensor)+Constant graph. This is the onnx.mlir code:

And this is the executed line and the error:
Note that the same example without the internal constant, but with a 1xf32 argument, works: