Translation from ONNX type to MLIR type #160
Comments
One way to remove duplication is to define a static method in the op definition (a sketch of that idea follows the snippet below). And, in the importer:

```cpp
// If the result type is specified by an attribute, set the result type
// directly.
if (!op.getTypeMapAttrName().empty()) {
  auto typeAttr = op.getTypeMapAttrName();
  int typeNum = -1;
  for (int i = 0; i < node.attribute_size(); ++i) {
    auto attr = node.attribute(i);
    if (attr.name() == typeAttr) {
      typeNum = attr.i();
      break;
    }
  }
  if (typeNum != -1) {
    auto resultType = mlir::UnrankedTensorType::get(convertONNXTypeToMLIRType(
        static_cast<onnx::TensorProto_DataType>(typeNum)));
    for (int i = 0; i < node.output().size(); ++i) {
      if (variadicOut)
        (*(op.getODSResults(0).begin() + i)).setType(resultType);
      else
        (*op.getODSResults(i).begin()).setType(resultType);
    }
  }
}
```

It works well. However, this way does not use the static method.
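For illustration, here is a minimal sketch of what such a static method could look like on the generated op classes. The class names and the exact `getTypeMapAttrName` contract are assumptions inferred from the snippet above, not the actual ONNX-MLIR definitions:

```cpp
#include "llvm/ADT/StringRef.h"

// Hypothetical shape of a generated op class: a static method names the
// attribute that carries the op's result element type ("to" for Cast).
// An empty name means the op has no such attribute, which is the first
// condition the importer snippet above checks.
struct ONNXCastOpSketch {
  static llvm::StringRef getTypeMapAttrName() { return "to"; }
};

// Ops whose result type is not attribute-driven return an empty name.
struct ONNXAddOpSketch {
  static llvm::StringRef getTypeMapAttrName() { return ""; }
};
```

Because the method is static, TableGen could emit it per op, while generic importer code like the snippet above queries it uniformly through `op.getTypeMapAttrName()`.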
The design looks good to me.
@tungld @tjingrant
This is related to PR #156.
We need a translation table from types in ONNX to types in MLIR. We cannot do a one-to-one mapping, because some types are not supported by ONNX-MLIR, or we may choose to support types in a different way.
This translation will be used in the Builder to build data for ONNX-MLIR from protobuf, and in gen_onnx-mlir.py for the operation TableGen definitions.
How should we share the code to make sure we have a consistent translation?
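One conceivable answer, offered only as a sketch: keep the mapping rows in a single table that the C++ Builder expands with an X-macro and that gen_onnx-mlir.py parses (e.g. with a regex) when emitting TableGen. All names below (`ONNX_TYPE_TABLE`, the row set, the explicit builder parameter of `convertONNXTypeToMLIRType`) are assumptions for illustration; the snippet earlier in this thread calls the helper with a single argument, presumably via a bound builder or context member.

```cpp
#include "mlir/IR/Builders.h"
#include "mlir/IR/Types.h"
#include "onnx/onnx_pb.h"

// Single source of truth: one row per supported ONNX type, giving the
// ONNX enum suffix, the MLIR builder call, and the TableGen type name.
// gen_onnx-mlir.py could parse these same rows, so the Python-emitted
// TableGen and the C++ builder cannot diverge.
#define ONNX_TYPE_TABLE(X)                                                   \
  X(FLOAT16, getF16Type(), F16)                                              \
  X(FLOAT, getF32Type(), F32)                                                \
  X(DOUBLE, getF64Type(), F64)                                               \
  X(INT32, getIntegerType(32), I32)                                          \
  X(INT64, getIntegerType(64), I64)                                          \
  X(BOOL, getI1Type(), I1)

// Expand the table into the C++ side of the translation.
mlir::Type convertONNXTypeToMLIRType(
    mlir::OpBuilder &builder, onnx::TensorProto_DataType t) {
  switch (t) {
#define X(ENUM, MLIR_EXPR, TD_NAME)                                          \
  case onnx::TensorProto_DataType_##ENUM:                                    \
    return builder.MLIR_EXPR;
    ONNX_TYPE_TABLE(X)
#undef X
  default:
    // Types with no one-to-one MLIR equivalent fall through here.
    return mlir::Type();
  }
}
```

A small Python-side parser for the same rows would then keep the two generators consistent by construction instead of by convention.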