
Translation from ONNX type to MLIR type #160

Open
chentong319 opened this issue Jun 1, 2020 · 2 comments
Labels: ONNX ingestion — Support of ingestion of ONNX model into the ONNX-MLIR framework, e.g. ops or models.

chentong319 (Collaborator) commented Jun 1, 2020

@tungld @tjingrant
This is related to PR#156.
We need a translation table from types in ONNX to types in MLIR. We cannot do a one-to-one mapping because some types are not supported by ONNX-MLIR, or we may choose to support some types in a different way.
This translation will be used in the Builder to build data for ONNX-MLIR from protobuf, and in gen_onnx-mlir.py for the operation TableGen.
How should we share the code to make sure we have a consistent translation?
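One way to keep the two consumers consistent is to maintain the mapping as a single data table that both the C++ builder and gen_onnx-mlir.py can read. The sketch below is a hypothetical illustration of such a table, not code from the repository; the dictionary name and helper function are made up, and the integer keys are the `TensorProto.DataType` enum values from `onnx/onnx.proto`.

```python
# Hypothetical shared table: ONNX TensorProto.DataType enum values
# (from onnx/onnx.proto) mapped to MLIR element-type spellings.
# Keeping the mapping in one place lets both the C++ builder and
# gen_onnx-mlir.py consume the same source of truth.
ONNX_TO_MLIR_TYPE = {
    1: "f32",   # FLOAT
    2: "ui8",   # UINT8
    3: "i8",    # INT8
    6: "i32",   # INT32
    7: "i64",   # INT64
    9: "i1",    # BOOL
    10: "f16",  # FLOAT16
    11: "f64",  # DOUBLE
}

def convert_onnx_type_to_mlir(type_num):
    """Return the MLIR type string for an ONNX data-type code,
    or None when the type is deliberately left unsupported."""
    return ONNX_TO_MLIR_TYPE.get(type_num)
```

Because unsupported types (e.g. STRING) are simply absent from the table, both consumers reject them the same way instead of each hard-coding its own exceptions.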

tungld (Collaborator) commented Jun 1, 2020

One way to remove duplication is to define a static method in the op definition, say:

let extraClassDeclaration = [{
  static StringRef getTypeMapAttrName() {
    return "to";
  }
}];

Then, in FrontendDialectTransformer.cpp, we look up the attribute with that name, read its value, and pass it to convertONNXTypeToMLIRType to obtain an MLIR type:

// If the result type is specified by an attribute, set the result type
// directly.
if (!op.getTypeMapAttrName().empty()) {
  auto typeAttr = op.getTypeMapAttrName();
  int typeNum = -1;
  for (int i = 0; i < node.attribute_size(); ++i) {
    auto attr = node.attribute(i);
    if (attr.name() == typeAttr) {
      typeNum = attr.i();
      break;
    }
  }
  if (typeNum != -1) {
    auto resultType =
      mlir::UnrankedTensorType::get(convertONNXTypeToMLIRType(
        static_cast<onnx::TensorProto_DataType>(typeNum)));
    for (int i = 0; i < node.output().size(); i++) {
      if (variadicOut)
        (*(op.getODSResults(0).begin() + i)).setType(resultType);
      else
        (*op.getODSResults(i).begin()).setType(resultType);
    }
  }
}

It works well. However, this approach uses neither the static method getTypeMap nor resultTypeInference, so it may make the situation complicated and confusing.

chentong319 (Collaborator, Author) commented:

The design looks good to me.

AlexandreEichenberger added the "ONNX ingestion" label Jun 25, 2020
cjvolzka added a commit to cjvolzka/onnx-mlir that referenced this issue Oct 30, 2023
Merge latest community onnx-mlir into fork
3 participants