
Questions About Lowering from the onnx dialect #189

Open
AkhilJ99 opened this issue Jun 26, 2020 · 4 comments
@AkhilJ99
How exactly does one convert code in the onnx dialect, like the example given below, to a lower dialect such as affine or std? Do these conversions already exist?
```
func @main_graph(%arg0: tensor<2x3xf32>) -> tensor<2x3xf32> {
  %0 = "onnx.Add"(%arg0, %arg0) : (tensor<2x3xf32>, tensor<2x3xf32>) -> tensor<2x3xf32>
  return %0 : tensor<2x3xf32>
}
"onnx.EntryPoint"() {func = @main_graph, numInputs = 1 : i32, numOutputs = 1 : i32} : () -> ()
```

@tjingrant
Contributor

Simply run `./onnx-mlir --EmitMLIR add.mlir` and you should get:

```
#map0 = affine_map<() -> (0)>
#map1 = affine_map<() -> (3)>
#map2 = affine_map<() -> (2)>

module {
  %0 = "krnl.packed_const"() {file_name = "/var/folders/1n/n6cdfhkd1hndnfq93trnf0vm0000gn/T/packed_const-c3909e.tmp", is_le = true, size_in_bytes = 0 : i64} : () -> i64
  func @main_graph(%arg0: memref<2x3xf32>) -> memref<2x3xf32> {
    %1 = alloc() : memref<2x3xf32>
    affine.for %arg1 = 0 to 2 {
      affine.for %arg2 = 0 to 3 {
        %2 = load %arg0[%arg1, %arg2] : memref<2x3xf32>
        %3 = load %arg0[%arg1, %arg2] : memref<2x3xf32>
        %4 = addf %2, %3 : f32
        store %4, %1[%arg1, %arg2] : memref<2x3xf32>
      }
    }
    return %1 : memref<2x3xf32>
  }
  "krnl.entry_point"() {func = @main_graph, numInputs = 1 : i32, numOutputs = 1 : i32} : () -> ()
}
```
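For intuition, the affine loop nest above is just an elementwise self-addition; here is a plain-Python paraphrase (an editorial sketch, not something emitted by onnx-mlir):

```python
# Plain-Python paraphrase of the lowered code: the two affine.for loops
# compute out[i][j] = arg0[i][j] + arg0[i][j] over a 2x3 buffer,
# i.e. the original onnx.Add(%arg0, %arg0).
def main_graph(arg0):
    out = [[0.0] * 3 for _ in range(2)]          # alloc() : memref<2x3xf32>
    for i in range(2):                           # affine.for %arg1 = 0 to 2
        for j in range(3):                       # affine.for %arg2 = 0 to 3
            out[i][j] = arg0[i][j] + arg0[i][j]  # load, load, addf, store
    return out

print(main_graph([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]))
# [[2.0, 4.0, 6.0], [8.0, 10.0, 12.0]]
```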

@AkhilJ99
Author

Thank you, that was helpful.

@AkhilJ99
Author

AkhilJ99 commented Jun 29, 2020

Also, I have tried to lower the ONNX implementation of AlexNet (https://github.com/onnx/models/blob/master/vision/classification/alexnet/model/bvlcalexnet-3.onnx) to the onnx dialect, but I got a shape inference error.

First I emitted the basic ONNX dialect form:

```
./build/bin/onnx-mlir --EmitONNXBasic bvlcalexnet-3.onnx
Full MLIR code written to:
   bvlcalexnet-3.onnx.mlir

Constant-free MLIR Code written to:
   bvlcalexnet-3.tmp

Use:
   bvlcalexnet-3.onnx.mlir
to continue lowering the code to other dialects.
```

Then I tried to lower to MLIR but got shape inference errors:

```
./build/bin/onnx-mlir --EmitMLIR bvlcalexnet-3.onnx.mlir
loc("bvlcalexnet-3.onnx.mlir":10:10): error: unable to infer shape of operation without shape inference interface
loc("bvlcalexnet-3.onnx.mlir":11:10): error: Input tensor not ranked
loc("bvlcalexnet-3.onnx.mlir":11:10): error: shape inference failed
loc("bvlcalexnet-3.onnx.mlir":14:10): error: Input tensor not ranked
loc("bvlcalexnet-3.onnx.mlir":14:10): error: shape inference failed
loc("bvlcalexnet-3.onnx.mlir":16:11): error: unable to infer shape of operation without shape inference interface
loc("bvlcalexnet-3.onnx.mlir":17:11): error: Input tensor not ranked
loc("bvlcalexnet-3.onnx.mlir":17:11): error: shape inference failed
loc("bvlcalexnet-3.onnx.mlir":20:11): error: Input tensor not ranked
loc("bvlcalexnet-3.onnx.mlir":20:11): error: shape inference failed
loc("bvlcalexnet-3.onnx.mlir":24:11): error: Input tensor not ranked
loc("bvlcalexnet-3.onnx.mlir":24:11): error: shape inference failed
loc("bvlcalexnet-3.onnx.mlir":28:11): error: Input tensor not ranked
loc("bvlcalexnet-3.onnx.mlir":28:11): error: shape inference failed
loc("bvlcalexnet-3.onnx.mlir":30:11): error: Input tensor not ranked
loc("bvlcalexnet-3.onnx.mlir":30:11): error: shape inference failed
loc("bvlcalexnet-3.onnx.mlir":33:11): error: Input tensor(s) not ranked
loc("bvlcalexnet-3.onnx.mlir":33:11): error: shape inference failed
loc("bvlcalexnet-3.onnx.mlir":35:22): error: unable to infer shape of operation without shape inference interface
loc("bvlcalexnet-3.onnx.mlir":38:11): error: Input tensor(s) not ranked
loc("bvlcalexnet-3.onnx.mlir":38:11): error: shape inference failed
loc("bvlcalexnet-3.onnx.mlir":40:26): error: unable to infer shape of operation without shape inference interface
loc("bvlcalexnet-3.onnx.mlir":43:11): error: Input tensor(s) not ranked
loc("bvlcalexnet-3.onnx.mlir":43:11): error: shape inference failed
loc("bvlcalexnet-3.onnx.mlir":4:3): error: Shape inference failed, 21 operations couldn't be inferred
```

I have also tried using onnx-mlir-opt for shape inference, but it throws another error:

```
./build/bin/onnx-mlir-opt --shape-inference bvlcalexnet-3.onnx.mlir
bvlcalexnet-3.onnx.mlir:10:10: error: unable to infer shape of operation without shape inference interface
    %4 = "onnx.LRN"(%3) {alpha = 9.99999974E-5 : f32, beta = 7.500000e-01 : f32, bias = 1.000000e+00 : f32, size = 5 : i64} : (tensor<*xf32>) -> tensor<*xf32>
         ^
bvlcalexnet-3.onnx.mlir:10:10: note: see current operation: %4 = "onnx.LRN"(%3) {alpha = 9.99999974E-5 : f32, beta = 7.500000e-01 : f32, bias = 1.000000e+00 : f32, size = 5 : i64} : (tensor<1x96x54x54xf32>) -> tensor<*xf32>
bvlcalexnet-3.onnx.mlir:11:10: error: Input tensor not ranked
    %5 = "onnx.MaxPoolSingleOut"(%4) {kernel_shape = [3, 3], pads = [0, 0, 0, 0], strides = [2, 2]} : (tensor<*xf32>) -> tensor<*xf32>
         ^
bvlcalexnet-3.onnx.mlir:11:10: note: see current operation: %5 = "onnx.MaxPoolSingleOut"(%4) {kernel_shape = [3, 3], pads = [0, 0, 0, 0], strides = [2, 2]} : (tensor<*xf32>) -> tensor<*xf32>
bvlcalexnet-3.onnx.mlir:11:10: error: shape inference failed
    %5 = "onnx.MaxPoolSingleOut"(%4) {kernel_shape = [3, 3], pads = [0, 0, 0, 0], strides = [2, 2]} : (tensor<*xf32>) -> tensor<*xf32>
...
```

@doru1004
Collaborator

doru1004 commented Nov 6, 2020

@AkhilJ99 any updates on this?
