AttributeError: 'torch._C.Node' object has no attribute 'ival' #1237
Comments
I'm getting the same error on coremltools 5.1, Python 3.8.5.
FWIW, this is the structure that is not replicated in the
Are there any updates on this? I'm getting the same error with torch 1.10 and coremltools 5.1.
I'm able to replicate the same error using the mobile optimizer as well:

```python
from torch.utils.mobile_optimizer import MobileOptimizerType

trace = torch.jit.trace(net, dummy_input).eval()
# trace = torch.jit.freeze(trace)
trace = torch.utils.mobile_optimizer.optimize_for_mobile(
    trace,
    {
        MobileOptimizerType.CONV_BN_FUSION,
        MobileOptimizerType.INSERT_FOLD_PREPACK_OPS,
        MobileOptimizerType.REMOVE_DROPOUT,
    },
)
```

Running it all on Python 3.8.10, torch 1.9, and coremltools 5.1.
I've got the same problem on Python 3.8, torch 1.9, and coremltools 5.1 when trying to convert a quantized and optimized ResNet18 to Core ML. On the other hand, I am able to convert the non-quantized model without any issues.
I'm having the same problem when trying to convert a model optimized for mobile. Unoptimized conversion works fine, but is not really an option, due to iOS memory limitations.
Was this issue ever resolved? I am running into it while trying to convert a model that is optimized for mobile (which converts fine when it is not optimized).
@jral3s, can you please expand on what you mean by "model that is optimized for mobile"? Which torch APIs in particular are you using to "optimize" the model, and which kinds of optimizations does it apply?
Sure. I am currently trying to convert a DeepLabv3 model to Core ML on Google Colab, and if I just trace it like in the code below then I have no issues:
But it is too large and slow for my application, so I am trying to use torch's mobile optimizer like in the code below, which returns the following error:
The optimizations specified here are specific to PyTorch Mobile and not necessarily relevant, applicable, or even supported for Core ML.
Can you please expand on that? How do the optimizations help in reducing the size of the model? What are the memory and latency targets that you are planning to hit, and what are you getting with the current Core ML model deployment?
We are trying to run the torch mobile optimizer because when we export the model purely using the trace (exactly like in the first code block), we run into the following error when trying to run it configured with the Neural Engine in Xcode.
The workaround was to configure it to run with CPU and GPU only, but the resulting model takes 800 ms to run in Xcode (whereas the Core ML model for DeepLabv3 available from Apple works just fine on the Neural Engine and runs with a 30 ms latency). How can I file a feedback request for this model?
Same in my case: Python 3.7, PyTorch 1.11, and coremltools 6.0b1.
Same error here with:

```python
script_mdl = torch.jit.script(generator, **kwargs)
script_mdl = torch.jit.freeze(
    script_mdl,
    preserved_attrs=["reset", "get_sample_rate"] if stream else [],
)
coreml_mdl = ct.convert(
    script_mdl,
    inputs=[ct.TensorType(shape=(1, 80, 100))],
    debug=True,
)
```
I think the problem is solved with PyTorch 1.12 because of this definition. That said, I'm now getting a different error.
@alealv - you are correct. This has been fixed in PyTorch. The original code now runs without error. Thanks for the information. |
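For reference, here is a minimal sketch of the originally failing path (trace, then `torch.jit.freeze`), which runs without the `'ival'` error on PyTorch >= 1.12. The `ConvBN` module is a hypothetical stand-in for the real model, and the conversion step is left commented out since it additionally assumes coremltools >= 6.0 is installed:

```python
import torch
import torch.nn as nn

class ConvBN(nn.Module):
    """Hypothetical stand-in model: conv + batchnorm + relu."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.bn = nn.BatchNorm2d(8)

    def forward(self, x):
        return torch.relu(self.bn(self.conv(x)))

net = ConvBN().eval()
dummy_input = torch.rand(1, 3, 32, 32)

traced = torch.jit.trace(net, dummy_input)
# Freezing previously produced graph nodes that coremltools could not
# read on PyTorch < 1.12, triggering the 'ival' AttributeError downstream.
frozen = torch.jit.freeze(traced)

out = frozen(dummy_input)
print(tuple(out.shape))  # (1, 8, 32, 32)

# Conversion would then be (assumes coremltools installed):
# import coremltools as ct
# mlmodel = ct.convert(frozen, inputs=[ct.TensorType(shape=dummy_input.shape)])
```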
🐞Describe the bug
I got this error when converting the model to Core ML after torch.jit.freeze.
Trace
To Reproduce
System environment (please complete the following information):