
PyTorch 2.4.0 Ultralytics/YOLO - does not work with OpenCL backend #84

@Skillnoob

Description


I tried to run Ultralytics using the most recent pytorch_ocl release.

Minimal example to reproduce:

from ultralytics import YOLO
import pytorch_ocl

model = YOLO('yolov8n.pt')

model.val(data='coco8.yaml', batch=1)

The device-selection line in Ultralytics' torch_utils.py needs to be modified to return torch.device('ocl:0'); otherwise Ultralytics either complains that an invalid device was passed or falls back to running on the CPU only.
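For reference, a minimal sketch of the monkey-patch idea. Note that `select_device` below is a simplified stand-in written for illustration, not Ultralytics' actual implementation (the real helper lives in `ultralytics.utils.torch_utils`), and the `'ocl:0'` string stands in for the `torch.device('ocl:0')` object that pytorch_ocl provides:

```python
# Hedged sketch: forcing a custom device past a library's device-selection
# helper by monkey-patching it. 'select_device' is a simplified stand-in
# for Ultralytics' real helper, which rejects unknown device strings.

def select_device(device=""):
    # Stand-in behavior: accept only CPU/CUDA, as Ultralytics does by default.
    if device in ("", "cpu") or device.startswith("cuda"):
        return device or "cpu"
    raise ValueError(f"Invalid device request: {device!r}")

def patched_select_device(device="", **kwargs):
    # Always hand back the OpenCL device; with pytorch_ocl installed this
    # would return torch.device('ocl:0') instead of a plain string.
    return "ocl:0"

# With the real library one would assign something like:
#   ultralytics.utils.torch_utils.select_device = patched_select_device
select_device = patched_select_device
print(select_device("ocl:0"))  # → ocl:0
```

The caveat is that Ultralytics may call the helper internally with its own arguments, so a patched version has to accept and ignore them, as the `**kwargs` above does.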

My GPU: Radeon RX 7900 GRE

Full log:

Ultralytics YOLOv8.2.74 🚀 Python-3.11.9 torch-2.4.0+cpu CPU (AMD Ryzen 7 7800X3D 8-Core Processor)
Accessing device #0:gfx1100 on AMD Accelerated Parallel Processing
C:\Users\Skillnoob_\AppData\Roaming\Python\Python311\site-packages\ultralytics\utils\torch_utils.py:245: UserWarning: The operator 'aten::mm.out' is not currently supported on the ocl backend. Please open an issue at for requesting support https://github.com/artyom-beilis/pytorch_dlprim/issues (Triggered internally at C:\Users\artik\Projects\build_env\pytorch_dlprim\src\tensor_ops.cpp:336.)
  fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape))
C:\Users\Skillnoob_\AppData\Roaming\Python\Python311\site-packages\ultralytics\utils\torch_utils.py:250: UserWarning: The operator 'aten::mm.out' is not currently supported on the ocl backend. Please open an issue at for requesting support https://github.com/artyom-beilis/pytorch_dlprim/issues (Triggered internally at C:\Users\artik\Projects\build_env\pytorch_dlprim\src\tensor_ops.cpp:336.)
  fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn)

Process finished with exit code -1073741819 (0xC0000005)
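Until `aten::mm.out` is implemented in the ocl backend, the usual stopgap for a missing operator is to round-trip that one computation through the CPU. A hedged sketch of the pattern (the helper name is mine, and the demonstration runs on CPU tensors; on an actual `ocl:0` tensor the fallback branch would fire):

```python
import torch

def mm_with_cpu_fallback(a, b):
    # Try the matmul on the tensors' native device; if the backend lacks
    # the operator (as the ocl backend currently lacks aten::mm.out),
    # compute on the CPU and copy the result back to the original device.
    try:
        return torch.mm(a, b)
    except (RuntimeError, NotImplementedError):
        return torch.mm(a.cpu(), b.cpu()).to(a.device)

# CPU-only demonstration: (2x3 of ones) @ (3x2 of ones) = 2x2 of threes.
a = torch.ones(2, 3)
b = torch.ones(3, 2)
print(mm_with_cpu_fallback(a, b))
```

This does not fix the access-violation crash itself, but it is the kind of change the fused-conv code in torch_utils.py would need to avoid hitting the unsupported operator.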

Metadata

Assignees

No one assigned

Labels

    operator is not implemented yet (Specific PyTorch operator/feature has not been implemented yet)
