remove unnecessary term XPUs from profiler #3394

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Status: Open. Wants to merge 3 commits into base main.
recipes_source/recipes/profiler_recipe.py: 20 changes (15 additions, 5 deletions)
@@ -163,18 +163,20 @@
 # Note the occurrence of ``aten::convolution`` twice with different input shapes.

 ######################################################################
-# Profiler can also be used to analyze performance of models executed on GPUs and XPUs:
+# Profiler can also be used to analyze performance of models executed on GPUs:
 # Users could switch between cpu, cuda and xpu
+activities = [ProfilerActivity.CPU]
 if torch.cuda.is_available():
     device = 'cuda'
+    activities += [ProfilerActivity.CUDA]
 elif torch.xpu.is_available():
     device = 'xpu'
+    activities += [ProfilerActivity.XPU]
 else:
     print('Neither CUDA nor XPU devices are available to demonstrate profiling on acceleration devices')
     import sys
     sys.exit(0)

-activities = [ProfilerActivity.CPU, ProfilerActivity.CUDA, ProfilerActivity.XPU]
 sort_by_keyword = device + "_time_total"

 model = models.resnet18().to(device)
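
For context, a minimal, self-contained sketch of how the selected activities and sort_by_keyword feed into the profiler further down in the recipe. The profile, record_function, and key_averages().table calls below follow the standard torch.profiler API and are illustrative rather than lines taken from this diff:

import torch
import torchvision.models as models
from torch.profiler import profile, record_function, ProfilerActivity

# Pick a device and the matching profiler activities (same logic as the hunk above).
activities = [ProfilerActivity.CPU]
if torch.cuda.is_available():
    device = 'cuda'
    activities += [ProfilerActivity.CUDA]
elif torch.xpu.is_available():
    device = 'xpu'
    activities += [ProfilerActivity.XPU]
else:
    device = 'cpu'  # fallback for this sketch; the recipe exits instead

sort_by_keyword = device + "_time_total"

model = models.resnet18().to(device)
inputs = torch.randn(5, 3, 224, 224).to(device)

# Profile one inference pass and report operators sorted by total time on the chosen device.
with profile(activities=activities, record_shapes=True) as prof:
    with record_function("model_inference"):
        model(inputs)

print(prof.key_averages().table(sort_by=sort_by_keyword, row_limit=10))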
@@ -308,9 +310,17 @@
 # Profiling results can be outputted as a ``.json`` trace file:
 # Tracing CUDA or XPU kernels
 # Users could switch between cpu, cuda and xpu
-device = 'cuda'
-
-activities = [ProfilerActivity.CPU, ProfilerActivity.CUDA, ProfilerActivity.XPU]
+activities = [ProfilerActivity.CPU]
+if torch.cuda.is_available():
+    device = 'cuda'
+    activities += [ProfilerActivity.CUDA]
+elif torch.xpu.is_available():
+    device = 'xpu'
+    activities += [ProfilerActivity.XPU]
+else:
+    print('Neither CUDA nor XPU devices are available to demonstrate profiling on acceleration devices')
+    import sys
+    sys.exit(0)

 model = models.resnet18().to(device)
 inputs = torch.randn(5, 3, 224, 224).to(device)
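
A similar sketch for the trace-export path in this second hunk: export_chrome_trace is the standard profiler call for writing a .json trace, while the file name trace.json and the surrounding code are illustrative assumptions, not lines from this diff:

import torch
import torchvision.models as models
from torch.profiler import profile, ProfilerActivity

# Same device/activity selection as in the hunk above.
activities = [ProfilerActivity.CPU]
if torch.cuda.is_available():
    device = 'cuda'
    activities += [ProfilerActivity.CUDA]
elif torch.xpu.is_available():
    device = 'xpu'
    activities += [ProfilerActivity.XPU]
else:
    device = 'cpu'  # fallback for this sketch; the recipe exits instead

model = models.resnet18().to(device)
inputs = torch.randn(5, 3, 224, 224).to(device)

with profile(activities=activities) as prof:
    model(inputs)

# The resulting .json trace can be opened in chrome://tracing or Perfetto.
prof.export_chrome_trace("trace.json")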