
results for NVIDIA GeForce RTX 4090 (overclocked) in Windows 11 #108

Open
moyang opened this issue May 20, 2023 · 3 comments

moyang commented May 20, 2023

Core: +200 MHz, VRAM +1000 MHz, Power limit: 600W

Platform: NVIDIA CUDA
Device: NVIDIA GeForce RTX 4090
Driver version : 531.61 (Win64)
Compute units : 128
Clock frequency : 2520 MHz

Global memory bandwidth (GBPS)
  float   : 954.06
  float2  : 983.28
  float4  : 1001.36
  float8  : 1013.59
  float16 : 1017.66

Single-precision compute (GFLOPS)
  float   : 90262.02
  float2  : 85753.74
  float4  : 90346.06
  float8  : 89091.80
  float16 : 89121.65

No half precision support! Skipped

Double-precision compute (GFLOPS)
  double   : 1496.53
  double2  : 1494.58
  double4  : 1488.91
  double8  : 1482.93
  double16 : 1470.34

Integer compute (GIOPS)
  int   : 46283.53
  int2  : 46459.81
  int4  : 45872.11
  int8  : 46332.95
  int16 : 46330.86

Integer compute Fast 24bit (GIOPS)
  int   : 46572.11
  int2  : 46336.07
  int4  : 46324.10
  int8  : 46139.03
  int16 : 45105.82

Transfer bandwidth (GBPS)
  enqueueWriteBuffer              : 20.85
  enqueueReadBuffer               : 20.48
  enqueueWriteBuffer non-blocking : 20.84
  enqueueReadBuffer non-blocking  : 20.46
  enqueueMapBuffer(for read)      : 9.07
    memcpy from mapped ptr        : 28.45
  enqueueUnmap(after write)       : 26.86
    memcpy to mapped ptr          : 28.05

Kernel launch latency : 9.42 us
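
As a sanity check, the headline numbers line up with what one can estimate from the card's specs. The sketch below assumes the usual published RTX 4090 figures (16384 FP32 CUDA cores, 384-bit bus, 21 Gbps/pin GDDR6X at stock) plus the +200 MHz core offset reported above; none of these spec figures appear in the clpeak output itself.

```python
# Back-of-envelope peak estimates for an overclocked RTX 4090.
# Assumed, not taken from the clpeak output: 16384 FP32 CUDA cores,
# 384-bit memory bus, 21 Gbps/pin GDDR6X at stock.

cuda_cores = 16384        # AD102 as shipped on the 4090 (assumed)
clock_ghz = 2.52 + 0.20   # reported 2520 MHz plus the +200 MHz offset

# FP32 peak: one FMA (2 FLOPs) per core per cycle.
fp32_peak_gflops = cuda_cores * 2 * clock_ghz
print(f"FP32 peak ~ {fp32_peak_gflops:.0f} GFLOPS")  # ~89129, vs. 90346 measured

# Stock memory bandwidth: bus width (bytes) x per-pin data rate.
bus_bits = 384
pin_gbps = 21.0           # stock; the +1000 MHz VRAM offset raises this
mem_bw_gbps = bus_bits / 8 * pin_gbps
print(f"Memory bandwidth (stock) ~ {mem_bw_gbps:.0f} GB/s")  # 1008, vs. 1017.66 measured
```

The measured float4 GFLOPS slightly exceeding the estimate suggests the core boosted a little past the 2720 MHz assumed here, and the measured bandwidth exceeding the stock 1008 GB/s is consistent with the VRAM overclock.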
@RhynarAI

Thanks for sharing. I wonder why FP16 is shown as "no support"... Ada Lovelace has FP16 support.


moyang commented May 31, 2023

I guess it means no "native support" for half precision. FWIW, Ada emulates FP16 using FP32, hence FP16 and FP32 have the same TFLOPS. In contrast, recent AMD architectures (RDNA, CDNA) have 2x the FP16 performance of FP32.
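
As I understand it, clpeak gates the half-precision test on whether the device advertises the `cl_khr_fp16` extension, which NVIDIA's OpenCL driver does not, hence the "No half precision support! Skipped" line. A minimal sketch of that kind of check (the extensions string below is a hypothetical, truncated example, not copied from a real driver):

```python
# Hypothetical, truncated example of what clGetDeviceInfo(..., CL_DEVICE_EXTENSIONS, ...)
# might return on an NVIDIA device -- note cl_khr_fp16 is absent.
nvidia_exts = "cl_khr_global_int32_base_atomics cl_khr_fp64 cl_nv_pragma_unroll"

def has_fp16(extensions: str) -> bool:
    """True if the device advertises the cl_khr_fp16 extension."""
    return "cl_khr_fp16" in extensions.split()

print(has_fp16(nvidia_exts))  # False -> the half-precision test gets skipped
```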

@RhynarAI

Yeah, that's probably the reason. But it is still misleading - because of course one can do FP16 on Ada, and the Tensor Cores do have native FP16 support, so it's not like the card can't process or store FP16/BF16.
Is Tensor Core support possible in OpenCL?
