CUDA C++ Best Practices Guide
Release 12.5

NVIDIA

May 09, 2024

Contents

1 What Is This Document?
2 Who Should Read This Guide?
3 Assess, Parallelize, Optimize, Deploy
    3.1 Assess
    3.2 Parallelize
    3.3 Optimize
    3.4 Deploy
4 Recommendations and Best Practices
5 Assessing Your Application
6 Heterogeneous Computing
    6.1 Differences between Host and Device
    6.2 What Runs on a CUDA-Enabled Device?
7 Application Profiling
    7.1 Profile
        7.1.1 Creating the Profile
        7.1.2 Identifying Hotspots
        7.1.3 Understanding Scaling
            7.1.3.1 Strong Scaling and Amdahl's Law
            7.1.3.2 Weak Scaling and Gustafson's Law
            7.1.3.3 Applying Strong and Weak Scaling
8 Parallelizing Your Application
9 Getting Started
    9.1 Parallel Libraries
    9.2 Parallelizing Compilers
    9.3 Coding to Expose Parallelism
10 Getting the Right Answer
    10.1 Verification
        10.1.1 Reference Comparison
        10.1.2 Unit Testing
    10.2 Debugging
    10.3 Numerical Accuracy and Precision
        10.3.1 Single vs. Double Precision
        10.3.2 Floating Point Math Is not Associative
        10.3.3 IEEE 754 Compliance
        10.3.4 x86 80-bit Computations
11 Optimizing CUDA Applications
12 Performance Metrics
    12.1 Timing
        12.1.1 Using CPU Timers
        12.1.2 Using CUDA GPU Timers
    12.2 Bandwidth
        12.2.1 Theoretical Bandwidth Calculation
        12.2.2 Effective Bandwidth Calculation
        12.2.3 Throughput Reported by Visual Profiler
13 Memory Optimizations
    13.1 Data Transfer Between Host and Device
        13.1.1 Pinned Memory
        13.1.2 Asynchronous and Overlapping Transfers with Computation
        13.1.3 Zero Copy
        13.1.4 Unified Virtual Addressing
    13.2 Device Memory Spaces
        13.2.1 Coalesced Access to Global Memory
            13.2.1.1 A Simple Access Pattern
            13.2.1.2 A Sequential but Misaligned Access Pattern
            13.2.1.3 Effects of Misaligned Accesses
            13.2.1.4 Strided Accesses
        13.2.2 L2 Cache
            13.2.2.1 L2 Cache Access Window
            13.2.2.2 Tuning the Access Window Hit-Ratio
        13.2.3 Shared Memory
            13.2.3.1 Shared Memory and Memory Banks
            13.2.3.2 Shared Memory in Matrix Multiplication (C=AB)
            13.2.3.3 Shared Memory in Matrix Multiplication (C=AAT)
            13.2.3.4 Asynchronous Copy from Global Memory to Shared Memory
        13.2.4 Local Memory
        13.2.5 Texture Memory
            13.2.5.1 Additional Texture Capabilities
        13.2.6 Constant Memory
        13.2.7 Registers
            13.2.7.1 Register Pressure
    13.3 Allocation
    13.4 NUMA Best Practices
14 Execution Configuration Optimizations
    14.1 Occupancy
        14.1.1 Calculating Occupancy
    14.2 Hiding Register Dependencies
    14.3 Thread and Block Heuristics
    14.4 Effects of Shared Memory
    14.5 Concurrent Kernel Execution
    14.6 Multiple contexts
15 Instruction Optimization
    15.1 Arithmetic Instructions
        15.1.1 Division Modulo Operations
        15.1.2 Loop Counters Signed vs. Unsigned
        15.1.3 Reciprocal Square Root
        15.1.4 Other Arithmetic Instructions
        15.1.5 Exponentiation With Small Fractional Arguments
        15.1.6 Math Libraries
        15.1.7 Precision-related Compiler Flags
    15.2 Memory Instructions
16 Control Flow
    16.1 Branching and Divergence
    16.2 Branch Predication
17 Deploying CUDA Applications
18 Understanding the Programming Environment
    18.1 CUDA Compute Capability
    18.2 Additional Hardware Data
    18.3 Which Compute Capability Target
    18.4 CUDA Runtime
19 CUDA Compatibility Developer's Guide
    19.1 CUDA Toolkit Versioning
    19.2 Source Compatibility
    19.3 Binary Compatibility
        19.3.1 CUDA Binary (cubin) Compatibility
    19.4 CUDA Compatibility Across Minor Releases
        19.4.1 Existing CUDA Applications within Minor Versions of CUDA
            19.4.1.1 Handling New CUDA Features and Driver APIs
            19.4.1.2 Using PTX
            19.4.1.3 Dynamic Code Generation
            19.4.1.4 Recommendations for building a minor-version compatible library
            19.4.1.5 Recommendations for taking advantage of minor version compatibility in your application
20 Preparing for Deployment
    20.1 Testing for CUDA Availability
    20.2 Error Handling
    20.3 Building for Maximum Compatibility
    20.4 Distributing the CUDA Runtime and Libraries
        20.4.1 CUDA Toolkit Library Redistribution
            20.4.1.1 Which Files to Redistribute
            20.4.1.2 Where to Install Redistributed CUDA Libraries
21 Deployment Infrastructure Tools
    21.1 Nvidia-SMI
        21.1.1 Queryable state
        21.1.2 Modifiable state
    21.2 NVML
    21.3 Cluster Management Tools
    21.4 Compiler JIT Cache Management Tools
    21.5 CUDA_VISIBLE_DEVICES
22 Recommendations and Best Practices
    22.1 Overall Performance Optimization Strategies
23 nvcc Compiler Switches
    23.1 nvcc
24 Notices
    24.1 Notice
    24.2 OpenCL
    24.3 Trademarks

CUDA C++ Best Practices Guide

The programming guide to using the CUDA Toolkit to obtain the best performance from NVIDIA GPUs.

Chapter 1. What Is This Document?

This Best Practices Guide is a manual to help developers obtain the best performance from NVIDIA® CUDA® GPUs. It presents established parallelization and optimization techniques and explains coding metaphors and idioms that can greatly simplify programming for CUDA-capable GPU architectures.

While the contents can be used as a reference manual, you should be aware that some topics are revisited in different contexts as various programming and configuration topics are explored. As a result, it is recommended that first-time readers proceed through the guide sequentially. This approach will greatly improve your understanding of effective programming practices and enable you to better use the guide for reference later.

Chapter 2. Who Should Read This Guide?
The discussions in this guide all use the C++ programming language, so you should be comfortable reading C++ code.

This guide refers to and relies on several other documents that you should have at your disposal for reference, all of which are available at no cost from the CUDA website https://docs.nvidia.com/cuda/. The following documents are especially important resources:

▶ CUDA Installation Guide
▶ CUDA C++ Programming Guide
▶ CUDA Toolkit Reference Manual

In particular, the optimization section of this guide assumes that you have already successfully downloaded and installed the CUDA Toolkit (if not, please refer to the relevant CUDA Installation Guide for your platform) and that you have a basic familiarity with the CUDA C++ programming language and environment (if not, please refer to the CUDA C++ Programming Guide).

Chapter 3. Assess, Parallelize, Optimize, Deploy

This guide introduces the Assess, Parallelize, Optimize, Deploy (APOD) design cycle for applications with the goal of helping application developers to rapidly identify the portions of their code that would most readily benefit from GPU acceleration, rapidly realize that benefit, and begin leveraging the resulting speedups in production as early as possible.

APOD is a cyclical process: initial speedups can be achieved, tested, and deployed with only minimal initial investment of time, at which point the cycle can begin again by identifying further optimization opportunities, seeing additional speedups, and then deploying the even faster versions of the application into production.

3.1. Assess

For an existing project, the first step is to assess the application to locate the parts of the code that are responsible for the bulk of the execution time. Armed with this knowledge, the developer can evaluate these bottlenecks for parallelization and start to investigate GPU acceleration.

By understanding the end-user's requirements and constraints and by applying Amdahl's and Gustafson's laws, the developer can determine the upper bound of performance improvement from acceleration of the identified portions of the application.

3.2. Parallelize

Having identified the hotspots and having done the basic exercises to set goals and expectations, the developer needs to parallelize the code. Depending on the original code, this can be as simple as calling into an existing GPU-optimized library such as cuBLAS, cuFFT, or Thrust, or it could be as simple as adding a few preprocessor directives as hints to a parallelizing compiler.

On the other hand, some applications' designs will require some amount of refactoring to expose their inherent parallelism. As even CPU architectures will require exposing parallelism in order to improve or simply maintain the performance of sequential applications, the CUDA family of parallel programming languages (CUDA C++, CUDA Fortran, etc.) aims to make the expression of this parallelism as simple as possible, while simultaneously enabling operation on CUDA-capable GPUs designed for maximum parallel throughput.

3.3. Optimize

After each round of application parallelization is complete, the developer can move to optimizing the implementation to improve performance. Since there are many possible optimizations that can be considered, having a good understanding of the needs of the application can help to make the process as smooth as possible.
However, as with APOD as a whole, program optimization is an iterative process (identify an opportunity for optimization, apply and test the optimization, verify the speedup achieved, and repeat), meaning that it is not necessary for a programmer to spend large amounts of time memorizing the bulk of all possible optimization strategies prior to seeing good speedups. Instead, strategies can be applied incrementally as they are learned.

Optimizations can be applied at various levels, from overlapping data transfers with computation all the way down to fine-tuning floating-point operation sequences. The available profiling tools are invaluable for guiding this process, as they can help suggest a next-best course of action for the developer's optimization efforts and provide references into the relevant portions of the optimization section of this guide.

3.4. Deploy

Having completed the GPU acceleration of one or more components of the application it is possible to compare the outcome with the original expectation. Recall that the initial assess step allowed the developer to determine an upper bound for the potential speedup attainable by accelerating given hotspots.

Before tackling other hotspots to improve the total speedup, the developer should consider taking the partially parallelized implementation and carry it through to production. This is important for a number of reasons; for example, it allows the user to profit from their investment as early as possible (the speedup may be partial but is still valuable), and it minimizes risk for the developer and the user by providing an evolutionary rather than revolutionary set of changes to the application.

Chapter 4. Recommendations and Best Practices

Throughout this guide, specific recommendations are made regarding the design and implementation of CUDA C++ code. These recommendations are categorized by priority, which is a blend of the effect of the recommendation and its scope. Actions that present substantial improvements for most CUDA applications have the highest priority, while small optimizations that affect only very specific situations are given a lower priority.

Before implementing lower priority recommendations, it is good practice to make sure all higher priority recommendations that are relevant have already been applied. This approach will tend to provide the best results for the time invested and will avoid the trap of premature optimization.

The criteria of benefit and scope for establishing priority will vary depending on the nature of the program. In this guide, they represent a typical case. Your code might reflect different priority factors. Regardless of this possibility, it is good practice to verify that no higher-priority recommendations have been overlooked before undertaking lower-priority items.

Note: Code samples throughout the guide omit error checking for conciseness. Production code should, however, systematically check the error code returned by each API call and check for failures in kernel launches by calling cudaGetLastError().
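To make the recommendation in the note concrete, the listing below is a minimal sketch of one way such checking can be structured; the checkCuda helper and myKernel are illustrative names rather than utilities provided by the CUDA Toolkit, and real applications may prefer a richer error-handling policy.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Illustrative helper: terminate with a message if a runtime API call failed.
inline void checkCuda(cudaError_t err, const char *what) {
    if (err != cudaSuccess) {
        std::fprintf(stderr, "%s failed: %s\n", what, cudaGetErrorString(err));
        std::exit(EXIT_FAILURE);
    }
}

__global__ void myKernel(float *data) { /* hypothetical kernel body */ }

int main() {
    float *d_data = nullptr;
    checkCuda(cudaMalloc(&d_data, 256 * sizeof(float)), "cudaMalloc");

    myKernel<<<1, 256>>>(d_data);
    checkCuda(cudaGetLastError(), "kernel launch");          // catches launch-configuration errors
    checkCuda(cudaDeviceSynchronize(), "kernel execution");  // catches errors raised during execution

    checkCuda(cudaFree(d_data), "cudaFree");
    return 0;
}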
Chapter 5. Assessing Your Application

From supercomputers to mobile phones, modern processors increasingly rely on parallelism to provide performance. The core computational unit, which includes control, arithmetic, registers and typically some cache, is replicated some number of times and connected to memory via a network. As a result, all modern processors require parallel code in order to achieve good utilization of their computational power.

While processors are evolving to expose more fine-grained parallelism to the programmer, many existing applications have evolved either as serial codes or as coarse-grained parallel codes (for example, where the data is decomposed into regions processed in parallel, with sub-regions shared using MPI). In order to profit from any modern processor architecture, GPUs included, the first steps are to assess the application to identify the hotspots, determine whether they can be parallelized, and understand the relevant workloads both now and in the future.

Chapter 6. Heterogeneous Computing

CUDA programming involves running code on two different platforms concurrently: a host system with one or more CPUs and one or more CUDA-enabled NVIDIA GPU devices.

While NVIDIA GPUs are frequently associated with graphics, they are also powerful arithmetic engines capable of running thousands of lightweight threads in parallel. This capability makes them well suited to computations that can leverage parallel execution.

However, the device is based on a distinctly different design from the host system, and it's important to understand those differences and how they determine the performance of CUDA applications in order to use CUDA effectively.

6.1. Differences between Host and Device

The primary differences are in threading model and in separate physical memories:

Threading resources
Execution pipelines on host systems can support a limited number of concurrent threads. For example, servers that have two 32-core processors can run only 64 threads concurrently (or a small multiple of that if the CPUs support simultaneous multithreading). By comparison, the smallest executable unit of parallelism on a CUDA device comprises 32 threads (termed a warp of threads). Modern NVIDIA GPUs can support up to 2048 active threads concurrently per multiprocessor (see Features and Specifications of the CUDA C++ Programming Guide). On GPUs with 80 multiprocessors, this leads to more than 160,000 concurrently active threads.

Threads
Threads on a CPU are generally heavyweight entities. The operating system must swap threads on and off CPU execution channels to provide multithreading capability. Context switches (when two threads are swapped) are therefore slow and expensive. By comparison, threads on GPUs are extremely lightweight. In a typical system, thousands of threads are queued up for work (in warps of 32 threads each). If the GPU must wait on one warp of threads, it simply begins executing work on another. Because separate registers are allocated to all active threads, no swapping of registers or other state need occur when switching among GPU threads. Resources stay allocated to each thread until it completes its execution. In short, CPU cores are designed to minimize latency for a small number of threads at a time each, whereas GPUs are designed to handle a large number of concurrent, lightweight threads in order to maximize throughput.

RAM
The host system and the device each have their own distinct attached physical memories.[1]
As the host and device memories are separated, items in the host memory must occasionally be communicated between device memory and host memory as described in What Runs on a CUDA-Enabled Device?.

[1] On Systems on a Chip with integrated GPUs, such as NVIDIA® Tegra®, host and device memory are physically the same, but there is still a logical distinction between host and device memory. See the Application Note on CUDA for Tegra for details.

These are the primary hardware differences between CPU hosts and GPU devices with respect to parallel programming. Other differences are discussed as they arise elsewhere in this document. Applications composed with these differences in mind can treat the host and device together as a cohesive heterogeneous system wherein each processing unit is leveraged to do the kind of work it does best: sequential work on the host and parallel work on the device.

6.2. What Runs on a CUDA-Enabled Device?

The following issues should be considered when determining what parts of an application to run on the device:

▶ The device is ideally suited for computations that can be run on numerous data elements simultaneously in parallel. This typically involves arithmetic on large data sets (such as matrices) where the same operation can be performed across thousands, if not millions, of elements at the same time. This is a requirement for good performance on CUDA: the software must use a large number (generally thousands or tens of thousands) of concurrent threads. The support for running numerous threads in parallel derives from CUDA's use of a lightweight threading model described above.

▶ To use CUDA, data values must be transferred from the host to the device. These transfers are costly in terms of performance and should be minimized. (See Data Transfer Between Host and Device.) This cost has several ramifications:

▶ The complexity of operations should justify the cost of moving data to and from the device. Code that transfers data for brief use by a small number of threads will see little or no performance benefit. The ideal scenario is one in which many threads perform a substantial amount of work. For example, transferring two matrices to the device to perform a matrix addition and then transferring the results back to the host will not realize much performance benefit. The issue here is the number of operations performed per data element transferred. For the preceding procedure, assuming matrices of size NxN, there are N^2 operations (additions) and 3N^2 elements transferred, so the ratio of operations to elements transferred is 1:3 or O(1). Performance benefits can be more readily achieved when this ratio is higher. For example, a matrix multiplication of the same matrices requires N^3 operations (multiply-add), so the ratio of operations to elements transferred is O(N), in which case the larger the matrix the greater the performance benefit. The types of operations are an additional factor, as additions have different complexity profiles than, for example, trigonometric functions. It is important to include the overhead of transferring data to and from the device in determining whether operations should be performed on the host or on the device.

▶ Data should be kept on the device as long as possible.
Because transfers should be minimized, programs that run multiple kernels on the same data should favor leaving the data on the device between kernel calls, rather than transferring intermediate results to the host and then sending them back to the device for subsequent calculations. So, in the previous example, had the two matrices to be added already been on the device as a result of some previous calculation, or if the results of the addition would be used in some subsequent calculation, the matrix addition should be performed locally on the device. This approach should be used even if one of the steps in a sequence of calculations could be performed faster on the host. Even a relatively slow kernel may be advantageous if it avoids one or more transfers between host and device memory. Data Transfer Between Host and Device provides further details, including the measurements of bandwidth between the host and the device versus within the device proper.

▶ For best performance, there should be some coherence in memory access by adjacent threads running on the device. Certain memory access patterns enable the hardware to coalesce groups of reads or writes of multiple data items into one operation. Data that cannot be laid out so as to enable coalescing, or that doesn't have enough locality to use the L1 or texture caches effectively, will tend to see lesser speedups when used in computations on GPUs. A noteworthy exception to this is completely random memory access patterns. In general, they should be avoided, because compared to peak capabilities any architecture processes these memory access patterns at a low efficiency. However, compared to cache based architectures, like CPUs, latency hiding architectures, like GPUs, tend to cope better with completely random memory access patterns.

Chapter 7. Application Profiling

7.1. Profile

Many codes accomplish a significant portion of the work with a relatively small amount of code. Using a profiler, the developer can identify such hotspots and start to compile a list of candidates for parallelization.

7.1.1. Creating the Profile

There are many possible approaches to profiling the code, but in all cases the objective is the same: to identify the function or functions in which the application is spending most of its execution time.

Note: High Priority: To maximize developer productivity, profile the application to determine hotspots and bottlenecks.

The most important consideration with any profiling activity is to ensure that the workload is realistic - i.e., that information gained from the test and decisions based upon that information are relevant to real data. Using unrealistic workloads can lead to sub-optimal results and wasted effort both by causing developers to optimize for unrealistic problem sizes and by causing developers to concentrate on the wrong functions.

There are a number of tools that can be used to generate the profile. The following example is based on gprof, which is an open-source profiler for Linux platforms from the GNU Binutils collection.

$ gcc -O2 -g -pg myprog.c
$ gprof ./a.out > profile.txt
Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
33.34      0.02      0.02     7208     0.00     0.00  genTimeStep
16.67      0.03      0.01      240     0.04     0.12  calcStats
16.67      0.04      0.01        8     1.25     1.25  calcSummaryData
16.67      0.05      0.01        7     1.43     1.43  write
16.67      0.06      0.01                             mcount
 0.00      0.06      0.00      236     0.00     0.00  tzset
 0.00      0.06      0.00      192     0.00     0.00  tolower
 0.00      0.06      0.00       47     0.00     0.00  strlen
 0.00      0.06      0.00       45     0.00     0.00  strchr
 0.00      0.06      0.00        1     0.00    50.00  main
 0.00      0.06      0.00        1     0.00     0.00  memcpy
 0.00      0.06      0.00        1     0.00    10.11  print
 0.00      0.06      0.00        1     0.00     0.00  profil
 0.00      0.06      0.00        1     0.00    50.00  report

7.1.2. Identifying Hotspots

In the example above, we can clearly see that the function genTimeStep() takes one-third of the total running time of the application. This should be our first candidate function for parallelization. Understanding Scaling discusses the potential benefit we might expect from such parallelization.

It is worth noting that several of the other functions in the above example also take up a significant portion of the overall running time, such as calcStats() and calcSummaryData(). Parallelizing these functions as well should increase our speedup potential. However, since APOD is a cyclical process, we might opt to parallelize these functions in a subsequent APOD pass, thereby limiting the scope of our work in any given pass to a smaller set of incremental changes.

7.1.3. Understanding Scaling

The amount of performance benefit an application will realize by running on CUDA depends entirely on the extent to which it can be parallelized. Code that cannot be sufficiently parallelized should run on the host, unless doing so would result in excessive transfers between the host and the device.

Note: High Priority: To get the maximum benefit from CUDA, focus first on finding ways to parallelize sequential code.

By understanding how applications can scale it is possible to set expectations and plan an incremental parallelization strategy. Strong Scaling and Amdahl's Law describes strong scaling, which allows us to set an upper bound for the speedup with a fixed problem size. Weak Scaling and Gustafson's Law describes weak scaling, where the speedup is attained by growing the problem size. In many applications, a combination of strong and weak scaling is desirable.

7.1.3.1 Strong Scaling and Amdahl's Law

Strong scaling is a measure of how, for a fixed overall problem size, the time to solution decreases as more processors are added to a system. An application that exhibits linear strong scaling has a speedup equal to the number of processors used.

Strong scaling is usually equated with Amdahl's Law, which specifies the maximum speedup that can be expected by parallelizing portions of a serial program. Essentially, it states that the maximum speedup S of a program is:

S = 1 / ((1 - P) + P/N)

Here P is the fraction of the total serial execution time taken by the portion of code that can be parallelized and N is the number of processors over which the parallel portion of the code runs.

The larger N is (that is, the greater the number of processors), the smaller the P/N fraction. It can be simpler to view N as a very large number, which essentially transforms the equation into S = 1/(1 - P). Now, if 3/4 of the running time of a sequential program is parallelized, the maximum speedup over serial code is 1 / (1 - 3/4) = 4.
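For readers who want to repeat this arithmetic for other values of P and N, the short helper below (not part of the guide's samples) evaluates Amdahl's Law directly and reproduces the 3/4 example above.

#include <cstdio>

// Amdahl's Law: maximum speedup for a fixed problem size, where P is the
// parallelizable fraction of the serial runtime and N is the processor count.
double amdahlSpeedup(double P, double N) {
    return 1.0 / ((1.0 - P) + P / N);
}

int main() {
    std::printf("P = 0.75, N = 1024 -> S = %.2f\n", amdahlSpeedup(0.75, 1024.0));
    std::printf("P = 0.75, N -> inf -> S = %.2f\n", 1.0 / (1.0 - 0.75));  // the 4x bound from the text
    std::printf("P = 0.90, N = 8    -> S = %.2f\n", amdahlSpeedup(0.90, 8.0));
    return 0;
}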
In reality, most applications do not exhibit perfectly linear strong scaling, even if they do exhibit some degree of strong scaling. For most purposes, the key point is that the larger the parallelizable portion P is, the greater the potential speedup. Conversely, if P is a small number (meaning that the application is not substantially parallelizable), increasing the number of processors N does little to improve performance. Therefore, to get the largest speedup for a fixed problem size, it is worthwhile to spend effort on increasing P, maximizing the amount of code that can be parallelized.

7.1.3.2 Weak Scaling and Gustafson's Law

Weak scaling is a measure of how the time to solution changes as more processors are added to a system with a fixed problem size per processor; i.e., where the overall problem size increases as the number of processors is increased.

Weak scaling is often equated with Gustafson's Law, which states that in practice, the problem size scales with the number of processors. Because of this, the maximum speedup S of a program is:

S = N + (1 - P)(1 - N)

Here P is the fraction of the total serial execution time taken by the portion of code that can be parallelized and N is the number of processors over which the parallel portion of the code runs.

Another way of looking at Gustafson's Law is that it is not the problem size that remains constant as we scale up the system but rather the execution time. Note that Gustafson's Law assumes that the ratio of serial to parallel execution remains constant, reflecting additional cost in setting up and handling the larger problem.

7.1.3.3 Applying Strong and Weak Scaling

Understanding which type of scaling is most applicable to an application is an important part of estimating speedup. For some applications the problem size will remain constant and hence only strong scaling is applicable. An example would be modeling how two molecules interact with each other, where the molecule sizes are fixed.

For other applications, the problem size will grow to fill the available processors. Examples include modeling fluids or structures as meshes or grids and some Monte Carlo simulations, where increasing the problem size provides increased accuracy.

Having understood the application profile, the developer should understand how the problem size would change if the computational performance changes and then apply either Amdahl's or Gustafson's Law to determine an upper bound for the speedup.

Chapter 8. Parallelizing Your Application

Having identified the hotspots and having done the basic exercises to set goals and expectations, the developer needs to parallelize the code. Depending on the original code, this can be as simple as calling into an existing GPU-optimized library such as cuBLAS, cuFFT, or Thrust, or it could be as simple as adding a few preprocessor directives as hints to a parallelizing compiler.

On the other hand, some applications' designs will require some amount of refactoring to expose their inherent parallelism. As even CPU architectures require exposing this parallelism in order to improve or simply maintain the performance of sequential applications, the CUDA family of parallel programming languages (CUDA C++, CUDA Fortran, etc.) aims to make the expression of this parallelism as simple as possible, while simultaneously enabling operation on CUDA-capable GPUs designed for maximum parallel throughput.
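As an illustration of the library-first path mentioned above, the sketch below uses the Thrust primitives shipped with the CUDA Toolkit to sort an array and reduce it to a sum on the GPU; the data and sizes are invented for the example and it is not a sample from this guide.

#include <thrust/device_vector.h>
#include <thrust/host_vector.h>
#include <thrust/sort.h>
#include <thrust/reduce.h>
#include <cstdio>

int main() {
    // Fill a host vector with some data (here, a simple descending sequence).
    thrust::host_vector<float> h_data(1 << 20);
    for (size_t i = 0; i < h_data.size(); ++i)
        h_data[i] = static_cast<float>(h_data.size() - i);

    // Copying to a device_vector transfers the data; Thrust manages the details.
    thrust::device_vector<float> d_data = h_data;

    // Sort and reduce execute as GPU-optimized parallel primitives.
    thrust::sort(d_data.begin(), d_data.end());
    float sum = thrust::reduce(d_data.begin(), d_data.end(), 0.0f);

    std::printf("sum = %f\n", sum);
    return 0;
}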
Chapter 9. Getting Started

There are several key strategies for parallelizing sequential code. While the details of how to apply these strategies to a particular application are complex and problem-specific, the general themes listed here apply regardless of whether we are parallelizing code to run on multicore CPUs or for use on CUDA GPUs.

9.1. Parallel Libraries

The most straightforward approach to parallelizing an application is to leverage existing libraries that take advantage of parallel architectures on our behalf. The CUDA Toolkit includes a number of such libraries that have been fine-tuned for NVIDIA CUDA GPUs, such as cuBLAS, cuFFT, and so on.

The key here is that libraries are most useful when they match well with the needs of the application. Applications already using other BLAS libraries can often quite easily switch to cuBLAS, for example, whereas applications that do little to no linear algebra will have little use for cuBLAS. The same goes for other CUDA Toolkit libraries: cuFFT has an interface similar to that of FFTW, etc.

Also of note is the Thrust library, which is a parallel C++ template library similar to the C++ Standard Template Library. Thrust provides a rich collection of data parallel primitives such as scan, sort, and reduce, which can be composed together to implement complex algorithms with concise, readable source code. By describing your computation in terms of these high-level abstractions you provide Thrust with the freedom to select the most efficient implementation automatically. As a result, Thrust can be utilized in rapid prototyping of CUDA applications, where programmer productivity matters most, as well as in production, where robustness and absolute performance are crucial.

9.2. Parallelizing Compilers

Another common approach to parallelization of sequential codes is to make use of parallelizing compilers. Often this means the use of directives-based approaches, where the programmer uses a pragma or other similar notation to provide hints to the compiler about where parallelism can be found without needing to modify or adapt the underlying code itself. By exposing parallelism to the compiler, directives allow the compiler to do the detailed work of mapping the computation onto the parallel architecture.

The OpenACC standard provides a set of compiler directives to specify loops and regions of code in standard C, C++ and Fortran that should be offloaded from a host CPU to an attached accelerator such as a CUDA GPU. The details of managing the accelerator device are handled implicitly by an OpenACC-enabled compiler and runtime.

See http://www.openacc.org/ for details.

9.3. Coding to Expose Parallelism

For applications that need additional functionality or performance beyond what existing parallel libraries or parallelizing compilers can provide, parallel programming languages such as CUDA C++ that integrate seamlessly with existing sequential code are essential.

Once we have located a hotspot in our application's profile assessment and determined that custom code is the best approach, we can use CUDA C++ to expose the parallelism in that portion of our code as a CUDA kernel. We can then launch this kernel onto the GPU and retrieve the results without requiring major rewrites to the rest of our application.
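The following hypothetical sketch shows the shape of that workflow: a per-element loop expressed as a __global__ kernel and launched over the data, while the rest of the host application is left untouched. The kernel name, sizes, and launch configuration are illustrative only.

#include <cuda_runtime.h>

// Each thread scales one element; this replaces a sequential for-loop on the host.
__global__ void scaleArray(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *d_data = nullptr;
    cudaMalloc(&d_data, n * sizeof(float));

    // ... host code fills d_data (omitted); the hotspot then runs on the GPU ...
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    scaleArray<<<blocks, threadsPerBlock>>>(d_data, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(d_data);
    return 0;
}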
This approach is most straightforward when the majority of the total running time of our application is spent in a few relatively isolated portions of the code. More difficult to parallelize are applications with a very flat profile - i.e., applications where the time spent is spread out relatively evenly across a wide portion of the code base. For the latter variety of application, some degree of code refactoring to expose the inherent parallelism in the application might be necessary, but keep in mind that this refactoring work will tend to benefit all future architectures, CPU and GPU alike, so it is well worth the effort should it become necessary.

Chapter 10. Getting the Right Answer

Obtaining the right answer is clearly the principal goal of all computation. On parallel systems, it is possible to run into difficulties not typically found in traditional serial-oriented programming. These include threading issues, unexpected values due to the way floating-point values are computed, and challenges arising from differences in the way CPU and GPU processors operate. This chapter examines issues that can affect the correctness of returned data and points to appropriate solutions.

10.1. Verification

10.1.1. Reference Comparison

A key aspect of correctness verification for modifications to any existing program is to establish some mechanism whereby previous known-good reference outputs from representative inputs can be compared to new results. After each change is made, ensure that the results match using whatever criteria apply to the particular algorithm. Some will expect bitwise identical results, which is not always possible, especially where floating-point arithmetic is concerned; see Numerical Accuracy and Precision regarding numerical accuracy. For other algorithms, implementations may be considered correct if they match the reference within some small epsilon.

Note that the process used for validating numerical results can easily be extended to validate performance results as well. We want to ensure that each change we make is correct and that it improves performance (and by how much). Checking these things frequently as an integral part of our cyclical APOD process will help ensure that we achieve the desired results as rapidly as possible.

10.1.2. Unit Testing

A useful counterpart to the reference comparisons described above is to structure the code itself in such a way that is readily verifiable at the unit level. For example, we can write our CUDA kernels as a collection of many short __device__ functions rather than one large monolithic __global__ function; each device function can be tested independently before hooking them all together.

For example, many kernels have complex addressing logic for accessing memory in addition to their actual computation. If we validate our addressing logic separately prior to introducing the bulk of the computation, then this will simplify any later debugging efforts. (Note that the CUDA compiler considers any device code that does not contribute to a write to global memory as dead code subject to elimination, so we must at least write something out to global memory as a result of our addressing logic in order to successfully apply this strategy.)
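A hypothetical sketch of this structure is shown below: the addressing logic is isolated in a small __device__ helper, and a test kernel writes the computed indices to global memory so the compiler does not eliminate the code being exercised. The names and the indexing scheme are purely illustrative.

#include <cuda_runtime.h>

// Small, independently testable helper: flattened index of a (row, col)
// element in a 2D array stored with a given row pitch (in elements).
__device__ int flatIndex(int row, int col, int pitchInElements) {
    return row * pitchInElements + col;
}

// Test kernel: exercises only the addressing logic and writes each computed
// index to global memory so that the code is not removed as dead code.
__global__ void testFlatIndex(int *out, int width, int height, int pitchInElements) {
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    if (col < width && row < height)
        out[flatIndex(row, col, pitchInElements)] = flatIndex(row, col, pitchInElements);
}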
Going a step further, if most functions are defined as __host__ __device__ rather than just __device__ functions, then these functions can be tested on both the CPU and the GPU, thereby increasing our confidence that the function is correct and that there will not be any unexpected differences in the results. If there are differences, then those differences will be seen early and can be understood in the context of a simple function.

As a useful side effect, this strategy will allow us a means to reduce code duplication should we wish to include both CPU and GPU execution paths in our application: if the bulk of the work of our CUDA kernels is done in __host__ __device__ functions, we can easily call those functions from both the host code and the device code without duplication.

10.2. Debugging

CUDA-GDB is a port of the GNU Debugger that runs on Linux and Mac; see: https://developer.nvidia.com/cuda-gdb.

The NVIDIA Nsight Visual Studio Edition for Microsoft Windows 7, Windows HPC Server 2008, Windows 8.1, and Windows 10 is available as a free plugin for Microsoft Visual Studio; see: https://developer.nvidia.com/nsight-visual-studio-edition.

Several third-party debuggers support CUDA debugging as well; see: https://developer.nvidia.com/debugging-solutions for more details.

10.3. Numerical Accuracy and Precision

Incorrect or unexpected results arise principally from issues of floating-point accuracy due to the way floating-point values are computed and stored. The following sections explain the principal items of interest. Other peculiarities of floating-point arithmetic are presented in Features and Technical Specifications of the CUDA C++ Programming Guide as well as in a whitepaper and accompanying webinar on floating-point precision and performance available from https://developer.nvidia.com/content/precision-performance-floating-point-and-ieee-754-compliance-nvidia-gpus.

10.3.1. Single vs. Double Precision

Devices of compute capability 1.3 and higher provide native support for double-precision floating-point values (that is, values 64 bits wide). Results obtained using double-precision arithmetic will frequently differ from the same operation performed via single-precision arithmetic due to the greater precision of the former and due to rounding issues. Therefore, it is important to be sure to compare values of like precision and to express the results within a certain tolerance rather than expecting them to be exact.

10.3.2. Floating Point Math Is not Associative

Each floating-point arithmetic operation involves a certain amount of rounding. Consequently, the order in which arithmetic operations are performed is important. If A, B, and C are floating-point values, (A+B)+C is not guaranteed to equal A+(B+C) as it is in symbolic math. When you parallelize computations, you potentially change the order of operations and therefore the parallel results might not match sequential results. This limitation is not specific to CUDA, but an inherent part of parallel computation on floating-point values.

10.3.3. IEEE 754 Compliance

All CUDA compute devices follow the IEEE 754 standard for binary floating-point representation, with some small exceptions. These exceptions, which are detailed in Features and Technical Specifications of the CUDA C++ Programming Guide, can lead to results that differ from IEEE 754 values computed on the host system.
One of the key differences is the fused multiply-add (FMA) instruction, which combines multiply-add operations into a single instruction execution. Its result will often differ slightly from results obtained by doing the two operations separately.

10.3.4. x86 80-bit Computations

x86 processors can use 80-bit double extended precision math when performing floating-point calculations. The results of these calculations can frequently differ from pure 64-bit operations performed on the CUDA device. To get a closer match between values, set the x86 host processor to use regular double or single precision (64 bits and 32 bits, respectively). This is done with the FLDCW x86 assembly instruction or the equivalent operating system API.

Chapter 11. Optimizing CUDA Applications

After each round of application parallelization is complete, the developer can move to optimizing the implementation to improve performance. Since there are many possible optimizations that can be considered, having a good understanding of the needs of the application can help to make the process as smooth as possible. However, as with APOD as a whole, program optimization is an iterative process (identify an opportunity for optimization, apply and test the optimization, verify the speedup achieved, and repeat), meaning that it is not necessary for a programmer to spend large amounts of time memorizing the bulk of all possible optimization strategies prior to seeing good speedups. Instead, strategies can be applied incrementally as they are learned.

Optimizations can be applied at various levels, from overlapping data transfers with computation all the way down to fine-tuning floating-point operation sequences. The available profiling tools are invaluable for guiding this process, as they can help suggest a next-best course of action for the developer's optimization efforts and provide references into the relevant portions of the optimization section of this guide.

Chapter 12. Performance Metrics

When attempting to optimize CUDA code, it pays to know how to measure performance accurately and to understand the role that bandwidth plays in performance measurement. This chapter discusses how to correctly measure performance using CPU timers and CUDA events. It then explores how bandwidth affects performance metrics and how to mitigate some of the challenges it poses.

12.1. Timing

CUDA calls and kernel executions can be timed using either CPU or GPU timers. This section examines the functionality, advantages, and pitfalls of both approaches.

12.1.1. Using CPU Timers

Any CPU timer can be used to measure the elapsed time of a CUDA call or kernel execution. The details of various CPU timing approaches are outside the scope of this document, but developers should always be aware of the resolution their timing calls provide.

When using CPU timers, it is critical to remember that many CUDA API functions are asynchronous; that is, they return control back to the calling CPU thread prior to completing their work. All kernel launches are asynchronous, as are memory-copy functions with the Async suffix on their names.
Therefore, to accurately measure the elapsed time for a particular call or sequence of CUDA calls, it is necessary to synchronize the CPU thread with the GPU by calling cudaDeviceSynchronize() immediately before starting and stopping the CPU timer. cudaDeviceSynchronize() blocks the calling CPU thread until all CUDA calls previously issued by the thread are completed.

Although it is also possible to synchronize the CPU thread with a particular stream or event on the GPU, these synchronization functions are not suitable for timing code in streams other than the default stream. cudaStreamSynchronize() blocks the CPU thread until all CUDA calls previously issued into the given stream have completed. cudaEventSynchronize() blocks until a given event in a particular stream has been recorded by the GPU. Because the driver may interleave execution of CUDA calls from other non-default streams, calls in other streams may be included in the timing.

Because the default stream, stream 0, exhibits serializing behavior for work on the device (an operation in the default stream can begin only after all preceding calls in any stream have completed; and no subsequent operation in any stream can begin until it finishes), these functions can be used reliably for timing in the default stream.

Be aware that CPU-to-GPU synchronization points such as those mentioned in this section imply a stall in the GPU's processing pipeline and should thus be used sparingly to minimize their performance impact.

12.1.2. Using CUDA GPU Timers

The CUDA event API provides calls that create and destroy events, record events (including a timestamp), and convert timestamp differences into a floating-point value in milliseconds. How to time code using CUDA events illustrates their use.

How to time code using CUDA events

cudaEvent_t start, stop;
float time;

cudaEventCreate(&start);
cudaEventCreate(&stop);

cudaEventRecord( start, 0 );
kernel<<<grid,threads>>> ( d_odata, d_idata, size_x, size_y, NUM_REPS);
cudaEventRecord( stop, 0 );
cudaEventSynchronize( stop );

cudaEventElapsedTime( &time, start, stop );
cudaEventDestroy( start );
cudaEventDestroy( stop );

Here cudaEventRecord() is used to place the start and stop events into the default stream, stream 0. The device will record a timestamp for the event when it reaches that event in the stream. The cudaEventElapsedTime() function returns the time elapsed between the recording of the start and stop events. This value is expressed in milliseconds and has a resolution of approximately half a microsecond. Like the other calls in this listing, their specific operation, parameters, and return values are described in the CUDA Toolkit Reference Manual. Note that the timings are measured on the GPU clock, so the timing resolution is operating-system-independent.

12.2. Bandwidth

Bandwidth - the rate at which data can be transferred - is one of the most important gating factors for performance. Almost all changes to code should be made in the context of how they affect bandwidth. As described in Memory Optimizations of this guide, bandwidth can be dramatically affected by the choice of memory in which data is stored, how the data is laid out and the order in which it is accessed, as well as other factors.

To measure performance accurately, it is useful to calculate theoretical and effective bandwidth.
When the latter is much lower than the former, design or implementation details are likely to reduce bandwidth, and it should be the primary goal of subsequent optimization efforts to increase it.

Note: High Priority: Use the effective bandwidth of your computation as a metric when measuring performance and optimization benefits.

12.2.1. Theoretical Bandwidth Calculation

Theoretical bandwidth can be calculated using hardware specifications available in the product literature. For example, the NVIDIA Tesla V100 uses HBM2 (double data rate) RAM with a memory clock rate of 877 MHz and a 4096-bit-wide memory interface.

Using these data items, the peak theoretical memory bandwidth of the NVIDIA Tesla V100 is 898 GB/s:

(0.877 × 10^9 × (4096/8) × 2) ÷ 10^9 = 898 GB/s

In this calculation, the memory clock rate is converted into Hz, multiplied by the interface width (divided by 8, to convert bits to bytes) and multiplied by 2 due to the double data rate. Finally, this product is divided by 10^9 to convert the result to GB/s.

Note: Some calculations use 1024^3 instead of 10^9 for the final calculation. In such a case, the bandwidth would be 836.4 GiB/s. It is important to use the same divisor when calculating theoretical and effective bandwidth so that the comparison is valid.

Note: On GPUs with GDDR memory with ECC enabled the available DRAM is reduced by 6.25% to allow for the storage of ECC bits. Fetching ECC bits for each memory transaction also reduces the effective bandwidth by approximately 20% compared to the same GPU with ECC disabled, though the exact impact of ECC on bandwidth can be higher and depends on the memory access pattern. HBM2 memories, on the other hand, provide dedicated ECC resources, allowing overhead-free ECC protection.[2]

[2] As an exception, scattered writes to HBM2 see some overhead from ECC but much less than the overhead with similar access patterns on ECC-protected GDDR5 memory.

12.2.2. Effective Bandwidth Calculation

Effective bandwidth is calculated by timing specific program activities and by knowing how data is accessed by the program. To do so, use this equation:

Effective bandwidth = ((Br + Bw) ÷ 10^9) ÷ time

Here, the effective bandwidth is in units of GB/s, Br is the number of bytes read per kernel, Bw is the number of bytes written per kernel, and time is given in seconds.

For example, to compute the effective bandwidth of a 2048 x 2048 matrix copy, the following formula could be used:

Effective bandwidth = ((2048^2 × 4 × 2) ÷ 10^9) ÷ time

The number of elements is multiplied by the size of each element (4 bytes for a float), multiplied by 2 (because of the read and write), divided by 10^9 (or 1,024^3) to obtain GB of memory transferred. This number is divided by the time in seconds to obtain GB/s.

12.2.3. Throughput Reported by Visual Profiler

For devices with compute capability of 2.0 or greater, the Visual Profiler can be used to collect several different memory throughput measures.
The following throughput metrics can be displayed in the Details or Detail Graphs view:

▶ Requested Global Load Throughput
▶ Requested Global Store Throughput
▶ Global Load Throughput
▶ Global Store Throughput
▶ DRAM Read Throughput
▶ DRAM Write Throughput

The Requested Global Load Throughput and Requested Global Store Throughput values indicate the global memory throughput requested by the kernel and therefore correspond to the effective bandwidth obtained by the calculation shown under Effective Bandwidth Calculation.

Because the minimum memory transaction size is larger than most word sizes, the actual memory throughput required for a kernel can include the transfer of data not used by the kernel. For global memory accesses, this actual throughput is reported by the Global Load Throughput and Global Store Throughput values.

It's important to note that both numbers are useful. The actual memory throughput shows how close the code is to the hardware limit, and a comparison of the effective or requested bandwidth to the actual bandwidth presents a good estimate of how much bandwidth is wasted by suboptimal coalescing of memory accesses (see Coalesced Access to Global Memory). For global memory accesses, this comparison of requested memory bandwidth to actual memory bandwidth is reported by the Global Memory Load Efficiency and Global Memory Store Efficiency metrics.

Chapter 13. Memory Optimizations

Memory optimizations are the most important area for performance. The goal is to maximize the use of the hardware by maximizing bandwidth. Bandwidth is best served by using as much fast memory and as little slow-access memory as possible. This chapter discusses the various kinds of memory on the host and device and how best to set up data items to use the memory effectively.

13.1. Data Transfer Between Host and Device

The peak theoretical bandwidth between the device memory and the GPU is much higher (898 GB/s on the NVIDIA Tesla V100, for example) than the peak theoretical bandwidth between host memory and device memory (16 GB/s on the PCIe x16 Gen3). Hence, for best overall application performance, it is important to minimize data transfer between the host and the device, even if that means running kernels on the GPU that do not demonstrate any speedup compared with running them on the host CPU.

Note: High Priority: Minimize data transfer between the host and the device, even if it means running some kernels on the device that do not show performance gains when compared with running them on the host CPU.

Intermediate data structures should be created in device memory, operated on by the device, and destroyed without ever being mapped by the host or copied to host memory.

Also, because of the overhead associated with each transfer, batching many small transfers into one larger transfer performs significantly better than making each transfer separately, even if doing so requires packing non-contiguous regions of memory into a contiguous buffer and then unpacking after the transfer.

Finally, higher bandwidth between the host and the device is achieved when using page-locked (or pinned) memory, as discussed in the CUDA C++ Programming Guide and the Pinned Memory section of this document.

13.1.1. Pinned Memory

Page-locked or pinned memory transfers attain the highest bandwidth between the host and the device. On PCIe x16 Gen3 cards, for example, pinned memory can attain roughly 12 GB/s transfer rates.
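As a rough illustration of the pinned-memory path, the minimal sketch below uses hypothetical buffer names and an arbitrary size, and omits error checking; cudaHostAlloc() and cudaHostRegister() are discussed in the following paragraphs.

// Sketch: allocate pinned host memory, transfer it, and release it.
float *h_data;                                         // hypothetical host buffer
float *d_data;                                         // hypothetical device buffer
size_t nBytes = 64 * 1024 * 1024;                      // arbitrary example size (64 MB)

cudaHostAlloc(&h_data, nBytes, cudaHostAllocDefault);  // page-locked (pinned) host allocation
cudaMalloc(&d_data, nBytes);

// Copies from pinned memory can reach higher throughput than copies from pageable memory,
// and pinned memory is required for the asynchronous transfers described later in this chapter.
cudaMemcpy(d_data, h_data, nBytes, cudaMemcpyHostToDevice);

cudaFree(d_data);
cudaFreeHost(h_data);                                  // pinned allocations are released with cudaFreeHost()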
Pinned memory is allocated using the cudaHostAlloc() functions in the Runtime API. The bandwidthTest CUDA Sample shows how to use these functions as well as how to measure memory transfer performance.

For regions of system memory that have already been pre-allocated, cudaHostRegister() can be used to pin the memory on-the-fly without the need to allocate a separate buffer and copy the data into it.

Pinned memory should not be overused. Excessive use can reduce overall system performance because pinned memory is a scarce resource, but how much is too much is difficult to know in advance. Furthermore, the pinning of system memory is a heavyweight operation compared to most normal system memory allocations, so as with all optimizations, test the application and the systems it runs on for optimal performance parameters.

13.1.2. Asynchronous and Overlapping Transfers with Computation

Data transfers between the host and the device using cudaMemcpy() are blocking transfers; that is, control is returned to the host thread only after the data transfer is complete. The cudaMemcpyAsync() function is a non-blocking variant of cudaMemcpy() in which control is returned immediately to the host thread. In contrast with cudaMemcpy(), the asynchronous transfer version requires pinned host memory (see Pinned Memory), and it contains an additional argument, a stream ID. A stream is simply a sequence of operations that are performed in order on the device. Operations in different streams can be interleaved and in some cases overlapped - a property that can be used to hide data transfers between the host and the device.

Asynchronous transfers enable overlap of data transfers with computation in two different ways. On all CUDA-enabled devices, it is possible to overlap host computation with asynchronous data transfers and with device computations. For example, Overlapping computation and data transfers demonstrates how host computation in the routine cpuFunction() is performed while data is transferred to the device and a kernel using the device is executed.

Overlapping computation and data transfers

cudaMemcpyAsync(a_d, a_h, size, cudaMemcpyHostToDevice, 0);
kernel<<<grid, block>>>(a_d);
cpuFunction();

The last argument to the cudaMemcpyAsync() function is the stream ID, which in this case uses the default stream, stream 0. The kernel also uses the default stream, and it will not begin execution until the memory copy completes; therefore, no explicit synchronization is needed. Because the memory copy and the kernel both return control to the host immediately, the host function cpuFunction() overlaps their execution.

In Overlapping computation and data transfers, the memory copy and kernel execution occur sequentially. On devices that are capable of concurrent copy and compute, it is possible to overlap kernel execution on the device with data transfers between the host and the device. Whether a device has this capability is indicated by the asyncEngineCount field of the cudaDeviceProp structure (or listed in the output of the deviceQuery CUDA Sample). On devices that have this capability, the overlap once again requires pinned host memory, and, in addition, the data transfer and kernel must use different, non-default streams (streams with non-zero stream IDs).
Non-default streams are required for this overlap because memory copy, memory set functions, and kernel calls that use the default stream begin only after all preceding calls on the device (in any stream) have completed, and no operation on the device (in any stream) commences until they are finished.

Concurrent copy and execute illustrates the basic technique.

Concurrent copy and execute

cudaStreamCreate(&stream1);
cudaStreamCreate(&stream2);
cudaMemcpyAsync(a_d, a_h, size, cudaMemcpyHostToDevice, stream1);
kernel<<<grid, block, 0, stream2>>>(otherData_d);

In this code, two streams are created and used in the data transfer and kernel executions as specified in the last arguments of the cudaMemcpyAsync call and the kernel's execution configuration.

Concurrent copy and execute demonstrates how to overlap kernel execution with asynchronous data transfer. This technique could be used when the data dependency is such that the data can be broken into chunks and transferred in multiple stages, launching multiple kernels to operate on each chunk as it arrives. Sequential copy and execute and Staged concurrent copy and execute demonstrate this. They produce equivalent results. The first segment shows the reference sequential implementation, which transfers and operates on an array of N floats (where N is assumed to be evenly divisible by nThreads).

Sequential copy and execute

cudaMemcpy(a_d, a_h, N*sizeof(float), dir);
kernel<<<N/nThreads, nThreads>>>(a_d);

Staged concurrent copy and execute shows how the transfer and kernel execution can be broken up into nStreams stages. This approach permits some overlapping of the data transfer and execution.

Staged concurrent copy and execute

size=N*sizeof(float)/nStreams;
for (i=0; i<nStreams; i++) {
    offset = i*N/nStreams;
    cudaMemcpyAsync(a_d+offset, a_h+offset, size, dir, stream[i]);
    kernel<<<N/(nThreads*nStreams), nThreads, 0, stream[i]>>>(a_d+offset);
}

(In Staged concurrent copy and execute, it is assumed that N is evenly divisible by nThreads*nStreams.) Because execution within a stream occurs sequentially, none of the kernels will launch until the data transfers in their respective streams complete. Current GPUs can simultaneously process asynchronous data transfers and execute kernels. GPUs with a single copy engine can perform one asynchronous data transfer and execute kernels, whereas GPUs with two copy engines can simultaneously perform one asynchronous data transfer from the host to the device, one asynchronous data transfer from the device to the host, and execute kernels. The number of copy engines on a GPU is given by the asyncEngineCount field of the cudaDeviceProp structure, which is also listed in the output of the deviceQuery CUDA Sample. (It should be mentioned that it is not possible to overlap a blocking transfer with an asynchronous transfer, because the blocking transfer occurs in the default stream, so it will not begin until all previous CUDA calls complete. It will not allow any other CUDA call to begin until it has completed.)

A diagram depicting the timeline of execution for the two code segments is shown in Figure 1; nStreams is equal to 4 for Staged concurrent copy and execute in the bottom half of the figure.

Fig. 1: Timeline comparison for copy and kernel execution (top: sequential; bottom: concurrent)

For this example, it is assumed that the data transfer and kernel execution times are comparable. In such cases, and when the execution time (tE) exceeds the transfer time (tT), a rough estimate for the overall time is tE + tT/nStreams for the staged version versus tE + tT for the sequential version.
If the transfer time exceeds the execution time, a rough estimate for the overall time is tT + tE/nStreams.

13.1.3. Zero Copy

Zero copy is a feature that was added in version 2.2 of the CUDA Toolkit. It enables GPU threads to directly access host memory. For this purpose, it requires mapped pinned (non-pageable) memory. On integrated GPUs (i.e., GPUs with the integrated field of the CUDA device properties structure set to 1), mapped pinned memory is always a performance gain because it avoids superfluous copies as integrated GPU and CPU memory are physically the same. On discrete GPUs, mapped pinned memory is advantageous only in certain cases. Because the data is not cached on the GPU, mapped pinned memory should be read or written only once, and the global loads and stores that read and write the memory should be coalesced. Zero copy can be used in place of streams because kernel-originated data transfers automatically overlap kernel execution without the overhead of setting up and determining the optimal number of streams.

Note: Low Priority: Use zero-copy operations on integrated GPUs for CUDA Toolkit version 2.2 and later.

The host code in Zero-copy host code shows how zero copy is typically set up.

Zero-copy host code

float *a_h, *a_map;
...
cudaGetDeviceProperties(&prop, 0);
if (!prop.canMapHostMemory)
    exit(0);
cudaSetDeviceFlags(cudaDeviceMapHost);
cudaHostAlloc(&a_h, nBytes, cudaHostAllocMapped);
cudaHostGetDevicePointer(&a_map, a_h, 0);
kernel<<<gridSize, blockSize>>>(a_map);

In this code, the canMapHostMemory field of the structure returned by cudaGetDeviceProperties() is used to check that the device supports mapping host memory to the device's address space. Page-locked memory mapping is enabled by calling cudaSetDeviceFlags() with cudaDeviceMapHost. Note that cudaSetDeviceFlags() must be called prior to setting a device or making a CUDA call that requires state (that is, essentially, before a context is created). Page-locked mapped host memory is allocated using cudaHostAlloc(), and the pointer to the mapped device address space is obtained via the function cudaHostGetDevicePointer(). In the code in Zero-copy host code, kernel() can reference the mapped pinned host memory using the pointer a_map in exactly the same way as it would if a_map referred to a location in device memory.

Note: Mapped pinned host memory allows you to overlap CPU-GPU memory transfers with computation while avoiding the use of CUDA streams. But since any repeated access to such memory areas causes repeated CPU-GPU transfers, consider creating a second area in device memory to manually cache the previously read host memory data.

13.1.4. Unified Virtual Addressing

Devices of compute capability 2.0 and later support a special addressing mode called Unified Virtual Addressing (UVA) on 64-bit Linux and Windows. With UVA, the host memory and the device memories of all installed supported devices share a single virtual address space.

Prior to UVA, an application had to keep track of which pointers referred to device memory (and for which device) and which referred to host memory as a separate bit of metadata (or as hard-coded information in the program) for each pointer. Using UVA, on the other hand, the physical memory space to which a pointer points can be determined simply by inspecting the value of the pointer using cudaPointerGetAttributes().
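As an illustration of this kind of pointer inspection, the minimal sketch below (with a hypothetical pointer ptr; not one of the guide's samples) queries the attributes and branches on the reported memory type.

// Sketch: classify an arbitrary pointer under UVA.
cudaPointerAttributes attrs;
if (cudaPointerGetAttributes(&attrs, ptr) == cudaSuccess) {
    switch (attrs.type) {          // cudaMemoryType reported for the pointer
    case cudaMemoryTypeDevice:     // device memory; attrs.device identifies the owning GPU
        break;
    case cudaMemoryTypeHost:       // pinned/registered host memory
        break;
    case cudaMemoryTypeManaged:    // managed (unified) memory
        break;
    default:                       // cudaMemoryTypeUnregistered: ordinary pageable host memory
        break;
    }
}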
Under UVA, pinned host memory allocated with cudaHostAlloc() will have identical host and device pointers, so it is not necessary to call cudaHostGetDevicePointer() for such allocations. Host memory allocations pinned after-the-fact via cudaHostRegister(), however, will continue to have different device pointers than their host pointers, so cudaHostGetDevicePointer() remains necessary in that case.

UVA is also a necessary precondition for enabling peer-to-peer (P2P) transfer of data directly across the PCIe bus or NVLink for supported GPUs in supported configurations, bypassing host memory.

See the CUDA C++ Programming Guide for further explanations and software requirements for UVA and P2P.

13.2. Device Memory Spaces

CUDA devices use several memory spaces, which have different characteristics that reflect their distinct usages in CUDA applications. These memory spaces include global, local, shared, texture, and registers, as shown in Figure 2.

Fig. 2: Memory spaces on a CUDA device

Of these different memory spaces, global memory is the most plentiful; see Features and Technical Specifications of the CUDA C++ Programming Guide for the amounts of memory available in each memory space at each compute capability level. Global, local, and texture memory have the greatest access latency, followed by constant memory, shared memory, and the register file.

The various principal traits of the memory types are shown in Table 1.

Table 1: Salient Features of Device Memory

Memory    | Location on/off chip | Cached | Access | Scope                | Lifetime
Register  | On                   | n/a    | R/W    | 1 thread             | Thread
Local     | Off                  | Yes††  | R/W    | 1 thread             | Thread
Shared    | On                   | n/a    | R/W    | All threads in block | Block
Global    | Off                  | †      | R/W    | All threads + host   | Host allocation
Constant  | Off                  | Yes    | R      | All threads + host   | Host allocation
Texture   | Off                  | Yes    | R      | All threads + host   | Host allocation

† Cached in L1 and L2 by default on devices of compute capability 6.0 and 7.x; cached only in L2 by default on devices of lower compute capabilities, though some allow opt-in to caching in L1 as well via compilation flags.

†† Cached in L1 and L2 by default except on devices of compute capability 5.x; devices of compute capability 5.x cache locals only in L2.

In the case of texture access, if a texture reference is bound to a linear array in global memory, then the device code can write to the underlying array. Texture references that are bound to CUDA arrays can be written to via surface-write operations (by binding a surface to the same underlying CUDA array storage). Reading from a texture while writing to its underlying global memory array in the same kernel launch should be avoided because the texture caches are read-only and are not invalidated when the associated global memory is modified.

13.2.1. Coalesced Access to Global Memory

A very important performance consideration in programming for CUDA-capable GPU architectures is the coalescing of global memory accesses. Global memory loads and stores by threads of a warp are coalesced by the device into as few as possible transactions.

Note: High Priority: Ensure global memory accesses are coalesced whenever possible.

The access requirements for coalescing depend on the compute capability of the device and are documented in the CUDA C++ Programming Guide.
For devices of compute capability 6.0 or higher, the requirements can be summarized quite easily: the concurrent accesses of the threads of a warp will coalesce into a number of transactions equal to the number of 32-byte transactions necessary to service all of the threads of the warp.

For certain devices of compute capability 5.2, L1-caching of accesses to global memory can be optionally enabled. If L1-caching is enabled on these devices, the number of required transactions is equal to the number of required 128-byte aligned segments.

Note: On devices of compute capability 6.0 or higher, L1-caching is the default; however, the data access unit is 32 bytes regardless of whether global loads are cached in L1 or not.

On devices with GDDR memory, accessing memory in a coalesced way is even more important when ECC is turned on. Scattered accesses increase ECC memory transfer overhead, especially when writing data to global memory.

Coalescing concepts are illustrated in the following simple examples. These examples assume compute capability 6.0 or higher and that accesses are for 4-byte words, unless otherwise noted.

13.2.1.1 A Simple Access Pattern

The first and simplest case of coalescing can be achieved by any CUDA-enabled device of compute capability 6.0 or higher: the k-th thread accesses the k-th word in a 32-byte aligned array. Not all threads need to participate.

For example, if the threads of a warp access adjacent 4-byte words (e.g., adjacent float values), four coalesced 32-byte transactions will service that memory access. Such a pattern is shown in Figure 3.

Fig. 3: Coalesced access

This access pattern results in four 32-byte transactions, indicated by the red rectangles.

If from any of the four 32-byte segments only a subset of the words are requested (e.g., if several threads had accessed the same word or if some threads did not participate in the access), the full segment is fetched anyway. Furthermore, if accesses by the threads of the warp had been permuted within or across the four segments, still only four 32-byte transactions would have been performed by a device with compute capability 6.0 or higher.

13.2.1.2 A Sequential but Misaligned Access Pattern

If sequential threads in a warp access memory that is sequential but not aligned with a 32-byte segment, five 32-byte segments will be requested, as shown in Figure 4.

Memory allocated through the CUDA Runtime API, such as via cudaMalloc(), is guaranteed to be aligned to at least 256 bytes. Therefore, choosing sensible thread block sizes, such as multiples of the warp size (i.e., 32 on current GPUs), facilitates memory accesses by warps that are properly aligned. (Consider what would happen to the memory addresses accessed by the second, third, and subsequent thread blocks if the thread block size was not a multiple of warp size, for example.)

Fig. 4: Misaligned sequential addresses that fall within five 32-byte segments

13.2.1.3 Effects of Misaligned Accesses

It is easy and informative to explore the ramifications of misaligned accesses using a simple copy kernel, such as the one in A copy kernel that illustrates misaligned accesses.
A copy kernel that illustrates misaligned accesses

__global__ void offsetCopy(float *odata, float* idata, int offset)
{
    int xid = blockIdx.x * blockDim.x + threadIdx.x + offset;
    odata[xid] = idata[xid];
}

In A copy kernel that illustrates misaligned accesses, data is copied from the input array idata to the output array, both of which exist in global memory. The kernel is executed within a loop in host code that varies the parameter offset from 0 to 32. (Figure 4 corresponds to such misalignments.) The effective bandwidth for the copy with various offsets on an NVIDIA Tesla V100 (compute capability 7.0) is shown in Figure 5.

Fig. 5: Performance of offsetCopy kernel

For the NVIDIA Tesla V100, global memory accesses with no offset or with offsets that are multiples of 8 words result in four 32-byte transactions. The achieved bandwidth is approximately 790 GB/s. Otherwise, five 32-byte segments are loaded per warp, and we would expect approximately 4/5th of the memory throughput achieved with no offsets.

In this particular example, the offset memory throughput achieved is, however, approximately 9/10th, because adjacent warps reuse the cache lines their neighbors fetched. So while the impact is still evident, it is not as large as we might have expected. It would have been more so if adjacent warps had not exhibited such a high degree of reuse of the over-fetched cache lines.

13.2.1.4 Strided Accesses

As seen above, in the case of misaligned sequential accesses, caches help to alleviate the performance impact. It may be different with non-unit-strided accesses, however, and this is a pattern that occurs frequently when dealing with multidimensional data or matrices. For this reason, ensuring that as much as possible of the data in each cache line fetched is actually used is an important part of performance optimization of memory accesses on these devices.

To illustrate the effect of strided access on effective bandwidth, see the kernel strideCopy() in A kernel to illustrate non-unit stride data copy, which copies data with a stride of stride elements between threads from idata to odata.

A kernel to illustrate non-unit stride data copy

__global__ void strideCopy(float *odata, float* idata, int stride)
{
    int xid = (blockIdx.x*blockDim.x + threadIdx.x)*stride;
    odata[xid] = idata[xid];
}

Figure 6 illustrates such a situation; in this case, threads within a warp access words in memory with a stride of 2. This action leads to a load of eight L2 cache segments per warp on the Tesla V100 (compute capability 7.0).

Fig. 6: Adjacent threads accessing memory with a stride of 2

A stride of 2 results in a 50% load/store efficiency, since half the elements in the transaction are not used and represent wasted bandwidth. As the stride increases, the effective bandwidth decreases until the point where 32 32-byte segments are loaded for the 32 threads in a warp, as indicated in Figure 7.

Fig. 7: Performance of strideCopy kernel

As illustrated in Figure 7, non-unit-stride global memory accesses should be avoided whenever possible. One method for doing so utilizes shared memory, which is discussed in the next section.

13.2.2. L2 Cache

Starting with CUDA 11.0, devices of compute capability 8.0 and above have the capability to influence persistence of data in the L2 cache.
Because L2 cache is on-chip, it potentially provides higher bandwidth and lower latency accesses to global memory.

For more details refer to the L2 Access Management section in the CUDA C++ Programming Guide.

13.2.2.1 L2 Cache Access Window

When a CUDA kernel accesses a data region in the global memory repeatedly, such data accesses can be considered to be persisting. On the other hand, if the data is only accessed once, such data accesses can be considered to be streaming. A portion of the L2 cache can be set aside for persistent accesses to a data region in global memory. If this set-aside portion is not used by persistent accesses, then streaming or normal data accesses can use it.

The L2 cache set-aside size for persisting accesses may be adjusted, within limits:

cudaGetDeviceProperties(&prop, device_id);
cudaDeviceSetLimit(cudaLimitPersistingL2CacheSize, prop.persistingL2CacheMaxSize);
/* Set aside max possible size of L2 cache for persisting accesses */

Mapping of user data to the L2 set-aside portion can be controlled using an access policy window on a CUDA stream or CUDA graph kernel node. The example below shows how to use the access policy window on a CUDA stream.

cudaStreamAttrValue stream_attribute;                                         // Stream level attributes data structure
stream_attribute.accessPolicyWindow.base_ptr  = reinterpret_cast<void*>(ptr); // Global Memory data pointer
stream_attribute.accessPolicyWindow.num_bytes = num_bytes;                    // Number of bytes for persisting accesses.
                                                                              // (Must be less than cudaDeviceProp::accessPolicyMaxWindowSize)
stream_attribute.accessPolicyWindow.hitRatio  = 1.0;                          // Hint for L2 cache hit ratio for persisting accesses in the num_bytes region
stream_attribute.accessPolicyWindow.hitProp   = cudaAccessPropertyPersisting; // Type of access property on cache hit
stream_attribute.accessPolicyWindow.missProp  = cudaAccessPropertyStreaming;  // Type of access property on cache miss.

// Set the attributes to a CUDA stream of type cudaStream_t
cudaStreamSetAttribute(stream, cudaStreamAttributeAccessPolicyWindow, &stream_attribute);

The access policy window requires a value for hitRatio and num_bytes. Depending on the value of the num_bytes parameter and the size of L2 cache, one may need to tune the value of hitRatio to avoid thrashing of L2 cache lines.

13.2.2.2 Tuning the Access Window Hit-Ratio

The hitRatio parameter can be used to specify the fraction of accesses that receive the hitProp property. For example, if the hitRatio value is 0.6, 60% of the memory accesses in the global memory region [ptr..ptr+num_bytes) have the persisting property and 40% of the memory accesses have the streaming property. To understand the effect of hitRatio and num_bytes, we use a sliding window microbenchmark.

This microbenchmark uses a 1024 MB region in GPU global memory. First, we set aside 30 MB of the L2 cache for persisting accesses using cudaDeviceSetLimit(), as discussed above. Then, as shown in the figure below, we specify that the accesses to the first freqSize * sizeof(int) bytes of the memory region are persistent. This data will thus use the L2 set-aside portion. In our experiment, we vary the size of this persistent data region from 10 MB to 60 MB to model various scenarios where data fits in or exceeds the available L2 set-aside portion of 30 MB. Note that the NVIDIA Tesla A100 GPU has 40 MB of total L2 cache capacity.
Accesses to the remaining data of the memory region (i.e., streaming data) are considered normal or streaming accesses and will thus use the remaining 10 MB of the non set-aside L2 portion (unless part of the L2 set-aside portion is unused).

Fig. 8: Mapping Persistent data accesses to set-aside L2 in sliding window experiment

Consider the following kernel code and access window parameters, as the implementation of the sliding window experiment.

__global__ void kernel(int *data_persistent, int *data_streaming, int dataSize, int freqSize) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;

    /* Each CUDA thread accesses one element in the persistent data section
       and one element in the streaming data section.
       Because the size of the persistent memory region (freqSize * sizeof(int) bytes) is much
       smaller than the size of the streaming memory region (dataSize * sizeof(int) bytes), data
       in the persistent region is accessed more frequently */

    data_persistent[tid % freqSize] = 2 * data_persistent[tid % freqSize];
    data_streaming[tid % dataSize] = 2 * data_streaming[tid % dataSize];
}

stream_attribute.accessPolicyWindow.base_ptr  = reinterpret_cast<void*>(data_persistent);
stream_attribute.accessPolicyWindow.num_bytes = freqSize * sizeof(int);  // Number of bytes for persisting accesses in range 10-60 MB
stream_attribute.accessPolicyWindow.hitRatio  = 1.0;                     // Hint for cache hit ratio. Fixed value 1.0

The performance of the above kernel is shown in the chart below. When the persistent data region fits well into the 30 MB set-aside portion of the L2 cache, a performance increase of as much as 50% is observed. However, once the size of this persistent data region exceeds the size of the L2 set-aside cache portion, approximately a 10% performance drop is observed due to thrashing of L2 cache lines.

Fig. 9: The performance of the sliding-window benchmark with fixed hit-ratio of 1.0

In order to optimize the performance, when the size of the persistent data is more than the size of the set-aside L2 cache portion, we tune the num_bytes and hitRatio parameters in the access window as below.

stream_attribute.accessPolicyWindow.base_ptr  = reinterpret_cast<void*>(data_persistent);
stream_attribute.accessPolicyWindow.num_bytes = 20*1024*1024;                                 // 20 MB
stream_attribute.accessPolicyWindow.hitRatio  = (20*1024*1024)/((float)freqSize*sizeof(int)); // Such that up to 20MB of data is resident.

We fix the num_bytes in the access window to 20 MB and tune the hitRatio such that a random 20 MB of the total persistent data is resident in the L2 set-aside cache portion. The remaining portion of this persistent data will be accessed using the streaming property. This helps in reducing cache thrashing. The results are shown in the chart below, where we see good performance regardless of whether the persistent data fits in the L2 set-aside or not.

Fig. 10: The performance of the sliding-window benchmark with tuned hit-ratio

13.2.3. Shared Memory

Because it is on-chip, shared memory has much higher bandwidth and lower latency than local and global memory - provided there are no bank conflicts between the threads, as detailed in the following section.
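As background for the bank discussion that follows, shared memory is declared either statically inside a kernel or sized dynamically at launch time. The short sketch below (hypothetical kernel and variable names, not one of the guide's samples) shows both forms.

// Static shared memory: size fixed at compile time.
__global__ void staticReverse(float *in, float *out)
{
    __shared__ float tile[256];                       // assumes a 256-thread block
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = in[idx];
    __syncthreads();                                  // make all writes visible to the block
    out[idx] = tile[blockDim.x - 1 - threadIdx.x];
}

// Dynamic shared memory: the size is supplied as the third launch-configuration parameter.
__global__ void dynamicScale(float *in, float *out)
{
    extern __shared__ float tile[];                   // sized at launch
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = in[idx];
    __syncthreads();
    out[idx] = 2.0f * tile[threadIdx.x];
}

// Possible launches (block.x threads per block):
//   staticReverse<<<grid, block>>>(d_in, d_out);
//   dynamicScale<<<grid, block, block.x * sizeof(float)>>>(d_in, d_out);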
13.2.3.1 Shared Memory and Memory Banks

To achieve high memory bandwidth for concurrent accesses, shared memory is divided into equally sized memory modules (banks) that can be accessed simultaneously. Therefore, any memory load or store of n addresses that spans n distinct memory banks can be serviced simultaneously, yielding an effective bandwidth that is n times as high as the bandwidth of a single bank.

However, if multiple addresses of a memory request map to the same memory bank, the accesses are serialized. The hardware splits a memory request that has bank conflicts into as many separate conflict-free requests as necessary, decreasing the effective bandwidth by a factor equal to the number of separate memory requests. The one exception here is when multiple threads in a warp address the same shared memory location, resulting in a broadcast. In this case, multiple broadcasts from different banks are coalesced into a single multicast from the requested shared memory locations to the threads.

To minimize bank conflicts, it is important to understand how memory addresses map to memory banks and how to optimally schedule memory requests.

On devices of compute capability 5.x or newer, each bank has a bandwidth of 32 bits every clock cycle, and successive 32-bit words are assigned to successive banks. The warp size is 32 threads and the number of banks is also 32, so bank conflicts can occur between any threads in the warp. See Compute Capability 5.x in the CUDA C++ Programming Guide for further details.

13.2.3.2 Shared Memory in Matrix Multiplication (C=AB)

Shared memory enables cooperation between threads in a block. When multiple threads in a block use the same data from global memory, shared memory can be used to access the data from global memory only once. Shared memory can also be used to avoid uncoalesced memory accesses by loading and storing data in a coalesced pattern from global memory and then reordering it in shared memory. Aside from memory bank conflicts, there is no penalty for non-sequential or unaligned accesses by a warp in shared memory.

The use of shared memory is illustrated via the simple example of a matrix multiplication C = AB for the case with A of dimension Mxw, B of dimension wxN, and C of dimension MxN. To keep the kernels simple, M and N are multiples of 32, since the warp size (w) is 32 for current devices.

A natural decomposition of the problem is to use a block and tile size of wxw threads. Therefore, in terms of wxw tiles, A is a column matrix, B is a row matrix, and C is their outer product; see Figure 11. A grid of N/w by M/w blocks is launched, where each thread block calculates the elements of a different tile in C from a single tile of A and a single tile of B.

Fig. 11: Block-column matrix multiplied by block-row matrix. Block-column matrix (A) multiplied by block-row matrix (B) with resulting product matrix (C).

To do this, the simpleMultiply kernel (Unoptimized matrix multiplication) calculates the output elements of a tile of matrix C.

Unoptimized matrix multiplication

__global__ void simpleMultiply(float *a, float* b, float *c, int N)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    float sum = 0.0f;
    for (int i = 0; i < TILE_DIM; i++) {
        sum += a[row*TILE_DIM+i] * b[i*N+col];
    }
    c[row*N+col] = sum;
}

In Unoptimized matrix multiplication, a, b, and c are pointers to global memory for the matrices A, B, and C, respectively; blockDim.x, blockDim.y, and TILE_DIM are all equal to w. Each thread in the wxw-thread block calculates one element in a tile of C. row and col are the row and column of the element in C being calculated by a particular thread. The for loop over i multiplies a row of A by a column of B, which is then written to C.

The effective bandwidth of this kernel is 119.9 GB/s on an NVIDIA Tesla V100. To analyze performance, it is necessary to consider how warps access global memory in the for loop. Each warp of threads calculates one row of a tile of C, which depends on a single row of A and an entire tile of B, as illustrated in Figure 12.

Fig. 12: Computing a row of a tile. Computing a row of a tile in C using one row of A and an entire tile of B.

For each iteration i of the for loop, the threads in a warp read a row of the B tile, which is a sequential and coalesced access for all compute capabilities.

However, for each iteration i, all threads in a warp read the same value from global memory for matrix A, as the index row*TILE_DIM+i is constant within a warp. Even though such an access requires only 1 transaction on devices of compute capability 2.0 or higher, there is wasted bandwidth in the transaction, because only one 4-byte word out of 8 words in a 32-byte cache segment is used. We can reuse this cache line in subsequent iterations of the loop, and we would eventually utilize all 8 words; however, when many warps execute on the same multiprocessor simultaneously, as is generally the case, the cache line may easily be evicted from the cache between iterations i and i+1.

The performance on a device of any compute capability can be improved by reading a tile of A into shared memory as shown in Using shared memory to improve the global memory load efficiency in matrix multiplication.

Using shared memory to improve the global memory load efficiency in matrix multiplication

__global__ void coalescedMultiply(float *a, float* b, float *c, int N)
{
    __shared__ float aTile[TILE_DIM][TILE_DIM];

    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    float sum = 0.0f;
    aTile[threadIdx.y][threadIdx.x] = a[row*TILE_DIM+threadIdx.x];
    __syncwarp();
    for (int i = 0; i < TILE_DIM; i++) {
        sum += aTile[threadIdx.y][i]* b[i*N+col];
    }
    c[row*N+col] = sum;
}

In Using shared memory to improve the global memory load efficiency in matrix multiplication, each element in a tile of A is read from global memory only once, in a fully coalesced fashion (with no wasted bandwidth), to shared memory. Within each iteration of the for loop, a value in shared memory is broadcast to all threads in a warp. Instead of a __syncthreads() synchronization barrier call, a __syncwarp() is sufficient after reading the tile of A into shared memory because only threads within the warp that write the data into shared memory read this data. This kernel has an effective bandwidth of 144.4 GB/s on an NVIDIA Tesla V100. This illustrates the use of the shared memory as a user-managed cache when the hardware L1 cache eviction policy does not match up well with the needs of the application or when L1 cache is not used for reads from global memory.
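For reference, a possible host-side launch configuration for the kernels above is sketched below; it assumes TILE_DIM is 32, M and N are multiples of the tile size as stated earlier, and d_a, d_b, and d_c are hypothetical device pointers (not part of the guide's listings).

// Sketch: launching the tiled C = AB kernels with one wxw tile of C per thread block.
dim3 block(TILE_DIM, TILE_DIM);           // TILE_DIM == w == 32, so 1024 threads per block
dim3 grid(N / TILE_DIM, M / TILE_DIM);    // N/w blocks in x, M/w blocks in y

coalescedMultiply<<<grid, block>>>(d_a, d_b, d_c, N);
cudaDeviceSynchronize();                  // synchronize before timing or checking errors on the host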
A further improvement can be made to how Using shared memory to improve the global memory load efficiency in matrix multiplication deals with matrix B. In calculating each of the rows of a tile of matrix C, the entire tile of B is read. The repeated reading of the B tile can be eliminated by reading it into shared memory once (Improvement by reading additional data into shared memory).

Improvement by reading additional data into shared memory

__global__ void sharedABMultiply(float *a, float* b, float *c, int N)
{
    __shared__ float aTile[TILE_DIM][TILE_DIM], bTile[TILE_DIM][TILE_DIM];

    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    float sum = 0.0f;
    aTile[threadIdx.y][threadIdx.x] = a[row*TILE_DIM+threadIdx.x];
    bTile[threadIdx.y][threadIdx.x] = b[threadIdx.y*N+col];
    __syncthreads();
    for (int i = 0; i < TILE_DIM; i++) {
        sum += aTile[threadIdx.y][i]* bTile[i][threadIdx.x];
    }
    c[row*N+col] = sum;
}

Note that in Improvement by reading additional data into shared memory, a __syncthreads() call is required after reading the B tile because a warp reads data from shared memory that were written to shared memory by different warps. The effective bandwidth of this routine is 195.5 GB/s on an NVIDIA Tesla V100. Note that the performance improvement is not due to improved coalescing in either case, but to avoiding redundant transfers from global memory.

The results of the various optimizations are summarized in Table 2.

Table 2: Performance Improvements Optimizing C = AB Matrix Multiply

Optimization                                                     | NVIDIA Tesla V100
No optimization                                                  | 119.9 GB/s
Coalesced using shared memory to store a tile of A               | 144.4 GB/s
Using shared memory to eliminate redundant reads of a tile of B  | 195.5 GB/s

Note: Medium Priority: Use shared memory to avoid redundant transfers from global memory.

13.2.3.3 Shared Memory in Matrix Multiplication (C=AAT)

A variant of the previous matrix multiplication can be used to illustrate how strided accesses to global memory, as well as shared memory bank conflicts, are handled. This variant simply uses the transpose of A in place of B, so C = AAT.

A simple implementation for C = AAT is shown in Unoptimized handling of strided accesses to global memory.

Unoptimized handling of strided accesses to global memory

__global__ void simpleMultiply(float *a, float *c, int M)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    float sum = 0.0f;
    for (int i = 0; i < TILE_DIM; i++) {
        sum += a[row*TILE_DIM+i] * a[col*TILE_DIM+i];
    }
    c[row*M+col] = sum;
}

In Unoptimized handling of strided accesses to global memory, the row-th, col-th element of C is obtained by taking the dot product of the row-th and col-th rows of A. The effective bandwidth for this kernel is 12.8 GB/s on an NVIDIA Tesla V100. These results are substantially lower than the corresponding measurements for the C = AB kernel. The difference is in how threads in a half warp access elements of A in the second term, a[col*TILE_DIM+i], for each iteration i. For a warp of threads, col represents sequential columns of the transpose of A, and therefore col*TILE_DIM represents a strided access of global memory with a stride of w, resulting in plenty of wasted bandwidth.
The way to avoid strided access is to use shared memory as before, except in this case a warp reads a row of A into a column of a shared memory tile, as shown in An optimized handling of strided accesses using coalesced reads from global memory.

An optimized handling of strided accesses using coalesced reads from global memory

__global__ void coalescedMultiply(float *a, float *c, int M)
{
    __shared__ float aTile[TILE_DIM][TILE_DIM],
                     transposedTile[TILE_DIM][TILE_DIM];

    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    float sum = 0.0f;
    aTile[threadIdx.y][threadIdx.x] = a[row*TILE_DIM+threadIdx.x];
    transposedTile[threadIdx.x][threadIdx.y] =
        a[(blockIdx.x*blockDim.x + threadIdx.y)*TILE_DIM + threadIdx.x];
    __syncthreads();
    for (int i = 0; i < TILE_DIM; i++) {
        sum += aTile[threadIdx.y][i]* transposedTile[i][threadIdx.x];
    }
    c[row*M+col] = sum;
}

An optimized handling of strided accesses using coalesced reads from global memory uses the shared transposedTile to avoid uncoalesced accesses in the second term in the dot product and the shared aTile technique from the previous example to avoid uncoalesced accesses in the first term.

The effective bandwidth of this kernel is 140.2 GB/s on an NVIDIA Tesla V100. These results are lower than those obtained by the final kernel for C = AB. The cause of the difference is shared memory bank conflicts.

The reads of elements in transposedTile within the for loop are free of conflicts, because threads of each half warp read across rows of the tile, resulting in unit stride across the banks. However, bank conflicts occur when copying the tile from global memory into shared memory. To enable the loads from global memory to be coalesced, data are read from global memory sequentially. However, this requires writing to shared memory in columns, and because of the use of wxw tiles in shared memory, this results in a stride between threads of w banks - every thread of the warp hits the same bank (recall that w is selected as 32). These many-way bank conflicts are very expensive. The simple remedy is to pad the shared memory array so that it has an extra column, as in the following line of code.

__shared__ float transposedTile[TILE_DIM][TILE_DIM+1];

This padding eliminates the conflicts entirely, because now the stride between threads is w+1 banks (i.e., 33 for current devices), which, due to the modulo arithmetic used to compute bank indices, is equivalent to a unit stride. After this change, the effective bandwidth is 199.4 GB/s on an NVIDIA Tesla V100, which is comparable to the results from the last C = AB kernel.

The results of these optimizations are summarized in Table 3.

Table 3: Performance Improvements Optimizing C = AAT Matrix Multiplication

Optimization                                  | NVIDIA Tesla V100
No optimization                               | 12.8 GB/s
Using shared memory to coalesce global reads  | 140.2 GB/s
Removing bank conflicts                       | 199.4 GB/s

These results should be compared with those in Table 2. As can be seen from these tables, judicious use of shared memory can dramatically improve performance.
The examples in this section have illustrated three reasons to use shared memory:

▶ To enable coalesced accesses to global memory, especially to avoid large strides (for general matrices, strides are much larger than 32)
▶ To eliminate (or reduce) redundant loads from global memory
▶ To avoid wasted bandwidth

13.2.3.4 Asynchronous Copy from Global Memory to Shared Memory

CUDA 11.0 introduces an async-copy feature that can be used within device code to explicitly manage the asynchronous copying of data from global memory to shared memory. This feature enables CUDA kernels to overlap copying data from global to shared memory with computation. It also avoids an intermediary register file access traditionally present between the global memory read and the shared memory write. For more details refer to the memcpy_async section in the CUDA C++ Programming Guide.

To understand the performance difference between synchronous copy and asynchronous copy of data from global memory to shared memory, consider the following microbenchmark CUDA kernels for demonstrating the synchronous and asynchronous approaches. Asynchronous copies are hardware accelerated for the NVIDIA A100 GPU.

template <typename T>
__global__ void pipeline_kernel_sync(T *global, uint64_t *clock, size_t copy_count) {
    extern __shared__ char s[];
    T *shared = reinterpret_cast<T *>(s);

    uint64_t clock_start = clock64();

    for (size_t i = 0; i < copy_count; ++i) {
        shared[blockDim.x * i + threadIdx.x] = global[blockDim.x * i + threadIdx.x];
    }

    uint64_t clock_end = clock64();

    atomicAdd(reinterpret_cast<unsigned long long *>(clock), clock_end - clock_start);
}

template <typename T>
__global__ void pipeline_kernel_async(T *global, uint64_t *clock, size_t copy_count) {
    extern __shared__ char s[];
    T *shared = reinterpret_cast<T *>(s);

    uint64_t clock_start = clock64();

    //pipeline pipe;
    for (size_t i = 0; i < copy_count; ++i) {
        __pipeline_memcpy_async(&shared[blockDim.x * i + threadIdx.x],
                                &global[blockDim.x * i + threadIdx.x], sizeof(T));
    }
    __pipeline_commit();
    __pipeline_wait_prior(0);

    uint64_t clock_end = clock64();

    atomicAdd(reinterpret_cast<unsigned long long *>(clock), clock_end - clock_start);
}

The synchronous version of the kernel loads an element from global memory to an intermediate register and then stores the intermediate register value to shared memory. In the asynchronous version of the kernel, instructions to load from global memory and store directly into shared memory are issued as soon as the __pipeline_memcpy_async() function is called. The __pipeline_wait_prior(0) will wait until all the instructions in the pipe object have been executed. Using asynchronous copies does not use any intermediate register. Not using intermediate registers can help reduce register pressure and can increase kernel occupancy. Data copied from global memory to shared memory using asynchronous copy instructions can be cached in the L1 cache, or the L1 cache can be optionally bypassed. If individual CUDA threads are copying elements of 16 bytes, the L1 cache can be bypassed. This difference is illustrated in Figure 13.

Fig. 13: Comparing Synchronous vs Asynchronous Copy from Global Memory to Shared Memory

We evaluate the performance of both kernels using elements of size 4B, 8B and 16B per thread, i.e., using int, int2 and int4 for the template parameter. We adjust the copy_count in the kernels such that each thread block copies from 512 bytes up to 48 MB.
The performance of the kernels is shown in Figure 14.

Fig. 14: Comparing Performance of Synchronous vs Asynchronous Copy from Global Memory to Shared Memory

From the performance chart, the following observations can be made for this experiment.

▶ Best performance with synchronous copy is achieved when the copy_count parameter is a multiple of 4 for all three element sizes. The compiler can optimize groups of 4 load and store instructions. This is evident from the sawtooth curves.
▶ Asynchronous copy achieves better performance in nearly all cases.
▶ The async-copy does not require the copy_count parameter to be a multiple of 4 to maximize performance through compiler optimizations.
▶ Overall, best performance is achieved when using asynchronous copies with an element of size 8 or 16 bytes.

13.2.4. Local Memory

Local memory is so named because its scope is local to the thread, not because of its physical location. In fact, local memory is off-chip. Hence, access to local memory is as expensive as access to global memory. In other words, the term local in the name does not imply faster access.

Local memory is used only to hold automatic variables. This is done by the nvcc compiler when it determines that there is insufficient register space to hold the variable. Automatic variables that are likely to be placed in local memory are large structures or arrays that would consume too much register space and arrays that the compiler determines may be indexed dynamically.

Inspection of the PTX assembly code (obtained by compiling with the -ptx or -keep command-line options to nvcc) reveals whether a variable has been placed in local memory during the first compilation phases. If it has, it will be declared using the .local mnemonic and accessed using the ld.local and st.local mnemonics. If it has not, subsequent compilation phases might still decide otherwise, if they find the variable consumes too much register space for the targeted architecture. There is no way to check this for a specific variable, but the compiler reports total local memory usage per kernel (lmem) when run with the --ptxas-options=-v option.

13.2.5. Texture Memory

The read-only texture memory space is cached. Therefore, a texture fetch costs one device memory read only on a cache miss; otherwise, it just costs one read from the texture cache. The texture cache is optimized for 2D spatial locality, so threads of the same warp that read texture addresses that are close together will achieve best performance. Texture memory is also designed for streaming fetches with a constant latency; that is, a cache hit reduces DRAM bandwidth demand, but not fetch latency.

In certain addressing situations, reading device memory through texture fetching can be an advantageous alternative to reading device memory from global or constant memory.

13.2.5.1 Additional Texture Capabilities

If textures are fetched using tex1D(), tex2D(), or tex3D() rather than tex1Dfetch(), the hardware provides other capabilities that might be useful for some applications such as image processing, as shown in Table 4.
Table 4: Useful Features for tex1D(), tex2D(), and tex3D() Fetches

Feature                         | Use                                               | Caveat
Filtering                       | Fast, low-precision interpolation between texels  | Valid only if the texture reference returns floating-point data
Normalized texture coordinates  | Resolution-independent coding                     | None
Addressing modes                | Automatic handling of boundary cases¹             | Can be used only with normalized texture coordinates

¹ The automatic handling of boundary cases in the bottom row of Table 4 refers to how a texture coordinate is resolved when it falls outside the valid addressing range. There are two options: clamp and wrap. If x is the coordinate and N is the number of texels for a one-dimensional texture, then with clamp, x is replaced by 0 if x < 0 and by 1-1/N if 1 ≤ x.