CUDA (Compute Unified Device Architecture) is a parallel computing platform and programming model developed by NVIDIA. It lets developers harness the compute power of NVIDIA GPUs to accelerate compute-intensive applications. The following is a step-by-step guide to getting started with CUDA.
Before installing CUDA you need a supported host C++ compiler: g++ on Linux, and MSVC (or g++ inside WSL) on Windows. Then download the CUDA Toolkit installer from NVIDIA (a .run file for Linux, an .exe for Windows).

On Linux, download and run the installer:

wget https://developer.download.nvidia.com/compute/cuda/12.4.1/local_installers/cuda_12.4.1_550.54.15_linux.run
chmod +x cuda_12.4.1_550.54.15_linux.run
sudo ./cuda_12.4.1_550.54.15_linux.run

Add the CUDA binaries to your PATH and reload the profile:

echo 'export PATH=/usr/local/cuda/bin:$PATH' | sudo tee /etc/profile.d/cuda.sh
source /etc/profile

Verify the installation:

nvcc --version

On Windows, the installer places the toolkit under C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4 by default. Open a new command prompt and verify with nvcc --version.
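Beyond checking the compiler version, you can confirm that the runtime actually sees your GPU. The following is a minimal sketch (the file name query.cu is my own choice, not part of the toolkit) that lists the visible devices:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    // Ask the CUDA runtime how many GPUs are visible
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s (compute capability %d.%d)\n",
                    i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```

Compile it with nvcc query.cu -o query; if it reports at least one device, the driver and toolkit are working together.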
Here is a simple example CUDA program that adds two arrays on the GPU:
#include <iostream>
#include <math.h>

// CUDA kernel to add two arrays
__global__ void add(int n, float *x, float *y) {
    for (int i = 0; i < n; i++) {
        y[i] = x[i] + y[i];
    }
}

int main(void) {
    int N = 1 << 20; // ~1 million elements
    float *x, *y;

    // Allocate Unified Memory – accessible from CPU or GPU
    cudaMallocManaged(&x, N * sizeof(float));
    cudaMallocManaged(&y, N * sizeof(float));

    // Initialize x and y arrays on the host
    for (int i = 0; i < N; i++) {
        x[i] = 1.0f;
        y[i] = 2.0f;
    }

    // Run kernel on the GPU (one block, one thread:
    // the loop runs serially on a single GPU thread)
    add<<<1, 1>>>(N, x, y);

    // Wait for GPU to finish before accessing on host
    cudaDeviceSynchronize();

    // Check for errors (all values should be 3.0f)
    float maxError = 0.0f;
    for (int i = 0; i < N; i++) {
        maxError = fmax(maxError, fabs(y[i] - 3.0f));
    }
    std::cout << "Max error: " << maxError << std::endl;

    // Free memory
    cudaFree(x);
    cudaFree(y);
    return 0;
}
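The example above ignores the return codes of the CUDA API calls for brevity. In real code it is common to wrap every runtime call in a checking macro; here is a sketch (the macro name CUDA_CHECK is my own convention, not something shipped with the toolkit):

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Abort with a readable message if a CUDA API call fails.
#define CUDA_CHECK(call)                                                 \
    do {                                                                 \
        cudaError_t err_ = (call);                                       \
        if (err_ != cudaSuccess) {                                       \
            std::fprintf(stderr, "CUDA error: %s at %s:%d\n",            \
                         cudaGetErrorString(err_), __FILE__, __LINE__);  \
            std::exit(1);                                                \
        }                                                                \
    } while (0)

// Usage in the example above:
//   CUDA_CHECK(cudaMallocManaged(&x, N * sizeof(float)));
//   CUDA_CHECK(cudaDeviceSynchronize());
```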
Save the array-addition program as add.cu. On Linux, compile it with the nvcc compiler:

nvcc add.cu -o add

On Windows:

nvcc add.cu -o add.exe

Run the binary with ./add on Linux or add.exe on Windows; since every element of y should equal 3.0f, it should print Max error: 0.
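The launch configuration <<<1, 1>>> in the example runs the whole loop on a single GPU thread, so it gains nothing from the GPU's parallelism. A natural next step is the standard grid-stride pattern, sketched below (the blockSize value of 256 is a common choice, not a requirement):

```cuda
// Grid-stride version of the kernel: each thread starts at its
// global index and strides by the total number of threads.
__global__ void add(int n, float *x, float *y) {
    int index = blockIdx.x * blockDim.x + threadIdx.x;
    int stride = blockDim.x * gridDim.x;
    for (int i = index; i < n; i += stride) {
        y[i] = x[i] + y[i];
    }
}

// Launch with enough blocks to cover all N elements:
//   int blockSize = 256;
//   int numBlocks = (N + blockSize - 1) / blockSize;
//   add<<<numBlocks, blockSize>>>(N, x, y);
```

Because each thread skips ahead by the full grid width, the same kernel works correctly for any N, even when N is not a multiple of the block size.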