torch.backends
torch.backends controls the behavior of various backends that PyTorch supports.
These backends include:

- torch.backends.cuda
- torch.backends.cudnn
- torch.backends.mkl
- torch.backends.mkldnn
- torch.backends.openmp
torch.backends.cuda

torch.backends.cuda.is_built()
Returns whether PyTorch is built with CUDA support. Note that this doesn't necessarily mean CUDA is available; just that if this PyTorch binary were run on a machine with working CUDA drivers and devices, we would be able to use it.
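A minimal sketch contrasting this build-time check with the runtime check `torch.cuda.is_available()`, which additionally requires working drivers and a visible device:

```python
import torch

# is_built() is a compile-time property: it is True whenever this PyTorch
# binary was compiled with CUDA support, even on a machine with no GPU.
built = torch.backends.cuda.is_built()

# is_available() additionally requires working drivers and at least one
# visible CUDA device at runtime.
available = torch.cuda.is_available()

print(f"compiled with CUDA: {built}, usable at runtime: {available}")
```

On a CPU-only install the first check is False; on a CUDA build running on a GPU-less machine, the first is True and the second is False.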
torch.backends.cuda.matmul.allow_tf32
A bool that controls whether TensorFloat-32 tensor cores may be used in matrix multiplications on Ampere or newer GPUs. See TensorFloat-32 (TF32) on Ampere devices.
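The flag is a plain attribute and can be read or assigned directly; a short sketch (note that its default has changed across PyTorch releases, so read it rather than assume):

```python
import torch

# Read the current setting.
print(torch.backends.cuda.matmul.allow_tf32)

# Opt in to TF32 matmuls: on Ampere or newer GPUs this trades a little
# precision for a substantial speedup. No-op on hardware without TF32.
torch.backends.cuda.matmul.allow_tf32 = True
```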
torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction
A bool that controls whether reduced precision reductions (e.g., with fp16 accumulation type) are allowed with fp16 GEMMs.
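Like the TF32 flag, this is toggled by plain attribute assignment; a minimal sketch:

```python
import torch

# Allow fp16 GEMMs to accumulate in reduced precision for extra speed...
torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction = True

# ...or require full-precision accumulation when accuracy matters more.
torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction = False
```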
torch.backends.cuda.cufft_plan_cache
cufft_plan_cache caches the cuFFT plans.

clear()
Clears the cuFFT plan cache.
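The cache object can be inspected as well as cleared; a small sketch, guarded so it only touches the cache when a CUDA device is actually present (accessing the per-device cache requires an initialized CUDA context):

```python
import torch

if torch.cuda.is_available():
    # Cache for the current CUDA device; per-device caches are also
    # reachable by indexing, e.g. torch.backends.cuda.cufft_plan_cache[0].
    cache = torch.backends.cuda.cufft_plan_cache
    print(cache.max_size, cache.size)  # capacity and current occupancy
    cache.clear()                      # drop all cached cuFFT plans
```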
torch.backends.cuda.preferred_linalg_library(backend=None)

Warning: This flag is experimental and subject to change.

When PyTorch runs a CUDA linear algebra operation it often uses the cuSOLVER or MAGMA libraries, and if both are available it decides which to use with a heuristic. This flag (a str) allows overriding those heuristics.

- If "cusolver" is set then cuSOLVER will be used wherever possible.
- If "magma" is set then MAGMA will be used wherever possible.
- If "default" (the default) is set then heuristics will be used to pick between cuSOLVER and MAGMA if both are available.

When no input is given, this function returns the currently preferred library.

Note: When a library is preferred, other libraries may still be used if the preferred library doesn't implement the operation(s) called. This flag may achieve better performance if PyTorch's heuristic library selection is incorrect for your application's inputs.
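A short sketch of the query/set pattern described above (the exact repr of the returned backend object may differ between releases):

```python
import torch

# With no argument the call is a query: it returns the currently
# preferred backend without changing it.
current = torch.backends.cuda.preferred_linalg_library()
print(current)

# Passing "cusolver", "magma", or "default" sets the preference.
torch.backends.cuda.preferred_linalg_library("cusolver")
torch.backends.cuda.preferred_linalg_library("default")  # restore heuristics
```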
Currently supported linalg operators:
torch.backends.cudnn

torch.backends.cudnn.is_available()
Returns a bool indicating if cuDNN is currently available.
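A brief sketch pairing this check with `torch.backends.cudnn.version()`, which reports the cuDNN release the binary links against:

```python
import torch

# is_available() is a runtime check; version() returns the cuDNN version
# number (or None when cuDNN support is absent from this build).
if torch.backends.cudnn.is_available():
    print("cuDNN version:", torch.backends.cudnn.version())
else:
    print("cuDNN not available")
```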
torch.backends.cudnn.allow_tf32
A bool that controls whether TensorFloat-32 tensor cores may be used in cuDNN convolutions on Ampere or newer GPUs. See TensorFloat-32 (TF32) on Ampere devices.
torch.backends.cudnn.deterministic
A bool that, if True, causes cuDNN to use only deterministic convolution algorithms. See also torch.are_deterministic_algorithms_enabled() and torch.use_deterministic_algorithms().
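A minimal sketch showing the cuDNN-specific flag next to the broader, library-wide determinism switch referenced above:

```python
import torch

# Restrict cuDNN to deterministic convolution algorithms only.
torch.backends.cudnn.deterministic = True

# For determinism across all of PyTorch (not just cuDNN convolutions),
# the global switch also makes unsupported nondeterministic ops raise.
torch.use_deterministic_algorithms(True)
print(torch.are_deterministic_algorithms_enabled())  # True
```

Note that with the global switch enabled, operations lacking a deterministic implementation will raise a RuntimeError rather than silently run nondeterministically.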