There are many different ways to work with GPUs using Python. This page provides a discussion of the foundations behind working with GPUs, from the fundamental choice between CUDA and OpenCL to what it means to compile a kernel. It then covers the dominant approaches for using CUDA with Python.
==Foundations==
===CUDA vs. OpenCL===
At a fundamental level, using a GPU for computing means using [https://en.wikipedia.org/wiki/CUDA CUDA], [https://en.wikipedia.org/wiki/OpenCL OpenCL], or some other interface (OpenGL compute shaders, Microsoft's DirectCompute, etc.). The big trade-off between CUDA and OpenCL is proprietary performance vs. open-source generality. Usually, I favour the latter. However, at this point, NVIDIA chipsets dominate the market, and CUDA (which only runs on NVIDIA hardware) seems to be the obvious choice. There have also been some attempts to make CUDA code run on OpenCL.
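To make the comparison concrete, here is a sketch of the same element-wise vector-add kernel written in both dialects, held as Python source strings the way runtime-compilation wrappers such as PyCUDA and PyOpenCL consume them (host-side setup is omitted; this only illustrates how similar the two kernel languages are):

```python
# The same vector-add kernel in CUDA C and OpenCL C, as Python strings.
# A Python wrapper library would hand these to the vendor compiler at runtime.

cuda_kernel = r"""
__global__ void add(const float *x, const float *y, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        out[i] = x[i] + y[i];
}
"""

opencl_kernel = r"""
__kernel void add(__global const float *x, __global const float *y,
                  __global float *out, const int n)
{
    int i = get_global_id(0);  // global work-item index
    if (i < n)
        out[i] = x[i] + y[i];
}
"""
```

The bodies are nearly identical; the differences are mostly spelling: CUDA marks device entry points with <code>__global__</code> and computes the thread index from block/thread built-ins, while OpenCL uses <code>__kernel</code>, explicit address-space qualifiers, and <code>get_global_id()</code>.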
===CUDA for C++ or Fortran===
