FAQ

Frequently Asked Questions

The FAQ page is a work in progress.

What is GPU Ocelot?

GPU Ocelot is a dynamic compilation framework for PTX that functions as a drop-in replacement for NVIDIA’s CUDA Runtime API. Ocelot can interface with existing CUDA programs, dynamically analyze and recompile their CUDA kernels, and execute them on NVIDIA GPUs, multicore x86 CPUs, AMD GPUs, a functional emulator, and more.

GPU Ocelot compiles as a shared library that directly replaces libcudart.so.

How is GPU Ocelot distributed?

GPU Ocelot is distributed in source form with SCons build facilities for Linux, Mac, and Windows. Visit our GitHub site for more details, including the Installation Instructions.

The most recent commit of the GPU Ocelot code base is available via git clone:

git clone https://github.com/gtcasl/gpuocelot.git
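After cloning, the library is built with SCons. The following is a minimal sketch that assumes the default SCons targets are sufficient; the Installation Instructions linked above describe the required dependencies and any additional build or install steps:

cd gpuocelot
scons    # builds libocelot with the project's default SCons targets (assumed; see the Installation Instructions)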

Is the source code documented?

GPU Ocelot’s source code is documented using inline comments. The resulting Doxygen documentation is available here: http://gpuocelot.gatech.edu/doxygen

Which back end devices are supported?

GPU Ocelot currently executes CUDA kernels on the following devices:

  • NVIDIA GPU
  • PTX Emulator
  • Multicore CPU
  • AMD GPU
  • Remote device

Does Ocelot support OpenCL?

We are currently in the process of adding an OpenCL API front end to GPU Ocelot.

Which versions of PTX are supported?

GPU Ocelot robustly supports PTX 2.3 and CUDA 4.0. We are in the process of improving support for CUDA 4.1, though many programs compiled with CUDA 4.1 already work correctly.

How do I run my CUDA programs with GPU Ocelot?

Compile your program with nvcc normally, but link with libocelot.so instead of libcudart.so. Be sure to include the flag -arch=sm_20 to ensure nvcc compiles kernels for PTX 2.3. See the following example.

nvcc -c sourcefile.cu -arch=sm_20
g++ -o Application sourcefile.o -locelot
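Here, sourcefile.cu can be any unmodified CUDA program. The following hypothetical example (not part of the Ocelot distribution) uses only the standard CUDA Runtime API, so once it is linked against libocelot.so as shown above, the same binary runs on whichever back end Ocelot is configured to use:

// sourcefile.cu - a hypothetical, minimal CUDA program used for illustration.
// It relies only on the standard CUDA Runtime API, so no source changes are
// needed to run it through Ocelot's emulator, CPU, or GPU back ends.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float* data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main()
{
    const int n = 256;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float* d_data = 0;
    cudaMalloc((void**)&d_data, n * sizeof(float));
    cudaMemcpy(d_data, host, n * sizeof(float), cudaMemcpyHostToDevice);

    scale<<<(n + 63) / 64, 64>>>(d_data, 2.0f, n);  // double every element

    cudaMemcpy(host, d_data, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d_data);

    std::printf("host[0] = %f\n", host[0]);  // expected output: 2.000000
    return 0;
}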

How do I use Ocelot to debug my CUDA programs?

GPU Ocelot includes a rich set of correctness and validation tools, many of which are enabled by default for the PTX emulator. To use them, select the “emulated” device and make sure both the “memoryChecker” and “raceDetector” tools are enabled in Ocelot’s configuration file (see the next question).

How do I configure GPU Ocelot to use a particular backend?

GPU Ocelot is configured through a JSON document called ‘configure.ocelot’, which it expects to find in the application’s working directory.
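The following is a minimal sketch of such a file that selects the PTX emulator as the back end and enables the memory checker and race detector mentioned in the previous answer:

{
    "executive": {
        "devices": [ "emulated" ]
    },
    "trace": {
        "memoryChecker": { "enabled": true },
        "raceDetector":  { "enabled": true }
    }
}

Only the “emulated” device name, “memoryChecker”, and “raceDetector” come directly from the answers above; the grouping into “executive” and “trace” sections is an assumption about Ocelot’s configuration schema, so consult the documentation for the authoritative set of keys and options.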

Who has contributed funding toward the development of GPU Ocelot?

We gratefully acknowledge the support of this research by the National Science Foundation, LogicBlox Corporation, IBM, NVIDIA, AMD, and Sandia National Laboratories, with equipment grants from NVIDIA and Intel.