This commit provides definitions of builtins with the generic address space. There are two distinct concepts to track: whether the generic address space is supported from the user's perspective, and whether libclc, as a compiler implementation detail, must define separate generic address space overloads of its builtins. In practice a target (such as NVPTX) may notionally support the generic address space while mapping it to the same LLVM target address space as another address space (often the private one). In such cases libclc must be careful not to define both private and generic overloads of the same builtin. We track the two concepts separately, assuming that if the generic address space clashes with another, it clashes with the private one. Tracking them separately matters because some builtins, such as atomics, are defined for the generic address space but not for the private one.
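As a sketch of the distinction, consider the declarations below in OpenCL C. The guard macro names are hypothetical, used only to illustrate the two concepts; they are not libclc's actual configuration mechanism.

```c
/* Private and global overloads are declared for all targets. */
float frexp(float x, __private int *exp);
float frexp(float x, __global int *exp);

#ifdef CLC_GENERIC_DISTINCT_FROM_PRIVATE /* hypothetical guard */
/* Only declared when the generic address space maps to a distinct LLVM
 * address space. On a target like NVPTX, where generic and private can
 * share an LLVM address space, this would clash with the private
 * overload above. */
float frexp(float x, __generic int *exp);
#endif

#ifdef CLC_HAS_GENERIC_ADDRESS_SPACE /* hypothetical guard */
/* Atomics are defined for the generic address space but not the private
 * one, so they rely on the "supports generic" concept even on targets
 * where generic clashes with private. */
int atomic_fetch_add(volatile __generic atomic_int *obj, int arg);
#endif
```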
libclc
libclc is an open source implementation of the library requirements of the OpenCL C programming language, as specified by the OpenCL 1.1 Specification. The following sections of the specification impose library requirements:
- 6.1: Supported Data Types
- 6.2.3: Explicit Conversions
- 6.2.4.2: Reinterpreting Types Using as_type() and as_typen()
- 6.9: Preprocessor Directives and Macros
- 6.11: Built-in Functions
- 9.3: Double Precision Floating-Point
- 9.4: 64-bit Atomics
- 9.5: Writing to 3D image memory objects
- 9.6: Half Precision Floating-Point
libclc is intended to be used with the Clang compiler's OpenCL frontend.
libclc is designed to be portable and extensible. To this end, it provides generic implementations of most library requirements, allowing the target to override the generic implementation at the granularity of individual functions.
libclc currently supports PTX, AMDGPU, SPIRV and CLSPV targets, but support for more targets is welcome.
Compiling and installing
(In the following instructions you can use make or ninja.)
For an in-tree build, Clang must also be built at the same time:
$ cmake <path-to>/llvm-project/llvm/CMakeLists.txt -DLLVM_ENABLE_PROJECTS="libclc;clang" \
-DCMAKE_BUILD_TYPE=Release -G Ninja
$ ninja
Then install:
$ ninja install
Note that you can use the DESTDIR variable to do staged installs:
$ DESTDIR=/path/for/staged/install ninja install
To build out of tree, i.e. against an existing LLVM build or install:
$ cmake <path-to>/llvm-project/libclc/CMakeLists.txt -DCMAKE_BUILD_TYPE=Release \
-G Ninja -DLLVM_DIR=$(<path-to>/llvm-config --cmakedir)
$ ninja
Then install as before.
In both cases this will include all supported targets. You can choose which
targets are enabled by passing -DLIBCLC_TARGETS_TO_BUILD to CMake. The default
is all.
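For example, to build only a subset of targets, pass a semicolon-separated list. The target names shown here are illustrative; check libclc's CMake files for the exact list your version accepts:

```shell
$ cmake <path-to>/llvm-project/libclc/CMakeLists.txt -DCMAKE_BUILD_TYPE=Release \
    -G Ninja -DLLVM_DIR=$(<path-to>/llvm-config --cmakedir) \
    -DLIBCLC_TARGETS_TO_BUILD="amdgcn--;nvptx--;nvptx64--"
$ ninja
```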
In both cases, the LLVM build used must include the targets you want libclc
support for (AMDGPU and NVPTX are enabled in LLVM by default). The exception is
SPIRV, for which you do not need an LLVM target but do need the llvm-spirv tool
available.
Either build this in-tree, or place it in the directory pointed to by
LLVM_TOOLS_BINARY_DIR.
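If llvm-spirv is not built in-tree, one option is to build it from the Khronos SPIRV-LLVM-Translator repository and copy the binary alongside the other LLVM tools. This is a sketch only; the translator's supported build configuration and directory layout may differ for your LLVM version, so adjust paths accordingly:

```shell
# Sketch: build llvm-spirv against an existing LLVM, then place it where
# LLVM_TOOLS_BINARY_DIR points. Paths and branches are placeholders.
$ git clone https://github.com/KhronosGroup/SPIRV-LLVM-Translator.git
$ cmake -S SPIRV-LLVM-Translator -B build-spirv -G Ninja \
    -DLLVM_DIR=$(<path-to>/llvm-config --cmakedir)
$ ninja -C build-spirv llvm-spirv
$ cp build-spirv/tools/llvm-spirv/llvm-spirv <llvm-tools-binary-dir>/
```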