Previously we returned i32 on RV32 and i64 on RV64. The instructions
only consume 32 bits and only produce 32 bits. For RV64, the result
is sign extended to 64 bits like *W instructions.
This patch removes this detail from the interface to improve
portability and consistency. This matches the proposal for scalar
intrinsics here https://github.com/riscv-non-isa/riscv-c-api-doc/pull/44
I've included IR autoupgrade support as well.
I'll be doing this for other builtins/intrinsics that currently use
'long' in other patches.
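As a minimal sketch of the new interface (assuming __builtin_riscv_sha256sig0 is representative of the affected builtins), callers can now use a fixed 32-bit type on both RV32 and RV64:
```
#include <stdint.h>

// Sketch only: __builtin_riscv_sha256sig0 is assumed to be one of the
// affected builtins. It now takes and returns a 32-bit value on both
// RV32 and RV64; on RV64 the backend sign-extends the result to 64 bits,
// like *W instructions do. Requires a target with the Zknh extension.
uint32_t sig0(uint32_t x) {
  return __builtin_riscv_sha256sig0(x);
}
```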
Reviewed By: VincentWu
Differential Revision: https://reviews.llvm.org/D154647
This removes another use of 'long' to mean xlen from builtins.
I've also converted the types to unsigned as proposed in D154616.
clmul_32 is available on RV64, as its emulation is clmul+sext.w.
clmulh_32 and clmulr_32 are not available on RV64 as their emulation
is currently 6 instructions in the worst case.
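A minimal sketch, assuming the __builtin_riscv_clmul_32 spelling used by this series:
```
#include <stdint.h>

// Sketch only: usable on both RV32 and RV64; on RV64 the 32-bit form is
// emitted as a 64-bit clmul followed by sext.w. Requires Zbc or Zbkc.
uint32_t clmul32(uint32_t a, uint32_t b) {
  return __builtin_riscv_clmul_32(a, b);
}
```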
OpenCL and HIP have -cl-fp32-correctly-rounded-divide-sqrt and
-fno-hip-correctly-rounded-divide-sqrt. The corresponding fpmath metadata
was only set on fdiv, and not sqrt. The backend currently underutilizes
its sqrt lowering options; the responsibility is split between the libraries
and the backend, and this metadata is needed.
CUDA/NVCC has -prec-div and -prec-sqrt, but clang doesn't appear to be
aiming for compatibility with those. I don't know whether OpenMP has a
similar control.
D152023 made ubsan consider __builtin_clz of 0 undefined regardless of
the target. This ensures portability and matches gcc.
This causes the ACLE intrinsics to also be considered undefined for 0,
since they used the generic builtins as their implementation.
This patch adds builtins for ARM that ubsan doesn't know about to make
the behavior defined for 0. Alternatively, I could have added a zero
check to the intrinsics, but the dedicated builtin will give better -O0
codegen.
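A minimal sketch of the intended behavior, assuming the ACLE __clz wrapper in arm_acle.h now expands to the new dedicated builtin:
```
#include <arm_acle.h>
#include <stdint.h>

// ACLE defines CLZ of zero: __clz(0) yields 32. Because __clz no longer
// expands to the generic __builtin_clz, a zero input is not flagged as
// undefined behavior under -fsanitize=undefined.
unsigned leading_zeros(uint32_t x) {
  return __clz(x);
}
```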
Fixes #63113.
Reviewed By: tmatheson
Differential Revision: https://reviews.llvm.org/D154915
Builtin floating-point number classification functions:
- __builtin_isnan,
- __builtin_isinf,
- __builtin_finite, and
- __builtin_isnormal
are now implemented using `llvm.is.fpclass`.
This change makes the target callback `TargetCodeGenInfo::testFPKind`
unneeded. It is preserved in this change and should be removed later.
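A minimal sketch of what these builtins check; the class sets mentioned in the comments are the same ones described for __builtin_isfpclass further down in this log:
```
// Sketch: each call now lowers to a single llvm.is.fpclass call with the
// appropriate class mask instead of a target-specific comparison sequence.
int classify(double x) {
  if (__builtin_isnan(x))    return 1;  // signaling or quiet NaN
  if (__builtin_isinf(x))    return 2;  // negative or positive infinity
  if (__builtin_isnormal(x)) return 3;  // negative or positive normal
  return 0;
}
```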
Differential Revision: https://reviews.llvm.org/D112932
This patch renames the `OpenMPIRBuilderConfig` flags to reduce confusion over
their meaning. `IsTargetCodegen` becomes `IsGPU`, whereas `IsEmbedded` becomes
`IsTargetDevice`. The `-fopenmp-is-device` compiler option is also renamed to
`-fopenmp-is-target-device` and the `omp.is_device` MLIR attribute is renamed
to `omp.is_target_device`. Getters and setters of all these renamed properties
are also updated accordingly. Many unit tests have been updated to use the new
names, but an alias for the `-fopenmp-is-device` option is created so that
external programs do not stop working after the name change.
`IsGPU` is set when the target triple is AMDGCN or NVIDIA PTX, and it is only
valid if `IsTargetDevice` is specified as well. `IsTargetDevice` is set by the
`-fopenmp-is-target-device` compiler frontend option, which is only added to
the OpenMP device invocation for offloading-enabled programs.
Differential Revision: https://reviews.llvm.org/D154591
This adds new intrinsics to support the LDAP1 and STL1 Advanced SIMD
(Neon) instructions introduced as part of FEAT_LRCPC3.
The new intrinsics `vldap1(q)_lane`/`vstl1(q)_lane` generate IR code
similar to that of the existing `vld1(q)_lane`/`vst1(q)_lane` ones, but capturing
the difference in the atomic release/acquire memory model.
The LLVM code generation changes to ensure that this instruction pair
is lowered to the correct LDAP1/STL1 instructions will be covered in a
separate commit.
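A minimal usage sketch, assuming the vldap1_lane_u64/vstl1_lane_u64 spellings from the ACLE proposal (FEAT_LRCPC3 required):
```
#include <arm_neon.h>

// Sketch: lane load with acquire semantics and lane store with release
// semantics, mirroring vld1_lane/vst1_lane but with the stronger memory
// ordering.
uint64x1_t load_acquire(const uint64_t *p, uint64x1_t v) {
  return vldap1_lane_u64(p, v, 0);
}

void store_release(uint64_t *p, uint64x1_t v) {
  vstl1_lane_u64(p, v, 0);
}
```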
Based on a patch by Sam Elliott.
Reviewed By: tmatheson
Differential Revision: https://reviews.llvm.org/D153128
This is the way most targets do it for a simple mapping.
We can't do this for all builtins due to type overloading of the IR intrinsics.
Reviewed By: asb
Differential Revision: https://reviews.llvm.org/D154567
These builtins were recently changed to return 'int' like the
similar __builtin_clz/__builtin_ctz builtins, but the IR generation
was not updated to use a truncate.
`CGBuilderTy::CreateElementBitCast()` no longer does what its name suggests.
Remove remaining in-tree uses by one of the following methods.
* drop the call entirely
* fold it to an `Address` construction
* replace it with `Address::withElementType()`
This is an NFC cleanup effort.
Reviewed By: barannikov88, nikic, jrtc27
Differential Revision: https://reviews.llvm.org/D154285
This caused asserts in some Android and Windows builds:
SelectionDAGNodes.h:1138: llvm::SDValue::SDValue(SDNode *, unsigned int):
Assertion `(!Node || !ResNo || ResNo < Node->getNumValues()) && "Invalid result number for the given node!"' failed.
See comment on 85bdea023f
Also revert "HIP: Use frexp builtins in math headers"
which seems to depend on this change.
This reverts commit 85bdea023f.
This reverts commit bf8e92c0e7.
These are basically the same thing and only differ for strictfp,
so add both for future-proofing. Note that all the elementwise functions
are currently broken for strictfp and use non-constrained ops. Add a test
that demonstrates this, but doesn't attempt to fix it.
A new builtin function __builtin_isfpclass is added. It is called as:
__builtin_isfpclass(<floating point value>, <test>)
and returns an integer value, which is non-zero if the floating point
argument falls into one of the classes specified by the second argument,
and zero otherwise. The set of classes is an integer value, where each
value class is represented by a bit. There are ten data classes, as
defined by the IEEE-754 standard; they are represented by the following bits:
0x0001 (__FPCLASS_SNAN) - Signaling NaN
0x0002 (__FPCLASS_QNAN) - Quiet NaN
0x0004 (__FPCLASS_NEGINF) - Negative infinity
0x0008 (__FPCLASS_NEGNORMAL) - Negative normal
0x0010 (__FPCLASS_NEGSUBNORMAL) - Negative subnormal
0x0020 (__FPCLASS_NEGZERO) - Negative zero
0x0040 (__FPCLASS_POSZERO) - Positive zero
0x0080 (__FPCLASS_POSSUBNORMAL) - Positive subnormal
0x0100 (__FPCLASS_POSNORMAL) - Positive normal
0x0200 (__FPCLASS_POSINF) - Positive infinity
They have corresponding builtin macros to facilitate using the builtin
function:
if (__builtin_isfpclass(x, __FPCLASS_NEGZERO | __FPCLASS_POSZERO)) {
// x is any zero.
}
The data class encoding is identical to that used by the llvm.is.fpclass
intrinsic.
Differential Revision: https://reviews.llvm.org/D152351
* Add `Address::withElementType()` as a replacement for
`CGBuilderTy::CreateElementBitCast`.
* Partial progress towards replacing `CreateElementBitCast`, as it no
longer does what its name suggests. Either replace its uses with
`Address::withElementType()`, or remove them if no longer needed.
* Remove unused parameter 'Name' of `CreateElementBitCast`
Reviewed By: barannikov88, nikic
Differential Revision: https://reviews.llvm.org/D153196
This makes the scope and ordering arguments actually do something.
Also add some new OpenCL tests since the existing HIP tests didn't
cover address spaces.
Partial progress towards replacing in-tree uses of `Type::getPointerTo()`.
This needs to be done before deprecating the API.
Reviewed By: nikic, barannikov88
Differential Revision: https://reviews.llvm.org/D152321
Provide direct access to v_exp_f32 and v_exp_f16, so we can start
correctly lowering the generic exp intrinsics.
Unfortunately have to break from the usual naming convention of
matching the instruction name and stripping the v_ prefix. exp is
already taken by the export intrinsic. On the clang builtin side, we
have a choice of maintaining the convention to the instruction name,
or following the intrinsic name.
This will map directly to the hardware instruction which does not
handle denormals for f32. This will allow moving the generic intrinsic
to be lowered correctly. Also handles selecting the f16 version, but
there's no reason to use it over the generic intrinsic.
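A minimal sketch, assuming the builtin ended up spelled __builtin_amdgcn_exp2f (the exact spelling is the naming-convention question discussed above):
```
// Sketch only: maps to v_exp_f32, which computes 2^x and does not handle
// f32 denormals; the generic __builtin_exp2f remains the portable choice.
float fast_exp2(float x) {
  return __builtin_amdgcn_exp2f(x);
}
```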
This commit implements support for WebAssembly table types and
the respective builtins. Tables are WebAssembly objects that store
reference types. They have many semantic restrictions, including, but not
limited to, only being allowed to be declared at the top level as static
zero-length arrays, not being arguments or results of functions, and not
being stored to memory.
This commit introduces the __attribute__((wasm_table)) attribute to attach
to arrays of WebAssembly reference types, and the following builtins to
manage tables:
* ref __builtin_wasm_table_get(table, idx)
* void __builtin_wasm_table_set(table, idx, ref)
* uint __builtin_wasm_table_size(table)
* uint __builtin_wasm_table_grow(table, ref, uint)
* void __builtin_wasm_table_fill(table, idx, ref, uint)
* void __builtin_wasm_table_copy(table, table, uint, uint, uint)
This commit also enables reference-types feature at bleeding-edge.
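A minimal sketch of declaring a table and using two of the builtins; the __externref_t element type is an assumption tied to the reference-types support mentioned above:
```
// Sketch: a zero-length static array of a reference type, marked as a table.
static __externref_t refs[0] __attribute__((wasm_table));

__externref_t get_ref(int i) {
  return __builtin_wasm_table_get(refs, i);
}

void set_ref(int i, __externref_t r) {
  __builtin_wasm_table_set(refs, i, r);
}
```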
This is joint work with Alex Bradbury (@asb).
Reviewed By: aaron.ballman
Differential Revision: https://reviews.llvm.org/D139010
AMDGPU has native instructions and target intrinsics for this, but
these really should be subject to legalization and generic
optimizations. This will enable legalization of f16->f32 on targets
without f16 support.
Implement a somewhat horrible inline expansion for targets without
libcall support. This could be better if we could introduce control
flow (GlobalISel version not yet implemented). Support for strictfp
legalization is less complete but works for the simple cases.
This patch adds support for the following SME ACLE intrinsics (as defined
in https://arm-software.github.io/acle/main/acle.html):
- svld1_hor_za8 // also for _za16, _za32, _za64 and _za128
- svld1_hor_vnum_za8 // also for _za16, _za32, _za64 and _za128
- svld1_ver_za8 // also for _za16, _za32, _za64 and _za128
- svld1_ver_vnum_za8 // also for _za16, _za32, _za64 and _za128
- svst1_hor_za8 // also for _za16, _za32, _za64 and _za128
- svst1_hor_vnum_za8 // also for _za16, _za32, _za64 and _za128
- svst1_ver_za8 // also for _za16, _za32, _za64 and _za128
- svst1_ver_vnum_za8 // also for _za16, _za32, _za64 and _za128
SveEmitter.cpp is extended to generate arm_sme.h (currently named
arm_sme_draft_spec_subject_to_change.h) and other SME definitions from
arm_sme.td, which is modeled after arm_sve.td. Common TableGen definitions
are moved into arm_sve_sme_incl.td.
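A minimal usage sketch; the prototype shown here (tile, slice, predicate, pointer) is an assumption based on the ACLE text rather than the exact draft header generated by this patch, and the streaming-mode/ZA attributes required by the spec are omitted:
```
#include <arm_sme_draft_spec_subject_to_change.h>
#include <stdint.h>

// Sketch: load one horizontal slice of the 8-bit ZA tile under predicate
// control.
void load_row(uint32_t slice, svbool_t pg, const void *src) {
  svld1_hor_za8(/*tile=*/0, slice, pg, src);
}
```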
Co-authored-by: Sagar Kulkarni <sagar.kulkarni1@huawei.com>
Reviewed By: sdesmalen, kmclaughlin
Differential Revision: https://reviews.llvm.org/D127910
Add more builtins for stdio functions as in GCC, along with their
variants under the IEEE float128 ABI.
Reviewed By: tuliom
Differential Revision: https://reviews.llvm.org/D150087
For the cover letter of this patch-set, please check out D146872.
Depends on D147731.
This is the 4th patch of the patch-set.
This patch is a proof of concept and will be extended to full coverage
in the future. Currently, the old non-tuple unit-stride segment store is
not removed, and only the signed integer unit-strided segment store with
NF=2 and EEW=32 is defined here.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D147774
Clang has a mechanism to specify the required target features of a built-in
function. This patch adds such definitions to the Altivec, VSX, HTM,
PairedVec and MMA builtins.
This will help the frontend detect incompatible target features of
builtins when using the target attribute syntax.
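A hedged illustration of the scenario this targets; the builtin and feature pairing below is just an example:
```
#include <altivec.h>

// Illustrative only (compile with -maltivec): vec_add maps to an Altivec
// builtin that now carries a required-feature annotation, so the frontend
// can reject calls from contexts where that feature is unavailable (for
// example via the target attribute) instead of failing later.
vector int add4(vector int a, vector int b) {
  return vec_add(a, b);
}
```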
Reviewed By: nemanjai, kamaub
Differential Revision: https://reviews.llvm.org/D143467
Currently clang doesn't verify that the first argument of
`__builtin_assume_aligned` is of pointer type. This leads to an assertion
failure. This patch adds a check that the first argument is of pointer
type and diagnoses it with diag::err_typecheck_convert_incompatible if it
is not.
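A minimal sketch of what is accepted versus what is now diagnosed:
```
// Sketch: the first argument must be of pointer type.
void *ok(int *p) {
  return __builtin_assume_aligned(p, 16);  // fine: pointer argument
}

void *bad(int x) {
  return __builtin_assume_aligned(x, 16);  // now diagnosed instead of asserting
}
```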
Fixes https://github.com/llvm/llvm-project/issues/62305
Differential Revision: https://reviews.llvm.org/D149514
Since we don't always need the vendor extensions to be in riscv_vector.td,
it's better to put them in a separate header.
Depends on D148223 and D148680
Differential Revision: https://reviews.llvm.org/D148308
This commit implements the two NTLH intrinsic functions.
```
type __riscv_ntl_load (type *ptr, int domain);
void __riscv_ntl_store (type *ptr, type val, int domain);
```
```
enum {
__RISCV_NTLH_INNERMOST_PRIVATE = 2,
__RISCV_NTLH_ALL_PRIVATE,
__RISCV_NTLH_INNERMOST_SHARED,
__RISCV_NTLH_ALL
};
```
We encode the non-temporal domain into MachineMemOperand flags.
1. Create the RISC-V built-in function with custom semantic checking.
2. Assume the domain argument is a compile-time constant,
and express it as LLVM IR metadata (nontemporal node).
3. Encode the domain value as two bits of a MachineMemOperand TargetMMOFlag.
4. According to the MachineMemOperand TargetMMOFlag, select the corresponding NTLH instruction.
Currently, it supports scalar type and fixed-length vector type.
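A minimal usage sketch; the riscv_ntlh.h header name is an assumption about where these intrinsics are declared:
```
#include <riscv_ntlh.h>

// Sketch: non-temporal load/store with an explicit locality domain; the
// domain argument must be a compile-time constant from the enum above.
int load_stream(int *p) {
  return __riscv_ntl_load(p, __RISCV_NTLH_INNERMOST_PRIVATE);
}

void store_stream(int *p, int v) {
  __riscv_ntl_store(p, v, __RISCV_NTLH_ALL);
}
```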
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D143364
Change the return type of `getClobbers` function from `const char*`
to `std::string_view`. Update the function usages in CodeGen module.
The rationale for these changes is to remove unsafe `const char*`
strings and prevent unnecessary allocations when constructing a
`std::string` from the result of `getClobbers()`.
Differential Revision: https://reviews.llvm.org/D148799
Make CheckAtomicAlignment() return the computed pointer for reuse to avoid
emitting it twice.
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D148422
Addresses the FIXME and removes the only in-tree use of
llvm::ConstantFP::getZeroValueForNegation for an FP type.
Reviewed By: dmgreen, SjoerdMeijer
Differential Revision: https://reviews.llvm.org/D147497