The motivating use case is to support importing function declarations
across modules to construct call graph edges for indirect calls [1]
when importing the function definition costs too much compile time
(e.g., the function is too large and has no `noinline` attribute).
1. Currently, when the compiled IR module doesn't have a function
definition but its postlink combined summary contains the function
summary, or a global alias summary with this function as the aliasee,
the function definition will be imported from the source module by
IRMover. The implementation is in FunctionImporter::importFunctions [2].
2. In order for FunctionImporter to import a declaration of a function,
both the function summary and the alias summary need to carry the
def/decl state. Specifically, all existing summary fields are the same
across the modules that import the function, but the def/decl state is
decided by the `<ImportModule, Function>` pair.
This change encodes the def/decl state in `GlobalValueSummary::GVFlags`.
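A minimal sketch of how the state can be folded into the existing
bit-packed flags (field names and widths other than the new bit are
abridged; this is not the exact upstream definition):
```cpp
// Sketch of GlobalValueSummary::GVFlags with a def/decl bit added.
struct GVFlags {
  unsigned Linkage : 4;              // existing fields, abridged
  unsigned Visibility : 2;
  unsigned NotEligibleToImport : 1;
  unsigned Live : 1;
  unsigned DSOLocal : 1;
  unsigned CanAutoHide : 1;
  // New: whether a module should import this global as a definition
  // or only as a declaration.
  unsigned ImportType : 1;           // 0 = Definition, 1 = Declaration
};
```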
In subsequent changes:
1. The indexing step `computeImportForModule` [3]
will compute the set of definitions and the set of declarations for each
module, and pass that information on to the bitcode writer.
2. The bitcode writer will look up the def/decl state and set the state
when it writes out the flag value. This is demonstrated in
https://github.com/llvm/llvm-project/pull/87600
3. The function importer will read the def/decl state from the
combined summary to figure out the two sets of global values, and IRMover
will be updated to import declarations (aka linkGlobalValuePrototype [4])
into the destination module.
- The next change is https://github.com/llvm/llvm-project/pull/87600
[1] mentioned in rfc https://discourse.llvm.org/t/rfc-for-better-call-graph-sort-build-a-more-complete-call-graph-by-adding-more-indirect-call-edges/74029#support-cross-module-function-declaration-import-5
[2] 3b337242ee/llvm/lib/Transforms/IPO/FunctionImport.cpp (L1608-L1764)
[3] 3b337242ee/llvm/lib/Transforms/IPO/FunctionImport.cpp (L856)
[4] 3b337242ee/llvm/lib/Linker/IRMover.cpp (L605)
```
typedef long long t67 __attribute__((aligned (4)));
struct s67 {
  int a;
  t67 b;
};
void f67(struct s67 x) {
}
```
When classified:
a: Lo = Integer, Hi = NoClass
b: Lo = Integer, Hi = NoClass
struct s67: Lo = Integer, Hi = NoClass
```
define dso_local void @f67(i64 %x.coerce) {
```
In this case, only one i64 register is used when the structure parameter
is passed, which is obviously incorrect. So we need to treat the
split case specially.
Fixes https://github.com/llvm/llvm-project/issues/85387.
Follow-up to discussion: https://github.com/llvm/llvm-project/pull/87761
As discussed after landing the original PR:
Since failures could happen w.r.t. checking `!6`, these checks should be
removed.
rldimi is a 64-bit instruction; for backward compatibility it needs to
be expanded into a series of rotate and mask operations in a 32-bit
environment. In the future, we may improve the bit permutation selector
and remove such direct codegen.
Fixes applied since #75481 got reverted:
- Explicitly set BitfieldBits to 0 to avoid uninitialized field member
for the integer checks:
```diff
- llvm::ConstantInt::get(Builder.getInt8Ty(), Check.first)};
+ llvm::ConstantInt::get(Builder.getInt8Ty(), Check.first),
+ llvm::ConstantInt::get(Builder.getInt32Ty(), 0)};
```
- `Value **Previous` was erroneously `Value *Previous` in
`CodeGenFunction::EmitWithOriginalRHSBitfieldAssignment`, fixed now.
- Update the following:
```diff
- if (Kind == CK_IntegralCast) {
+ if (Kind == CK_IntegralCast || Kind == CK_LValueToRValue) {
```
The cast kind is CK_LValueToRValue when going from, e.g., char to char,
and CK_IntegralCast otherwise.
- Make sure that `Value *Previous = nullptr;` is initialized (see
1189e87951)
- Add another extensive testcase
`ubsan/TestCases/ImplicitConversion/bitfield-conversion.c`
---------
Co-authored-by: Vitaly Buka <vitalybuka@gmail.com>
When generating tbaa.struct metadata we treat multiple adjacent
bitfields as a single "field", with one corresponding entry in the
metadata. At the moment this is achieved by adding an entry for the
first bitfield in the run using its StorageSize and skipping the
remaining bitfields. The problem is that "first" is determined by
checking that the Offset of the field in the run is 0, which breaks for
big endian.
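For instance (an illustrative struct, not one from the patch), the two
bitfields below form a single run within one storage unit, so tbaa.struct
should describe them with one entry covering that unit:
```cpp
struct S {
  unsigned short a : 4;  // first bitfield in the run
  unsigned short b : 4;  // same 16-bit storage unit as 'a'
  int c;                 // gets its own tbaa.struct entry
};
```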
PR: https://github.com/llvm/llvm-project/pull/87753
This test exposes what I think is invalid tbaa.struct metadata currently
generated for bitfields when using big endian layout. The regions given
by `!{i64 2, i64 4, [[META3:![0-9]+]], i64 4, i64 4 ...` are
overlapping. This issue was originally observed in
https://github.com/llvm/llvm-project/pull/86709.
Complex division on Windows with `-fcomplex-arithmetic=promoted` and
`-ffp-model=strict` crashes. This patch fixes the issue.
See https://godbolt.org/z/15Gh7nvdM
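A minimal shape of the failing case (a sketch; the exact reproducer is in
the godbolt link above):
```cpp
// Compile on a Windows target with:
//   -fcomplex-arithmetic=promoted -ffp-model=strict
// 'promoted' performs the float division in double and narrows back;
// combined with strict FP semantics this crashed the compiler.
_Complex float div(_Complex float a, _Complex float b) {
  return a / b;
}
```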
Previously, all the call probe IDs came after the block IDs; this change
mixes call probes and block probes by reordering them in lexical
(line-number) order. For example:
```
main():
  BB1
  if (...)
    BB2 foo(...);
  else
    BB3 bar(...);
  BB4
```
Before, the profile is:
```
main
1: ..
2: ..
3: ...
4: ...
5: foo ...
6: bar ...
```
Now the new order is:
```
main
1: ..
2: ..
3: foo ...
4: ...
5: bar ...
6: ...
```
This can potentially make the probes more tolerant of profile mismatch,
whether from a stale profile or from a frontend change. For example,
previously, if we added one block, even as the last block, all the call
probes were shifted and mismatched. Moreover, this makes better use of
call-anchor-based stale profile matching: blocks are matched based on
the closest anchor, so more anchors are available for the matching,
which reduces the mismatch scope.
By generic intrinsics this means things like dup, ext, zip and bsl that
can always be executed with 16-bit integer operations and do not require
fullfp16. This makes them always available, and brings them in line with
GCC.
https://godbolt.org/z/azs8eMv54
The relevant test cases have been moved into their own files, to allow
them to be tested with armv8-a and armv8.2-a+fp16.
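For example (an illustrative case, assuming the standard arm_neon.h
intrinsics), a lane duplicate only rearranges 16-bit lanes and needs no
fp16 arithmetic:
```cpp
#include <arm_neon.h>

// vdup_lane_f16 only moves 16-bit lanes, so it can be lowered with
// integer operations and should be available without +fullfp16.
float16x4_t dup_lane0(float16x4_t v) {
  return vdup_lane_f16(v, 0);
}
```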
This patch implements the implicit truncation and implicit sign change
checks for bitfields in UBSan, e.g.,
`-fsanitize=implicit-bitfield-truncation` and
`-fsanitize=implicit-bitfield-sign-change`.
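A minimal example of a store these checks flag at runtime (an
illustrative test case, not one from the patch):
```cpp
struct S {
  unsigned b : 8;  // holds 0..255
};

// With -fsanitize=implicit-bitfield-truncation, this store is reported:
// 0x100 does not fit in the 8-bit field and is truncated to 0.
void f(S &s) { s.b = 0x100; }
```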
swifttailcc is a new calling convention in LLVM introduced in
https://reviews.llvm.org/D95443. We add a new DWARF tag to capture
this in debuginfo.
Differential Revision: https://reviews.llvm.org/D95704
[Note: This patch seems to have been forgotten about, landing it now with considerable delay]
GNU and MSVC have extensions where flexible array members (or their
equivalent) can be in unions or alone in structs. This is already fully
supported in Clang through the 0-sized array ("fake flexible array")
extension or when C99 flexible array members have been syntactically
obfuscated.
Clang needs to explicitly allow these extensions directly for C99
flexible arrays, since they are common code patterns in active use by
the Linux kernel (and other projects). Such projects have been using
either 0-sized arrays (which are considered deprecated in favor of C99
flexible array members) or obfuscated syntax, both of which complicate
their code bases.
For example, these do not error by default:
```
union one {
int a;
int b[0];
};
union two {
int a;
struct {
struct { } __empty;
int b[];
};
};
```
But this does:
```
union three {
int a;
int b[];
};
```
Remove the default error diagnostics for this but continue to provide
warnings under Microsoft or GNU extensions checks. This will allow for
a seamless transition for code bases away from 0-sized arrays without
losing existing code patterns. Add explicit checking for the warnings
under various constructions.
Additionally fixes a CodeGen bug with flexible array members in unions
in C++, which was found when adding a testcase for:
```
union { char x[]; } z = {0};
```
which only had Sema tests originally.
Fixes #84565
It is currently not possible to use "RVV type" and "RVV intrinsics" if
"zve32x" is not enabled globally. However, in some cases we may want
to use them only in some functions, for instance:
```
#include <riscv_vector.h>
__attribute__((target("+zve32x")))
vint32m1_t rvv_add(vint32m1_t v1, vint32m1_t v2, size_t vl) {
return __riscv_vadd(v1, v2, vl);
}
int other_add(int i1, int i2) {
return i1 + i2;
}
```
This is supposed to be compilable even when the vector extension is not
specified globally, e.g. `clang -target riscv64 -march=rv64gc -S test.c`.
[RISCV] RISCV vector calling convention (1/2)
This is the vector calling convention based on
https://github.com/riscv-non-isa/riscv-elf-psabi-doc.
The idea is to split between "scalar" callee-saved registers
and "vector" callee-saved registers: the "scalar" ones keep the
original strategy, while the "vector" ones are handled together
with RVV objects.
The stack layout would be:
```
|--------------------------| <-- FP
| callee-allocated save    |
| area for register varargs|
|--------------------------|
| callee-saved registers   | <-- scalar callee-saved
| (scalar)                 |
|--------------------------|
| RVV alignment padding    |
|--------------------------|
| callee-saved registers   | <-- vector callee-saved
| (vector)                 |
|--------------------------|
| RVV objects              |
|--------------------------|
| padding before RVV       |
|--------------------------|
| scalar local variables   |
|--------------------------| <-- BP
| variable size objects    |
|--------------------------| <-- SP
```
Note: This patch doesn't cover the "tuple" types, e.g. vint32m1x2.
They will be handled in https://github.com/riscv-non-isa/riscv-elf-psabi-doc (2/2).
Differential Revision: https://reviews.llvm.org/D154576
Currently, the builtins used for implementing `va_list` handling
unconditionally take their arguments as unqualified `ptr`s, i.e. pointers
to AS 0. This does not work for targets where the default AS is not 0, or
where AS 0 is not a viable AS (for example, a target might choose 0 to
represent the constant address space). This patch changes the builtins'
signatures to take generic `anyptr` args, which corrects this issue. It
is noisy due to the number of tests affected. A test for an upstream
target which does not use 0 as its default AS (SPIRV for HIP device
compilations) is added as well.
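For context, portable variadic code like the sketch below exercises
those builtins; with the `anyptr` signatures it now lowers correctly on
targets whose `va_list` pointers do not live in AS 0 (illustrative
example, not a test from the patch):
```cpp
#include <cstdarg>

// va_start/va_arg/va_end expand to the builtins discussed above; their
// pointer arguments now carry the target's default address space.
int sum(int n, ...) {
  va_list ap;
  va_start(ap, n);
  int s = 0;
  for (int i = 0; i < n; ++i)
    s += va_arg(ap, int);
  va_end(ap);
  return s;
}
```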
…i_check
This causes __cfi_check, just like __cfi_check_fail, to get the proper
target-specific attributes, in particular uwtable for unwind table
generation. Previously, the nounwind attribute could be inferred for
__cfi_check, which caused it to lose its unwind table even with the
-funwind-tables option.
~~
Huawei RRI, OS Lab
Co-authored-by: Nikolai Kholiavin <kholiavin.nikolai@huawei-partners.com>
The latest ACLE allows it and further clarifies the following
in regards to the combination of the two attributes:
"If the `default` matches with another explicitly provided
version in the same translation unit, then the compiler can
emit only one function instead of the two. The explicitly
provided version shall be preferred."
("default" refers to the default clone here)
https://github.com/ARM-software/acle/pull/310
This was a limitation which has now been lifted. Please read the
thread below for more details:
https://github.com/llvm/llvm-project/pull/84405#discussion_r1525583647
Basically it allows separating versioned implementations across
different TUs without having to share private header files which
contain the default declaration.
The ACLE spec has been updated accordingly to make this explicit:
"Each version declaration should be visible at the translation
unit in which the corresponding function version resides."
https://github.com/ARM-software/acle/pull/310
If a resolver is required (because there is a caller in the TU),
then a default declaration is implicitly generated.
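A sketch of the pattern this enables (hypothetical function names;
`target_version` as used by AArch64 Function Multi Versioning):
```cpp
// TU1: only the default version is defined here.
__attribute__((target_version("default"))) int f(void) { return 0; }

// TU2: a versioned implementation, with no shared private header.
__attribute__((target_version("sve"))) int f(void) { return 1; }

// A caller in TU2 requires a resolver, so a default declaration is
// implicitly generated there, as described above.
int call(void) { return f(); }
```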
When emitting the storage (or memory copy operations) for constant
initializers, the decision whether to split a constant structure or
array store into a sequence of field stores or to use `memcpy` is
based upon the optimization level and the size of the initializer.
In afe8b93ffd, we extended this by
allowing constants to be split when the array (or struct) type does
not match the type of data the address of the object (constant) is
expected to contain. This may happen when `emitStoresForConstant` is
called by `EmitAutoVarInit`, as the element type of the address gets
shrunk. When this occurs, let the initializer be split into a bunch
of stores only under `-ftrivial-auto-var-init=pattern`.
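An illustrative shape of the affected case (hypothetical code; see the
issue linked below):
```cpp
// Under -ftrivial-auto-var-init=pattern the padding of 'p' must also be
// filled, so the initializer may be emitted as a sequence of split
// stores rather than a single memcpy from a constant global.
struct Padded { char c; long l; };
void use(Padded *);

void h() {
  Padded p = {1, 2};
  use(&p);
}
```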
Fixes: https://github.com/llvm/llvm-project/issues/84178.
The old logic expects the call to be the last thing we emitted, and
since it kicks in before we emit cleanups, and since `swiftasynccall`
functions always return void, that's likely to be true. "Likely" isn't
very reassuring when we're talking about slapping attributes on random
calls, though. And indeed, while I can't find any way to break the logic
directly in current main, our previous (ongoing?) experiments with
shortening argument temporary lifetimes definitely broke it wide open.
So while this commit is prophylactic for now, it's clearly the right
thing to do, and it can be cherry-picked to other branches to fix problems.
…ion.
This was reverted because the resolver didn't look as expected in one of
the tests. I believe it had some interaction with #84146. I have now
regenerated it using -target-feature -fp-armv8.
Generalise fold to "bitcast (shuf V0, V1, MaskC) --> shuf (bitcast V0), (bitcast V1), MaskC'".
Reapplied with a clang codegen test fix.
Further prep work for #67803
The current values for PrivateGlobalPrefix and PrivateLabelPrefix (@@
and @ respectively) are, in hindsight, poor choices for multiple
reasons:
First, there exist externally visible routines from the language
environment that begin with @@. These functions are certainly not
local/private by any means and they should not share a prefix with
private globals.
Secondly, both private globals and private labels should be handled the
same way by GOFF, so it doesn't make much sense for them to have
separate prefixes. GOFF remains the only file format where these are
different, and there is no reason for that to be the case.
Reverts llvm/llvm-project#84405
In between passing the precommit tests on GitHub and being merged,
some change (perhaps in the AArch64 backend?) landed which resulted
in altering the generated resolver. I will regenerate the tests,
perhaps using a run line less sensitive to such changes.
We would like the resolver to be generated eagerly, even if the
versioned function is not called from the current translation
unit. Fixes #81494. It further allows Multi Versioning to work
even if the default target version attribute is omitted from
function declarations.
This is a re-working of #74460, which adds a soft-float ABI for AArch64.
That was reverted because it caused errors when building the Linux and
Fuchsia kernels.
The problem is that GCC's implementation of the ABI compatibility checks,
when using the hard-float ABI on a target without FP registers, does its
checks after optimisation. The previous version of this patch reported
errors for all uses of floating-point types, which is stricter than what
GCC does in practice.
This changes two things compared to the first version:
* Only check the types of function arguments and returns, not the types
of other values. This is more relaxed than GCC, while still guaranteeing
ABI compatibility.
* Move the check from Sema to CodeGen, so that inline functions are only
checked if they are actually used. There are some cases in the linux
kernel which depend on this behaviour of GCC.
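A sketch of the resulting behaviour (hypothetical code, compiled for a
target without FP registers while using the hard-float ABI, e.g. `+nofp`):
```cpp
// Diagnosed in CodeGen when emitted or called: FP argument and return
// types require FP registers under the hard-float ABI.
double bad(double x) { return x; }

// Not diagnosed by this check: floating-point used only for values
// inside the body, not in the signature.
int ok(int x) { return (int)(x * 0.5f); }

// Checked only if actually used, because the check runs in CodeGen
// rather than Sema.
inline float maybe(float f) { return f; }
```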
rldimi is a 64-bit instruction, so the corresponding builtin should not
be available in 32-bit mode. The rotate amount should be in range, and
the case where the mask is zero needs special handling.
This change also swaps the first and second operands of rldimi/rlwimi
to match previous behavior. For masks not ending at bit 63-SH, a
rotation will be inserted before rldimi.
This defines the basic set of pointer authentication clang builtins
(provided in a new header, ptrauth.h), with diagnostics and IRGen
support. The availability of the builtins is gated on a new flag,
`-fptrauth-intrinsics`.
Note that this only includes the basic intrinsics, and notably excludes
`ptrauth_sign_constant`, `ptrauth_type_discriminator`, and
`ptrauth_string_discriminator`, which need extra logic to be fully
supported.
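For illustration, a minimal use of the basic operations through the new
header (a sketch; the exact macro spellings are defined in ptrauth.h and
the documentation below):
```cpp
#include <ptrauth.h>

// Sign a raw pointer with the 'asia' key and a zero discriminator,
// then authenticate it before use (requires -fptrauth-intrinsics).
void *sign(void *p) {
  return ptrauth_sign_unauthenticated(p, ptrauth_key_asia, 0);
}

void *authenticate(void *signed_p) {
  return ptrauth_auth_data(signed_p, ptrauth_key_asia, 0);
}
```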
This also introduces clang/docs/PointerAuthentication.rst, which
describes the ptrauth model in general, in addition to these builtins.
Co-Authored-By: Akira Hatanaka <ahatanaka@apple.com>
Co-Authored-By: John McCall <rjmccall@apple.com>
It's useful to provide an indicator code with the trap, which the generic
__builtin_trap can't do. asm("brk #N") is an option, but following that with a
__builtin_unreachable() leads to two traps when the compiler doesn't know the
block can't return. So compiler support like this is useful.
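The workaround shape this improves on (an illustrative AArch64 snippet):
```cpp
// The inline asm is opaque to the compiler, so the trailing
// __builtin_unreachable() is needed to mark the block as not returning,
// yet it can itself be lowered to a second trap instruction.
void fail() {
  __asm__ volatile("brk #42");  // breakpoint with indicator code 42
  __builtin_unreachable();
}
```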
C23 6.3.1.8 ‘Usual arithmetic conversions’ p1 states (emphasis mine):
> Otherwise, if the corresponding real type of either operand is
`float`, the other operand is converted, *without change of type
domain*, to a type whose corresponding real type is `float`.
‘type domain’ here refers to `_Complex` vs real (i.e. non-`_Complex`);
there is another clause that states the same for `double`.
Consider the following code:
```c++
_Complex float f;
int x;
f / x;
```
After talking this over with @AaronBallman, we came to the conclusion
that `x` should be converted to `float` and *not* `_Complex float` (that
is, we should perform a division of `_Complex float / float`, and *not*
`_Complex float / _Complex float`; the same also applies to `-+*`). This
was already being done correctly for cases where `x` was already a
`float`; it’s just mixed `_Complex float`+`int` operations that
currently suffer from this problem.
This PR removes the extra `FloatingRealToComplex` conversion that we
were erroneously inserting and adds some tests to make sure we’re
actually doing `_Complex float / float` and not `_Complex float /
_Complex float` (and analogously for `double` and `-+*`).
The only exception here is `float / _Complex float`, which calls a
library function (`__divsc3`) that takes 4 `float`s, so we end up having
to convert the `float` to a `_Complex float` after all (and analogously
for `double`); I don’t believe there is a way around this.
Lastly, we were also missing tests for `_Complex` arithmetic at compile
time, so this adds some tests for that as well.