- Set the shader flag `DisableOptimizations` based on the `optnone`
attribute of shader entry functions.
- Add the DXIL Metadata Analysis pass as a prerequisite of the Shader Flags
pass to obtain the entry function information collected therein.
- The named module metadata `dx.disable_optimizations` was intended to
indicate that optimizations are disabled (`-O0`) via a command-line flag.
However, its intent is fulfilled by the `optnone` attribute of shader entry
functions, as implemented in a recent change, and it is thus not needed.
Delete the generation of this named metadata and the corresponding test file
`disable_opt.ll`.
- Add tests to verify correctness of setting shader flag.
Closes #112263
These files are relatively old and don't conform to our formatting rules.
It's hard to change them without massive clang-format changes.
---------
Signed-off-by: Peter Rong <PeterRong@meta.com>
After #124287 updated several functions to return iterators rather than
Instruction *, it was no longer straightforward to pass their result to
DIBuilder. This commit updates DIBuilder methods to accept an
InsertPosition instead, so that they can be called with an iterator
(preferred), with an Instruction * (which triggers a deprecation warning), or
with a BasicBlock *. This commit also updates the existing calls to the
DIBuilder methods to pass in iterators.
We should be able to use `spirv64` as a device variant match and it
should be considered a GPU.
Also add the triple to an RTTI check.
Signed-off-by: Sarnie, Nick <nick.sarnie@intel.com>
Replace some more nvvm.annotations with function attributes,
auto-upgrading the annotations as needed. These new attributes will be
more idiomatic and compile-time efficient than the annotations.
- !"maxclusterrank" / !"cluster_max_blocks" -> "nvvm.maxclusterrank"
- !"minctasm" -> "nvvm.minctasm"
- !"maxnreg" -> "nvvm.maxnreg"
A proposed fix for the issue #95611, [OpenMP][SIMD] ordered has no
effect in a loop SIMD region as of LLVM 18.1.0
Changes:
- Implement new lowering behavior: Conservatively serialize "omp simd"
loops that have `omp simd ordered` directive to prevent incorrect
vectorization (which results in incorrect execution behavior of the
miscompiled program).
Implementation outline:
- We start with the optimistic default initial value of
`LoopStack.setParallel(/*Enable=*/true);` in
`CodeGenFunction::EmitOMPSimdInit(const OMPLoopDirective &D)`.
- We only disable the loop parallel memory access assumption with `if
(HasOrderedDirective) LoopStack.setParallel(/*Enable=*/false);` using the
`HasOrderedDirective` (which tests for the presence of an
`OMPOrderedDirective`).
- This results in no longer incorrectly vectorizing the loop when the
`omp simd ordered` directive is present.
Motivation: We'd like to prevent incorrect vectorization of the loops
marked with the `#pragma omp ordered simd` directive which has
previously resulted in miscompiled code.
At the same time, we'd like the usage outside of the `#pragma omp
ordered simd` context to remain unaffected: Note that in the test
"clang/test/OpenMP/ordered_codegen.cpp" we only "lose" the
`!llvm.access.group` metadata in `foo_simd` alone.
This is conservative, in that some of the affected loops might still be
vectorizable, but we prefer to avoid miscompiling the loops that are
currently illegal to vectorize.
A concrete example follows:
```cpp
// "test.c"
#include <float.h>
#include <math.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int compare_float(float x1, float x2, float scalar) {
const float diff = fabsf(x1 - x2);
x1 = fabsf(x1);
x2 = fabsf(x2);
const float l = (x2 > x1) ? x2 : x1;
if (diff <= l * scalar * FLT_EPSILON)
return 1;
else
return 0;
}
#define ARRAY_SIZE 256
__attribute__((noinline)) void initialization_loop(
float X[ARRAY_SIZE][ARRAY_SIZE], float Y[ARRAY_SIZE][ARRAY_SIZE]) {
const float max = 1000.0;
srand(time(NULL));
for (int r = 0; r < ARRAY_SIZE; r++) {
for (int c = 0; c < ARRAY_SIZE; c++) {
X[r][c] = ((float)rand() / (float)(RAND_MAX)) * max;
Y[r][c] = X[r][c];
}
}
}
__attribute__((noinline)) void omp_simd_loop(float X[ARRAY_SIZE][ARRAY_SIZE]) {
for (int r = 1; r < ARRAY_SIZE; ++r) {
for (int c = 1; c < ARRAY_SIZE; ++c) {
#pragma omp simd
for (int k = 2; k < ARRAY_SIZE; ++k) {
#pragma omp ordered simd
X[r][k] = X[r][k - 2] + sinf((float)(r / c));
}
}
}
}
__attribute__((noinline)) int comparison_loop(float X[ARRAY_SIZE][ARRAY_SIZE],
float Y[ARRAY_SIZE][ARRAY_SIZE]) {
int totalErrors_simd = 0;
const float scalar = 1.0;
for (int r = 1; r < ARRAY_SIZE; ++r) {
for (int c = 1; c < ARRAY_SIZE; ++c) {
for (int k = 2; k < ARRAY_SIZE; ++k) {
Y[r][k] = Y[r][k - 2] + sinf((float)(r / c));
}
}
// check row for simd update
for (int k = 0; k < ARRAY_SIZE; ++k) {
if (!compare_float(X[r][k], Y[r][k], scalar)) {
++totalErrors_simd;
}
}
}
return totalErrors_simd;
}
int main(void) {
float X[ARRAY_SIZE][ARRAY_SIZE];
float Y[ARRAY_SIZE][ARRAY_SIZE];
initialization_loop(X, Y);
omp_simd_loop(X);
const int totalErrors_simd = comparison_loop(X, Y);
if (totalErrors_simd) {
fprintf(stdout, "totalErrors_simd: %d \n", totalErrors_simd);
fprintf(stdout, "%s : %d - FAIL: error in ordered simd computation.\n",
__FILE__, __LINE__);
} else {
fprintf(stdout, "Success!\n");
}
return totalErrors_simd;
}
```
Before:
```
$ clang -fopenmp-simd -O3 -ffast-math -lm test.c -o test && ./test
totalErrors_simd: 15408
test.c : 76 - FAIL: error in ordered simd computation.
```
clang 19.1.0: https://godbolt.org/z/6EvhxqEhe
After:
```
$ clang -fopenmp-simd -O3 -ffast-math test.c -o test && ./test
Success!
```
Co-authored-by: Matt P. Dziubinski <matt-p.dziubinski@hpe.com>
As Intel is working to add support for SPIR-V OpenMP device offloading
in upstream clang/liboffload, we need to modify the OpenMP frontend to
allow SPIR-V as well as generate valid IR for SPIR-V. For example, we
need the frontend to generate code to define and interact with device
globals used in the DeviceRTL.
This is the beginning of what I expect will be (many) other changes, but
let's get started with something simple.
---------
Signed-off-by: Sarnie, Nick <nick.sarnie@intel.com>
This introduces options `-floop-interchange` and `-fno-loop-interchange`
to enable/disable the loop-interchange pass. This is part of the work
that tries to get that pass enabled by default (#124911), where it was
remarked that a user-facing option to control this would be convenient
to have. The option name is the same as GCC's.
This is the 2nd part to
https://github.com/llvm/llvm-project/pull/124752. Here we make sure to
set the `DICompositeType` `EnumKind` if the enum was declared with
`__attribute__((enum_extensibility(...)))`. In DWARF this will be
rendered as `DW_AT_APPLE_enum_kind` and will be used by LLDB when
creating `clang::EnumDecl`s from debug-info.
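As a minimal illustration of the attribute in question (hypothetical enum names, using the GNU attribute spelling documented for Clang):
```c
/* A closed enum: its DICompositeType now records the EnumKind, which is
 * rendered in DWARF as DW_AT_APPLE_enum_kind. */
enum __attribute__((enum_extensibility(closed))) Color { Red, Green, Blue };

/* An open enum is recorded analogously. */
enum __attribute__((enum_extensibility(open))) Status { Ok, Failed };
```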
Depends on https://github.com/llvm/llvm-project/pull/126044
Implement HLSLElementwiseCast, excluding support for splat cases.
Do not support casting types that contain bitfields.
Partly closes #100609 and partly closes #100619.
Summary:
This patch unifies the existing offloading entries into a single section
called `llvm_offload_entries`. This lets us use a more unified
offloading infrastructure so that all targets share the same handling.
The effect is that people in the runtimes now need to check if the kind
is what they expect, but the expectation is that you can combine
multiple potential providers into a compile job. Doesn't fully work
yet because of other runtime issues, but some day. Mostly this helps the
future of liboffload, where we want to handle languages other than OpenMP.
Kernel Control Flow Integrity (kCFI) is a feature that hardens indirect
calls by comparing a 32-bit hash of the function pointer's type against
a hash of the target function's type. If the hashes do not match, the
kernel may panic (or log the hash check failure, depending on the
kernel's configuration). These hashes are computed at compile time by
applying the xxHash64 algorithm to each mangled canonical function (or
function pointer) type, then truncating the result to 32 bits. This hash
is written into each indirect-callable function header by encoding it as
the 32-bit immediate operand to a `MOVri` instruction, e.g.:
```
__cfi_foo:
nop
nop
nop
nop
nop
nop
nop
nop
nop
nop
nop
movl $199571451, %eax # hash of foo's type = 0xBE537FB
foo:
...
```
This PR extends x86-based kCFI with a 3-bit arity indicator encoded in
the `MOVri` instruction's register (reg) field as follows:
| Arity Indicator | Description | Encoding in reg field |
| --------------- | --------------- | --------------- |
| 0 | 0 parameters | EAX |
| 1 | 1 parameter in RDI | ECX |
| 2 | 2 parameters in RDI and RSI | EDX |
| 3 | 3 parameters in RDI, RSI, and RDX | EBX |
| 4 | 4 parameters in RDI, RSI, RDX, and RCX | ESP |
| 5 | 5 parameters in RDI, RSI, RDX, RCX, and R8 | EBP |
| 6 | 6 parameters in RDI, RSI, RDX, RCX, R8, and R9 | ESI |
| 7 | At least one parameter may be passed on the stack | EDI |
For example, if `foo` takes 3 register arguments and no stack arguments
then the `MOVri` instruction in its kCFI header would instead be written
as:
```
movl $199571451, %ebx # hash of foo's type = 0xBE537FB
```
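To make the table concrete, here is a rough sketch (not code from this PR; `kcfi_arity_reg` is a hypothetical helper used only for illustration) of how a function's parameter count maps to the arity indicator and the reg-field encoding above:
```c
#include <stdio.h>

/* Hypothetical helper: map the number of register parameters (0-6) and
 * whether any parameter may be passed on the stack to the 3-bit arity
 * indicator and the reg-field encoding used in the kCFI MOVri header. */
static const char *kcfi_arity_reg(int reg_params, int has_stack_params,
                                  int *arity_out) {
  static const char *const regs[8] = {"eax", "ecx", "edx", "ebx",
                                      "esp", "ebp", "esi", "edi"};
  int arity = has_stack_params ? 7 : reg_params;
  *arity_out = arity;
  return regs[arity];
}

int main(void) {
  int arity;
  /* foo takes 3 register arguments and none on the stack, ... */
  const char *reg = kcfi_arity_reg(3, 0, &arity);
  /* ... so its header would use: movl $0xBE537FB, %ebx */
  printf("arity %d -> movl $0x%X, %%%s\n", arity, 0xBE537FB, reg);
  return 0;
}
```
Running this for a three-register-argument function prints the `%ebx` form shown above.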
This PR will benefit other CFI approaches that build on kCFI, such as
FineIBT. For example, this proposed enhancement to FineIBT must be able
to infer (at kernel init time) which registers are live at an indirect
call target: https://lkml.org/lkml/2024/9/27/982. If the arity bits are
available in the kCFI function header, then this information is trivial
to infer.
Note that there is another existing PR proposal that includes the 3-bit
arity within the existing 32-bit immediate field, which introduces
different security properties:
https://github.com/llvm/llvm-project/pull/117121.
When building on Windows, dealing with the BlocksRuntime is slightly
more complicated. As we are not guaranteed a forward declaration for
the blocks runtime ABI symbols, we may generate the declarations for
them. In order to properly link against the well-known types, we always
annotated them as `__declspec(dllimport)`. This would require the
dynamic linking of the blocks runtime under all conditions. However,
this is not the only possible way to use the library: we may be
building a fully sealed (static) executable. In such a case, the well-known
symbols should not be marked as `dllimport`, as they are assumed to
be statically available when linking statically against the BlocksRuntime.
Introduce a new driver/cc1 option `-static-libclosure` which mirrors the
myriad of similar options (`-static-libgcc`, `-static-libstdc++`,
`-static-libsan`, etc).
This patch does two things.
1. Previously, when checking driver arguments, we emitted an error for
unsupported values of `-mbranch-protection` when using the pauthtest ABI.
The reason for that was ptrauth-returns being enabled as part of
pauthtest. This patch changes the check against pauthtest to a check
against ptrauth-returns.
2. Similarly, check the values of the function attribute
`__attribute__((target("branch-protection=XXX")))` for values that are
unsupported with ptrauth-returns. Note that the existing
`validateBranchProtection` function is used, and the current behavior is to
ignore the unsupported attribute value, so no error is emitted.
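For reference, the attribute being checked has the following general shape; `standard` is only a placeholder value here, and which values are actually unsupported depends on the ptrauth-returns configuration:
```c
/* Placeholder value; with ptrauth-returns enabled, an unsupported value
 * of this attribute is currently ignored rather than diagnosed. */
__attribute__((target("branch-protection=standard")))
void handler(void) {}
```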
This both reapplies #118734, the initial attempt at this, and updates it
significantly.
First, it uses the newly added `StringTable` abstraction for string
tables, and simplifies the construction to build the string table and
info arrays separately. This should reduce any `constexpr` compile time
memory or CPU cost of the original PR while significantly improving the
APIs throughout.
It also restructures the builtins to support sharding across several
independent tables. This accomplishes two improvements from the
original PR:
1) It improves the APIs used significantly.
2) When builtins are defined from different sources (like SVE vs MVE in
AArch64), this allows each of them to build their own string table
independently rather than having to merge the string tables and info
structures.
3) It allows each shard to factor out a common prefix, often cutting the
size of the strings needed for the builtins by a factor of two.
The second point is important both because it allows different mechanisms of
construction (for example a `.def` file and a tablegen'ed `.inc` file,
or different tablegen'ed `.inc` files) and because it simply reduces the sizes
of these tables, which is valuable given how large they are in some
cases. The third builds on that size reduction.
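As a loose sketch of the general idea rather than the actual data structures in this patch (all names below are made up), a shard's string table concatenates the builtin names into one constant character array and refers to them by offset:
```c
/* One shard's builtin names, concatenated with embedded NULs. The shared
 * "__builtin_" prefix is a candidate for factoring out per shard. */
static const char ShardStringTable[] =
    "__builtin_example_add\0"
    "__builtin_example_sub\0";

/* Each builtin's info record stores the offset of its name in the table
 * rather than a pointer to the string. */
static const unsigned ShardNameOffsets[] = {0, 22};
```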
Initially, we use this new sharding rather than merging tables in
AArch64, LoongArch, RISCV, and X86. Mostly this helps ensure the system
works, as without further changes these still push scaling limits.
Subsequent commits will more deeply leverage the new structure,
including using the prefix capabilities which cannot be easily factored
out here and requires deep changes to the targets.
This is similar in spirit to previous changes that made the _mm_mfence
builtin avoid conflicts with winnt.h and other MSVC ecosystem
headers that pre-declare compiler intrinsics as extern "C" symbols.
Also update the feature flag for _mm_prefetch to sse, which is more accurate than mmx.
This should fix issue #87515.
Note that PointerUnion::dyn_cast has been soft deprecated in
PointerUnion.h:
// FIXME: Replace the uses of is(), get() and dyn_cast() with
// isa<T>, cast<T> and the llvm::dyn_cast<T>
Literal migration would result in dyn_cast_if_present (see the
definition of PointerUnion::dyn_cast), but this patch uses dyn_cast
because we expect E to be nonnull.
The atomic construct is a particularly complicated one. The directive
itself is pretty simple: it has 5 options for the 'atomic-clause'.
However, the associated statement is fairly complicated.
'read' accepts:
v = x;
'write' accepts:
x = expr;
'update' (or no clause) accepts:
x++;
x--;
++x;
--x;
x binop= expr;
x = x binop expr;
x = expr binop x;
'capture' accepts either a compound statement, or:
v = x++;
v = x--;
v = ++x;
v = --x;
v = x binop= expr;
v = x = x binop expr;
v = x = expr binop x;
If 'capture' has a compound statement, it accepts:
{v = x; x binop= expr; }
{x binop= expr; v = x; }
{v = x; x = x binop expr; }
{v = x; x = expr binop x; }
{x = x binop expr; v = x; }
{x = expr binop x; v = x; }
{v = x; x = expr; }
{v = x; x++; }
{v = x; ++x; }
{x++; v = x; }
{++x; v = x; }
{v = x; x--; }
{v = x; --x; }
{x--; v = x; }
{--x; v = x; }
While these are all quite complicated, there is a significant amount
of similarity between the 'capture' and 'update' lists, so this patch
reuses a lot of the same functions.
This patch implements the entirety of 'atomic', creating a new Sema file
for its sema handling, as it is fairly sizable.
Per the ELF spec, section groups may only contain local symbols if those
symbols are only
referenced from within the section group. [1] In the case of template
parameter objects,
they can be referenced from outside the group when the type of the
object was declared
in an anonymous namespace. In that case, we can't place the object in a
COMDAT. This matches
GCC's linkage behavior on the test input.
[1]:
https://www.sco.com/developers/gabi/latest/ch4.sheader.html#section_groups
Note that PointerUnion::dyn_cast has been soft deprecated in
PointerUnion.h:
// FIXME: Replace the uses of is(), get() and dyn_cast() with
// isa<T>, cast<T> and the llvm::dyn_cast<T>
Literal migration would result in dyn_cast_if_present (see the
definition of PointerUnion::dyn_cast), but this patch uses dyn_cast
because we expect E to be nonnull.
For C++ (but not C), empty structs should be passed to functions as if
they are a 1 byte object with 1 byte alignment.
This is defined in Arm's CPPABI32:
https://github.com/ARM-software/abi-aa/blob/main/cppabi32/cppabi32.rst
For the purposes of parameter passing in AAPCS32, a parameter whose
type is an empty class shall be treated as if its type were an
aggregate with a single member of type unsigned byte.
The AArch64 equivalent of this has an exception for structs containing
an array of size zero; I've kept that logic for ARM. I've not found a
reason for this exception, but I've checked that GCC does have the same
behaviour for ARM as it does for AArch64.
The AArch64 version has an Apple ABI with different rules, which ignores
empty structs in both C and C++. This is documented at
https://developer.apple.com/documentation/xcode/writing-arm64-code-for-apple-platforms.
The ARM equivalent of that appears to be AAPCS16_VFP, used for WatchOS,
but I can't find any documentation for that ABI, so I'm not sure what
rules it should follow. For now I've left it following the AArch64 Apple
rules.
If we have +sme but not +sve, we would not set vscale_range on
functions. It should be valid to apply it with the same range with just
+sme, which can help mitigate some performance regressions in cases such
as scalable vector bitcasts (https://godbolt.org/z/exhe4jd8d).
Refactoring of how __builtin_dynamic_object_size() is calculated for
flexible array members (in preparation for adding support for the
'counted_by' attribute on pointers in structs).
The only functionality change is that we use the already emitted Expr
code to build our calculations off of rather than re-emitting the Expr.
That allows the 'StructFieldAccess' visitor to sift through all casts
and ArraySubscriptExprs to find the first MemberExpr. We build our GEPs
and calculate offsets based off of relative distances from that
MemberExpr.
The testcase passes execution tests.
Calculate the flexible array member's object size using these formulae
(note: if the calculation is negative, we return 0):
struct p;
struct s {
/* ... */
int count;
struct p *array[] __attribute__((counted_by(count)));
};
1) 'ptr->array':
count = ptr->count;
flexible_array_member_base_size = sizeof (*ptr->array);
flexible_array_member_size =
count * flexible_array_member_base_size;
if (flexible_array_member_size < 0)
return 0;
return flexible_array_member_size;
2) '&ptr->array[idx]':
count = ptr->count;
index = idx;
flexible_array_member_base_size = sizeof (*ptr->array);
flexible_array_member_size =
count * flexible_array_member_base_size;
index_size = index * flexible_array_member_base_size;
if (flexible_array_member_size < 0 || index < 0)
return 0;
return flexible_array_member_size - index_size;
3) '&ptr->field':
count = ptr->count;
sizeof_struct = sizeof (struct s);
flexible_array_member_base_size = sizeof (*ptr->array);
flexible_array_member_size =
count * flexible_array_member_base_size;
field_offset = offsetof (struct s, field);
offset_diff = sizeof_struct - field_offset;
if (flexible_array_member_size < 0)
return 0;
return offset_diff + flexible_array_member_size;
4) '&ptr->field_array[idx]':
count = ptr->count;
index = idx;
sizeof_struct = sizeof (struct s);
flexible_array_member_base_size = sizeof (*ptr->array);
flexible_array_member_size =
count * flexible_array_member_base_size;
field_base_size = sizeof (*ptr->field_array);
field_offset = offsetof (struct s, field);
field_offset += index * field_base_size;
offset_diff = sizeof_struct - field_offset;
if (flexible_array_member_size < 0 || index < 0)
return 0;
return offset_diff + flexible_array_member_size;
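As a small, self-contained illustration of case (1) above (the struct and function names here are placeholders, and this requires a Clang new enough to support `counted_by`):
```c
#include <stddef.h>

struct annotated {
  int count;
  int array[] __attribute__((counted_by(count)));
};

size_t remaining_bytes(struct annotated *p) {
  /* Per case (1): evaluates at run time to
   * p->count * sizeof(*p->array), or 0 if that would be negative. */
  return __builtin_dynamic_object_size(p->array, 0);
}
```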
---------
Signed-off-by: Bill Wendling <morbo@google.com>
This adds the plumbing between -fsanitize-skip-hot-cutoff (introduced in
https://github.com/llvm/llvm-project/pull/121619) and
LowerAllowCheckPass<cutoffs> (introduced in
https://github.com/llvm/llvm-project/pull/124211).
The net effect is that -fsanitize-skip-hot-cutoff now combines the
functionality of -ubsan-guard-checks and
-lower-allow-check-percentile-cutoff (though this patch does not remove
those yet), and generalizes the latter to allow per-sanitizer cutoffs.
Note: this patch replaces Intrinsic::allow_ubsan_check's
SanitizerHandler parameter with SanitizerOrdinal; this is necessary
because the hot cutoffs are specified in terms of SanitizerOrdinal
(e.g., null, alignment), not SanitizerHandler (e.g., TypeMismatch).
Likewise, CodeGenFunction::EmitCheck is changed to emit
allow_ubsan_check() for each individual check.
---------
Co-authored-by: Vitaly Buka <vitalybuka@gmail.com>
Co-authored-by: Vitaly Buka <vitalybuka@google.com>
This is an implementation of P1061 (Structured Bindings can introduce a Pack)
without the ability to use packs outside of templates. There are a couple
of ways the AST could have been sliced, so let me know what you think.
The only part of this change that I am unsure of is the
serialization/deserialization stuff. I followed the implementation of
other Exprs, but I do not really know how it is tested. Thank you for
your time considering this.
---------
Co-authored-by: Yanzuo Liu <zwuis@outlook.com>
This PR removes the old `nocapture` attribute, replacing it with the new
`captures` attribute introduced in #116990. This change is
intended to be essentially NFC, replacing existing uses of `nocapture`
with `captures(none)` without adding any new analysis capabilities.
Making use of non-`none` values is left for a followup.
Some notes:
* `nocapture` will be upgraded to `captures(none)` by the bitcode
reader.
* `nocapture` will also be upgraded by the textual IR reader. This is to
make it easier to use old IR files and somewhat reduce the test churn in
this PR.
* Helper APIs like `doesNotCapture()` will check for `captures(none)`.
* MLIR import will convert `captures(none)` into an `llvm.nocapture`
attribute. The representation in the LLVM IR dialect should be updated
separately.