clang-p2996/llvm/test/TableGen/HasNoUse.td
Pierre van Houtryve 9375962ac9 [TableGen][GlobalISel] Specialize more MatchTable Opcodes (#89736)
The vast majority of the following (very common) opcodes were always
called with identical arguments:

- `GIM_CheckType` for the root
- `GIM_CheckRegBankForClass` for the root
- `GIR_Copy` between the old and new root
- `GIR_ConstrainSelectedInstOperands` on the new root
- `GIR_BuildMI` to create the new root

I added an overloaded version of each opcode, specialized for the root
instruction. Each saves between 1 and 2 bytes per instance, depending on
the number of arguments specialized into the opcode. Some of these
opcodes had between 5 and 15k occurrences in the AArch64 GlobalISel
Match Table.
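
To make the saving concrete, here is a hand-written sketch of the
encoding (illustrative only, not actual emitter output): specializing
for the root folds the `/*MI*/0` operand into the opcode itself, so each
instance shrinks by a byte:

```
// generic form: opcode, InsnID, operand index, type = 4 bytes
GIM_CheckType,     /*MI*/0, /*Op*/0, /*Type*/GILLT_s32,
// root-specialized form: InsnID is implicitly 0 = 3 bytes
GIM_RootCheckType,          /*Op*/0, /*Type*/GILLT_s32,
```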

Additionally, the following opcodes are almost always used in the same
sequence (a byte-level sketch follows the list):

- `GIR_EraseFromParent 0` + `GIR_Done`
  - `GIR_EraseRootFromParent_Done` has been created to do both. Saves 2
    bytes per occurrence.
- `GIR_IsSafeToFold` was *always* called for each InsnID except 0.
  - Changed the opcode to take the number of instructions to check after
    `MI[0]`.
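
As before, a hand-written sketch of the byte layout (illustrative; the
operand comments are mine, not the emitter's):

```
// before: GIR_EraseFromParent + GIR_Done = 3 bytes
GIR_EraseFromParent, /*InsnID*/0,
GIR_Done,
// after: one fused opcode = 1 byte
GIR_EraseRootFromParent_Done,

// before: one check per non-root instruction
GIR_IsSafeToFold, /*InsnID*/1,
GIR_IsSafeToFold, /*InsnID*/2,
// after: a single count of instructions to check after MI[0]
GIR_IsSafeToFold, /*NumInsns*/2,
```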

The savings from these are pretty neat. For `AArch64GenGlobalISel.inc`:
- `AArch64InstructionSelector.cpp.o` goes down from 772kb to 704kb (-10%
code size)
- Self-reported MatchTable size goes from 420380 bytes to 352426 bytes
(~ -17%)

A smaller match table also means a faster match table, because we spend
less time iterating over and decoding it.
I don't have a solid measurement methodology for GlobalISel performance,
so I don't have precise numbers, but I saw an improvement of a few
percent in a simple test case.
2024-04-24 09:19:18 +02:00

// RUN: llvm-tblgen -gen-dag-isel -I %p/../../include -I %p/Common %s -o - < %s | FileCheck -check-prefix=SDAG %s
// RUN: llvm-tblgen -gen-global-isel -warn-on-skipped-patterns -I %p/../../include -I %p/Common %s -o - < %s | FileCheck -check-prefix=GISEL %s

include "llvm/Target/Target.td"
include "GlobalISelEmitterCommon.td"

// Test the HasNoUse predicate
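// The checks below show how the predicate is enforced on both paths:
// SDAG requires SDValue(N, 0).use_empty(), and GlobalISel emits a
// GIM_CheckHasNoUse check on the root instruction.
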
def NO_RET_ATOMIC_ADD : I<(outs), (ins GPR32Op:$src0, GPR32Op:$src1), []>;

// SDAG: case 0: {
// SDAG-NEXT: // Predicate_atomic_load_add_no_ret_32
// SDAG-NEXT: SDNode *N = Node;
// SDAG-NEXT: (void)N;
// SDAG-NEXT: if (cast<MemSDNode>(N)->getMemoryVT() != MVT::i32) return false;
// SDAG-NEXT: if (!SDValue(N, 0).use_empty()) return false;
// SDAG-NEXT: return true;

// GISEL: GIM_CheckOpcode, /*MI*/0, GIMT_Encode2(TargetOpcode::G_ATOMICRMW_ADD),
// GISEL-NEXT: GIM_RootCheckType, /*Op*/0, /*Type*/GILLT_s32,
// GISEL-NEXT: GIM_RootCheckType, /*Op*/2, /*Type*/GILLT_s32,
// GISEL-NEXT: GIM_CheckMemorySizeEqualTo, /*MI*/0, /*MMO*/0, /*Size*/GIMT_Encode4(4),
// GISEL-NEXT: GIM_CheckHasNoUse, /*MI*/0,
// GISEL-NEXT: // MIs[0] src0
// GISEL-NEXT: GIM_CheckPointerToAny, /*MI*/0, /*Op*/1, /*SizeInBits*/0,
// GISEL-NEXT: // (atomic_load_add:{ *:[i32] } iPTR:{ *:[iPTR] }:$src0, i32:{ *:[i32] }:$src1)<<P:Predicate_atomic_load_add_no_ret_32>> => (NO_RET_ATOMIC_ADD GPR32:{ *:[i32] }:$src0, GPR32:{ *:[i32] }:$src1)
// GISEL-NEXT: GIR_BuildRootMI, /*Opcode*/GIMT_Encode2(MyTarget::NO_RET_ATOMIC_ADD),
// GISEL-NEXT: GIR_RootToRootCopy, /*OpIdx*/1, // src0
// GISEL-NEXT: GIR_RootToRootCopy, /*OpIdx*/2, // src1
// GISEL-NEXT: GIR_MergeMemOperands, /*InsnID*/0, /*NumInsns*/1, /*MergeInsnID's*/0,
// GISEL-NEXT: GIR_RootConstrainSelectedInstOperands,
// GISEL-NEXT: // GIR_Coverage, 0,
// GISEL-NEXT: GIR_EraseRootFromParent_Done,

let HasNoUse = true in
defm atomic_load_add_no_ret : binary_atomic_op<atomic_load_add>;

def : Pat <
  (atomic_load_add_no_ret_32 iPTR:$src0, i32:$src1),
  (NO_RET_ATOMIC_ADD GPR32:$src0, GPR32:$src1)
>;