Commit Graph

3370 Commits

Austin Kerbow
30f18ed387 [AMDGPU] Handle SMRD signed offset immediate
Summary:
This fixes a few issues related to SMRD offsets. On gfx9 and gfx10 we have a
signed byte offset immediate; however, we can overflow into a negative value
since we treat it as unsigned.

Second, the SMRD SOFFSET sgpr is an unsigned offset on all subtargets, yet we
sometimes tried to use negative values here.

Third, S_BUFFER instructions should never use a signed offset immediate.

Differential Revision: https://reviews.llvm.org/D77082
2020-04-02 17:41:52 -07:00
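For illustration, assuming the SMEM encoding's 21-bit signed byte offset field
(the width is an assumption here, not stated above), the failure mode is:

$$\text{signed range} = [-2^{20},\ 2^{20}-1], \qquad \text{unsigned check accepts } [0,\ 2^{21}-1],$$

so an immediate in $[2^{20},\ 2^{21}-1]$ passes an unsigned legality check but
reinterprets as a negative offset in the signed field.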
Matt Arsenault
2680e88069 AMDGPU: Fix broken check lines 2020-04-02 18:52:49 -04:00
Matt Arsenault
f68cc2a7ed AMDGPU: Use 128-bit DS operations by default 2020-04-02 17:17:47 -04:00
Matt Arsenault
192cccb152 AMDGPU: Add some tests for exotic denormal mode combinations 2020-04-02 17:17:12 -04:00
Matt Arsenault
5660bb6bc9 AMDGPU: Remove denormal subtarget features
Switch to using the denormal-fp-math/denormal-fp-math-f32 attributes.
2020-04-02 17:17:12 -04:00
Matt Arsenault
75cf30918f AMDGPU: Assume f32 denormals are enabled by default
This will likely introduce catastrophic performance regressions on
older subtargets, but should be correct. A follow up change will
remove the old fp32-denormals subtarget features, and switch to using
the new denormal-fp-math/denormal-fp-math-f32 attributes. Frontends
should be making sure to add the denormal-fp-math-f32 attribute when
appropriate to avoid performance regressions.
2020-04-02 17:17:12 -04:00
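As a minimal sketch of the attribute mechanism the two commits above refer to
(function name hypothetical), the denormal mode is now read from per-function
attributes in "output,input" form rather than from subtarget features:

```llvm
; f64/f16 denormals are flushed, while f32 denormals stay in IEEE mode.
define float @foo(float %a, float %b) #0 {
  %r = fmul float %a, %b
  ret float %r
}

attributes #0 = { "denormal-fp-math"="preserve-sign,preserve-sign"
                  "denormal-fp-math-f32"="ieee,ieee" }
```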
Matt Arsenault
c3d3c22a58 AMDGPU: Hack out noinline on functions using LDS globals
This is a workaround for clang adding noinline to all functions at
-O0. Previously, we would just add alwaysinline, and the verifier
would complain about having both noinline and alwaysinline. We
currently can't truly codegen this case as a freestanding function, so
override the forced noinline.
2020-04-02 14:12:07 -04:00
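A minimal sketch of the case being worked around (names hypothetical): at -O0
the frontend marks every function noinline, but a function referencing an LDS
global must still be inlined into its kernel caller:

```llvm
@lds = internal addrspace(3) global [64 x i32] undef, align 4

; clang adds noinline at -O0; the pass overrides it with alwaysinline
; because a freestanding function using @lds cannot currently be codegened.
define void @use_lds(i32 %i) #0 {
  %gep = getelementptr [64 x i32], [64 x i32] addrspace(3)* @lds, i32 0, i32 %i
  store i32 1, i32 addrspace(3)* %gep
  ret void
}

attributes #0 = { noinline }
```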
Stanislav Mekhanoshin
f2334a7ef2 [AMDGPU] Fix crash in SILoadStoreOptimizer
SILoadStoreOptimizer::checkAndPrepareMerge() expects the base and
paired instructions to come in order, and scans the MBB from the base
to the paired instruction. The original order can change if there was
a dependent instruction in between and the base instruction was moved.

Fixed by bailing out of the optimization. In theory it might still be
possible to perform the merge by swapping the instructions, but in
practice it bails anyway because it finds a dependency on the same
instruction that caused the base to move.

Differential Revision: https://reviews.llvm.org/D77245
2020-04-02 10:26:47 -07:00
Matt Arsenault
68e283940a AMDGPU/GlobalISel: Switch test to checking final ISA
The naming convention is for unprefixed .ll tests to check the final
ISA instructions.
2020-04-01 13:03:02 -04:00
Matt Arsenault
5e4e8d0388 AMDGPU/GlobalISel: Change intrinsic ID for _L to _LZ opt
We should still handle the other case that changes the opcode this way.
2020-04-01 13:03:02 -04:00
Matt Arsenault
43e576593e AMDGPU/GlobalISel: Fix insert point when lowering G_FMAD 2020-03-31 19:57:06 -04:00
Stanislav Mekhanoshin
08682dcc86 [AMDGPU] Define 16 bit VGPR subregs
We have loads that preserve the low or high 16 bits of their
destinations. However, we always use a whole 32-bit register
for these. The same happens with 16-bit stores: we have to
use a full 32-bit register, so if the high bits are clobbered the
register needs to be copied. One example of such code is
added to load-hi16.ll.

The proper solution to the problem is to define 16-bit subregs
and use them in operations which do not read the other half
of a VGPR, or preserve it if the VGPR is written.

This patch simply defines subregisters and register classes.
At the moment there should be no difference in code generation.
A lot more work is needed to actually use these new register
classes. Therefore, there are no new tests at this time.

Register weight calculation has changed with the new subregs, so
appropriate changes were made to keep all calculations, especially
register pressure, just as they are now.

Differential Revision: https://reviews.llvm.org/D74873
2020-03-31 11:49:06 -07:00
Sebastian Neubauer
5d3a69feca [AMDGPU] New llvm.amdgcn.ballot intrinsic
Add a new llvm.amdgcn.ballot intrinsic modeled on the ballot function
in GLSL and other shader languages. It returns a bitfield containing the
result of its boolean argument in all active lanes, and zero in all
inactive lanes.

This is intended to replace the existing llvm.amdgcn.icmp and
llvm.amdgcn.fcmp intrinsics after a suitable transition period.

Use the new intrinsic in the atomic optimizer pass.

Differential Revision: https://reviews.llvm.org/D65088
2020-03-31 10:35:39 +02:00
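A minimal sketch of a call site (function name hypothetical); the overloaded
return type selects the mask width, e.g. i64 for wave64:

```llvm
declare i64 @llvm.amdgcn.ballot.i64(i1)

define amdgpu_ps i64 @ballot_eq_zero(i32 %x) {
  ; Bit N of %mask is set iff lane N is active and %cmp is true in it;
  ; bits for inactive lanes are zero.
  %cmp = icmp eq i32 %x, 0
  %mask = call i64 @llvm.amdgcn.ballot.i64(i1 %cmp)
  ret i64 %mask
}
```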
Matt Arsenault
db9f0d1ce5 AMDGPU: Form v_cvt_ubyte* with f16 results
We get 2 conversion instructions anyway. Previously we would get a
conversion with SDWA reading from a byte source, which has a larger
encoding.
2020-03-30 17:59:49 -04:00
Matt Arsenault
b27d255e1e AMDGPU/GlobalISel: Form CVT_F32_UBYTE0 2020-03-30 17:45:55 -04:00
Matt Arsenault
bcb643c8af AMDGPU/GlobalISel: Handle image atomics 2020-03-30 17:41:04 -04:00
Matt Arsenault
48eda37282 AMDGPU/GlobalISel: Start selecting image intrinsics
Does not handle atomics yet.
2020-03-30 17:33:04 -04:00
Matt Arsenault
570a578e46 AMDGPU: Account for dmask when computing image mem size
Only the number of elements in the dmask will really be accessed.
2020-03-30 17:30:58 -04:00
Jay Foad
cee65d51fe AMDGPU: Implement getMemcpyLoopLoweringType
Summary: Based on a patch by Matt Arsenault.

Reviewers: rampitec, kerbowa, nhaehnle, arsenm

Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D77057
2020-03-30 22:21:01 +01:00
Matt Arsenault
2641ba52a9 AMDGPU/GlobalISel: Round up image operations with 5, 6 or 7 addresses
The instruction definitions are missing for these register types, so
round up to 8 like the DAG.
2020-03-30 17:02:47 -04:00
Matt Arsenault
42d5609809 AMDGPU/GlobalISel: Start handling _L to _LZ optimization
We currently don't have a way to map to the equivalent intrinsic
opcode, so track immediate 0s in place of the address so that
selection knows to change the final opcode.
2020-03-30 17:02:30 -04:00
Matt Arsenault
4919f2e1c5 AMDGPU/GlobalISel: Basic legalize rules for G_FSHR
Only handles easy 32-bit cases.
2020-03-30 11:53:01 -07:00
Jakub Kuderski
77ce2e21a8 [AMDGPU] Add Relocation Constant Support
Summary:
This change adds the amdgcn.reloc.constant intrinsic to the amdgpu backend, which will compile into a relocation entry in the resulting ELF.

The intrinsic takes a MetadataNode (String) as its only argument, which specifies the symbol name of the relocation entry.

`SelectionDAGBuilder::getValueImpl` is changed to allow metadata operands to be passed through to ISel.

Author: csyonghe <yonghe@google.com>

Reviewers: tpr, nhaehnle

Reviewed By: nhaehnle

Subscribers: arsenm, kzhuravl, jvesely, wdng, yaxunl, dstuttard, t-tye, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76440
2020-03-30 13:49:20 -04:00
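A minimal sketch of the intrinsic's use (the symbol name is hypothetical):

```llvm
declare i32 @llvm.amdgcn.reloc.constant(metadata)

define i32 @reloc_value() {
  ; Lowers to a relocation entry against the symbol named in !0.
  %r = call i32 @llvm.amdgcn.reloc.constant(metadata !0)
  ret i32 %r
}

!0 = !{!"my_reloc_symbol"}
```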
Sameer Sahasrabuddhe
3cbbded68c Introduce unify-loop-exits pass.
For each natural loop with multiple exit blocks, this pass creates a
new block N such that all exiting blocks now branch to N, and then
control flow is redistributed to all the original exit blocks.

The bulk of the transformation is a new function introduced in
BasicBlockUtils that can redirect control flow from a set of incoming
blocks to a set of outgoing blocks via a common "hub".

This is a useful workaround for a limitation in the structurizer which
incorrectly orders blocks when processing a nest of loops. This pass
bypasses that issue by ensuring that each natural loop is recognized
as a separate region. Since the structurizer is a region pass, it no
longer sees a nest of loops in a single region, and instead processes
each "level" in the nesting as a separate region.

The AMDGPU backend provides a new option to enable this pass before
the structurizer, which may eventually be enabled by default.

Reviewers: madhur13490, arsenm, nhaehnle

Reviewed By: nhaehnle

Differential Revision: https://reviews.llvm.org/D75865
2020-03-30 13:23:56 -04:00
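A before sketch of the shape the pass targets (names hypothetical): a natural
loop with two exit blocks, which the pass reroutes through one new block:

```llvm
define void @two_exits(i1 %c1, i1 %c2) {
entry:
  br label %loop
loop:                                   ; exiting edge 1
  br i1 %c1, label %exit_a, label %latch
latch:                                  ; exiting edge 2
  br i1 %c2, label %exit_b, label %loop
exit_a:
  ret void
exit_b:
  ret void
}
; After unify-loop-exits, %loop and %latch both branch to a single new
; block N, which then dispatches to %exit_a or %exit_b according to a
; guard value recording which exiting edge was taken.
```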
Matt Arsenault
bb009498c2 AMDGPU/GlobalISel: Hack to fix i24 argument lowering
I still think the call lowering type legalization logic, split between
the generic code and the target, is too confusing, but that is largely
induced by the reliance on the DAG infrastructure.
2020-03-30 11:00:45 -04:00
Matt Arsenault
90a36bbd7c AMDGPU/GlobalISel: Legalize 64-bit G_UDIV/G_UREM
Mostly ported from the DAG version. This results in much worse code
than the DAG version, largely due to a much worse expansion for
G_UMULH.
2020-03-30 10:57:37 -04:00
Matt Arsenault
0b68ca5162 AMDGPU: Add some additional tests for v_cvt_ubyte* formation
Use functions, now that we have them, for less boilerplate in the
output.
2020-03-29 14:03:07 -04:00
Jay Foad
a6dfd827e5 [AMDGPU] Fix getEUsPerCU for gfx10 in CU mode
Summary:
"Per CU" is a bit simplistic for gfx10, but I couldn't think of a better
name.

Reviewers: arsenm, rampitec, nhaehnle, dstuttard, tpr

Subscribers: kzhuravl, jvesely, wdng, yaxunl, t-tye, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76861
2020-03-27 20:36:49 +00:00
Matt Arsenault
348735b723 AMDGPU: Stop setting attributes based on TargetOptions
Having arbitrary passes looking at the TargetOptions is pretty
messy. This was also disregarding whether a function already had an
explicit attribute setting on it. opt/llc now add the attributes to
functions that don't specify the attribute. clang and lld do not call
the function to do this, which perhaps they should.

This was also treating unsafe-fp-math as implying the others, and
setting the other attributes based on it. This is not done anywhere
else, and I'm not sure is correct based on the current description of
the option bit.

Effectively reverts 1d8cf2be89
2020-03-27 13:13:43 -07:00
Stanislav Mekhanoshin
4c4b71843b [AMDGPU] Propagate amdgpu-waves-per-eu to callees
Differential Revision: https://reviews.llvm.org/D76868
2020-03-26 14:43:44 -07:00
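A minimal sketch of the propagation (attribute value assumed): a callee
reachable from the kernel ends up with the kernel's occupancy bounds:

```llvm
define amdgpu_kernel void @kernel(i32 addrspace(1)* %out) #0 {
  call void @callee(i32 addrspace(1)* %out)
  ret void
}

; After the pass, @callee is also annotated with "amdgpu-waves-per-eu"="2,4"
; so its codegen uses register budgets consistent with the kernel.
define void @callee(i32 addrspace(1)* %out) {
  store i32 0, i32 addrspace(1)* %out
  ret void
}

attributes #0 = { "amdgpu-waves-per-eu"="2,4" }
```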
Stanislav Mekhanoshin
e06d707aa2 [AMDGPU] Fixed function traversal in attribute propagation
The AMDGPUPropagateAttributes pass was skipping some of the functions
when cloning. Functions were added to the root set and then skipped
on the next iteration because they were already in the root set,
while they were meant to be processed with different features.

Differential Revision: https://reviews.llvm.org/D76815
2020-03-25 18:47:09 -07:00
Stanislav Mekhanoshin
6e00e3fcb0 [AMDGPU] Preserve original symbol during attribute propagation
AMDGPUPropagateAttributes can swap names while cloning a function.
Only do it if the original symbol was not externally visible.

Differential Revision: https://reviews.llvm.org/D76789
2020-03-25 15:26:30 -07:00
cdevadas
ce984129ea [AMDGPU] Add SIPreEmitPeephole pass.
This pass can handle all the optimization
opportunities found just before code emission.
Presently it includes the vcc branch optimization
that was previously handled in SIInsertSkips.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D76712
2020-03-25 15:35:35 +00:00
Matt Arsenault
baa78179fe AMDGPU/GlobalISel: Add a testcase for G_UNMERGE_VALUES legalization
I had a note that this doesn't work, but it seems to now.
2020-03-24 21:54:43 -04:00
Matt Arsenault
d16ee1174a AMDGPU/GlobalISel: Add some end to end tests for fma selection 2020-03-24 21:23:37 -04:00
Matt Arsenault
bba8c92d54 AMDGPU/GlobalISel: Add select patterns for v_and_or_b32 2020-03-24 20:47:54 -04:00
Matt Arsenault
c9e0b448b8 AMDGPU/GlobalISel: Add load legalization tests 2020-03-24 20:41:01 -04:00
Matt Arsenault
01a337cfc9 AMDGPU/GlobalISel: Add missing tests for G_FRINT selection 2020-03-24 20:41:01 -04:00
Matt Arsenault
bb3aa09b15 AMDGPU/GlobalISel: Add more tests for add3 folding
Forgot to squash into 2ea4605105
2020-03-24 14:30:24 -04:00
Matt Arsenault
2ea4605105 AMDGPU/GlobalISel: Add some more tests for add3 folding
These currently fail to form add3 due to the pointer type, but they
should be handled.
2020-03-24 14:26:23 -04:00
Jay Foad
0444d16a16 [GlobalISel] Add generic opcodes for saturating add/subtract
Summary:
Add new generic MIR opcodes G_SADDSAT etc. Add support in IRTranslator
for translating the saturating add/subtract intrinsics to the new
opcodes.

Reviewers: aemerson, dsanders, paquette, arsenm

Subscribers: jvesely, wdng, nhaehnle, rovka, hiraditya, volkan, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76600
2020-03-23 15:16:45 +00:00
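A minimal sketch of IR that now maps onto the new opcodes (here G_SADDSAT):

```llvm
declare i32 @llvm.sadd.sat.i32(i32, i32)

define i32 @saturating_add(i32 %a, i32 %b) {
  ; IRTranslator now emits G_SADDSAT for this call rather than falling back.
  %r = call i32 @llvm.sadd.sat.i32(i32 %a, i32 %b)
  ret i32 %r
}
```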
Matt Arsenault
db3f3f0240 AMDGPU/GlobalISel: Add some oversized G_IMPLICIT_DEF tests
Not all of these legalize correctly yet.
2020-03-23 11:16:10 -04:00
Dominik Montada
ccf49b9ef0 [GlobalISel] support widen unmerge if WideTy > SrcTy
Summary:
Widening G_UNMERGE_VALUES to a type which is larger than the
original source type is the same as widening it to the same
type as the source type: in both cases, G_UNMERGE_VALUES has
to be replaced with bit arithmetic. Although the arithmetic
itself is independent of whether the source type is smaller
than or equal to the widened type, widening the source type to
the widened type should result in fewer artifacts being emitted,
since this is the type that the user explicitly requested.

Reviewers: arsenm, dsanders, aemerson, aditya_nandakumar

Reviewed By: arsenm, dsanders

Subscribers: jvesely, wdng, nhaehnle, rovka, hiraditya, volkan, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76494
2020-03-23 09:16:45 +01:00
Qiu Chaofan
763871053c [DAGCombiner] Require nsz for aggressive fma fold
For the folding pattern `x-(fma y,z,u*v) -> (fma -y,z,(fma -u,v,x))`: if
`yz` is 1, `uv` is -1, and `x` is -0, the sign of the result would change.

Differential Revision: https://reviews.llvm.org/D76419
2020-03-22 23:10:07 +08:00
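Working through the quoted counterexample with $yz = 1$, $uv = -1$, $x = -0$:

$$x - (yz + uv) = -0 - (1 + (-1)) = -0, \qquad \mathrm{fma}(-y, z, \mathrm{fma}(-u, v, x)) = -1 + (1 + (-0)) = +0,$$

so the rewrite flips the sign of a zero result, which only nsz permits.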
Matt Arsenault
53d6b156bb AMDGPU: Add more tests for fshr 2020-03-20 11:01:51 -04:00
alex-t
6e34e71869 [AMDGPU] Enable divergence driven ISel for ADD/SUB i64
Summary:
Currently we custom select add/sub with carry-out to the scalar form, relying on later replacing them with the vector form if necessary.
This change enables the custom selection code to take the divergence of adde/addc SDNodes into account and select the appropriate form in one step.

Reviewers: arsenm, vpykhtin, rampitec

Reviewed By: arsenm, vpykhtin

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, kerbowa

Differential Revision: https://reviews.llvm.org/D76371
2020-03-20 17:06:11 +03:00
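A minimal sketch of the distinction (kernel is hypothetical): an i64 add fed
by the workitem id is divergent and can now be selected straight to the VALU
carry chain, while a fully uniform add keeps the SALU form:

```llvm
declare i32 @llvm.amdgcn.workitem.id.x()

define amdgpu_kernel void @divergent_add(i64 addrspace(1)* %out, i64 %b) {
  ; %a is divergent, so this add should select v_add_co_u32/v_addc_co_u32
  ; directly instead of being rewritten from the scalar form afterwards.
  %tid = call i32 @llvm.amdgcn.workitem.id.x()
  %a = zext i32 %tid to i64
  %sum = add i64 %a, %b
  store i64 %sum, i64 addrspace(1)* %out
  ret void
}
```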
Austin Kerbow
2cbb8c946a [AMDGPU] Reuse register during frame index elimination
If there were no free VGPRs, we would need two emergency spill slots for
register scavenging during PEI/frame index elimination. Reuse 'ResultReg' for
the scale calculation so that only one spill slot is needed.

Differential Revision: https://reviews.llvm.org/D76387
2020-03-20 00:19:15 -07:00
cdevadas
728b878de6 [AMDGPU] Set the CostPerUse value for vgpr registers.
Apart from the argument registers, set the CostPerUse
value as per the ratio reg_index/allocation_granularity.
This is a pre-commit for introducing scratch registers
in the ABI. The change should help achieve a more balanced
register allocation.

Differential Revision: https://reviews.llvm.org/D76417
2020-03-20 11:49:35 +05:30
Scott Linder
0e9368cc8c [AMDGPU] Move frame pointer from s34 to s33
Remove the gap left between the stack pointer (s32) and frame pointer
(s34) now that the scratch wave offset is no longer a part of the
calling convention ABI.

Update llvm/docs/AMDGPUUsage.rst to reflect the change.

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75657
2020-03-19 15:35:16 -04:00
Scott Linder
60b1967c39 [AMDGPU] Add Scratch Wave Offset to Scratch Buffer Descriptor in entry functions
Add the scratch wave offset to the scratch buffer descriptor (SRSrc) in
the entry function prologue. This allows us to remove the scratch wave
offset register from the calling convention ABI.

As part of this change, allow the use of an inline constant zero for the
SOffset of MUBUF instructions accessing the stack in entry functions
when a frame pointer is not requested/required. Entry functions with
calls still need to set up the calling convention ABI stack pointer
register, and reference it in order to address arguments of called
functions. The ABI stack pointer register remains unswizzled, but is now
wave-relative instead of queue-relative.

Non-entry functions also use an inline constant zero SOffset for
wave-relative scratch access, but continue to use the stack and frame
pointers as before. When the stack or frame pointer is converted to a
swizzled offset it is now scaled directly, as the scratch wave offset no
longer needs to be subtracted first.

Update llvm/docs/AMDGPUUsage.rst to reflect these changes to the calling
convention.

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75138
2020-03-19 15:35:16 -04:00