Commit Graph

815 Commits

Author SHA1 Message Date
Ivan Kosarev
ad1d60c3be [FileCheck] Catch misspelled directives.
Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D125604
2022-05-26 11:37:19 +01:00
Hongtao Yu
9641b9be9d [Inliner] Preserve !prof metadata when converting call to invoke.
When a callee function is inlined via an invoke instruction, every function call inside the callee that is not already an invoke is converted to an invoke after being cloned into the caller body. I found that the !prof metadata was dropped during this conversion. This in turn caused a cloned indirect call not to be properly promoted in subsequent passes.

The particular scenario I was investigating was with AutoFDO and ThinLTO. In prelink, no ICP was triggered (neither by the sample loader nor PGO ICP) and no indirect call was promoted. This is because 1) the particular indirect call did not have inlined samples, and 2) PGO ICP was intentionally disabled. After inlining, the !prof metadata was dropped. Then in postlink, PGO ICP jumped in but didn't do anything, so the opportunity was missed.

I'm making a simple fix to preserve !prof metadata when converting a call to an invoke.
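
For illustration, a minimal IR sketch (names and profile values invented, not taken from the patch) of the kind of call whose metadata must survive the call-to-invoke conversion:

```
; Indirect call in the callee carrying value-profile metadata. When @callee is
; inlined through an invoke, this call is rewritten to an invoke and, with this
; fix, keeps its !prof attachment.
define void @callee(ptr %fp) {
  call void %fp(), !prof !0
  ret void
}

!0 = !{!"VP", i32 0, i64 100, i64 1234, i64 100}
```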

Reviewed By: davidxl

Differential Revision: https://reviews.llvm.org/D125249
2022-05-09 15:08:09 -07:00
Liqiang Tao
b9fc18f89a [llvm][Inline] Remove PriorityInlineOrder in SCC inliner
Since the size of most SCCs is 1, PriorityInlineOrder would not change the
inline order in the SCC inliner.

Reviewed By: kazu

Differential Revision: https://reviews.llvm.org/D123608
2022-04-26 20:20:10 +08:00
David Green
9727c77d58 [NFC] Rename Instrinsic to Intrinsic 2022-04-25 18:13:23 +01:00
Nikita Popov
8d5c8d57c6 [InlineCost] Check that function types match
Retain the behavior we get without opaque pointers: a call to a
known function with a different function type is considered an
indirect call.

This fixes the crash reported in https://reviews.llvm.org/D123300#3444772.
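
A hedged sketch of the situation (names made up): with opaque pointers there is no bitcast left to signal the mismatch, so the cost model compares the function types itself.

```
declare void @known(float)

define void @caller() {
  ; The call's function type (void (i32)) differs from @known's declaration,
  ; so the call is costed like an indirect call.
  call void @known(i32 0)
  ret void
}
```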
2022-04-12 11:05:33 +02:00
Serge Pavlov
47b3b76825 Implement inlining of strictfp functions
According to the current design, if a floating point operation is
represented by a constrained intrinsic somewhere in a function, all
floating point operations in the function must be represented by
constrained intrinsics. This imposes additional requirements on the
inlining mechanism: if a non-strictfp function is inlined into a strictfp
function, all ordinary FP operations must be replaced with their
constrained counterparts.

Inlining a strictfp function into a non-strictfp function is not
implemented, as it would require replacing all FP operations in the host
function, which is currently undesirable due to the expected performance
loss.
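
A rough sketch of the supported case (hypothetical functions, not from the patch): the callee uses an ordinary fadd while the caller is strictfp, so after inlining the cloned fadd has to be rewritten to the corresponding constrained intrinsic.

```
define double @callee(double %a, double %b) {
  %s = fadd double %a, %b
  ret double %s
}

define double @caller(double %x, double %y) strictfp {
  ; After inlining, the fadd cloned from @callee must become an
  ; llvm.experimental.constrained.fadd call so that all FP operations in
  ; @caller stay constrained.
  %r = call double @callee(double %x, double %y) strictfp
  ret double %r
}
```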

Differential Revision: https://reviews.llvm.org/D69798
2022-03-31 19:15:52 +07:00
Johannes Doerfert
a81fff8afd Reapply "[Intrinsics] Add nocallback to the default intrinsic attributes"
This reverts commit c5f789050d and
reapplies 7aea3ea8c3 with additional test
changes.
2022-03-25 09:36:50 -05:00
Philip Reames
7abefc4222 [instcombine] Fold away memset/memmove from otherwise unused alloca
The motivation for this is that while both memcpyopt and dse will catch this case, both are limited by MSSA's walk back threshold when finding clobbers.  As such, if you have a memcpy of an otherwise dead alloca placed towards the end of a long basic block with lots of other memory instructions, it would be missed.  This is a bit undesirable for such an "obviously" useless bit of code.

As noted in comments, we should probably generalize instcombine's escape analysis peephole (see visitAllocInst) to allow read xor write.  Doing that would subsume this code in a more general way, but is also a more involved change.  For the moment, I went with the easiest fix.
2022-03-22 13:48:48 -07:00
Philip Reames
03949165cd [test] Autogen a test for ease of update 2022-03-22 12:41:23 -07:00
Arthur Eubanks
ddc702376a [NewPM] Don't skip SCCs not in current RefSCC
With D107249 I saw huge compile time regressions on a module (150s ->
5700s). This turned out to be due to a huge RefSCC in
the module. As we ran the function simplification pipeline on functions
in the SCCs in the RefSCC, some of those SCCs would be split out into
their own RefSCC, a child of the current RefSCC. We'd skip the remaining
SCCs in the huge RefSCC because the current RefSCC is now the RefSCC
just split out, then revisit the original huge RefSCC from the
beginning.  This happened many times because many functions in the
RefSCC were optimizable to the point of becoming their own RefSCC.

This patch makes it so we don't skip SCCs not in the current RefSCC, so
that we split out all the child RefSCCs on the first iteration over the
RefSCC. When we split out a RefSCC, we invalidate the original RefSCC
and add the remainder of the SCCs into a new RefSCC in
RCWorklist. This happens repeatedly until we finish visiting all
SCCs, at which point there is only one valid RefSCC in
RCWorklist from the original RefSCC containing all the SCCs that
were not split out, and we visit that.

For example, in the newly added test cgscc-refscc-mutation-order.ll,
we'd previously run instcombine in this order:
f1, f2, f1, f3, f1, f4, f1

Now it's:
f1, f2, f3, f4, f1

This can cause more passes to be run in some specific cases,
e.g. if f1<->f2 gets optimized to f1<-f2, we'd previously run f1, f2;
now we run f1, f2, f2.
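
A hedged sketch of the f1<->f2 shape referred to above (not the actual test): mutual recursion puts both functions in one RefSCC, and the SCC splits once the call in f2 is optimized away.

```
define void @f1() {
  call void @f2()
  ret void
}

define void @f2() {
  call void @f1()
  ret void
}
```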

This improves kimwitu++ compile times by a lot (12-15% for various -O3 configs):
https://llvm-compile-time-tracker.com/compare.php?from=2371c5a0e06d22b48da0427cebaf53a5e5c54635&to=00908f1d67400cab1ad7bcd7cacc7558d1672e97&stat=instructions

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D121953
2022-03-18 14:16:29 -07:00
Ellis Hoag
f6b5142ac2 [AlwaysInliner] Emit inline remark only when successful
Failures in `InlineFunction()` are caught after D121722, but `emitInlinedIntoBasedOnCost()` should only be called when inlining is successful. This also removes an unnecessary call to `shouldInline()` which always returned `InlineCost::getAlways()`.

Reviewed By: kyulee, nikic

Differential Revision: https://reviews.llvm.org/D121946
2022-03-17 15:40:24 -07:00
Ellis Hoag
84c6689b15 [AlwaysInliner] Check inliner errors even without asserts
When we build clang without asserts we should still check the result of
`InlineFunction()` to be sure there wasn't an error. Otherwise we could
incorrectly merge attributes in the next line.

This also removes a redundant call to `getCaller()`.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D121722
2022-03-17 10:16:23 -07:00
Arthur Eubanks
53e5e58670 [NewPM][Inliner] Make inlined calls to functions in same SCC as callee exponentially expensive
Introduce a new attribute "function-inline-cost-multiplier" which
multiplies the inline cost of a call site (or all calls to a callee) by
the multiplier.

When processing the list of calls created by inlining, check each call
to see if the new call's callee is in the same SCC as the original
callee. If so, set the "function-inline-cost-multiplier" attribute of
the new call site to double the original call site's attribute value.
This does not happen when the original call site is intra-SCC.
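
A hedged illustration of the new call-site string attribute (the value shown is made up): the inline cost of this call is scaled by the multiplier, so repeated intra-SCC inlining quickly becomes too expensive.

```
declare void @callee()

define void @caller() {
  ; Hypothetical value; the inliner doubles it each time inlining creates a
  ; call back into the same SCC as the original callee.
  call void @callee() "function-inline-cost-multiplier"="4"
  ret void
}
```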

This is an alternative to D120584, which marks the call sites as
noinline.

Hopefully fixes PR45253.

Reviewed By: davidxl

Differential Revision: https://reviews.llvm.org/D121084
2022-03-07 23:51:09 -08:00
Matt Arsenault
27712243ab Revert "Inliner: Correctly merge amdgpu-unsafe-fp-atomics attribute"
This reverts commit 169ebf03ab.

This was effectively rendering the attribute useless in the real
world, although this is still broken.
2022-03-03 15:25:32 -05:00
Dávid Bolvanský
f0e6ec1547 [Inliner] Respect noinline call site attribute
```
always_inline foo() { }

bar() {
  noinline foo();
}
```
We should prefer the call-site attribute over the attribute on the declaration.
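
In IR terms, roughly the following (sketch only, invented names): the call-site noinline should win over the callee's alwaysinline.

```
define void @foo() alwaysinline {
  ret void
}

define void @bar() {
  ; The call-site attribute takes precedence: this call must not be inlined.
  call void @foo() #0
  ret void
}

attributes #0 = { noinline }
```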

Related to https://reviews.llvm.org/D119061

Reviewed By: aeubanks

Differential Revision: https://reviews.llvm.org/D119579
2022-02-14 18:35:52 +01:00
Dávid Bolvanský
d828281e78 [AlwaysInliner] Respect noinline call site attribute
```
always_inline foo() { }

bar() {
  noinline foo();
}
```

We should prefer the call-site attribute over the attribute on the declaration. This is a fix for the AlwaysInliner; a similar fix is needed for the normal Inliner (follow-up).

Related to https://reviews.llvm.org/D119061

Reviewed By: aeubanks

Differential Revision: https://reviews.llvm.org/D119553
2022-02-11 19:23:11 +01:00
Nikita Popov
4810051a82 [Inline][Cloning] Reliably remove unreachable blocks during cloning (PR53206)
The pruning cloner already tries to remove unreachable blocks. The
original cloning process will simplify instructions and constant
terminators, and only clone blocks that are reachable at that point.
However, phi nodes can only be simplified after everything has been
cloned. For that reason, additional blocks may become unreachable
after phi simplification.

The code does try to handle this as well, but only removes blocks
that don't have predecessors. It misses unreachable cycles. This
can cause issues if SEH exception handling code is part of an
unreachable cycle, as the inliner is not prepared to deal with that.

This patch instead performs an explicit scan for reachable blocks,
and drops everything else.

Fixes https://github.com/llvm/llvm-project/issues/53206.
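
A reduced sketch of the shape that used to be missed (hypothetical labels): %loop and %latch are unreachable from the entry block, but each has a predecessor, so the old "no predecessors" cleanup never removed them.

```
define void @f() {
entry:
  ret void

loop:                 ; unreachable cycle left behind after phi simplification
  br label %latch

latch:
  br label %loop
}
```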

Differential Revision: https://reviews.llvm.org/D118449
2022-01-31 09:31:34 +01:00
Dávid Bolvanský
fe30370b00 Reland "[AlwaysInliner] Enable call site inlining to make flatten attribute working again (#53360)" 2022-01-26 01:11:06 +01:00
Dávid Bolvanský
90f185c964 Revert "[AlwaysInliner] Enable call site inlining to make flatten attribute working again (#53360)"
This reverts commit ceec438368. Clang tests fail.
2022-01-25 23:13:46 +01:00
Dávid Bolvanský
ceec438368 [AlwaysInliner] Enable call site inlining to make flatten attribute working again (#53360)
Problem: the migration to the new PM broke the flatten attribute.

This is one use case for why LLVM should support inlining call sites marked alwaysinline. The flatten attribute is nowadays broken, so we should either land a patch like this one or remove everything related to the flatten attribute from Clang.

A second use case is something like "per call site inlining intrinsics" to control inlining even more, as mentioned in
https://lists.llvm.org/pipermail/cfe-dev/2018-September/059232.html

Fixes https://github.com/llvm/llvm-project/issues/53360
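
A hedged sketch of what call-site inlining enables (invented names): clang can lower the flatten attribute by marking each call in the function alwaysinline, and the AlwaysInliner now honors that even though the callee itself is not alwaysinline.

```
define void @leaf() {
  ret void
}

define void @flattened() {
  ; Call-site alwaysinline, even though @leaf carries no attribute itself.
  call void @leaf() #0
  ret void
}

attributes #0 = { alwaysinline }
```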

Reviewed By: aeubanks

Differential Revision: https://reviews.llvm.org/D117965
2022-01-25 22:55:30 +01:00
Mircea Trofin
f29256a64a [MLGO] Improved support for AOT cross-targeting scenarios
The TensorFlow AOT compiler can cross-target, but it can't run on (for
example) arm64. We added support earlier for building the AOT-ed header and
object on a separate builder and then passing them at build time to a build
host where the AOT compiler can't run but clang can otherwise be built.

To simplify such scenarios, given that we now support more than one AOT-able
case (regalloc and inliner), we center the AOT scenario on whether files are
generated, case by case (this includes the "passed from a different builder"
scenario). This means we shouldn't need an 'umbrella' LLVM_HAVE_TF_AOT; we
rely on case-by-case control instead. A builder can opt out of an AOT case by
passing that case's model path as `none`. Note that the overrides still take
precedence.

This patch controls conditional compilation with case-specific flags,
which can be enabled locally, for the component where those are
available. We still keep an overall flag for some tests.

The 'development/training' mode is unchanged, because there the model is
passed from the command line and interpreted.

Differential Revision: https://reviews.llvm.org/D117752
2022-01-20 07:05:39 -08:00
Nikita Popov
da61cb019e [Attributes] Make attribute addition behavior consistent
Currently, the behavior when adding an attribute with the same key
as an existing attribute is inconsistent, depending on the type of
the attribute and the method used to add it. When going through
AttrBuilder::addAttribute(), the new attribute always overwrites
the old one. When going through AttrBuilder::merge() the new
attribute overwrites the existing one if it is a string attribute,
but keeps the existing one for int and type attributes. One
particular API also asserts that you can't overwrite an align
attribute, but does not handle any of the other int, type or string
attributes.

This patch makes the behavior consistent by always overwriting with
the new attribute, which is the behavior I would intuitively expect.
Two tests are affected, which now make a different (but equally
valid) choice. Those tests could be improved by taking the maximum
deref bytes, but I haven't bothered with that, since this is testing
a degenerate case -- the important bit is that it doesn't crash.

Differential Revision: https://reviews.llvm.org/D117552
2022-01-19 12:05:27 +01:00
Mircea Trofin
c4f66632da Fix build break introduced by D115847
Because CoroEarly is also run, the `coroutine.presplit` attr should be
0.
2022-01-18 11:34:18 -08:00
Mircea Trofin
3e8553aab4 [mlgo][inline] Improve global state tracking
The global state refers to the number of nodes currently in the module and
the number of direct calls between nodes across the module.

Node counts are not a problem; edge counts are, because we want strictly
the kind of edges that affect inlining (direct calls), and that is not
easily obtainable without iterating over the whole module.

This patch avoids relying on analysis invalidation because it turned out
to be too aggressive in some cases. Instead it leverages the fact that Node
objects are stable (they do not get deleted while CGSCC passes are run over
the module) and the CGSCC pass manager's invariants.

Reviewed By: aeubanks

Differential Revision: https://reviews.llvm.org/D115847
2022-01-18 17:45:34 +00:00
Arthur Eubanks
9a0fe1b0fc [Inline] Attempt to delete any discardable if unused functions
Previously we limited ourselves to only internal/private functions. We
can also delete linkonce_odr functions.
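
For example (sketch with an invented name), a leftover definition like this can now be deleted once its last use has been inlined away:

```
; Discardable if unused: after inlining its only caller, this definition can
; be dropped, not just internal/private ones.
define linkonce_odr void @helper() {
  ret void
}
```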

Minor compile time wins:
https://llvm-compile-time-tracker.com/compare.php?from=d51e3474e060cb0e90dc2e2487f778b0d3e6a8de&to=bccffe3f8d5dd4dda884c9ac1f93e51772519cad&stat=instructions

Major memory wins on tramp3d:
https://llvm-compile-time-tracker.com/compare.php?from=d51e3474e060cb0e90dc2e2487f778b0d3e6a8de&to=bccffe3f8d5dd4dda884c9ac1f93e51772519cad&stat=max-rss

Relanding with fix for compile times D117236.

Reviewed By: nikic, mtrofin

Differential Revision: https://reviews.llvm.org/D115545
2022-01-13 14:48:38 -08:00
Hans Wennborg
2bc57d85eb Don't override __attribute__((no_stack_protector)) by inlining (PR52886)
Since 26c6a3e736, LLVM's inliner will "upgrade" the caller's stack protector
attribute based on the callee. This lead to surprising results with Clang's
no_stack_protector attribute added in 4fbf84c173 (D46300). Consider the
following code compiled with clang -fstack-protector-strong -Os
(https://godbolt.org/z/7s3rW7a1q).

  extern void h(int* p);

  inline __attribute__((always_inline)) int g() {
    return 0;
  }

  int __attribute__((__no_stack_protector__)) f() {
    int a[1];
    h(a);
    return g();
  }

LLVM will inline g() into f(), and f() would get a stack protector, against the
user's explicit wishes, potentially breaking the program e.g. if h() changes the
value of the stack cookie. That's a miscompile.

More recently, bc044a88ee (D91816) addressed this problem by preventing
inlining when the stack protector is disabled in the caller and enabled in the
callee or vice versa. However, the problem remained if the callee is marked
always_inline as in the example above. This affected users, see e.g.
http://crbug.com/1274129 and http://llvm.org/pr52886.

One way to fix this would be to prevent inlining also in the always_inline
case. Despite the name, always_inline does not guarantee inlining, so this
would be legal but potentially surprising to users.

However, I think the better fix is to not enable the stack protector in a
caller based on the callee. The motivation for the old behaviour is unclear; it
seems counter-intuitive and causes real problems, as we've seen.

This commit implements that fix, which means in the example above, g() gets
inlined into f() (also without always_inline), and f() is emitted without stack
protector. I think that matches most developers' expectations, and that's also
what GCC does.

Another effect of this change is that a no_stack_protector function can now be
inlined into a stack protected function, e.g. (https://godbolt.org/z/hafP6W856):

  extern void h(int* p);

  inline int __attribute__((__no_stack_protector__)) __attribute__((always_inline)) g() {
    return 0;
  }

  int f() {
    int a[1];
    h(a);
    return g();
  }

I think that's fine. Such code would be unusual since no_stack_protector is
normally applied to a program entry point which sets up the stack canary. And
even if such code exists, inlining doesn't change the semantics: there is still
no stack cookie setup/check around entry/exit of the g() code region, but there
may be in the surrounding context, as there was before inlining. This also
matches GCC.

See also the discussion at https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94722

Differential revision: https://reviews.llvm.org/D116589
2022-01-13 12:04:49 +01:00
Hans Wennborg
2eb7d8d749 Simplify llvm/test/Transforms/Inline/inline_ssp.ll (NFC)
The nounwind and uwtable attributes were just cluttering up the test.
Using regexes to give symbolic names to the attribute lists makes the
test more readable.

This is pre-committing parts of D116589.
2022-01-13 11:34:30 +01:00
James Y Knight
55fcbf0a84 Revert "[Inline] Attempt to delete any discardable if unused functions"
Somehow this ends up causing an infinite loop in the inliner.

This reverts commit d5be48c66d.
2022-01-13 03:06:47 +00:00
Arthur Eubanks
d5be48c66d [Inline] Attempt to delete any discardable if unused functions
Previously we limited ourselves to only internal/private functions. We
can also delete linkonce_odr functions.

Minor compile time wins:
https://llvm-compile-time-tracker.com/compare.php?from=d51e3474e060cb0e90dc2e2487f778b0d3e6a8de&to=bccffe3f8d5dd4dda884c9ac1f93e51772519cad&stat=instructions

Major memory wins on tramp3d:
https://llvm-compile-time-tracker.com/compare.php?from=d51e3474e060cb0e90dc2e2487f778b0d3e6a8de&to=bccffe3f8d5dd4dda884c9ac1f93e51772519cad&stat=max-rss

Reviewed By: nikic, mtrofin

Differential Revision: https://reviews.llvm.org/D115545
2022-01-12 08:36:04 -08:00
Nick Desaulniers
79ebc3b0dd [llvm][test] rewrite callbr to use i rather than X constraint NFC
In D115311, we're looking to modify clang to emit i constraints rather
than X constraints for callbr's indirect destinations. Prior to doing
so, update all of the existing tests in llvm/ to match.

Reviewed By: void, jyknight

Differential Revision: https://reviews.llvm.org/D115410
2022-01-11 11:31:08 -08:00
Arthur Eubanks
f96ab6cc1b Revert "[Inline] Attempt to delete any discardable if unused functions"
This reverts commit 335a3163aa.

Causes crashes when building llvm-test-suite's kc under ReleaseLTO-g.
2022-01-07 13:12:40 -08:00
Arthur Eubanks
335a3163aa [Inline] Attempt to delete any discardable if unused functions
Previously we limited ourselves to only internal/private functions. We
can also delete linkonce_odr functions.

Minor compile time wins:
https://llvm-compile-time-tracker.com/compare.php?from=d51e3474e060cb0e90dc2e2487f778b0d3e6a8de&to=bccffe3f8d5dd4dda884c9ac1f93e51772519cad&stat=instructions

Major memory wins on tramp3d:
https://llvm-compile-time-tracker.com/compare.php?from=d51e3474e060cb0e90dc2e2487f778b0d3e6a8de&to=bccffe3f8d5dd4dda884c9ac1f93e51772519cad&stat=max-rss

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D115545
2022-01-07 11:05:26 -08:00
Nikita Popov
f430c1eb64 [Tests] Add elementtype attribute to indirect inline asm operands (NFC)
This updates LLVM tests for D116531 by adding elementtype attributes
to operands that correspond to indirect asm constraints.
2022-01-06 14:23:51 +01:00
Nikita Popov
7c3cf4c2c0 [Inline][X86] Avoid inlining if it would create ABI-incompatible calls (PR52660)
X86 allows inlining functions if the callee target features are a
subset of the caller target features. This ensures that we don't
inline something into a caller that does not support it.

However, this does not account for possible call ABI mismatches as
a result of inlining. If a call passing a vector argument was
originally in a -avx function, calling another -avx function, the
vector is passed in xmm. If we now inline it into a +avx function,
then it will be passed in ymm, even though the callee expects it in xmm.

Fix this by scanning over all calls in the function and checking
whether ABI incompatibility is possible. Calls that only pass scalar
types are excluded, as I believe those always use the same ABI
independent of target features.

Fixes https://github.com/llvm/llvm-project/issues/52660.
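
A reduced sketch of the hazard (not the actual test, names invented): the 256-bit vector argument is passed differently depending on whether AVX is available, so inlining @callee into a +avx caller would change the ABI of the call it contains.

```
declare void @vec_use(<8 x float>)

define void @callee(<8 x float> %v) #0 {
  ; In a -avx context this vector is not passed in a ymm register; after
  ; inlining into a +avx caller it would be, mismatching what the -avx
  ; compiled @vec_use expects.
  call void @vec_use(<8 x float> %v)
  ret void
}

attributes #0 = { "target-features"="-avx" }
```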

Differential Revision: https://reviews.llvm.org/D116036
2021-12-27 09:36:21 +01:00
Mircea Trofin
edf8e3ea5e [NFC][mlgo]Make the test model generator inlining-specific
When looking at building the generator for regalloc, we realized we'd
need quite a bit of custom logic, and that perhaps it'd be easier to
just have each use case (each kind of MLGO policy) have its own
stand-alone test generator.

This patch just consolidates the old `config.py` and
`generate_mock_model.py` into one file, and does away with
subdirectories under Analysis/models.
2021-12-22 13:38:45 -08:00
Nikita Popov
c79a671968 [Inline] Add test for PR52660 (NFC) 2021-12-20 12:59:12 +01:00
Nikita Popov
a8c2ba105d [Inline] Disable deferred inlining
After the switch to the new pass manager, we have observed multiple
instances of catastrophic inlining, where the inliner produces huge
functions with many hundreds of thousands of instructions from small
input IR. We were forced to back out the switch to the new pass
manager for this reason. This patch fixes at least one of the root
cause issues.

LLVM uses a bottom-up inliner, and the fact that functions are processed
bottom-up is not just a question of optimality -- it is an important
requirement to prevent runaway inlining. The premise of the current
inlining approach and cost model is that after all calls inside a function
have been inlined, it may get large enough that inlining it into its
callers is no longer considered profitable. This safeguard does not
exist if inlining doesn't happen bottom-up, as inlining the callees,
and their callees, and their callees etc. will always seem individually
profitable, and the inliner can easily flatten the whole call tree.

There are instances where we necessarily have to deviate from bottom-up
inlining: When inlining in an SCC there is no natural "bottom", so
inlining effectively happens top-down. This requires special care,
and the inliner avoids exponential blowup by ensuring that functions
in the SCC grow in a balanced way and will eventually hit the threshold.

However, there is one instance where the inlining advisor explicitly
violates the bottom-up principle: Deferred inlining tries to "defer"
inlining a call if it determines that inlining the caller into all
its call-sites would be more profitable. Something very important to
understand about deferred inlining is that it doesn't make one inlining
choice in place of another -- it effectively chooses to do both. If we
have a call chain A -> B -> C and cost modelling tells us that inlining
B -> C is profitable, but we defer this and instead inline A -> B first,
then we'll now have a call A -> C, and the cost model will (a few special
cases notwithstanding) still tell us that this is profitable. So the end
result is that we inlined *both* B and C, even though under the usual
cost model function B would have been too large to further inline after
C has been integrated into it.
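
The A -> B -> C chain from the paragraph above, as a tiny module (hypothetical names), for concreteness:

```
define internal void @C() {
  ret void
}

define internal void @B() {
  call void @C()           ; profitable, but deferred
  ret void
}

define void @A() {
  call void @B()           ; inlined first, leaving a new call A -> C
  ret void
}
```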

Because deferred inlining violates the bottom-up invariant of the inliner,
it can result in exponential inlining. The exponential-deferred-inlining.ll
test case illustrates this on a simple example (see
https://gist.github.com/nikic/1262b5f7d27278e1b34a190ae10947f5 for a
much more catastrophic case with about 5000x size blowup). If the call
chain A -> B -> C is not a chain but a tree of calls, then we end up
deferring inlining across the tree and end up flattening everything into
the root node.

This patch proposes to address this by disabling deferred inlining
entirely (currently still behind an option). Beyond the issue of
exponential inlining, I don't think that the whole concept makes sense,
at least as long as deferred inlining still ends up inlining both call
edges.

I believe the motivation for having deferred inlining in the first place
is that you might have a small wrapper function with local linkage that
could be eliminated if inlined. This would automatically happen if there
was a single caller, due to the large "last call to local" bonus. However,
this bonus is not extended if there are multiple callers, even if we
would eventually end up inlining into all of them (if the bonus were
extended).

Now, unlike the normal inlining cost model, the deferred inlining cost
model does look at all callers, and will extend the "last call to local"
bonus if it determines that we could inline all of them as long as we
defer the current inlining decision. This makes very little sense.
The "last call to local" bonus doesn't really cost model anything.
It's basically an "infinite" bonus that ensures we always inline the
last call to a local. The fact that it's not literally infinite just
prevents inlining of huge functions, which can easily result in
scalability issues. I very much doubt that it was an intentional
cost-modelling choice to say that getting rid of a small local function
is worth adding 15000 instructions elsewhere, yet this is exactly how
this value is getting used here.

The main alternative I see to complete removal is to change deferred
inlining to an actual either/or decision. That is, to mark deferred
calls as noinline so we're actually trading off one inlining decision
against another, and not just adding a side-channel to the cost model
to do both.

Apart from fixing the catastrophic inlining case, the effect on rustc
is a modest compile-time improvement on average (up to 8% for a
parsing-type crate, where tree-like calls are expected) and pretty
neutral where run-time performance is concerned (mix of small wins
and losses, usually in the sub-1% category).

Differential Revision: https://reviews.llvm.org/D115497
2021-12-16 09:59:50 +01:00
Matt Arsenault
169ebf03ab Inliner: Correctly merge amdgpu-unsafe-fp-atomics attribute
It seems we don't already have any target-specific attributes
handled in the inliner. Include a separate tablegen file to define the
rules, similar to the target-specific intrinsics.
2021-12-15 18:37:18 -05:00
Matt Arsenault
946e805665 AMDGPU: Add baseline test for unsafe fp atomics attribute inlining 2021-12-15 18:37:18 -05:00
Nikita Popov
ccafd2d0b5 [Inline] Add test for exponential deferred inlining (NFC)
This shows a case where deferred inlining produces an exponential
result. The test case demonstrates the basic exponential behavior,
but is nowhere close to the worst case. For example, the file at
https://gist.github.com/nikic/1262b5f7d27278e1b34a190ae10947f5
currently gets expanded from <100 lines to nearly 500000 lines of
IR by opt -inline.
2021-12-10 08:59:59 +01:00
Zarko Todorovski
7f7dac7126 [NFC][llvm] Inclusive language: reword uses of sanity test and check
Part of continuing work to use more inclusive language. Reworded uses
of sanity check and sanity test in llvm/test/
2021-11-25 07:21:42 -05:00
Arthur Eubanks
19867de9e7 [NewPM] Only invalidate modified functions' analyses in CGSCC passes + turn on eagerly invalidate analyses
Previously, any change in any function in an SCC would cause all
analyses for all functions in the SCC to be invalidated. With this
change, we now manually invalidate analyses for functions we modify,
then let the pass manager know that all function analyses should be
preserved since we've already handled function analysis invalidation.

So far this only touches the inliner, argpromotion, function-attrs, and
updateCGAndAnalysisManager(), since they are the most used.

This is part of an effort to investigate running the function
simplification pipeline less on functions we visit multiple times in the
inliner pipeline.

However, this causes major memory regressions especially on larger IR.
To counteract this, turn on the option to eagerly invalidate function
analyses. This invalidates analyses on functions immediately after
they're processed in a module or scc to function adaptor for specific
parts of the pipeline.

Within an SCC, if a pass only modifies one function, other functions in
the SCC do not have their analyses invalidated, so in later function
passes in the SCC pass manager the analyses may still be cached. It is
only after the function passes that the eager invalidation takes effect.
For the default pipelines this makes sense because the inliner pipeline
runs the function simplification pipeline after all other SCC passes
(except CoroSplit which doesn't request any analyses).

Overall this has mostly positive effects on compile time and positive effects on memory usage.
https://llvm-compile-time-tracker.com/compare.php?from=7f627596977624730f9298a1b69883af1555765e&to=39e824e0d3ca8a517502f13032dfa67304841c90&stat=instructions
https://llvm-compile-time-tracker.com/compare.php?from=7f627596977624730f9298a1b69883af1555765e&to=39e824e0d3ca8a517502f13032dfa67304841c90&stat=max-rss

D113196 shows that we slightly regressed compile times in exchange for
some memory improvements when turning on eager invalidation.  D100917
shows that we slightly improved compile times in exchange for major
memory regressions in some cases when invalidating less in SCC passes.
Turning these on at the same time keeps the memory improvements while
keeping compile times neutral/slightly positive.

Reviewed By: asbirlea, nikic

Differential Revision: https://reviews.llvm.org/D113304
2021-11-15 14:44:53 -08:00
Liqiang Tao
6cad45d5c6 [llvm][Inline] Add a module level inliner
Add a module-level inliner, which is a minimum viable product at this point.
Also add some tests for it.

RFC: https://lists.llvm.org/pipermail/llvm-dev/2021-August/152297.html

Reviewed By: kazu

Differential Revision: https://reviews.llvm.org/D106448
2021-11-09 11:03:29 +08:00
modimo
5caad9b5d3 [InlineAdvisor] Add fallback/format switches and negative remark processing to Replay Inliner
Adds the following switches:

1. --sample-profile-inline-replay-fallback/--cgscc-inline-replay-fallback: controls what the replay advisor does for inline sites that are not present in the replay. Options are:

 1. Original: defers to the original advisor
 2. AlwaysInline: inline all sites not present in the replay
 3. NeverInline: do not inline any sites not present in the replay

2. --sample-profile-inline-replay-format/--cgscc-inline-replay-format: controls what format should be generated to match against the replay remarks. Options are:

  1. Line
  2. LineColumn
  3. LineDiscriminator
  4. LineColumnDiscriminator

Adds support for negative inlining decisions. These are denoted by "will not be inlined into" as compared to the positive "inlined into" in the remarks.

All of these together with the previous `--sample-profile-inline-replay-scope/--cgscc-inline-replay-scope` allow tweaking in how to apply replay. In my testing, I'm using:
1. --sample-profile-inline-replay-scope/--cgscc-inline-replay-scope = Function to only replay on a function
2. --sample-profile-inline-replay-fallback/--cgscc-inline-replay-fallback = NeverInline since I'm feeding in only positive remarks to the replay system
3. --sample-profile-inline-replay-format/--cgscc-inline-replay-format = Line since I'm generating the remarks from DWARF information from GCC which can conflict quite heavily in column number compared to Clang

An alternative configuration could be to do Function, AlwaysInline, Line fallback with negative remarks which closer matches the final call-sites. Note that this can lead to unbounded inlining if a negative remark doesn't match/exist for one reason or another.

Updated various tests to cover the new switches and negative remarks

Testing:
ninja check-all

Reviewed By: wenlei, mtrofin

Differential Revision: https://reviews.llvm.org/D112040
2021-10-29 12:32:03 -07:00
Steven Wan
57cb84f5a2 Point replay file to non-existent dummy
Operating systems such as AIX allow open and read on directories, so passing in a directory as the replay file triggers `Invalid remark format` instead of `Could not open remarks file: Is a directory`. This patch substitutes the directory with a non-existent filename. The current FileCheck checks should still work as-is.

Reviewed By: modimo, Whitney

Differential Revision: https://reviews.llvm.org/D112745
2021-10-29 11:58:40 -04:00
Arthur Eubanks
544a21566d [test] Make test added in D112473 check the IR
The test was intended to also check that the IR is empty.
2021-10-25 14:10:58 -07:00
Arthur Eubanks
4a9db7367d [AlwaysInliner] Invalidate analyses when we delete functions
Fixes PR52292.

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D112473
2021-10-25 13:36:32 -07:00
Nikita Popov
1848525842 [CodeMetrics] Don't require speculatability for ephemeral values
As discussed in D112016, our current requirement of speculatability
for ephemeral values is overly strict: what we really care about is that
the instruction will be DCEd once the assume is dropped. For that
it is sufficient that the instruction is side-effect free and not
a terminator.

In particular, this allows non-dereferenceable loads to be ephemeral
values.
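
A small sketch of the newly allowed case (invented names): the load is not speculatable (the pointer carries no dereferenceable guarantee), but it only feeds the assume, so it dies with it.

```
declare void @llvm.assume(i1)

define void @f(ptr %p) {
  ; %v and %c have no other users; once the assume is dropped they are DCEd,
  ; so they can be treated as ephemeral even though the load isn't speculatable.
  %v = load i32, ptr %p
  %c = icmp sgt i32 %v, 0
  call void @llvm.assume(i1 %c)
  ret void
}
```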

Differential Revision: https://reviews.llvm.org/D112179
2021-10-21 20:30:01 +02:00
Nikita Popov
8e4ae603d6 [Tests] Add tests for non-speculatable ephemeral values
The loads in these examples are currently not considered ephemeral
because they are not speculatable.
2021-10-20 23:33:36 +02:00
Arthur Eubanks
15fefcb9eb [opt] Directly translate -O# to -passes='default<O#>'
Right now when we see -O# we add the corresponding 'default<O#>' into
the list of passes to run when translating legacy -pass-name. This has
the side effect of not using the default AA pipeline.

Instead, treat -O# as -passes='default<O#>', but don't allow any other
-passes or -pass-name. I think we can keep `opt -O#` as shorthand for
`opt -passes='default<O#>'` but disallow anything more than just -O#.

Tests need to be updated to not use `opt -O# -pass-name`.

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D112036
2021-10-18 16:48:10 -07:00