When --hot-functions-at-end is used in combination with --use-old-text,
allocate code at the highest possible addresses within old .text.
This feature is mostly useful for HHVM, where it is beneficial to have
hot static code placed as close as possible to jitted code.
The MCSymbolRefExpr::create overload with the specifier parameter is
discouraged and being phased out. Expressions with relocation specifiers
should use MCSpecifierExpr instead.
Reapply "[NFC][DebugInfo][DWARF] Create new low-level dwarf library (#…
(#145959)
This reapplies cbf781f0bd, with fixes for
the shared-library build and the unconventional sanitizer-runtime build.
Original Description:
This is the culmination of a series of changes described in [1].
Although somewhat large by line count, it is almost entirely mechanical,
creating a new library in DebugInfo/DWARF/LowLevel. This new library has
very minimal dependencies, allowing it to be used from more places than
the normal DebugInfo/DWARF library--in particular from MC.
1.
https://discourse.llvm.org/t/rfc-debuginfo-dwarf-refactor-into-to-lower-and-higher-level-libraries/86665/2
When all section contents are updated in-place, we can skip creation of
new segment(s), save disk space, and free up low memory addresses.
Currently, this feature only works with --use-gnu-stack.
Refactor the code for NewTextSegmentAddress to correctly point at the
true start of the segment when the PHDR table is placed at the beginning.
We used to offset NewTextSegmentAddress by the PHDR table size plus
cache-line alignment.
NFC for proper binaries. Some YAML binaries from our tests will diverge
due to bad segment address/offset alignment.
This is the culmination of a series of changes described in [1].
Although somewhat large by line count, it is almost entirely mechanical,
creating a new library in DebugInfo/DWARF/LowLevel. This new library has
very minimal dependencies, allowing it to be used from more places than
the normal DebugInfo/DWARF library--in particular from MC.
I am happy to put it in another location, or to structure it differently,
if that makes sense. Some have suggested BinaryFormat, but it is not a
great fit there; still, if that makes more sense to the reviewers, I can
do that.
Another possibility would be to use pass-through headers to allow
clients who don't care to depend only on DebugInfo/DWARF. This would be
a much less invasive change, and perhaps easier for clients, but it
would also hide the split behind the headers.
Either way, I'm open.
1.
https://discourse.llvm.org/t/rfc-debuginfo-dwarf-refactor-into-to-lower-and-higher-level-libraries/86665/2
Implement the detection of tail calls performed with an untrusted link
register, which violates the assumption made on entry to every function.
Unlike other pauth gadgets, detection of this one involves some amount
of guessing which branch instructions should be checked as tail calls.
Address the issue that stems from how the density is computed.
Binary *function* density is the ratio of a function's total dynamically
executed bytes to its static size in bytes. It measures the amount of
dynamic profile information relative to the function's static size.
Binary *profile* density is the minimum *function* density among
*well-profiled* functions, taken as functions covering p99 samples, or,
in other words, excluding functions in the tail 1% of samples. p99 is an
arbitrary cutoff. Profile density is the *minimum amount of profile
information per function* needed to optimize the program well. The
threshold for profile density is set empirically.
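To make the two definitions concrete, here is a minimal sketch of the
computation; the struct and function names are made up for illustration
and do not correspond to BOLT's code:
```cpp
#include <algorithm>
#include <cstdint>
#include <limits>
#include <vector>

// Hypothetical per-function summary; not BOLT's actual data structures.
struct FuncStats {
  uint64_t ExecutedBytes; // dynamic bytes, from LBR fall-throughs
  uint64_t StaticSize;    // function size in bytes
  uint64_t Samples;       // samples attributed to the function
};

// Profile density: the minimum function density among functions covering
// the top 99% of samples (functions in the tail 1% are excluded).
double computeProfileDensity(std::vector<FuncStats> Funcs) {
  uint64_t TotalSamples = 0;
  for (const FuncStats &F : Funcs)
    TotalSamples += F.Samples;

  // Hottest functions first, then walk until p99 sample coverage.
  std::sort(Funcs.begin(), Funcs.end(),
            [](const FuncStats &A, const FuncStats &B) {
              return A.Samples > B.Samples;
            });

  double MinDensity = std::numeric_limits<double>::max();
  uint64_t Covered = 0;
  for (const FuncStats &F : Funcs) {
    if (Covered >= 0.99 * TotalSamples)
      break;
    Covered += F.Samples;
    if (F.StaticSize == 0)
      continue;
    // Function density: dynamically executed bytes over static size.
    MinDensity = std::min(MinDensity, double(F.ExecutedBytes) / F.StaticSize);
  }
  return MinDensity == std::numeric_limits<double>::max() ? 0.0 : MinDensity;
}
```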
The dynamically executed bytes are taken directly from LBR fall-throughs.
For LBRs recorded in trampoline functions, such as
```
000000001a941ec0 <Sleef_expf8_u10>:
1a941ec0: jmpq *0x37b911fa(%rip) # <pnt_expf8_u10>
1a941ec6: nopw %cs:(%rax,%rax)
```
the fall-through has zero length:
```
# Branch Target NextBranch Count
T 1b171cf6 1a941ec0 1a941ec0 568562
```
But it's not correct to say this function has zero executed bytes; it's
just that the size of the next branch is not included in the fall-through.
If such functions have non-trivial sample count, they will fall in p99
samples, and cause the profile density to be zero.
To solve this, we can either:
1. Include the fall-through's ending jump size in executed bytes:
   this is logically sound but technically challenging: the size needs to
   come from disassembly (expensive), and the threshold needs to be
   reevaluated with the updated definition of binary function density.
2. Exclude pass-through functions from the density computation:
   this follows the intent of profile density, which is to set the amount
   of profile information needed to optimize the function well.
   Single-instruction pass-through functions don't need samples many times
   their size to be optimized well.
Go with option 2 as a reasonable compromise.
Test Plan: added bolt/test/X86/zero-density.s
After a label in a function without CFG information, use a reasonably
pessimistic estimation of register state (assume that any register that
can be clobbered in this function was actually clobbered) instead of the
most pessimistic "all registers are unsafe". This is the same estimation
as used by the dataflow variant of the analysis when the preceding
instruction is not known for sure.
Without this, leaf functions without CFG information are likely to have
false positive reports about non-protected return instructions, as
1) LR is unlikely to be signed and authenticated in a leaf function and
2) LR is likely to be used by a return instruction near the end of the
function and
3) the register state is likely to be reset at least once during the
linear scan through the function.
Instead of refusing to analyze an instruction completely when it is
unreachable according to the CFG reconstructed by BOLT, use a pessimistic
assumption of register state when possible. Nevertheless, unreachable
basic blocks found in optimized code likely mean imprecise CFG
reconstruction, so report a warning once per function.
Rename these relocation specifier constants, aligning with the naming
convention used by other targets (`S_` instead of `VK_`).
* ELF/COFF: AArch64MCExpr::VK_ => AArch64::S_ (VK_ABS/VK_PAGE_ABS are
also used by Mach-O as a hack)
* Mach-O: AArch64MCExpr::M_ => AArch64::S_MACHO_
* shared: AArch64MCExpr::None => AArch64::S_None
Apologies for the churn following the recent rename in #132595. This
change ensures consistency after introducing MCSpecifierExpr to replace
MCTargetSpecifier subclasses.
Pull Request: https://github.com/llvm/llvm-project/pull/144633
Replace AArch64MCExpr, which encodes expressions with relocation
specifiers, with the new generic MCSpecifierExpr interface, aligning
with other targets by phasing out target-specific XXXMCExpr classes.
Temporarily convert AArch64MCExpr to a namespace to avoid renaming
`AArch64MCExpr::VK_` constants in this PR. A follow-up patch will rename
these to `AArch64::S_` to match the convention used by other targets.
Move helper functions to AArch64MCAsmInfo.h, with the goal of eventually
removing AArch64MCExpr.h.
Pull Request: https://github.com/llvm/llvm-project/pull/144632
Remove SymbolToFileName, the mapping from every local symbol to its
containing FILE symbol name, and reuse FileSymbols to disambiguate
local symbols instead.
Also remove the check for the `ld-temp.o` file symbol, which was added
to prevent the LTO build mode from affecting the disambiguated names.
This may cause incompatibility when using a profile collected on a binary
built in a different mode than the input binary.
Addresses #90661.
Speeds up file object discovery by 5-10% for large binaries:
- binary with ~1.2M symbols: 12.6422s -> 12.0297s
- binary with ~4.5M symbols: 48.8851s -> 43.7315s
This change speeds up fragment matching for large BOLTed binaries where
all fragments of global parent functions are put under the
`bolt-pseudo.o` file symbol:
- before: iterate over symbols under `bolt-pseudo.o` only to fail to
  find a parent,
- after: bail out immediately and use a global parent by name.
Test Plan: NFC, updated register-fragments-bolt-symbols.s
`BoltAddressTranslation::getFallthroughsInTrace` iterates over address
translation map entries and therefore has direct access to both original
and translated offsets. Return the translated offsets in fall-throughs
list to avoid duplicate address translation inside `doTrace`.
Test Plan: NFC
Intel's Architectural LBR supports capturing branch type information
as part of the LBR stack (SDM Vol 3B, part 2, October 2024):
```
20.1.3.2 Branch Types
The IA32_LBR_x_INFO.BR_TYPE and IA32_LER_INFO.BR_TYPE fields encode
the branch types as shown in Table 20-3.
Table 20-3. IA32_LBR_x_INFO and IA32_LER_INFO Branch Type Encodings
Encoding | Branch Type
0000B | COND
0001B | NEAR_IND_JMP
0010B | NEAR_REL_JMP
0011B | NEAR_IND_CALL
0100B | NEAR_REL_CALL
0101B | NEAR_RET
011xB | Reserved
1xxxB | OTHER_BRANCH
For a list of branch operations that fall into the categories above,
see Table 20-2.
Table 20-2. Branch Type Filtering Details
Branch Type | Operations Recorded
COND | Jcc, J*CXZ, and LOOP*
NEAR_IND_JMP | JMP r/m*
NEAR_REL_JMP | JMP rel*
NEAR_IND_CALL | CALL r/m*
NEAR_REL_CALL | CALL rel* (excluding CALLs to the next sequential IP)
NEAR_RET | RET (0C3H)
OTHER_BRANCH | JMP/CALL ptr*, JMP/CALL m*, RET (0C8H), SYS*,
interrupts, exceptions (other than debug exceptions), IRET, INT3,
INTn, INTO, TSX Abort, EENTER, ERESUME, EEXIT, AEX, INIT, SIPI, RSM
```
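As a minimal sketch, the 4-bit BR_TYPE field from Table 20-3 above could
be decoded as follows; the enum and helper names are illustrative, not an
existing API:
```cpp
#include <cstdint>

// Branch type encodings from SDM Table 20-3 (IA32_LBR_x_INFO.BR_TYPE).
enum class LbrBranchType : uint8_t {
  Cond = 0b0000,
  NearIndJmp = 0b0001,
  NearRelJmp = 0b0010,
  NearIndCall = 0b0011,
  NearRelCall = 0b0100,
  NearRet = 0b0101,
  Reserved = 0b0110,    // 011xB
  OtherBranch = 0b1000, // 1xxxB
};

LbrBranchType decodeBrType(uint8_t Encoding) {
  if (Encoding & 0b1000)
    return LbrBranchType::OtherBranch; // 1xxxB
  if ((Encoding & 0b1110) == 0b0110)
    return LbrBranchType::Reserved;    // 011xB
  return static_cast<LbrBranchType>(Encoding);
}
```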
The Linux kernel can preserve the branch type when `save_type` is
enabled, even if the CPU does not support Architectural LBR:
f09079bd04/tools/perf/Documentation/perf-record.txt (L457-L460)
> - save_type: save branch type during sampling in case binary is not
available later.
On platforms with Intel Arch LBR support (12th-Gen+ client or 4th-Gen
Xeon+ server), saving the branch type is unconditionally enabled when
taken branch stack sampling is enabled.
Kernel-reported branch type values:
8c6bc74c7f/include/uapi/linux/perf_event.h (L251-L269)
This information is needed to disambiguate external returns (from
DSO/JIT) to an entry point or a landing pad, when BOLT can't
disassemble the branch source.
This patch adds new pre-aggregated types:
- return trace (R),
- external return fall-through (r).
For such types, the checks for fall-through start (not an entry or
a landing pad) are relaxed.
Depends on #143295.
Test Plan: updated callcont-fallthru.s
Since Linux 6.14, perf has been able to report SPE branch events
using the `brstack` format, which matches the layout of LBR/BRBE.
This patch reuses the existing LBR parsing logic to support SPE.
Example SPE brstack format:
```bash
perf script -i perf.data -F pid,brstack --itrace=bl
```
```
PID FROM / TO / PREDICTED
16984 0x72e342e5f4/0x72e36192d0/M/-/-/11/RET/-
16984 0x72e7b8b3b4/0x72e7b8b3b8/PN/-/-/11/COND/-
16984 0x72e7b92b48/0x72e7b92b4c/PN/-/-/8/COND/-
16984 0x72eacc6b7c/0x760cc94b00/P/-/-/9/RET/-
16984 0x72e3f210fc/0x72e3f21068/P/-/-/4//-
16984 0x72e39b8c5c/0x72e3627b24/P/-/-/4//-
16984 0x72e7b89d20/0x72e7b92bbc/P/-/-/4/RET/-
```
SPE brstack flags can be two characters long: `PN` or `MN`:
- `P` = predicted branch
- `M` = mispredicted branch
- `N` = optionally appears when the branch is NOT-TAKEN
- the `N` flag is relevant only to conditional branches
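A minimal sketch of splitting one brstack entry into its fields, assuming
the slash-separated layout shown above; the struct and function names are
made up for illustration and are not the actual perf or BOLT parsing code:
```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Illustrative container for one entry, e.g.
//   0x72e7b8b3b4/0x72e7b8b3b8/PN/-/-/11/COND/-
struct SpeBranchEntry {
  uint64_t From = 0;
  uint64_t To = 0;
  bool Mispredicted = false; // 'M' flag
  bool NotTaken = false;     // optional 'N' flag
  uint64_t Cycles = 0;
  std::string Type;          // e.g. "COND", "RET", or empty
};

SpeBranchEntry parseSpeEntry(const std::string &Line) {
  // Split the entry on '/'.
  std::vector<std::string> Fields;
  std::size_t Start = 0, Pos;
  while ((Pos = Line.find('/', Start)) != std::string::npos) {
    Fields.push_back(Line.substr(Start, Pos - Start));
    Start = Pos + 1;
  }
  Fields.push_back(Line.substr(Start));

  SpeBranchEntry E;
  E.From = std::stoull(Fields[0], nullptr, 16);
  E.To = std::stoull(Fields[1], nullptr, 16);
  E.Mispredicted = Fields[2].find('M') != std::string::npos;
  E.NotTaken = Fields[2].find('N') != std::string::npos;
  E.Cycles = std::stoull(Fields[5]);
  E.Type = Fields[6];
  return E;
}
```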
Example of usage with BOLT:
1. Capture SPE branch events:
```bash
perf record -e 'arm_spe_0/branch_filter=1/u' -- binary
```
2. Convert profile for BOLT:
```bash
perf2bolt -p perf.data -o perf.fdata --spe binary
```
3. Run BOLT Optimization:
```bash
llvm-bolt binary -o binary.bolted --data perf.fdata ...
```
A unit test verifies the parsing of the 'SPE brstack format'.
---------
Co-authored-by: Paschalis Mpeis <paschalis.mpeis@arm.com>
Some instruction-printing code used under LLVM_DEBUG does not handle CFI
instructions well. While CFI instructions seem to be harmless for the
correctness of the analysis results, they do not convey any useful
information to the analysis either, so skip them early.
Implement the detection of authentication instructions whose results can
be inspected by an attacker to know whether authentication succeeded.
As the properties of output registers of authentication instructions are
inspected, add a second set of analysis-related classes to iterate over
the instructions in reverse order.
Call continuation logic relies on assumptions about fall-through origin:
- the branch is external to the function,
- fall-through start is at the beginning of the block,
- the block is not an entry point or a landing pad.
Leverage trace information to explicitly check whether the origin is a
return instruction, and defer to checks above only in case of
DSO-external branch source.
This covers both regular and BAT cases, addressing call continuation
fall-through undercounting in the latter mode, which improves BAT
profile quality metrics. For example, for one large binary:
- CFG discontinuity 21.83% -> 0.00%,
- CFG flow imbalance 10.77%/100.00% -> 3.40%/13.82% (weighted/worst)
- CG flow imbalance 8.49% -> 8.49%.
Depends on #143289.
Test Plan: updated callcont-fallthru.s
Consistently apply traces as defined in #127125 for branch profile
aggregation. This combines branches and fall-through records into one.
With large input binaries/profiles, the speed up in aggregation time
(`-time-aggr`, wall time):
- perf.data, pre-BOLT input: 154.5528s -> 144.0767s
- pre-aggregated data, pre-BOLT input: 15.1026s -> 9.0711s
- pre-aggregated data, BOLTed input: 15.4871s -> 10.0077s
Test Plan: NFC
The CMake flag LLVM_APPEND_VC_REV can be passed when building BOLT to
prevent including a VC revision. This patch enables this functionality.
Usage: `-DLLVM_APPEND_VC_REV=OFF` when running CMake.
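For example, an illustrative configuration (the other options shown are
just a typical BOLT build, not requirements):
```bash
cmake -G Ninja ../llvm \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_ENABLE_PROJECTS=bolt \
  -DLLVM_APPEND_VC_REV=OFF
ninja llvm-bolt
```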
* Move getPCRelHiFixup closer to the only caller RISCVAsmBackend::evaluateTargetFixup.
* Declare getSpecifierForName in RISCVMCAsmInfo, aligning with other
  targets that have migrated to the new relocation specifier representation.
Introduce the `parse-mem-profile` option to limit the overhead of
processing tracing data (Intel PT or Arm ETM). By default, it is enabled
for perf data (existing behavior), unless `itrace` is passed to parse
tracing data, where it is extremely expensive; in that case, the flag
needs to be set explicitly if needed.
Record the number of function invocations from external code - code
outside the binary, which may include JIT code and DSOs. Accounting for
external entry counts improves the fidelity of call graph flow
conservation analysis.
Test Plan: updated shrinkwrapping.test
Parsed branches and fall-throughs are validated in `doBranch` and
`doTrace` respectively. Simplify parseLBRSample by omitting the
validation. This also speeds up perf data processing as checks are only
done once for aggregated branches/fall-throughs and not individual LBR
entries.
Since invalid/external addresses are no longer sanitized during parsing,
sanitize them in `doBranch`.
Test Plan: updated X86/pre-aggregated-perf.test
Aggregated branch data has two containers: `Data` for local branches,
and `EntryData` for external branches. Fix the omission and sort
`EntryData` to ensure stable output fdata profiles.
Test Plan: updated pre-aggregated-perf.test
#140196 introduced UB by using an uninitialized misprediction count for
pre-aggregated traces. Fix by zero-initializing both counters.
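A minimal illustration of the fix pattern; the struct and field names are
hypothetical, not the actual code:
```cpp
#include <cstdint>

// Value-initialize both counters so that a pre-aggregated trace without
// misprediction information reads 0 instead of an indeterminate value.
struct TraceCounts {
  uint64_t TakenCount = 0;
  uint64_t MispredCount = 0;
};
```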
Test Plan: updated entry-point-fallthru.s
Do not recommend the strict mode to the user when ADR relaxation fails
on a non-simple function, i.e. a function with unknown CFG.
We cannot rely on relocations to reconstruct compiler-generated jump
tables for AArch64, hence strict mode does not work as intended.
When we call setIgnored() on functions that already have CFG built,
these functions are not going to get emitted, and we risk external
function references not being updated.
To mitigate the potential issues, run scanExternalRefs() on such
functions to create patches/relocations.
Since scanExternalRefs() relies on function relocations, we have to
preserve relocations until the function is emitted. As a result, the
memory overhead without debug info update could reach up to 2%.
Define a pre-aggregated basic sample format:
```
E <event name>
S <location> <count>
```
The `-nl` flag is required to use parsed basic samples.
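For illustration, a basic-sample file in this format might look as
follows (the event name and addresses are made up; `<location>` is shown
here as a raw hexadecimal address):
```
E cycles
S 0x401000 150
S 0x401080 35
```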
Test Plan: update pre-aggregated-perf.test
Add a capability to produce multiple heatmaps with given bucket sizes.
The default heatmap block size (64B) could be too fine-grained for
large binaries. Extend the option `block-size` to accept a list of
bucket sizes for additional heatmaps with coarser granularity. The
heatmap is simply rescaled, so the provided sizes should be multiples of
each other. Human-readable suffixes can be used, e.g. 4K, 16kb, 1MiB.
New defaults: 64B (base bucket size), 4KB (default page size),
256KB (for large binaries).
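A hypothetical invocation requesting the additional coarser heatmaps,
assuming the sizes are passed as a comma-separated list (the exact
command line is illustrative):
```bash
llvm-bolt-heatmap -p perf.data -block-size=64,4K,256K ./binary
```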
Test Plan: updated heatmap-preagg.test
The linker may omit data markers for long absolute veneers, causing BOLT
to treat data as code. Detect such veneers and introduce data markers
artificially before BOLT's disassembler kicks in.
Clarify the semantics of `getAuthenticatedReg` and remove a redundant
`isAuthenticationOfReg` method, as combined auth+something instructions
(such as `retaa` on AArch64) should be handled carefully, especially
when searching for authentication oracles: usually, such instructions
cannot be authentication oracles, and only some of them actually write an
authenticated pointer to a register (such as "ldraa x0, [x1]!").
Use `std::optional<MCPhysReg>` as the return type instead of plain
MCPhysReg with `getNoRegister()` as a "not applicable" indication.
Document a few existing methods, add information about preconditions.
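A minimal sketch of the resulting interface shape, simplified and
hypothetical, only to illustrate the `std::optional` convention described
above:
```cpp
#include <optional>
#include "llvm/MC/MCInst.h"
#include "llvm/MC/MCRegister.h" // MCPhysReg

// Instead of returning getNoRegister() as a sentinel, the "not
// applicable" case is explicit. Returns the register written with an
// authenticated pointer, if any.
std::optional<llvm::MCPhysReg> getAuthenticatedReg(const llvm::MCInst &Inst);

// Caller side: the optional forces an explicit check.
void trackAuthenticated(const llvm::MCInst &Inst) {
  if (std::optional<llvm::MCPhysReg> Reg = getAuthenticatedReg(Inst)) {
    // ... mark *Reg as holding a safely authenticated pointer ...
    (void)*Reg;
  }
}
```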
We should never call fixBranches() on a function with invalid CFG. E.g.,
ValidateInternalCalls modifies CFG for its internal analysis purposes.
At the same time, it marks the function as non-simple with an assumption
that fixBranches() will never run on that function.
However, calculateEmittedSize() by default calls fixBranches() which can
lead to all sorts of issues, including assertions firing in
fixBranches().
The fix is to use the original size for non-simple functions in
calculateEmittedSize() since we are supposed to emit the function
unmodified. Additionally, add an assertion at the start of
fixBranches().
Remove `getAffectedRegisters` and `setOverwritingInstrs` methods from
the base `Report` class. Instead, rename the `Report` class to
`Diagnostic` and make it always represent the brief version of the
report, which is kept unchanged since initially found. Throughout its
life-cycle, an instance of `Diagnostic` is first wrapped into
`PartialReport<ReqT>` together with an optional request for extra
details. Then, on the second run of the analysis, it is re-wrapped into
`FinalReport` together with the requested detailed information.
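A rough sketch of the described life-cycle; the class contents are
hypothetical, and only the wrapping relationships follow the text above:
```cpp
#include <optional>
#include <string>

// Brief description of a finding; created once and kept unchanged.
struct Diagnostic {
  std::string Text;
};

// First analysis run: the Diagnostic plus an optional request for the
// extra details to collect (ReqT is whatever describes that request).
template <typename ReqT> struct PartialReport {
  Diagnostic Diag;
  std::optional<ReqT> DetailsRequest;
};

// Second analysis run: the same Diagnostic re-wrapped together with the
// details that were requested.
struct FinalReport {
  Diagnostic Diag;
  std::string Details;
};
```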
lookupTarget takes StringRef and internally creates an instance of
std::string with the StringRef as part of constructing Triple, so we
don't need to create a temporary instance of std::string on our own.