Embedded systems that do not use an ELF loader locate the
.ARM.exidx exception table via the linker-defined __exidx_start and
__exidx_end symbols rather than via the PT_ARM_EXIDX program header.
As a result, some linker scripts, such as the picolibc C library's,
do not place the .ARM.exidx sections at offset 0 in the
OutputSection. For example:
.except_unordered : {
  . = ALIGN(8);
  PROVIDE(__exidx_start = .);
  *(.ARM.exidx*)
  PROVIDE(__exidx_end = .);
} >flash AT>flash :text
This is within the specification of Arm exception tables, and is
handled correctly by ld.bfd.
This patch has two parts. The first updates the writing of the data
of the .ARM.exidx SyntheticSection to account for a non-zero
OutputSection offset. The second makes the PT_ARM_EXIDX program
header generation a special case so that it covers only the
SyntheticSection and not the parent OutputSection. While the second part
is not strictly necessary for programs that locate the exception tables
via the symbols, leaving the program header covering the whole
OutputSection may cause ELF utilities that locate the exception tables via
the PT_ARM_EXIDX program header to fail. This does not seem to be the
case for GNU and LLVM readelf, which appear to look for the
SHT_ARM_EXIDX section instead.
Differential Revision: https://reviews.llvm.org/D148033
This implements support for relaxing these relocations to use the GP
register to compute addresses of globals in the .sdata and .sbss
sections.
This feature is off by default and must be enabled by passing
--relax-gp to the linker.
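As a rough illustration of the reachability check involved (a hypothetical helper, not lld's actual code): a two-instruction address computation can collapse to a single GP-relative instruction only when the symbol lies within the signed 12-bit immediate range of the GP value.
```
#include <cstdint>
#include <optional>

// Returns the GP-relative displacement if symVA is reachable from gpVA with a
// signed 12-bit immediate (as used by addi and load/store instructions);
// otherwise returns nullopt and the original instruction sequence is kept.
std::optional<int64_t> gpRelativeOffset(uint64_t symVA, uint64_t gpVA) {
  int64_t d = static_cast<int64_t>(symVA - gpVA);
  if (d >= -2048 && d < 2048)
    return d;
  return std::nullopt;
}
```
This narrow window is also why placing .sdata and .sbss close together (see the .sdata/.sbss layout change below) makes the relaxation more profitable.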
The GP register might not always be the "global pointer"; it can
be used for other purposes. See the discussion at
https://github.com/riscv-non-isa/riscv-elf-psabi-doc/pull/371
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D143673
In issue 61250 https://github.com/llvm/llvm-project/issues/61250 there is
an example of a program that takes 17 passes to converge, which is 2 more
than the current limit of 15. Analysis of the program shows a particular
section that is made up of many roughly thunk-sized chunks of code, each
ending in a call to a symbol that needs a thunk. Due to the positioning of the
section, at each pass a subset of the calls goes out of range of their
original thunk, requiring a new thunk to be created, which in turn pushes more
thunks out of range. This process eventually stops after 17 passes.
This patch applies the simplest fix for the problem, which is to increase
the pass limit. I've chosen to double it, which should comfortably
account for future cases like this, while only taking a few more
seconds to reach the limit in case of non-convergence.
As discussed in the issue, additional work could be done
to limit thunk reuse; this would potentially increase the number of
thunks in the program but should speed up convergence.
Differential Revision: https://reviews.llvm.org/D145786
GNU ld's internal linker scripts for RISC-V place .sdata and .sbss close together.
This makes GP relaxation more profitable.
While here, when .sbss is present, set `__bss_start` to the start of
.sbss instead of .bss, to match GNU ld.
Note: GNU ld's internal linker scripts have symbol assignments and input
section descriptions which are not relevant for modern systems. We only
add things that make sense.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D145118
_TLS_MODULE_BASE_ is supposed to be defined by the final link. Defining it in a
relocatable link may render the final link value incorrect.
GNU ld for i386/x86-64 has the same issue: https://sourceware.org/bugzilla/show_bug.cgi?id=29820
Add LLVM_LIBRARY_VISIBILITY to remove unneeded GOT and unique_ptr
indirection. We can move other global variables into ctx without
indirection concerns. In the long term we may consider passing Ctx
as a parameter to various functions to eliminate global state as
much as possible, and then remove `Ctx::reset`.
* Change `Symbol::flags` to a `std::atomic<uint16_t>`
* Add `llvm::parallel::threadIndex` as a thread-local non-negative integer
* Add `relocsVec` to part.relaDyn and part.relrDyn so that relative relocations can be added without a mutex (sketched below)
* Arbitrarily change -z nocombreloc to move relative relocations to the end. Disable parallelism for deterministic output.
MIPS and PPC64 use global states for relocation scanning. Keep serial scanning.
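A minimal sketch of that mutex-free pattern, with illustrative types rather than lld's actual ones: each worker thread appends to a shard selected by a thread-local index, and the shards are merged after the parallel scan.
```
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative stand-ins; lld's real types differ.
struct DynamicReloc { uint32_t type; uint64_t offset; int64_t addend; };

thread_local size_t threadIndex = 0; // assigned 0..N-1 by the thread pool

struct RelocationShards {
  std::vector<std::vector<DynamicReloc>> shards;
  explicit RelocationShards(size_t numThreads) : shards(numThreads) {}

  // Called concurrently from worker threads; each thread only touches its
  // own shard, so no mutex is needed.
  void add(const DynamicReloc &r) { shards[threadIndex].push_back(r); }

  // Called once after the parallel scan to flatten the shards.
  std::vector<DynamicReloc> merge() {
    std::vector<DynamicReloc> out;
    for (auto &s : shards)
      out.insert(out.end(), s.begin(), s.end());
    return out;
  }
};
```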
Speed-up with mimalloc and --threads=8 on an Intel Skylake machine:
* clang (Release): 1.27x as fast
* clang (Debug): 1.06x as fast
* chrome (default): 1.05x as fast
* scylladb (default): 1.04x as fast
Speed-up with glibc malloc and --threads=16 on a ThunderX2 (AArch64):
* clang (Release): 1.31x as fast
* scylladb (default): 1.06x as fast
Reviewed By: andrewng
Differential Revision: https://reviews.llvm.org/D133003
We currently process one OutputSection at a time, writing each OutputSection's
contained input sections in parallel. This strategy does not leverage
multi-threading well. Instead, parallelize writes of different OutputSections.
The default TaskSize for parallelFor often leads to inferior sharding, so we
prepare the tasks in the caller instead.
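A rough sketch of the resulting structure, with a placeholder section type; only the existence of llvm::parallel::TaskGroup and its execute member is taken from this change, and the exact signatures here are assumptions.
```
#include "llvm/Support/Parallel.h"

#include <cstdint>
#include <vector>

// Placeholder for lld's OutputSection; only what the sketch needs.
struct Section {
  uint64_t fileOffset;
  void writeTo(uint8_t *buf) { /* write this section's input sections */ }
};

void writeSections(const std::vector<Section *> &sections, uint8_t *buf) {
  llvm::parallel::TaskGroup tg;
  for (Section *sec : sections)
    // One task per output section: the caller chooses the sharding instead of
    // relying on parallelFor's default task size.
    tg.execute([=] { sec->writeTo(buf + sec->fileOffset); });
  // The TaskGroup waits for all outstanding tasks when it goes out of scope.
}
```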
* Move llvm::parallel::detail::TaskGroup to llvm::parallel::TaskGroup
* Add llvm::parallel::TaskGroup::execute.
* Change writeSections to declare TaskGroup and pass it to writeTo.
Speed-up with --threads=8:
* clang -DCMAKE_BUILD_TYPE=Release: 1.11x as fast
* clang -DCMAKE_BUILD_TYPE=Debug: 1.10x as fast
* chrome -DCMAKE_BUILD_TYPE=Release: 1.04x as fast
* scylladb build/release: 1.09x as fast
On M1, many benchmarks are a small fraction of a percent faster. Mozilla showed the largest difference, with the patch being about 1.03x as fast.
Differential Revision: https://reviews.llvm.org/D131247
This change renames this method to match its original name and the name
used in the wasm linker.
Back in d8f8abbd4a the ELF SymbolTable
method `getSymbols()` was replaced with `forEachSymbol`.
Then in a2fc964417 `forEachSymbol` was
replaced with a `llvm::iterator_range`.
Then in e9262edf0d we came full circle
and the `llvm::iterator_range` was replaced with a `symbols()` accessor
that was identical to the original `getSymbols()`.
`getSymbols` also matches the name used elsewhere in the ELF linker as
well as in both the COFF and wasm backends (e.g. `InputFiles.h` and
`SyntheticSections.h`).
Differential Revision: https://reviews.llvm.org/D130787
This was recently introduced in GNU linkers, and it makes sense for
ld.lld to have the same support. This implementation omits checking whether
the input string is valid JSON, to reduce size bloat.
Differential Revision: https://reviews.llvm.org/D131439
inputSections temporarily contains EhInputSection objects mainly for
combineEhSections. Place EhInputSection objects into a new vector
ehInputSections instead of inputSections.
Use a format more similar to the one used for unresolved references from
regular object files. It's probably easier to read for people who are less
familiar with linker diagnostics.
Reviewed By: ikudrin
Differential Revision: https://reviews.llvm.org/D129790
When `--symbol-ordering-file` is specified, the linker today will always put
hot contributions in the middle of cold ones when targeting a RISC machine,
to minimize the chances that branch thunks need to be generated for hot code
calling into cold code. This is not necessary when the user specifies an
ordering of read-only data (vs. function) symbols, or when the output section
is small enough that no branch thunk would ever be required. The latter is
common for mobile apps. For example, among all the native ARM64 libraries in
the Facebook Instagram app for Android, 80% have a text section smaller than
64KB, and the largest text section seen is less than 8MB, well below the
distance that a BRANCH26 can reach.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D128382
Alternative to D125036. Implement R_RISCV_ALIGN relaxation so that we can handle
-mrelax object files (i.e. -mno-relax is no longer needed) and create a
framework for future relaxation.
`relaxAux` is placed in a union with InputSectionBase::jumpInstrMod, storing
auxiliary information for relaxation. In the first pass, `relaxAux` is allocated.
The main data structure is `relocDeltas`: when referencing `relocations[i]`, the
actual offset is `r_offset - (i ? relocDeltas[i-1] : 0)`.
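Concretely, a hedged sketch of that offset adjustment (names follow the description above, not necessarily lld's exact code):
```
#include <cstddef>
#include <cstdint>
#include <vector>

struct Relocation { uint64_t offset; }; // r_offset within the input section

// relocDeltas[i] holds the total number of bytes removed up to and including
// relocations[i]; the adjusted offset of relocations[i] therefore subtracts
// relocDeltas[i-1], the bytes removed strictly before it (0 for i == 0).
uint64_t adjustedOffset(const std::vector<Relocation> &relocations,
                        const std::vector<uint32_t> &relocDeltas, size_t i) {
  return relocations[i].offset - (i ? relocDeltas[i - 1] : 0);
}
```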
`relaxOnce` performs one relaxation pass. It computes `relocDeltas` for all text
sections. Then, st_value/st_size for symbols relative to this section are
adjusted based on `SymbolAnchor`. `bytesDropped` is set so that `assignAddresses`
knows that the size has changed.
Run `relaxOnce` in the `finalizeAddressDependentContent` loop to wait for
convergence of text sections and other address-dependent sections (e.g.
SHT_RELR). Note: extracting `relaxOnce` into a separate loop works for many cases
but has issues in some linker script edge cases.
After convergence, compute section contents: shrink the NOP sequence of each
R_RISCV_ALIGN as appropriate. Instead of deleting bytes, we run a sequence of
memcpy on the content delimited by relocation locations. For R_RISCV_ALIGN, let
the next memcpy skip the desired number of bytes. Section content computation is
parallelizable, but let's ensure the implementation is mature before
optimizing. Technically we could save a copy if we interleaved some code with
`OutputSection::writeTo`, but let's not pollute the generic code (we don't have
templated relocation resolving, so using conditions could impose overhead on
non-RISCV targets).
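A simplified sketch of that copy loop (illustrative only): content is copied piecewise between relocation locations, and at each R_RISCV_ALIGN the source cursor skips the bytes being dropped, so the NOP sequence shrinks in the output.
```
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// bytesToDrop is non-zero only for R_RISCV_ALIGN relocations.
struct Reloc { uint64_t offset; uint32_t bytesToDrop; };

// Copies the old content `src` into the (shorter) new content `dst`,
// dropping bytesToDrop bytes at each R_RISCV_ALIGN location.
void shrinkAlignNops(const uint8_t *src, size_t srcSize, uint8_t *dst,
                     const std::vector<Reloc> &relocs) {
  size_t srcOff = 0, dstOff = 0;
  for (const Reloc &r : relocs) {
    size_t n = r.offset - srcOff;                 // bytes up to this relocation
    std::memcpy(dst + dstOff, src + srcOff, n);
    srcOff += n + r.bytesToDrop;                  // skip the dropped NOP bytes
    dstOff += n;
  }
  std::memcpy(dst + dstOff, src + srcOff, srcSize - srcOff); // remaining tail
}
```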
Tested:
A Linux kernel built with `make ARCH=riscv CROSS_COMPILE=riscv64-linux-gnu- LLVM=1 defconfig all` using -mrelax is bootable.
A FreeBSD RISCV64 system built using -mrelax is bootable.
bash/curl/firefox/libevent/vim/tmux built using -mrelax work.
Differential Revision: https://reviews.llvm.org/D127581
In the majority of cases (e.g. orphan sections), an OutputSection has at most
one InputSectionDescription (isd). By changing the return type to
ArrayRef<InputSection *>, we can just reference isd->sections. For
OutputSections with more than one InputSectionDescription, we use a
caller-provided SmallVector to copy the elements as before.
Reviewed By: peter.smith
Differential Revision: https://reviews.llvm.org/D129111
Patch created by running:
rg -l parallelForEachN | xargs sed -i '' -c 's/parallelForEachN/parallelFor/'
No behavior change.
Differential Revision: https://reviews.llvm.org/D128140
We picked common-page-size to match GNU ld. Recently, the resolution of GNU ld
bug https://sourceware.org/bugzilla/show_bug.cgi?id=28824 (milestone: 2.39) switched
to max-page-size so that the last page can be protected by RELRO in case the
system page size is larger than common-page-size.
Thanks to our scheme of two RW PT_LOAD segments (D58892), switching to max-page-size
does not change the file size (while GNU ld's scheme may increase it).
Reviewed By: peter.smith
Differential Revision: https://reviews.llvm.org/D125410
We currently hard-code RELRO sections. When a custom section is between
DATA_SEGMENT_ALIGN and DATA_SEGMENT_RELRO_END, we may report a spurious
`error: section: ... is not contiguous with other relro sections`. GNU ld
makes such sections RELRO.
glibc recently switched to defaulting to --with-default-link=no. This configuration
places `__libc_atexit` and others between DATA_SEGMENT_ALIGN and
DATA_SEGMENT_RELRO_END. This patch allows such an ld.bfd --verbose
linker script to be fed into lld.
Reviewed By: peter.smith
Differential Revision: https://reviews.llvm.org/D124656
This reverts commit 764cd491b1, which I
incorrectly assumed to be NFC, partly because there was no test coverage for the
non-relocatable, non-emit-relocs case before 9d6d936243fe343abe89323a27c7241b395af541.
The interaction of {,-r,--emit-relocs} {,--discard-locals} {,--gc-sections} is
complex, but without -r/--emit-relocs, --gc-sections does need to discard .L
symbols just as --no-gc-sections does. The behavior matches GNU ld.
This ELF note is aarch64- and Android-specific. It specifies to the
dynamic loader that specific work should be scheduled to enable MTE
protection of the stack and heap regions.
Current synthesis of the ".note.android.memtag" ELF note is done in the
Android build system. We'd like to move that to the compiler. This patch
adds the --memtag-stack, --memtag-heap, and --memtag-mode={async, sync,
none} flags to the linker, which synthesises the note for us.
Future changes will add -fsanitize=memtag* flags to clang which will
pass these through to lld.
Depends on D119381.
Differential Revision: https://reviews.llvm.org/D119384
addSectionSymbols suppresses the STT_SECTION symbol if the first input section
is non-SHF_MERGE synthetic. This is incorrect when the first input section is synthetic
while a non-synthetic input section exists:
* `.bss : { *(COMMON) *(.bss) }`
(abc388ed3c regressed the case because
COMMON symbols precede .bss in the absence of a linker script)
* Place a synthetic section in another section: `.data : { *(.got) *(.data) }`
For `%t/a1` in the new test emit-relocs-synthetic.s, ld.lld produces incorrect
relocations with symbol index 0.
```
0000000000000000 <_start>:
0: 8b 05 33 00 00 00 movl 51(%rip), %eax # 0x39 <bss>
0000000000000002: R_X86_64_PC32 *ABS*+0xd
6: 8b 05 1c 00 00 00 movl 28(%rip), %eax # 0x28 <common>
0000000000000008: R_X86_64_PC32 common-0x4
c: 8b 05 06 00 00 00 movl 6(%rip), %eax # 0x18
000000000000000e: R_X86_64_GOTPCRELX *ABS*+0x4
```
Fix the issue by checking every input section.
Reviewed By: ikudrin
Differential Revision: https://reviews.llvm.org/D122463
--build-id was introduced as an "approximation of true uniqueness across all
binaries that might be used by overlapping sets of people". It does not require
the kind of attack resistance mentioned below. In practice, people just use
--build-id=md5 for a 16-byte build ID and --build-id=sha1 for a 20-byte build ID.
BLAKE3 has a 256-bit key length, which provides 128-bit security against
(second-)preimage, collision, and differentiability attacks. Its portable
implementation is fast, and it additionally provides Arm Neon/AVX2/AVX-512
implementations. Just implement --build-id={md5,sha1} with truncated BLAKE3.
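A minimal sketch of the truncation idea (the blake3Hash32 helper is a hypothetical stand-in for a 32-byte BLAKE3 primitive such as LLVM's BLAKE3 wrapper; this is not lld's actual code):
```
#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical: computes the full 32-byte BLAKE3 digest of `data`.
std::array<uint8_t, 32> blake3Hash32(const std::vector<uint8_t> &data);

// --build-id=md5 keeps 16 bytes; --build-id=sha1 keeps 20 bytes.
std::vector<uint8_t> computeBuildId(const std::vector<uint8_t> &data,
                                    size_t hashSize) {
  std::array<uint8_t, 32> full = blake3Hash32(data);
  assert(hashSize <= full.size());
  std::vector<uint8_t> id(hashSize);
  std::memcpy(id.data(), full.data(), hashSize); // truncate to hashSize bytes
  return id;
}
```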
Linking clang 14 RelWithDebInfo with --threads=8 on a Skylake CPU:
* 1.13x as fast with --build-id=md5
* 1.15x as fast with --build-id=sha1
--threads=4 on Apple M1:
* 1.25x as fast with --build-id=md5
* 1.17x as fast with --build-id=sha1
Reviewed By: ikudrin
Differential Revision: https://reviews.llvm.org/D121531
Add an OutputDesc class inheriting from SectionCommand. An OutputDesc wraps an
OutputSection. This change allows InputSection::getParent to be inlined.
Differential Revision: https://reviews.llvm.org/D120650