Close #57618: currently we align the end of PT_GNU_RELRO to a common-page-size boundary, but do not align the end of the associated PT_LOAD. This is benign when runtime_page_size >= common-page-size. However, when runtime_page_size < common-page-size, it is possible that `alignUp(end(PT_LOAD), page_size) < alignDown(end(PT_GNU_RELRO), page_size)`. In this case, rtld's mprotect call for PT_GNU_RELRO will apply to unmapped regions and lead to an error, e.g.

```
error while loading shared libraries: cannot apply additional memory protection after relocation: Cannot allocate memory
```

To fix the issue, add a padding section .relro_padding like mold, which is contained in both the PT_GNU_RELRO segment and the associated PT_LOAD segment. The section also prevents strip from corrupting PT_LOAD program headers.

.relro_padding has the largest `sortRank` among RELRO sections. Therefore, it is naturally placed at the end of the PT_GNU_RELRO segment in the absence of `PHDRS`/`SECTIONS` commands.

In the presence of `SECTIONS` commands, we place .relro_padding immediately before a symbol assignment using DATA_SEGMENT_RELRO_END (see also https://reviews.llvm.org/D124656), if present. DATA_SEGMENT_RELRO_END is changed to align to max-page-size instead of common-page-size.

Some edge cases worth mentioning:

* ppc64-toc-addis-nop.s: when PHDRS is present, do not append .relro_padding.
* avoid-empty-program-headers.s: when the only RELRO section is .tbss, it is not part of a PT_LOAD segment, therefore we do not append .relro_padding.

---

Close #65002: GNU ld from 2.39 onwards aligns the end of PT_GNU_RELRO to a max-page-size boundary (https://sourceware.org/PR28824) so that the last page is protected even if runtime_page_size > common-page-size. In my opinion, losing protection for the last page when the runtime page size is larger than common-page-size is not really an issue. Double mapping a page of up to max-page-size for the protection could cause undesired VM waste, and internally we had users complaining about a 2MiB max-page-size applying to shared objects.

Therefore, the end of .relro_padding is padded to a common-page-size boundary. Users who are really anxious can set common-page-size to match their runtime page size.

---

17 tests need updating as there are lots of change detectors.
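Below is a minimal standalone sketch (not lld code) that illustrates the inequality above with made-up numbers: a common-page-size of 64KiB, a runtime page size of 4KiB, and a hypothetical end address for the RELRO output sections. It shows why rtld's mprotect range can extend past the runtime mapping without the padding, and why padding the PT_LOAD to the same common-page-size boundary closes the gap.

```c
// Illustrative only: all addresses and page sizes below are hypothetical.
#include <stdint.h>
#include <stdio.h>

// Round v up/down to a power-of-two alignment a.
static uint64_t align_up(uint64_t v, uint64_t a)   { return (v + a - 1) & ~(a - 1); }
static uint64_t align_down(uint64_t v, uint64_t a) { return v & ~(a - 1); }

int main(void) {
  const uint64_t common_page  = 0x10000; // common-page-size assumed at link time
  const uint64_t runtime_page = 0x1000;  // smaller page size of the actual machine
  const uint64_t relro_sections_end = 0x20340; // hypothetical end of RELRO sections

  // Old behavior: end(PT_GNU_RELRO) is rounded up to common-page-size, but the
  // associated PT_LOAD still ends at the unrounded address.
  uint64_t relro_end = align_up(relro_sections_end, common_page); // 0x30000
  uint64_t load_end  = relro_sections_end;                        // 0x20340

  uint64_t mapped_end   = align_up(load_end, runtime_page);    // what mmap covers
  uint64_t mprotect_end = align_down(relro_end, runtime_page); // what rtld protects
  printf("mapped up to %#llx, mprotect up to %#llx -> %s\n",
         (unsigned long long)mapped_end, (unsigned long long)mprotect_end,
         mprotect_end > mapped_end ? "touches unmapped pages, mprotect fails" : "ok");

  // With .relro_padding, the PT_LOAD is padded to the same common-page-size
  // boundary, so the mprotect range is always backed by the mapping.
  load_end   = align_up(relro_sections_end, common_page);
  mapped_end = align_up(load_end, runtime_page);
  printf("with padding: mapped up to %#llx >= mprotect end %#llx\n",
         (unsigned long long)mapped_end, (unsigned long long)mprotect_end);
  return 0;
}
```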
# REQUIRES: x86
# RUN: llvm-mc -filetype=obj -triple=x86_64-unknown-linux %s -o %t
# RUN: ld.lld %t -o %tout
# RUN: llvm-objdump --section-headers %tout | FileCheck %s

.global _start
.text
_start:

.section .text.a,"ax"
.byte 0
.section .text.,"ax"
.byte 0
.section .rodata.a,"a"
.byte 0
.section .rodata,"a"
.byte 0
.section .data.a,"aw"
.byte 0
.section .data,"aw"
.byte 0
.section .bss.a,"aw",@nobits
.byte 0
.section .bss,"aw",@nobits
.byte 0
.section .foo.a,"aw"
.byte 0
.section .foo,"aw"
.byte 0
.section .data.rel.ro,"aw",%progbits
.byte 0
.section .data.rel.ro.a,"aw",%progbits
.byte 0
.section .data.rel.ro.local,"aw",%progbits
.byte 0
.section .data.rel.ro.local.a,"aw",%progbits
.byte 0
.section .tbss.foo,"aGwT",@nobits,foo,comdat
.byte 0
.section .gcc_except_table.foo,"aG",@progbits,foo,comdat
.byte 0
.section .tdata.foo,"aGwT",@progbits,foo,comdat
.byte 0

// CHECK: .rodata 00000002
// CHECK-NEXT: .gcc_except_table 00000001
// CHECK-NEXT: .text 00000002
// CHECK-NEXT: .tdata 00000001
// CHECK-NEXT: .tbss 00000001
// CHECK-NEXT: .data.rel.ro 00000004
// CHECK-NEXT: .relro_padding 00000df5
// CHECK-NEXT: .data 00000002
// CHECK-NEXT: .foo.a 00000001
// CHECK-NEXT: .foo 00000001
// CHECK-NEXT: .bss 00000002
// CHECK-NEXT: .comment 00000008
// CHECK-NEXT: .symtab 00000030
// CHECK-NEXT: .shstrtab 00000084
// CHECK-NEXT: .strtab 00000008