Since libclang_rt.profile is added later in the command line, its
definition of __llvm_profile_raw_version is not included if an earlier
object, e.g. a shared dependency, already provides one.
This creates an extra dependency edge: if libA.so depends on libB.so
and both are coverage-instrumented, libA.so binds to libB.so's definition
of __llvm_profile_raw_version. This leads to a runtime link failure if
the libB.so available at runtime does not provide this symbol (but does
provide the other needed symbols). Such a scenario can occur with
Android's mainline modules.
E.g.:
ld -o libB.so libclang_rt.profile-x86_64.a
ld -o libA.so -lB libclang_rt.profile-x86_64.a
libB.so has a global definition of __llvm_profile_raw_version. libA.so
uses libB.so's definition of __llvm_profile_raw_version. At runtime,
libB.so may not be coverage-instrumented (i.e. it may not export
__llvm_profile_raw_version), so runtime linking of libA.so will fail.
Marking this symbol as hidden forces each binary to use the definition
of __llvm_profile_raw_version from libclang_rt.profile. The visibility
is left unchanged on Apple platforms, where the symbol's presence is
checked by the TAPI tool.
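For illustration, a minimal sketch of the approach (the macro name and
the version value are illustrative, not the runtime's actual code):

  #include <cstdint>

  // Hidden everywhere except Apple platforms, where TAPI checks that
  // the symbol is exported.
  #if defined(__APPLE__)
  #define VERSION_VAR_VISIBILITY /* default visibility */
  #else
  #define VERSION_VAR_VISIBILITY __attribute__((visibility("hidden")))
  #endif

  extern "C" VERSION_VAR_VISIBILITY uint64_t __llvm_profile_raw_version = 8;

With hidden visibility the symbol is never exported from libA.so or
libB.so, so each image binds to its own copy from libclang_rt.profile.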
Reviewed By: MaskRay, phosek, davidxl
Differential Revision: https://reviews.llvm.org/D111759
The new test started failing on bots with:
CHECK failed: tsan_rtl.cpp:327 "((addr + size)) <= ((TraceMemEnd()))"
(0xf06200e03010, 0xf06200000000) (tid=4073872)
https://lab.llvm.org/buildbot#builders/179/builds/1761
This is a latent bug in the aarch64 virtual address space layout:
there is not enough address space to fit traces for all threads.
But since the trace space is going away with the new tsan runtime
(D112603), disable the test.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D113990
Use of gethostent provokes caching of some resources inside libc.
They are freed in __libc_thread_freeres very late in the thread's
lifetime, after our ThreadFinish. __libc_thread_freeres calls free,
which previously crashed in the malloc hooks.
Fix it by setting ignore_interceptors for finished threads,
which in turn prevents the malloc hooks from running.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D113989
Precisely specifying the unused parts of the bitfield is critical for
performance. If we don't specify them, the compiler will generate code
to load the old value and shuffle it so the unused bits can be merged
into the new value. If we specify the unused part and store 0 in there,
all that unnecessary code goes away (the store of the 0 constant is
combined with the other constant parts).
I don't see a good way to ensure that the fields cover all 64 bits of
the u64. So at least introduce named kUnusedBits constants and check
that the bits sum up to 64.
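A sketch of the pattern (field names and widths are illustrative):

  #include <cstdint>

  struct Event {
    static constexpr uint64_t kUnusedBits = 64 - (1 + 16 + 40);
    uint64_t is_access : 1;
    uint64_t type : 16;
    uint64_t addr : 40;
    uint64_t unused : kUnusedBits;  // padding named explicitly
  };
  static_assert(1 + 16 + 40 + Event::kUnusedBits == 64,
                "fields must cover all 64 bits");

  void StoreEvent(Event *p) {
    Event ev;
    ev.is_access = 1;
    ev.type = 3;
    ev.addr = 0x1000;
    ev.unused = 0;  // explicit 0 avoids a read-modify-write of *p
    *p = ev;
  }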
Depends on D113978.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D113979
In the old runtime we used a different number of trace parts
for C++ and Go to reduce trace memory consumption for Go.
But now it's easier and better to use smaller parts, because
we already use the minimal possible number of parts for C++ (3).
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D113978
man pthread_equal:
    The pthread_equal() function is necessary because thread IDs
    should be considered opaque: there is no portable way for
    applications to directly compare two pthread_t values.
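A self-contained illustration: since pthread_t is opaque, the
comparison goes through pthread_equal() rather than ==.

  #include <pthread.h>
  #include <cstdio>

  static pthread_t main_thread;

  static void *worker(void *) {
    // Correct: compare with pthread_equal, never with ==.
    int is_main = pthread_equal(pthread_self(), main_thread);
    printf("worker is main thread: %d\n", is_main != 0);
    return nullptr;
  }

  int main() {
    main_thread = pthread_self();
    pthread_t t;
    pthread_create(&t, nullptr, worker, nullptr);
    pthread_join(t, nullptr);
    return 0;
  }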
Depends on D113916.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D113919
pthread_setname_np does a linear search over all thread descriptors
to map a pthread_t to its thread descriptor. The search is O(N) per
call, so N threads naming themselves costs O(N^2), and it becomes much
worse in the new tsan runtime, which keeps all threads that ever
existed in the thread registry.
Replace the linear search with direct access when pthread_setname_np
is called for the current thread (a very common case).
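A hedged sketch of the fast path (the helpers are hypothetical
stand-ins for the runtime's thread registry):

  #include <pthread.h>

  static void SetCurrentThreadName(const char *name) {
    (void)name;  // the calling thread's descriptor is reachable via TLS
  }
  static int ScanRegistryAndSetName(pthread_t thread, const char *name) {
    (void)thread; (void)name;  // slow path: linear scan of all descriptors
    return 0;
  }

  int setname_with_fastpath(pthread_t thread, const char *name) {
    if (pthread_equal(thread, pthread_self())) {  // very common case
      SetCurrentThreadName(name);
      return 0;
    }
    return ScanRegistryAndSetName(thread, name);
  }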
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D113916
We used to mmap the C++ shadow stack as part of the trace region
before ed7f3f5bc9 ("tsan: move shadow stack into ThreadState"),
which moved the shadow stack into TLS. This started causing
timeouts and OOMs on some of our internal tests that repeatedly
create and destroy thousands of threads.
Allocate the C++ shadow stack with mmap and small pages again.
This prevents the observed timeouts and OOMs.
But we now need to be more careful with interceptors that
run after thread finalization because FuncEntry/Exit and
TraceAddEvent all need the shadow stack.
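A hedged sketch of the allocation strategy (the size handling is
illustrative):

  #include <sys/mman.h>
  #include <cstddef>
  #include <cstdint>

  // Anonymous mmap commits pages lazily with small (4K) pages, so
  // thousands of short-lived threads don't each pin a fully-touched
  // TLS region.
  static uint64_t *AllocShadowStack(size_t bytes) {
    void *p = mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return p == MAP_FAILED ? nullptr : static_cast<uint64_t *>(p);
  }

  static void FreeShadowStack(uint64_t *stack, size_t bytes) {
    munmap(stack, bytes);  // return the pages when the thread dies
  }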
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D113786
Similar to how the other Swift sections are registered by the ORC
runtime's MachO platform, add the __swift5_types section, which contains
type metadata. Add a simple test that demonstrates that the Swift
runtime recognizes the registered types.
rdar://85358530
Differential Revision: https://reviews.llvm.org/D113811
Since glibc 2.34, dlsym does:
1. malloc 1
2. malloc 2
3. free pointer from malloc 1
4. free pointer from malloc 2
This sequence was not handled by the trivial dlsym hack.
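A hedged sketch of a more robust dlsym hack (buffer size and names are
illustrative): a bump allocator whose free is a no-op tolerates any
interleaving of mallocs and frees, including the sequence above.

  #include <cstddef>

  static char dlsym_buf[4096];
  static size_t dlsym_pos = 0;

  static void *dlsym_alloc(size_t size) {
    size = (size + 15) & ~static_cast<size_t>(15);  // 16-byte alignment
    if (dlsym_pos + size > sizeof(dlsym_buf)) return nullptr;
    void *p = &dlsym_buf[dlsym_pos];
    dlsym_pos += size;
    return p;
  }

  static void dlsym_free(void *) {
    // No-op: blocks stay in the static buffer until startup completes.
  }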
This fixes https://bugs.llvm.org/show_bug.cgi?id=52278
Reviewed By: eugenis, morehouse
Differential Revision: https://reviews.llvm.org/D112588
The check was failing because it was matching against the end of the range, not
the start.
This bug wasn't causing the ORC-RT MachO TLV regression test to fail because
we were only logging deallocation errors (including TLV deregistration errors)
and not actually returning a failure code. This commit updates llvm-jitlink to
report the errors properly.
This change switches tsan to the new runtime, which features:
- 2x smaller shadow memory (2x of app memory)
- faster, fully vectorized race detection
- small fixed-size vector clocks (512b)
- fast vectorized vector clock operations
- unlimited number of alive threads/goroutines
Depends on D112602.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D112603
Start the background thread after fork, but not after clone.
For fork we have always done this, and it's known to work (or user code has adapted).
But if we do this for the new clone interceptor, some code (sandbox2) fails.
So keep the model we used for years and don't start the background thread after clone.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D113744
The compiler does not recognize HACKY_CALL as a call
(we intentionally hide it from the compiler so that it can
compile non-leaf functions as leaf functions).
To compensate for that, the hacky call thunk saves and restores
all caller-saved registers. However, it saves only the
general-purpose registers and does not save the XMM registers.
This is a latent bug that was masked up until a recent "NFC" commit
d736002e90 ("tsan: move memory access functions to a separate file"),
which allowed more inlining and exposed the 10-year bug.
Save and restore all caller-saved XMM registers as well.
Currently the bug manifests as, e.g., the frexp interceptor messing up
the return value, and the added test fails with:
i=8177 y=0.000000 exp=4
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D113742
Some compiler-rt tests are inherently incompatible with VE because:
* No consistent denormal support on VE. We skip denormal fp inputs in builtin tests.
* `madvise` unsupported on VE.
* Instruction alignment requirements.
Reviewed By: phosek
Differential Revision: https://reviews.llvm.org/D113093
Set the default memprof serialization format to binary. 9 tests are
updated to use print_text=true. Also fixed an issue with the
concatenation of default and test-specified options (missing separator).
Differential Revision: https://reviews.llvm.org/D113617
This change implements the raw binary format discussed in
https://lists.llvm.org/pipermail/llvm-dev/2021-September/153007.html
Summary of changes:
* Add a new memprof option to choose binary or text (default) format.
* Add a rawprofile library which serializes the MIB map to the profile.
* Add a unit test for rawprofile.
* Mark sanitizer procmaps methods as virtual to be able to mock them.
* Extend memprof_profile_dump regression test.
Differential Revision: https://reviews.llvm.org/D113317
The existing implementation uses a cache + eviction based scheme to
record heap profile information. This design was adopted to ensure a
constant memory overhead (due to the fixed number of cache entries)
along with incremental write-to-disk on evictions. We find that since
the number of entries to track is O(unique-allocation-contexts), the
overhead of keeping all contexts in memory is not very high. On a clang
workload, the max number of unique allocation contexts was ~35K, median
~11K. For each context, we (currently) store 64 bytes of data; this
amounts to 5.5MB (max). Given the low overhead for a complex workload,
we can simplify the implementation by using a hashmap without eviction.
Other changes:
* Memory map is dumped at the end rather than at startup. The relative
order in the profile dump is unchanged since we no longer have evicted
entries at runtime.
* Added a test to check that MemInfoBlocks are merged.
Differential Revision: https://reviews.llvm.org/D111676
Move the memprof MemInfoBlock struct to its own header as requested
during the review of D111676.
Differential Revision: https://reviews.llvm.org/D113315
This change adds a ForEach method to the AddrHashMap class which can
then be used to iterate over all the key value pairs in the hash map.
I intend to use this in an upcoming change to the memprof runtime.
Added a unit test to cover basic insertion and the ForEach callback.
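A hedged sketch of the ForEach pattern (AddrHashMap's actual callback
signature may differ):

  #include <cstdio>

  template <typename T, int kSize>
  struct TinyAddrMap {
    unsigned long keys[kSize] = {};
    T vals[kSize] = {};
    template <typename Fn>
    void ForEach(Fn fn) {
      for (int i = 0; i < kSize; i++)
        if (keys[i] != 0) fn(keys[i], vals[i]);  // skip empty cells
    }
  };

  int main() {
    TinyAddrMap<int, 8> m;
    m.keys[2] = 0x1000;
    m.vals[2] = 42;
    m.ForEach([](unsigned long k, int v) { printf("%#lx -> %d\n", k, v); });
    return 0;
  }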
Differential Revision: https://reviews.llvm.org/D111368
Clone does not exist on Mac,
and there is a chance it will break on other OSes.
Enable it incrementally, starting with Linux only;
other OSes can enable it later as needed.
Reviewed By: melver, thakis
Differential Revision: https://reviews.llvm.org/D113693
gtest uses clone for death tests, and it needs the same
handling as fork to prevent deadlocks (take the runtime mutexes
before the clone and release them after).
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D113677
Currently, SANITIZER_COMMON_SUPPORTED_OS is used to enable many libraries.
Unfortunately, this makes it impossible to selectively disable a library based on the OS.
This patch removes this limitation by adding a separate list of supported OSes for the lsan, ubsan, ubsan_minimal, and stats libraries.
Reviewed By: delcypher
Differential Revision: https://reviews.llvm.org/D113444
Entropic scheduling with the exec-time option can be misled if inputs
on the right path to become crashing inputs accidentally take more
time to execute before they are added to the corpus. This patch, by
letting more of such inputs be added to the corpus (four inputs of
size 7 to 10 instead of a single input of size 2), reduces the chance
of being influenced by timing flakiness.
A longer-term fix could be to reduce timing flakiness in the fuzzer;
one way could be to execute inputs multiple times and take the average
of their execution times before they are added to the corpus.
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D113544