Include the complete list of threads of all running processes
in the FreeBSDKernel plugin. This makes it possible to inspect
the states (including partial register dumps from the PCB) of all kernel
and userspace threads at the time of the crash, or at the time /dev/mem
was first read.
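For example, one can inspect the threads like this (an illustrative
session; the vmcore path is hypothetical):
lldb --core /var/crash/vmcore.0 /boot/kernel/kernel
(lldb) thread list
(lldb) thread select 2
(lldb) register read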
Differential Revision: https://reviews.llvm.org/D116255
This reverts commit 640beb38e7.
That commit caused a performance degradation in the Quicksilver test
QS:sGPU and a functional test failure in rocPRIM
(rocprim.device_segmented_radix_sort).
Reverting until we have a better solution to s_cselect_b64 codegen
cleanup.
Change-Id: Ibf8e397df94001f248fba609f072088a46abae08
Reviewed By: kzhuravl
Differential Revision: https://reviews.llvm.org/D115960
Change-Id: Id169459ce4dfffa857d5645a0af50b0063ce1105
D116372, while fixing one kind of a race, ended up creating a new one.
The new issue could occur when one inferior thread exits while another
thread initiates termination of the entire process (exit_group(2)).
With some bad luck, we could start processing the exit notification
(PTRACE_EVENT_EXIT) only to have the thread become unresponsive (ESRCH)
in the middle of the MonitorCallback function. This function would then
delete the thread from our list even though it wasn't completely dead
(it stays zombified until we read the WIFEXITED event). The Linux kernel
will not deliver the exited event for the entire process until we
process the individual thread exits.
In a pre-D116372 world, this wouldn't be a problem because we would read
this event (even though we would not know what to do with it) with
waitpid(-1). Now that we issue individual waitpid calls, this event will
never be picked up, and we end up hanging.
The fix for this is actually quite simple -- don't delete the thread in
this situation. The thread will be deleted when the WIFEXITED event
comes.
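For illustration, the zombie/reap semantics the fix relies on can be
demonstrated with a minimal POSIX sketch (not part of the patch):
  import os

  pid = os.fork()
  if pid == 0:
      os._exit(0)  # child exits immediately
  # The child is now a zombie: it has exited, but its exit status
  # stays queued until the parent reaps it via waitpid().
  reaped_pid, status = os.waitpid(pid, 0)
  assert reaped_pid == pid and os.WIFEXITED(status)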
This situation was kind of already tested by
TestCreateDuringInstructionStep (which is how I found this problem), but
it was mostly accidental, so I am also creating a dedicated test which
reproduces this situation.
It was being used only in some very old tests (which pass even without
it) and its implementation is highly questionable.
These days we have different mechanisms for requesting a build with a
particular kind of C++ library (USE_LIB(STD)CPP in the makefile).
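For instance, a test that needs libc++ would request it in its Makefile
like this (a minimal sketch following the usual LLDB test conventions):
  CXX_SOURCES := main.cpp
  USE_LIBCPP := 1
  include Makefile.rules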
Now that we are caching the dwarf index as well, we will always have
more than one cache file (when not using accelerator tables). I have
adjusted the test to check for the presence of one _symtab_ index.
This patch adds the ability to cache the manual DWARF indexing results to disk for faster subsequent debug sessions. Manual DWARF indexing is time-consuming and causes all DWARF to be fully parsed and indexed each time you debug a binary that doesn't have an acceptable accelerator table. Acceptable accelerator tables include .debug_names in DWARF5 and Apple accelerator tables.
This patch splits up the testing: a gtest unit test covers the encoding and decoding of the required C++ objects, and a separate test verifies that the debug info cache is generated correctly.
This patch also adds the ability to track when a symbol table or DWARF index is loaded from or saved to the cache in the "statistics dump" command. This is essential to know in statistics, as it can help explain why a debug session was slower or faster than expected.
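For example (an illustrative session; the exact statistics output fields
may vary):
(lldb) settings set symbols.enable-lldb-index-cache true
(lldb) statistics dump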
Reviewed By: labath, wallace
Differential Revision: https://reviews.llvm.org/D115951
Introduce initial support for using libkvm on FreeBSD. The library
can be used as an alternate implementation for processing kernel
coredumps, but it can also be used to access live kernel memory by
specifying "/dev/mem" as the core file, i.e.:
lldb --core /dev/mem /boot/kernel/kernel
Differential Revision: https://reviews.llvm.org/D116005
The test was attempting to make a universal x86_64/arm64 binary, but some older bots don't have a macOS SDK that can handle this. Switching over to using a yaml file instead should solve the problem.
They were being applied too narrowly (they didn't cover signed char *,
for instance), and too broadly (they covered SomeTemplate<char[6]>) at
the same time.
Differential Revision: https://reviews.llvm.org/D112709
This is an updated version of the https://reviews.llvm.org/D113789 patch with the following changes:
- We no longer modify modification times of the cache files
- Use LLVM caching and cache pruning instead of making a new cache mechanism (See DataFileCache.h/.cpp)
- Add a signature to the start of each file; since we are not using modification times, this lets us tell when caches are stale, and remove and re-create the cache file as files are changed
- Add settings to control the cache size, disk usage percentage, and expiration in days, to keep the cache size under control
This patch enables symbol tables to be cached in the LLDB index cache directory. All cache files are in a single directory and use unique names, ensuring that a file at a given path re-uses the same cache file as it gets modified. This means that as files change, their cache files are deleted and re-created. The modification time of each cache file is left untouched so that access-based pruning of the cache can be implemented.
The symbol table cache files start with a signature that uniquely identifies a file on disk and contains one or more of the following items:
- object file UUID if available
- object file mod time if available
- object name for BSD archive .o files that are in .a files if available
If none of these signature items are available, then the file will not be cached. This keeps temporary object files from expressions from being cached.
When the cache files are loaded on subsequent debug sessions, the signature is compared, and if the file has been modified (the UUID changed, the mod time changed, or the object file mod time changed), the cache file is deleted and re-created.
Module caching must be enabled by the user before this can be used:
symbols.enable-lldb-index-cache (boolean) = false
(lldb) settings set symbols.enable-lldb-index-cache true
There is also a setting that allows the user to specify an index cache directory; it defaults to a directory next to the symbols.clang-modules-cache-path directory in a temp directory:
(lldb) settings show symbols.lldb-index-cache-path
/var/folders/9p/472sr0c55l9b20x2zg36b91h0000gn/C/lldb/IndexCache
If this setting is enabled, the finalized symbol tables will be serialized and saved to disk so they can be quickly loaded the next time you debug.
Each module can cache one or more files in the index cache directory. The cache file names must be unique to a file on disk, its architecture, and the object name for .o files in BSD archives. This allows universal Mach-O files to cache multiple architectures in the same index cache directory. Basing the file name on this info allows a cache file to be deleted and replaced when the file gets updated on disk. This keeps the cache from growing over time during the compile/edit/debug cycle and prevents out-of-space issues.
If the cache is enabled, the symbol table will be loaded from the cache the next time you debug if the module has not changed.
The cache also has settings to control its size on disk. Each time LLDB starts up with the index cache enabled, the cache will be pruned to ensure it stays within the user-defined limits:
(lldb) settings set symbols.lldb-index-cache-expiration-days <days>
A value of zero will disable cache files from expiring when the cache is pruned. The default value is 7 currently.
(lldb) settings set symbols.lldb-index-cache-max-byte-size <size>
A value of zero will disable pruning based on a total byte size. The default value is zero currently.
(lldb) settings set symbols.lldb-index-cache-max-percent <percentage-of-disk-space>
A value of 100 will allow the disk to be filled to the max, and a value of zero will disable percentage-based pruning. The default value is zero.
Reviewed By: labath, wallace
Differential Revision: https://reviews.llvm.org/D115324
Introduce a FreeBSDKernel plugin that provides the ability to read
FreeBSD kernel core dumps. The plugin utilizes libfbsdvmcore to provide
support for both "full memory dump" and minidump formats across a variety
of architectures supported by FreeBSD. It provides the ability to read
kernel memory, as well as the crashed thread status with registers
on arm64, i386 and x86_64.
Differential Revision: https://reviews.llvm.org/D114911
This commit should fix a heap-use-after-free bug that was caught by the
sanitizer bot.
The issue is that we were reading memory from a second target into an
`SBData` object in Python, which was passed to lldb's internal
`ScriptedProcess::DoReadMemory` C++ method.
The ScriptedPythonInterface then extracts the underlying `DataExtractor`
from the `SBData` object, which is used to read the memory with the
appropriate address size and byte order.
Unfortunately, it seems that even though the DataExtractor object was
still valid, it pointed to invalid, possibly garbage-collected memory
from Python.
To mitigate this, the patch uses `SBData::SetDataWithOwnership` to copy
the pointed-to buffer into lldb's heap memory, which prevents the
use-after-free error.
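As a rough illustration of the pattern (a sketch only; it assumes the
Python signature of `SetDataWithOwnership` mirrors `SBData.SetData`):
  import lldb

  error = lldb.SBError()
  data = lldb.SBData()
  buf = b"\x41\x42\x43\x44"  # hypothetical memory contents
  # Copy buf into memory owned by the SBData, so the data stays valid
  # even after the Python-side buffer is garbage collected.
  data.SetDataWithOwnership(error, buf, lldb.eByteOrderLittle, 8)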
rdar://84511405
Differential Revision: https://reviews.llvm.org/D115654
Signed-off-by: Med Ismail Bennani <medismail.bennani@gmail.com>
Test is using "next" commands to make progress in the process. D115137
added an additional statement to the program, without adding a command
to step over it. This only seemed to matter for the libc++ flavour of
the test, possibly because libstdc++ list is "empty" in its
uninitialized state.
Since moving through the test with step commands is treacherous, this
patch adds a run-to-breakpoint command to the test. It only does this
for the affected step, but one may consider doing it elsewhere too.
This patch should fix a Windows test failure for the
InvalidScriptedThread test:
https://lab.llvm.org/buildbot/#/builders/83/builds/12571
This refactors the test to stop using Python's `tempfile` (since it's
not supported on Windows) and instead create a logfile at runtime in the
test folder.
Signed-off-by: Med Ismail Bennani <medismail.bennani@gmail.com>
This patch adds support for arm64(e) targets to ScriptedProcess, by
providing the `DynamicRegisterInfo` to the base `lldb.ScriptedThread` class.
This allows creating and debugging ScriptedProcesses on Apple Silicon
hardware as well as Apple mobile devices.
It also replaces the C++ asserts in `ScriptedThread::GetDynamicRegisterInfo`
with some error logging, re-enables `TestScriptedProcess` for arm64
Darwin platforms, and adds a new invalid Scripted Thread test.
rdar://85892451
Differential Revision: https://reviews.llvm.org/D114923
Signed-off-by: Med Ismail Bennani <medismail.bennani@gmail.com>
This adds extra tests for the libstdcpp and libcxx list and forward_list formatters to check whether the formatters behave correctly when applied to pointer and reference values.
Reviewed By: wallace
Differential Revision: https://reviews.llvm.org/D115137
This adds the formatters for libstdcpp's deque as a python
implementation. It adds comprehensive tests for the two different
storage strategies deque uses. Besides that, this fixes a couple of bugs
in the libcxx implementation. Finally, both implementations run against
the same tests.
This is a minor improvement on top of Danil Stefaniuc's formatter.
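For reference, these formatters are implemented as synthetic child
providers. A minimal, hypothetical skeleton (not the actual deque
implementation) looks like this:
  class StdDequeSynthProvider:
      def __init__(self, valobj, internal_dict):
          self.valobj = valobj  # the std::deque value being formatted
          self.count = 0

      def update(self):
          # Recompute any cached state whenever the process stops.
          self.count = 0

      def num_children(self):
          return self.count

      def get_child_index(self, name):
          # Map "[N]"-style child names back to indexes.
          try:
              return int(name.lstrip("[").rstrip("]"))
          except ValueError:
              return -1

      def get_child_at_index(self, index):
          if index < 0 or index >= self.count:
              return None
          # A real provider would compute the element's address here
          # and return a child ValueObject for it.
          return None

      def has_children(self):
          return True
It would be registered with something like:
(lldb) type synthetic add -x "^std::deque<" --python-class module.StdDequeSynthProvider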
These tests work fine with VS2017, but become more flaky with VS2019 and the buildbot is about to get upgraded.
Differential Revision: https://reviews.llvm.org/D114907
This diff adds the capping_size determination for the list and forward_list formatters, to limit the number of children to be displayed. It also modifies and unifies the tests for the libcxx and libstdcpp list data formatters.
Reviewed By: wallace
Differential Revision: https://reviews.llvm.org/D114433
This diff avoids the size limitation introduced by the capping size for the libcxx and libstdcpp bitset data formatters.
Reviewed By: wallace
Differential Revision: https://reviews.llvm.org/D114461
We need to add checks that ensure that some core variables are valid, so
that we avoid printing out garbage data. The worst that could happen is
that a non-initialized variable is printed as something with
123123432 children instead of 0.
Differential Revision: https://reviews.llvm.org/D114458
As suggested by @labath in https://reviews.llvm.org/D114403, we should
make the formatter more resilient to corrupted data. The libcxx version
explicitly checks for engaged = 1, so we can do that as well for safety.
Differential Revision: https://reviews.llvm.org/D114450
The test flaked on bots:
http://green.lab.llvm.org/green/job/lldb-cmake/38666/
The test expects that tsan will detect a single race
with concurrent memory accesses. TSan doesn't do this reliably.
Run 100 iterations of the racing threads, which should
make the race much more likely to be detected.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D114444
This diff adds a data formatter and tests for libstdcpp's unordered_map, unordered_set, unordered_multimap, unordered_multiset
Reviewed By: wallace
Differential Revision: https://reviews.llvm.org/D113760
We were using the client socket close as a way to terminate the handler
thread. But this kind of concurrent access to the same socket is not
safe. It also complicates running the handler without a dedicated thread
(next patch).
Instead, here I add an explicit way for a packet handler to request
termination. Waiting for lldb to terminate the connection would almost
be sufficient, but in the pty test we want to keep the pty open so we
can examine its state. The ability to disconnect at an arbitrary point
may be useful for testing other aspects of lldb functionality as well.
The way this works is that now each packet handler can optionally return
a list of responses (instead of just one). One of those responses (it
only makes sense for it to be the last one) can be a special
RESPONSE_DISCONNECT object, which triggers a disconnection (via a new
TerminateConnectionException).
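A handler using this might look roughly as follows (the responder
method and reply are illustrative; RESPONSE_DISCONNECT is the new
sentinel):
  class DisconnectingResponder(MockGDBServerResponder):
      def k(self):
          # Reply to the kill packet, then ask the mock server to
          # drop the connection.
          return ["X09", RESPONSE_DISCONNECT]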
As the mock server now cleans up the connection whenever it disconnects,
the pty test needs to explicitly dup(2) the descriptors in order to
inspect the post-disconnect state.
Differential Revision: https://reviews.llvm.org/D114156
The reworking of the gdb client tests into the PlatformClientTestBase broke
the test for this. I did the mutatis mutandis for the move, but the test
still fails. Reverting till I have time to figure out why.
This reverts commit b715b79d54.
We don't actually need a local copy of the main executable to debug
a remote process. So instead of treating "no local module" as an error,
see if the LaunchInfo has an executable it wants lldb to use, and if so
use it. Then report whatever error the remote server returns.
Differential Revision: https://reviews.llvm.org/D113521
Apparently "{sys.prefix}/bin/python3" isn't where you find the
python interpreter on windows, so the test I wrote for
-print-script-interpreter-info is failing.
We can't rely on sys.executable at runtime, because that will point
to lldb.exe not python.exe.
We can't just record sys.executable from build time, because python
could have been moved to a different location.
But it should be OK to take the build-time path of sys.executable
relative to sys.prefix and apply it to the sys.prefix at runtime.
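In other words (a minimal sketch of the idea, not the actual patch
code):
  import os
  import sys

  # Build time: record where the interpreter lives relative to the
  # prefix.
  rel = os.path.relpath(sys.executable, start=sys.prefix)

  # Run time: resolve the recorded relative path against the current
  # (possibly relocated) prefix.
  interpreter = os.path.join(sys.prefix, rel)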
Reviewed By: JDevlieghere
Differential Revision: https://reviews.llvm.org/D113650
This infrastructure has proven its worth, so give it a more
prominent place.
My immediate motivation for this is the desire to reuse this
infrastructure for qemu platform testing, but I believe this move makes
sense independently of that. Moving this code to the packages tree will
allow as to add more structure to the gdb client tests -- currently they
are all crammed into the same test folder as that was the only way they
could access this code.
I'm splitting the code into two parts while moving it. The first one
contains just the generic gdb protocol wrappers, while the other one
contains the unit test glue. The reason for that is that for qemu
testing, I need to run the gdb code in a separate process, so I will
only be using the first part there.
Differential Revision: https://reviews.llvm.org/D113893