As specified in the docs,
1) raw_string_ostream is always unbuffered and
2) the underlying buffer may be used directly
(see 65b13610a5 for further reference)
* Don't call raw_string_ostream::flush(), which is essentially a no-op.
* Avoid unneeded calls to raw_string_ostream::str(), to avoid excess
indirection.
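A minimal sketch of the resulting pattern (illustrative only, not a call site from this change):
```
#include "llvm/Support/raw_ostream.h"
#include <string>

std::string Render() {
  std::string Buffer;
  llvm::raw_string_ostream OS(Buffer);
  OS << "value: " << 42;
  // No OS.flush() needed: raw_string_ostream is unbuffered, so Buffer
  // is already up to date and can be used directly instead of going
  // through OS.str().
  return Buffer;
}
```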
xusheng added support for swbreak/hwbreak a month ago, and no special
support was needed in ProcessGDBRemote when they're received, because
lldb already marks a thread as having hit a breakpoint when it stops at
a breakpoint site. However, with changes I am working on, we need to
know the real reason a thread stopped, or the breakpoint hit will not
be recognized.
This is similar to how lldb processes the "watch/rwatch/awatch" keys in
a thread stop packet -- those set the `reason` to `watchpoint`, and
these keys set it to `breakpoint`, so we set the stop reason correctly
later in these methods.
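Illustratively, the relevant stop-reply keys look like this (packet framing and checksums elided; addresses and thread ids invented):
```
T05watch:00007fffffffe4d0;thread:1a2b;   <- reason becomes "watchpoint"
T05swbreak:;thread:1a2b;                 <- reason becomes "breakpoint"
```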
lldb-server built with NativeProcessLinux.cpp and
NativeProcessFreeBSD.cpp can use breakpoints to implement instruction
stepping on cores where there is no native instruction-step primitive.
Currently these set a breakpoint, continue, and if we hit the breakpoint
with the original thread, set the stop reason to be "trace".
I am wrapping up a change to lldb's breakpoint algorithm where I change
its current behavior of
"if a thread stops at a breakpoint site, we set
the thread's stop reason to breakpoint-hit, even if the breakpoint
hasn't been executed" +
"when resuming any thread at a breakpoint site, instruction-step past
the breakpoint before resuming"
to a behavior of
"when a thread executes a breakpoint, set the stop reason to
breakpoint-hit" +
"when a thread has hit a breakpoint, when the thread resumes, we
silently step past the breakpoint and then resume the thread".
For these lldb-server targets doing breakpoint stepping, this means that
if we are sitting on a breakpoint that has not yet executed and we
instruction-step the thread, we will execute the breakpoint instruction
at $pc (instead of $next-pc, where it was meant to go) and stop again --
at the same pc value. Then we will rewrite the stop reason to 'trace'.
The higher-level logic will see that we haven't hit the breakpoint
instruction again, so it will try to instruction-step again, hitting the
breakpoint forever.
To fix this, I'm checking that the thread matches the one we are
instruction-stepping-by-breakpoint AND that we've stopped at the
breakpoint address we are stepping to. Only in that case will the stop
reason be rewritten to "trace", hiding the implementation detail that
the step was done by breakpoints.
Currently, LLDB assumes all minidumps will have unique sections. This is
intuitive because almost all of the minidump sections are themselves
lists. Exceptions (including signals) are different in that each is an
individual section with its own directory.
This means LLDB fails to load minidumps with multiple exceptions because
their streams are not unique. This behavior is erroneous, and this PR
introduces support for an arbitrary number of exception streams.
Additionally, stop info was previously calculated only for a single
thread; now we properly support mapping exceptions to threads.
(this is the lldb part)
Without these explicit includes, removing other headers that implicitly
include llvm-config.h may have non-trivial side effects. For example,
`clangd` may report `llvm-config.h` itself as "unused" when it only
defines a macro that is explicitly used with #ifdef. The problem is
amplified across different build configs, which use different sets of
macros.
A follow-up to #106473. Minidump wasn't collecting fs_base or gs_base.
This patch extends the x86_64 register context and gates reading them
behind an lldb-specific flag. Additionally, these registers are
explicitly checked in the tests.
This PR adds support for optionally present FP registers to LLDB's
`RegisterInfoPOSIX_riscv64`. This situation arises for RISC-V builds
that have no FP registers, like RV64IMAC or RV64IMACV.
To achieve this, the patch adds an `opt_regsets` flags mechanism. It
reworks the RegisterInfo class to work with flexibly allocated
(depending on the `opt_regsets` flags) `m_register_sets` and
`m_register_infos` vectors instead of statically defined structures. The
registration of regsets is arranged by the `m_per_regset_regnum_range`
map.
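A rough sketch of the idea (simplified types and names, not the actual lldb code):
```
// Build the register-set list dynamically from opt_regsets flags
// instead of from a statically defined structure.
#include <cstdint>
#include <string>
#include <vector>

struct RegisterSet { std::string name; };

enum OptRegSetFlags : uint32_t {
  eRegsetMaskDefault = 0,  // GPRs only
  eRegsetMaskFP = 1u << 0, // FP registers present
};

std::vector<RegisterSet> BuildRegisterSets(uint32_t opt_regsets) {
  std::vector<RegisterSet> sets;
  sets.push_back({"General Purpose Registers"});
  if (opt_regsets & eRegsetMaskFP)
    sets.push_back({"Floating Point Registers"});
  return sets;
}
```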
The changes extend to the `NativeRegisterContextLinux_riscv64` and
`RegisterContextCorePOSIX_riscv64` classes, which were tested on:
- an x86_64 host working with coredumps
- RV64GC and RV64IMAC targets working with coredumps and natively at
run-time with binaries
`EmulateInstructionRISCV` is out of scope of this patch; its behavior
did not change, and it keeps using the maximum set of registers.
A corresponding test case built for RV64IMAC (no FPRs) was added to
`TestLinuxCore.py`.
ReadProcessMemory will not perform the read if part of the memory is
unreadable (even though the API has a `number_of_bytes_read` argument).
To make this work, I explicitly inspect the memory region being read
and only read the accessible part.
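A hedged sketch of that approach (simplified, not the actual lldb code):
```
// Clamp a read to the end of the memory region containing `addr`, so
// a partially-unreadable range still yields its readable prefix.
#include <windows.h>
#include <algorithm>

size_t ReadAccessiblePrefix(HANDLE process, const char *addr, void *buf,
                            size_t len) {
  MEMORY_BASIC_INFORMATION info;
  if (VirtualQueryEx(process, addr, &info, sizeof(info)) == 0)
    return 0;
  if (info.State != MEM_COMMIT || (info.Protect & PAGE_NOACCESS))
    return 0; // nothing readable at `addr` itself
  // Bytes remaining in the region that contains `addr`.
  size_t region_left =
      static_cast<const char *>(info.BaseAddress) + info.RegionSize - addr;
  SIZE_T bytes_read = 0;
  ReadProcessMemory(process, addr, buf, std::min(len, region_left),
                    &bytes_read);
  return bytes_read;
}
```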
Previously, we were returning an error if we couldn't read the whole
region. This doesn't matter most of the time, because lldb caches memory
reads, and in that process it aligns them to cache line boundaries. As
(LLDB) cache lines are smaller than pages, the reads are unlikely to
cross page boundaries.
Nonetheless, this can cause a problem for large reads (which bypass the
cache), where we're unable to read anything even if just a single byte
of the memory is unreadable. This patch fixes lldb-server to return the
readable portion, and also changes the Linux implementation to reuse
any partial results it got from the process_vm_readv call (to avoid
having to re-read everything using ptrace, only to find that it stops
at the same place).
This matches debugserver behavior. It is also consistent with the gdb
remote protocol documentation, but -- notably -- not with actual
gdbserver behavior (which returns errors instead of partial results). We
filed a
[clarification
bug](https://sourceware.org/bugzilla/show_bug.cgi?id=24751) several
years ago. Though we did not really reach a conclusion there, I think
this is the most logical behavior.
The associated test does not currently pass on windows, because the
windows memory read APIs don't support partial reads (I have a WIP patch
to work around that).
[D156118](https://reviews.llvm.org/D156118) states that this note is
always present, but it is better to check it explicitly, as otherwise
`lldb` may crash when trying to read registers.
This patch removes all of the Set.* methods from Status.
This cleanup is part of a series of patches that make it harder to use
the anti-pattern of keeping a long-lived Status object around and
updating it while dropping any errors it contains on the floor.
This patch is largely NFC; the more interesting next steps it enables
are to:
1. remove Status.Clear()
2. assert that Status::operator=() never overwrites an error
3. remove Status::operator=()
Note that step (2) will bring 90% of the benefits for users, and step
(3) will dramatically clean up the error handling code in various
places. In the end my goal is to convert all APIs that are of the form
```
ResultTy DoFoo(Status& error)
```
to
```
llvm::Expected<ResultTy> DoFoo()
```
How to read this patch?
The interesting changes are in Status.h and Status.cpp; all other
changes are mostly
```
perl -pi -e 's/\.SetErrorString/ = Status::FromErrorString/g' $(git grep -l SetErrorString lldb/source)
```
plus the occasional manual cleanup.
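For illustration, a typical call site changes like this (hypothetical error text):
```
#include "lldb/Utility/Status.h"
using lldb_private::Status;

Status MakeError() {
  // Before this patch:
  //   Status error;
  //   error.SetErrorString("could not attach");
  //   return error;
  // After: construct the error in one shot, nothing to mutate later.
  return Status::FromErrorString("could not attach");
}
```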
The existing algorithm was performing the following comparisons for an
`aaa,bbb,ccc,ddd` input:
```
aaa\0bbb,ccc,ddd == "stack"
aaa\0bbb\0ccc,ddd == "stack"
aaa\0bbb\0ccc\0ddd == "stack"
```
which wouldn't work. This commit just dispatches to a known algorithm
implementation.
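For reference, a minimal sketch of such a check using llvm's existing helpers (assuming the goal is "does any comma-separated element equal the needle?"):
```
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/StringRef.h"

bool ListContains(llvm::StringRef list, llvm::StringRef needle) {
  llvm::SmallVector<llvm::StringRef, 8> parts;
  list.split(parts, ',');
  return llvm::is_contained(parts, needle);
}
```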
In f9f3316, Adrian fixed an issue where LLDB wouldn't update the
target's architecture when the process reported a different triple that
only differed in its sub-architecture.
This unintentionally regressed core file debugging when the core file
reports the base architecture (e.g. armv7) while the main binary knows
the correct CPU subtype (e.g. armv7em). After the aforementioned change,
we update the target architecture from armv7em to armv7. Fix the issue
by trusting the target architecture over the ProcessMachCore process.
rdar://133834304
This PR is in response to a bug my coworker @mbucko discovered where,
on macOS, Minidumps were being created in which the 64-bit memory
regions were readable but were not listed in
`SBProcess.GetMemoryRegionList()`. This went unnoticed in #95312
because all the Linux testing included /proc/pid maps. On
macOS-generated dumps (or any dump without access to /proc/pid) we
would fail to properly map memory regions because there were two
independent methods for 32-bit and 64-bit mapping.
In this PR I addressed this minor bug and merged the methods, but
adding test coverage required additions to `obj2yaml` and `yaml2obj`,
which make up the bulk of this patch.
Lastly, there are some non-required changes, such as the addition of
the `Memory64ListHeader` type, to make writing/reading the header
section of the Memory64List easier.
This PR is in reference to porting LLDB to AIX.
Link to discussions on llvm discourse and github:
1. https://discourse.llvm.org/t/port-lldb-to-ibm-aix/80640
2. #101657
The complete changes for porting are present in this draft PR:
https://github.com/llvm/llvm-project/pull/102601
The changes in this PR are intended to avoid namespace collisions for
certain typedefs between lldb and other platforms:
1. tid_t --> lldb::tid_t
2. offset_t --> lldb::offset_t
To create ELF core files, there are multiple notes in the core file
that contain these structs as their payload. These populate methods
take a Process and produce fully specified structures that can be used
to fill those note sections. The PR also adds tests to ensure these
structs are correctly populated.
This fixes https://github.com/llvm/llvm-project/issues/56125 and
https://github.com/vadimcn/codelldb/issues/666, as well as the
downstream issue in our binary ninja debugger:
https://github.com/Vector35/debugger/issues/535
Basically, lldb does not claim to support the `swbreak` packet, so the
gdbserver would not use it. As a result, the gdbserver always sends the
unmodified program counter value, which, on systems like x86, causes
the program counter to be off-by-one (or otherwise wrong). For
reference, the lldb-server always sends the modified program counter
value, so it works perfectly with lldb.
https://sourceware.org/gdb/current/onlinedocs/gdb.html/Stop-Reply-Packets.html#swbreak-stop-reason
No new code is added to support `swbreak`, since the way lldb works
already expects the remote to have adjusted the program counter. The
change just lets the gdbserver know that lldb supports it, so that it
will send the adjusted program counter.
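Schematically, the exchange looks like this (feature lists abbreviated, packet framing and checksums elided, thread id invented):
```
lldb -> stub: qSupported:...;swbreak+;hwbreak+;...   (lldb now advertises support)
stub -> lldb: T05swbreak:;thread:1a2b;               (stop reply with adjusted pc)
```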
To test this PR, you can use lldb to connect to a gdbserver running on
e.g. Ubuntu 22.04, and see that the program counter is off-by-one
without the patch. With the patch, things work as expected.
This fixes the following warning when building with Clang ToT on
Windows:
```
[6668/7618] Building CXX object
tools\lldb\source\Plugins\Process\Windows\Common\CMakeFiles\lldbPluginProcessWindowsCommon.dir\TargetThreadWindows.cpp.obj
C:\src\git\llvm-project\lldb\source\Plugins\Process\Windows\Common\TargetThreadWindows.cpp(182,22):
warning: cast from 'FARPROC' (aka 'long long (*)()') to
'GetThreadDescriptionFunctionPtr' (aka 'long (*)(void *, wchar_t **)')
converts to incompatible function type [-Wcast-function-type-mismatch]
```
This is similar to: https://github.com/llvm/llvm-project/pull/97905
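A hedged sketch of the usual fix (assumed to mirror #97905): launder the FARPROC through void* so Clang no longer sees a direct cast between incompatible function types.
```
#include <windows.h>

using GetThreadDescriptionFunctionPtr = HRESULT(WINAPI *)(HANDLE, PWSTR *);

GetThreadDescriptionFunctionPtr GetGetThreadDescription(HMODULE kernel32) {
  FARPROC proc = ::GetProcAddress(kernel32, "GetThreadDescription");
  return reinterpret_cast<GetThreadDescriptionFunctionPtr>(
      reinterpret_cast<void *>(proc));
}
```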
This PR introduces a new `ThreadPlanSingleThreadTimeout` that will be
used to address potential deadlock during single-thread stepping.
While debugging a target with a non-trivial number of threads (around
5000 threads in one example target), we noticed that a simple step over
can take as long as 10 seconds. Enabling single-thread stepping mode
significantly reduces the stepping time to around 3 seconds. However,
this can introduce deadlock if we try to step over a method that depends
on other threads to release a lock.
To address this issue, we introduce a new
`ThreadPlanSingleThreadTimeout` that can be controlled by the
`target.process.thread.single-thread-plan-timeout` setting during
single-thread stepping mode. The concept involves counting the elapsed
time since the last internal stop to detect overall stepping progress.
Once a timeout occurs, we assume the target is not making progress due
to a potential deadlock, as mentioned above. We then send a new async
interrupt, resume all threads, and `ThreadPlanSingleThreadTimeout`
completes its task.
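For example, a hypothetical session raising the timeout for a heavily contended target (assuming the setting's value is in milliseconds):
```
(lldb) settings set target.process.thread.single-thread-plan-timeout 2000
(lldb) thread step-over
```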
To support this design, the major changes made in this PR are:
1. `ThreadPlanSingleThreadTimeout` is popped during every internal stop
and reset (re-pushed) to the top of the stack (as a leaf node) during
resume. This is achieved by always returning `true` from
`ThreadPlanSingleThreadTimeout::DoPlanExplainsStop()` and
`ThreadPlanSingleThreadTimeout::MischiefManaged()`.
2. A new thread-specific async interrupt stop is introduced, which can
be detected/consumed by `ThreadPlanSingleThreadTimeout`.
3. The clearing of branch breakpoints in the range thread plan has been
moved from `DoPlanExplainsStop()` to `ShouldStop()`, as
`DoPlanExplainsStop()` is not guaranteed to be called.
The detailed design is discussed in the RFC below:
[https://discourse.llvm.org/t/improve-single-thread-stepping/74599](https://discourse.llvm.org/t/improve-single-thread-stepping/74599)
---------
Co-authored-by: jeffreytan81 <jeffreytan@fb.com>
Similar to #97796, fix the type of the `native_thread` parameter for the
arm, mips64 and powerpc variants of `NativeRegisterContextFreeBSD_*`.
Otherwise, this leads to compile errors similar to:
```
lldb/source/Plugins/Process/FreeBSD/NativeRegisterContextFreeBSD_powerpc.cpp:85:39: error: out-of-line definition of 'NativeRegisterContextFreeBSD_powerpc' does not match any declaration in 'lldb_private::process_freebsd::NativeRegisterContextFreeBSD_powerpc'
85 | NativeRegisterContextFreeBSD_powerpc::NativeRegisterContextFreeBSD_powerpc(
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
MAX_PATH is definitely larger than the 6 bytes we are expecting for
this message, and could be rather large depending on the target OS (4K
for some Linux OSs).
Since the buffer gets allocated on the stack, we had better be
conservative and allocate only what we actually need.
This reverts commit 05f0e86cc8.
The debuginfo dexter tests are failing, probably because the way
stepping over breakpoints has changed with my patches. There are also
two API test failures on the ubuntu-arm (32-bit) bot. I'll need to
investigate both of these; neither has an obvious failure reason.
lldb today has two rules: When a thread stops at a BreakpointSite, we
set the thread's StopReason to be "breakpoint hit" (regardless of
whether we've actually hit the breakpoint, or have merely stopped *at*
the breakpoint instruction and haven't tripped it yet). And second,
when resuming a process, any thread sitting at a BreakpointSite is
silently stepped over the BreakpointSite -- because we've already
flagged the breakpoint hit when we stopped there originally.
In this patch, I change lldb to only set a thread's stop reason to
breakpoint-hit when we've actually executed the instruction/triggered
the breakpoint. When we resume, we only silently step past a
BreakpointSite that we've registered as hit. We preserve this state
across inferior function calls that the user may do while stopped, etc.
Also, when a user adds a new breakpoint at $pc while stopped, or changes
$pc to be the address of a BreakpointSite, we will silently step past
that breakpoint when the process resumes. This is purely a UX call; I
don't think anyone wants to set a breakpoint at $pc and then hit it
immediately on resuming.
One non-intuitive UX comes from this change, but I'm convinced it is
necessary: If you're stopped at a BreakpointSite that has not yet
executed and you `stepi`, you will hit the breakpoint and the pc will
not yet advance. This thread has not completed its stepi, and the thread
plan is still on the stack. If you then `continue` the thread, lldb will
now stop and say, "instruction step completed", one instruction past the
BreakpointSite. You can continue a second time to resume execution. I
discussed this with Jim, and trying to paper over this behavior will
lead to more complicated scenarios behaving non-intuitively. And mostly
it's the testsuite that was trying to instruction step past a breakpoint
and getting thrown off -- and I changed those tests to expect the new
behavior.
The bugs driving this change are all from lldb dropping the real stop
reason for a thread and setting it to breakpoint-hit when that was not
the case. Jim hit one where we have an aarch64 watchpoint that triggers
one instruction before a BreakpointSite. On this arch we are notified of
the watchpoint hit after the instruction has been unrolled -- we disable
the watchpoint, instruction step, re-enable the watchpoint and collect
the new value. But now we're on a BreakpointSite so the watchpoint-hit
stop reason is lost.
Another was reported by ZequanWu in
https://discourse.llvm.org/t/lldb-unable-to-break-at-start/78282: we
attach to/launch a process with the pc at a BreakpointSite and
misbehave. Caroline Tice mentioned it is also a problem they've had
with putting a breakpoint on _dl_debug_state.
The change to each Process plugin that does execution control is that
1. If we've stopped at a BreakpointSite that has not been executed yet,
we will call Thread::SetThreadStoppedAtUnexecutedBP(pc) to record
that. When the thread resumes, if the pc is still at the same site, we
will continue, hit the breakpoint, and stop again.
2. When we've actually hit a breakpoint (enabled for this thread or not),
the Process plugin should call Thread::SetThreadHitBreakpointSite().
When we go to resume the thread, we will push a step-over-breakpoint
ThreadPlan before resuming.
The biggest set of changes is to StopInfoMachException where we
translate a Mach Exception into a stop reason. The Mach exception codes
differ in a few places depending on the target (unambiguously), and I
didn't want to duplicate the new code for each target so I've tested
what mach exceptions we get for each action on each target, and
reorganized StopInfoMachException::CreateStopReasonWithMachException to
document these possible values, and handle them without specializing
based on the target arch.
rdar://123942164
TestPlatformLaunchGDBServer.py runs `lldb-server` without the
parameters `--min-gdbserver-port`, `--max-gdbserver-port` or
`--gdbserver-port`. So `gdbserver_portmap` is empty and
`gdbserver_portmap.GetNextAvailablePort()` will return 0. Do not call
`portmap_for_child.AllowPort(0)` in this case. Otherwise
`portmap_for_child.GetNextAvailablePort()` will allocate and never free
port 0, and the next call to `portmap_for_child.GetNextAvailablePort()`
will fail.
Added a few asserts in `GDBRemoteCommunicationServerPlatform::PortMap`
to avoid such issues in the future.
This patch fixes a bug added in #88845. The behaviour is very close to
#97537 without the parameters `--min-gdbserver-port`,
`--max-gdbserver-port` and `--gdbserver-port`.
This patch adds a GetLastInstrSize interface to report the size of the
last instruction we tried to decode, and uses it to set a software
breakpoint if the memory can be decoded as an instruction. The RISC-V
instruction format specifies the length of an instruction in its first
bits, so we can set a breakpoint in these cases. This is needed because
RISC-V has a lot of extensions that are not supported by
`EmulateInstructionRISCV`.
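A sketch of the length rule (per the RISC-V ISA, the low bits of the first 16-bit parcel encode the instruction length):
```
#include <cstdint>

unsigned RiscvInstrLength(uint16_t first_parcel) {
  if ((first_parcel & 0x03) != 0x03)
    return 2; // compressed (RVC) encoding
  if ((first_parcel & 0x1f) != 0x1f)
    return 4; // standard 32-bit encoding
  if ((first_parcel & 0x3f) == 0x1f)
    return 6; // 48-bit encoding
  if ((first_parcel & 0x7f) == 0x3f)
    return 8; // 64-bit encoding
  return 0;   // longer or reserved encodings
}
```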
This enables XML output for enums and adds enums for 2 fields on AArch64
Linux:
* mte_ctrl.tcf, which controls how tag faults are delivered.
* fpcr.rmode, which sets the rounding mode for floating point
operations.
The other one we could do is cpsr.btype, but it is not clear what would
be useful there, so I'm not including it in this change.
This extends the existing register fields support from AArch64 Linux
to AArch64 FreeBSD. So you will now see output like this:
```
(lldb) register read cpsr
cpsr = 0x60000200
= (N = 0, Z = 1, C = 1, V = 0, DIT = 0, SS = 0, IL = 0, SSBS = 0, D = 1, A = 0, I = 0, F = 0, nRW = 0, EL = 0, SP = 0)
```
Linux and FreeBSD both have HWCAP/HWCAP2, so the detection mechanism
is the same, and I've renamed the detector class to reflect that.
I have confirmed that FreeBSD's treatment of CPSR (spsr, as the kernel
calls it) is similar enough that we can use the same field information
(see `sys/arm64/include/armreg.h` and `PSR_SETTABLE_64`).
For testing, I've enabled the same live process test as on Linux and
added a shell test using an existing FreeBSD core file. Note that the
latter does not need XML support because, when reading a core file, we
are not sending the information via target.xml; it's just internal to
LLDB.
This reverts commit d9e659c538.
I could not reproduce the macOS ASAN failure locally, but I narrowed it
down to the test `test_many_fields_same_enum`. This test shares an enum
between x0, which is 64-bit, and cpsr, which is 32-bit.
My theory is that when it does `register read x0`, an enum type is
created where the underlying enumerators are 64-bit, matching the
register size.
Then it does `register read cpsr`, which used the cached enum type, but
this register is 32-bit. This caused lldb to try to read an 8-byte
value out of a 4-byte allocation:
```
READ of size 8 at 0x60200014b874 thread T0
<...>
=>0x60200014b800: fa fa fd fa fa fa fd fa fa fa fd fa fa fa[04]fa
```
To fix this I've added the register's size in bytes to the constructed
enum type's name. This means that x0 uses
`__lldb_register_fields_enum_some_enum_8` and cpsr uses
`__lldb_register_fields_enum_some_enum_4`. If any other registers use
this enum and are read, they will use the cached type as long as their
size matches; otherwise we make a new type.
Fixes #92541
When e69a3d18f4 added fallback register
layouts, it assumed that the choices were target XML with registers, or
no target XML at all.
In the linked issue, a user has a debug stub that does have target XML,
but it's missing register information.
This caused us to finalize the register information using an empty set
of registers from target XML, and then fail an assert when we attempted
to add the fallback set, since we think we've already completed the
register information.
This change adds a check to prevent that first call and expands the
existing tests to check each architecture without target XML and with
target XML missing register information.
This teaches lldb to parse the enum XML elements sent by lldb-server,
and make use of the information in `register read` and `register info`.
The format is described in
https://sourceware.org/gdb/current/onlinedocs/gdb.html/Enum-Target-Types.html.
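The XML has this shape (illustrative names and values):
```
<enum id="levels_type" size="4">
  <evalue name="low" value="0"/>
  <evalue name="high" value="1"/>
</enum>
```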
The target XML parser will drop any invalid enum or evalue. If we find
multiple evalues for the same value, we will use the last one we find.
The order of evalues from the XML is preserved, as there may be a good
reason they are not in numerical order.