debugserver isn't saving and restoring the SVE/SME register state around
inferior function calls.
Making arbitrary function calls while in Streaming SVE mode is generally
a poor idea because a NEON instruction can be hit, crashing the
expression execution (which is how I missed this), but such calls should
be handled correctly if the user knows it is safe to do.
Re-landing this change after fixing an incorrect behavior on systems
without SME support.
rdar://146886210
This reverts commit 4e40c7c4bd.
arm64 CI is getting a failure in
lldb-api.tools/lldb-server.TestGdbRemoteRegisterState.py
with this commit; need to investigate and re-land.
debugserver isn't saving and restoring the SVE/SME register state around
inferior function calls.
Making arbitrary function calls while in Streaming SVE mode is generally
a poor idea because a NEON instruction can be hit, crashing the
expression execution (which is how I missed this), but such calls should
be handled correctly if the user knows it is safe to do.
rdar://146886210
We've been dealing with UBSAN issues around this code for some time now
(see `9c36859b33b386fbfa9599646de1e2ae01158180` and
`1a2122e9e9d1d495fdf337a4a9445b61ca56df6f`). On recent macOS versions, a
UBSAN-enabled debugserver will crash when performing a `memcpy` of the
input `mach_exception_data_t`. The pointer to the beginning of the
exception data may not be aligned on a doubleword boundary, leading to
UBSAN failures such as:
```
$ ./bin/debugserver 0.0.0.0:5555 /Volumes/SSD/llvm-builds/llvm-worktrees/clang-work/build-sanitized-release/tools/lldb/test/Shell/Recognizer/Output/verbose_trap.test.tmp.out
/Volumes/SSD/llvm-builds/llvm-worktrees/clang-work/lldb/tools/debugserver/source/MacOSX/MachException.cpp:35:12: runtime error: store to misaligned address 0x00016ddfa634 for type 'mach_exception_data_type_t *' (aka 'long long *'), which requires 8 byte alignment
0x00016ddfa634: note: pointer points here
02 00 00 00 03 00 01 00 00 00 00 00 11 00 00 00 00 00 00 00 00 00 00 00 08 00 00 00 00 00 00 00
^
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /Volumes/SSD/llvm-builds/llvm-worktrees/clang-work/lldb/tools/debugserver/source/MacOSX/MachException.cpp:35:12
```
Work around these failures by pretending the input data is a `char*`
buffer.
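A rough sketch of the idea (simplified; the actual change factors this
into the static `AppendExceptionData` mentioned below):
```
#include <cstring>
#include <vector>

// Stand-in for the Mach typedef from <mach/exception_types.h>.
typedef long long mach_exception_data_type_t;

// Copy `count` exception words out of a buffer that may not be 8-byte
// aligned. memcpy through a char* places no alignment requirement on the
// source, so there is nothing for UBSAN to complain about.
static void AppendExceptionData(std::vector<mach_exception_data_type_t> &out,
                                const void *data, size_t count) {
  const char *bytes = static_cast<const char *>(data);
  for (size_t i = 0; i < count; ++i) {
    mach_exception_data_type_t word;
    std::memcpy(&word, bytes + i * sizeof(word), sizeof(word));
    out.push_back(word);
  }
}
```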
Drive-by changes:
* I factored out some duplicated code into a static
`AppendExceptionData` and made the types consistent.
---------
Co-authored-by: Jonas Devlieghere <jonas@devlieghere.com>
In 2013 we added the QSaveRegisterState and QRestoreRegisterState
packets to checkpoint a thread's register state while executing an
inferior function call, instead of using the g packet to read all
registers into lldb, then the G packet to set them again after the
function call.
Since then, lldb has not sent g/G (except as a bug) - it either asks for
registers individually (p/P) or asks debugserver to save and restore
the entire register set with these lldb extensions.
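For reference, the save/restore exchange looks roughly like this
(illustrative transcript; checksums elided as xx):
```
send packet: $QSaveRegisterState;thread:1f03;#xx
read packet: $1#xx                      <- save id for this checkpoint
    ... inferior function call executes ...
send packet: $QRestoreRegisterState:1;thread:1f03;#xx
read packet: $OK#xx
```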
Felipe recently had a codepath that fell back to using g/G and found
that it does not work with the modern signed fp/sp/pc/lr registers that
we can get -- it sidesteps the clearing of the non-addressable
bits that we do when reading/writing them, and results in a crash. (
https://github.com/llvm/llvm-project/pull/132079 )
Instead of fixing that issue, I am removing g/G from debugserver because
lldb does not need it, and it would be easy for future bugs to creep
into a packet that lldb should not use but can accidentally fall back
to, resulting in subtle bugs.
This does mean that a debugger using debugserver on darwin which doesn't
use QSaveRegisterState/QRestoreRegisterState will need to fall back to
reading & writing each register individually. I'm open to re-evaluating
this decision if this proves to be needed and supporting these lldb
extensions turns out to be onerous.
This fixes an uncommon bug with debugserver controlling an inferior
process that is hitting an internal breakpoint & continuing when
multiple interrupts are sent by SB API to lldb -- with the result being
that lldb never stops the inferior process, ignoring the interrupt/stops
being sent by the driver layer (Xcode, in this case).
In the reproducing setup (which required a machine with unique timing
characteristics), lldb is sent SBProcess::Stop and then shortly after,
SBProcess::SendAsyncInterrupt. The driver process only sees that the
inferior is publicly running at this point, even though it's hitting an
internal breakpoint (new dylib being loaded), disabling the bp,
instruction-stepping, re-enabling the breakpoint, then continuing.
The packet sequence lldb sends to debugserver looks like
1. vCont;s // instruction step
2. ^c // async interrupt
3. Z.... // re-enable breakpoint
4. c // resume inferior execution
5. ^c // async interrupt
When debugserver needs to interrupt a running process
(`MachProcess::Interrupt`), the main thread in debugserver sends a
SIGSTOP posix signal to the inferior process, and notes that it has sent
this signal by setting `m_sent_interrupt_signo`.
When we send the first async interrupt while instruction stepping, the
signal is sent (probably after the inferior has already stopped), but
debugserver can only *receive* the mach exception that includes the
SIGSTOP when the process is running. So at the point of step (3), we have a
SIGSTOP outstanding in the kernel, and
`m_sent_interrupt_signo` is set to SIGSTOP.
When we resume the inferior (`c` in step 4), debugserver sees that
`m_sent_interrupt_signo` is still set for an outstanding SIGSTOP, but at
this point we've already stopped so it's an unnecessary stop. It records
that (1) we've got a SIGSTOP still coming that debugserver sent and (2)
we should ignore it by also setting `m_auto_resume_signo` to the same
signal value.
Once we've resumed the process, the mach exception thread
(`MachTask::ExceptionThread`) receives the outstanding mach exception,
adds it to a queue to be processed
(`MachProcess::ExceptionMessageReceived`) and when we've collected all
outstanding mach exceptions, it calls
`MachProcess::ExceptionMessageBundleComplete` to evaluate them.
`MachProcess::ExceptionMessageBundleComplete` halts the process (without
updating the MachProcess `m_state`) while evaluating them. It sees that
this incoming SIGSTOP was meant to be ignored (`m_auto_resume_signo` is
set), so it `MachProcess::PrivateResume`'s the process again.
At the same time `MachTask::ExceptionThread` is receiving and processing
the mach exception, `MachProcess::Interrupt` is called with another interrupt that
debugserver received. This method checks that we're still eStateRunning
(we are) but then sees that we have an outstanding SIGSTOP already
(`m_sent_interrupt_signo`) and does nothing, assuming that we will stop
shortly from that one. It then returns to call
`RNBRemote::HandlePacket_last_signal` to print the status -- but because
the process is still `eStateRunning`, this does nothing.
So the first ^c (resulting in a pending SIGSTOP) is received and we
resume the process silently. And the second ^c is ignored because we've
got one interrupt already being processed.
The fix was very simple. In `MachProcess::Interrupt` when we detect that
we have a SIGSTOP out in the wild (`m_sent_interrupt_signo`), we need to
clear `m_auto_resume_signo` which is used to indicate that this SIGSTOP
is meant to be ignored, because it was from before our most recent
resume.
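A minimal sketch of the shape of the fix, with hypothetical pared-down
types (the real members live on MachProcess):
```
#include <csignal>

struct InterruptState {
  int m_sent_interrupt_signo = 0; // SIGSTOP we already sent, if any
  int m_auto_resume_signo = 0;    // signal to silently auto-resume past
};

// Called with m_exception_and_signal_mutex held.
void Interrupt(InterruptState &s) {
  if (s.m_sent_interrupt_signo == SIGSTOP) {
    // A SIGSTOP from an earlier interrupt is still in flight, but it was
    // marked "ignore and auto-resume" before the most recent resume. Clear
    // that mark so the eventual mach exception is reported as a real stop.
    s.m_auto_resume_signo = 0;
    return; // the outstanding SIGSTOP will deliver the stop
  }
  // ...otherwise send a fresh SIGSTOP to the inferior and record it in
  // m_sent_interrupt_signo...
}
```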
MachProcess::Interrupt holds the `m_exception_and_signal_mutex` mutex
already (after Jonas's commit last week), and all of
`MachProcess::ExceptionMessageBundleComplete` holds that same mutex, so
we know we can modify `m_auto_resume_signo` here and it will be handled
correctly when the outstanding mach exception is finally processed.
rdar://145872120
The mutex in RNBRemote::CommDataReceived protects m_rx_packets and
should extend to the end of the function to cover the read where we
check if the list is empty.
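The pattern, as a simplified sketch with hypothetical member names rather
than the actual RNBRemote code:
```
#include <condition_variable>
#include <mutex>
#include <string>
#include <vector>

struct Remote {
  std::mutex m_mutex; // protects m_rx_packets
  std::condition_variable m_cv;
  std::vector<std::string> m_rx_packets;

  void CommDataReceived(const std::string &new_data) {
    std::lock_guard<std::mutex> guard(m_mutex);
    m_rx_packets.push_back(new_data);
    // The emptiness check reads m_rx_packets too, so the guard's scope must
    // extend to the end of the function instead of ending after push_back.
    if (!m_rx_packets.empty())
      m_cv.notify_one();
  }
};
```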
This PR fixes a race condition in debugserver where the main thread
calls MachProcess::Interrupt, setting `m_sent_interrupt_signo` while the
exception monitoring thread is checking the value of the variable.
I was on the fence between introducing a new mutex and reusing the
existing exception mutex. With the notable exception of
MachProcess::Interrupt, all the other places accessing the variable were
already locking this mutex. I renamed the mutex to
make it clear that it's now protecting more than the exception messages.
Jason, while investigating a real issue, had a suspicion there was a race
condition related to interrupts, and I was able to narrow it down by
building debugserver with TSan.
Recognize the visionOS Triple::OSType::XROS os type. Some of these have
already been landed on main, but I reviewed the downstream sources and
there were a few that still needed to be landed upstream.
**Note:** The register reading and writing depends on new register
flavor support in thread_get_state/thread_set_state in the kernel, which
will be first available in macOS 15.4.
The Apple M4 line of cores includes the Scalable Matrix Extension (SME)
feature. The M4s do not implement Scalable Vector Extension (SVE),
although the processor is in Streaming SVE Mode when the SME is being
used. The most obvious side effects of being in SSVE Mode are that (on
the M4 cores) NEON instructions cannot be used, and watchpoints may get
false positives because the address comparisons are done at a reduced
granularity.
When SSVE mode is enabled, the kernel will provide the Streaming Vector
Length register, which is a maximum of 64 bytes on the M4. Also
provided are SVCR (with bits indicating whether SSVE mode and SME mode
are enabled), TPIDR2, and SVL. Then come the SVE registers Z0..31 (each
SVL bytes long), P0..15 (SVL/8 bytes each), the ZA matrix register
(SVL*SVL bytes), and, because the M4 supports SME2, the ZT0 register
(64 bytes).
When SSVE/SME are disabled, none of these registers are provided by the
kernel - reads and writes of them will fail.
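Concretely, with the M4's maximum SVL of 64 bytes, the per-thread sizes
work out as follows (a worked example, not code from the patch):
```
constexpr unsigned svl = 64;              // M4 maximum SVL, in bytes
constexpr unsigned z_bytes = svl;         // each of Z0..Z31: 64 bytes
constexpr unsigned p_bytes = svl / 8;     // each of P0..P15: 8 bytes
constexpr unsigned za_bytes = svl * svl;  // ZA matrix: 4096 bytes
constexpr unsigned zt0_bytes = 64;        // ZT0 (SME2): fixed 64 bytes
```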
Unlike Linux, lldb cannot modify the SVL through a thread_set_state
call, or change the processor state's SSVE/SME status. There is also no
way for a process to request a lowered SVL size today, so the work that
David did to handle VL/SVL changing while stepping through a process is
not an issue on Darwin today. But debugserver should be providing
everything necessary so we can reuse all of David's work on resizing the
register contexts in lldb if it happens in the future. debugserver
sends svl, svcr, and tpidr2 in the expedited registers when a thread
stops, if SSVE or SME mode is enabled (if the kernel allows it to read
the ARM_SME_STATE register set).
While the maximum SVL is 64 bytes on M4, the AArch64 maximum possible
SVL is 256 bytes; this would give us a 64k ZA register. If debugserver sized
all of its register contexts assuming the largest possible SVL, we could
easily use 2MB more memory for the register contexts of all threads in a
process -- and on iOS et al, processes must run within a small memory
allotment and this would push us over that.
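The arithmetic behind that estimate, with an illustrative thread count of
my own choosing:
```
constexpr unsigned max_svl = 256;              // architectural max SVL, bytes
constexpr unsigned za_max = max_svl * max_svl; // 65536 bytes: the 64k ZA
constexpr unsigned threads = 32;               // illustrative thread count
constexpr unsigned total = za_max * threads;   // 2 MB for ZA storage alone
```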
Much of the work in debugserver was changing the arm64 register context
from being a static compile-time array of register sets, to being
initialized at runtime if debugserver is running on a machine with SME.
The ZA is only created to the machine's actual maximum SVL. The size of
the 32 SVE Z registers is less significant so I am statically allocating
those to the architecturally largest possible SVL value today.
Also, debugserver includes information about registers that share the
same part of the register file, e.g. S0 and D0 are the lower parts of
the NEON 128-bit V0 register. And when running on an SME machine, v0 is
the lower 128 bits of the SVE Z0 register. So the register maps used
when defining the VFP registers must differ depending on the
capabilities of the cpu at runtime.
I also changed register reading in debugserver, where formerly when
debugserver was asked to read a register, and the thread_get_state read
of that register failed, it would return all zeros. This is necessary
when constructing a `g` packet that gets all registers - because there
is no separation between register bytes, the offsets are fixed. But when
we are asking for a single register (e.g. Z0) when not in SSVE/SME mode,
this should return an error.
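As a sketch of the new behavior (hypothetical helper and signature, not
the actual debugserver code):
```
#include <cstdint>
#include <cstring>

enum class ReadResult { Success, Error };

ReadResult ReadRegisterBytes(bool building_g_packet, bool kernel_read_ok,
                             uint8_t *buf, size_t size) {
  if (kernel_read_ok)
    return ReadResult::Success;
  if (building_g_packet) {
    // The 'g' payload has fixed offsets for every register, so a register
    // the kernel won't give us must still occupy its slot; pad with zeros.
    std::memset(buf, 0, size);
    return ReadResult::Success;
  }
  // A single-register read (e.g. Z0 while not in SSVE/SME mode) should
  // surface the failure to the debugger instead.
  return ReadResult::Error;
}
```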
This does mean that when you're running on an SME capable machine, but
not in SME mode, and do `register read -a`, lldb will report that 48 SVE
registers were unavailable and 5 SME registers were unavailable. But
that's only when `-a` is used.
The register reading and writing depends on new register flavor support
in thread_get_state/thread_set_state in the kernel, which is not yet in
a release. The test case I wrote is skipped on current OSes. I pilfered
the SME register setup from some of David's existing SME test files;
there were a few Linux-specific details in those tests that made them
hard to reuse directly on Darwin.
rdar://121608074
This file is covered under the Apple open source license rather than the
LLVM license. Presumably this was an oversight, but it doesn't really
matter as this file is unused. Remove it altogether.
Remove support for ASL (Apple System Log) which has been deprecated
since macOS 10.12. Fixes the following warnings:
warning: 'asl_new' is deprecated: first deprecated in macOS 10.12 -
os_log(3) has replaced asl(3)
warning: 'asl_set' is deprecated: first deprecated in macOS 10.12 -
os_log(3) has replaced asl(3)
warning: 'asl_vlog' is deprecated: first deprecated in macOS 10.12 -
os_log(3) has replaced asl(3)
If lldb tries to attach with debugserver to a process that is marked
'Translated', it will exec the Rosetta debugserver to handle the debug
session without checking if it is present. If there is a configuration
that is somehow missing it, the attach will fail ungracefully.
rdar://135641680
macOS 10.15 added a "full" x86_64 GPR thread state flavor, equivalent to
the normal one but with DS, ES, SS, and GSbase added. This flavor can
only be used with processes that install a custom LDT (functionality
that was also added in 10.15 and is used by apps like Wine to execute
32-bit code).
Along with allowing DS, ES, SS, and GSbase to be viewed/modified, using
the full flavor is necessary when debugging a thread executing 32-bit
code.
If thread_set_state() is used with the regular thread state flavor, the
kernel resets CS to the 64-bit code segment (see
[set_thread_state64()](94d3b45284/osfmk/i386/pcb.c (L723)),
which makes debugging impossible.
There's no way to detect whether the full flavor is available, so we try
to use it and fall back to the regular one if it's not available.
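A sketch of that try-then-fall-back logic (my simplification, x86_64 macOS
only; the flavor names are the ones in the SDK's thread_status.h):
```
#include <mach/mach.h>

// Try the "full" GPR flavor first; if the kernel refuses it (no custom
// LDT installed), fall back to the regular 64-bit flavor.
static kern_return_t GetGPRState(thread_act_t thread,
                                 x86_thread_full_state64_t &full,
                                 x86_thread_state64_t &regular,
                                 bool &have_full) {
  mach_msg_type_number_t count = x86_THREAD_FULL_STATE64_COUNT;
  kern_return_t kr =
      thread_get_state(thread, x86_THREAD_FULL_STATE64,
                       reinterpret_cast<thread_state_t>(&full), &count);
  have_full = (kr == KERN_SUCCESS);
  if (have_full)
    return kr;
  count = x86_THREAD_STATE64_COUNT;
  return thread_get_state(thread, x86_THREAD_STATE64,
                          reinterpret_cast<thread_state_t>(&regular), &count);
}
```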
A downside is that this patch exposes the DS, ES, SS, and GSbase
registers for all x86_64 processes, even though they are not populated
unless the full thread state is available.
I'm not sure if there's a way to tell LLDB that a register is
unavailable. The classic GDB `g` command [allows returning
`x`](https://sourceware.org/gdb/current/onlinedocs/gdb.html/Packets.html#Packets)
to denote unavailable registers, but it seems like the debug server uses
newer commands like `jThreadsInfo` and I'm not sure if those have the
same support.
Fixes #57591
(also filed as Apple FB11464104)
Unify the implementations of WaitForSetEvents and WaitForEventsToReset.
The former deals with the possibility of a race between the timeout and
the predicate while the latter does not. The functions were also
inconsistent in when they would recompute the mask. This patch unifies
the two implementations and makes them behave exactly the same modulo
the predicate.
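The unified shape is essentially a predicate-checked timed wait. A generic
sketch using std::condition_variable (the real code is built on the
pthread-based PThreadEvent):
```
#include <chrono>
#include <condition_variable>
#include <mutex>

template <typename Predicate>
bool WaitWithTimeout(std::mutex &m, std::condition_variable &cv,
                     std::chrono::steady_clock::time_point deadline,
                     Predicate pred) {
  std::unique_lock<std::mutex> lock(m);
  // wait_until re-checks the predicate on every wakeup and after the
  // deadline, which closes the race where the timeout fires just as the
  // predicate becomes true. Returns false only if the deadline passed
  // *and* pred() is still false.
  return cv.wait_until(lock, deadline, pred);
}
```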
rdar://130562344
Passing the result of c_str() to a stream is slow and redundant. This
change removes unnecessary c_str() calls and uses the string object
directly.
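The flagged pattern and the fix, side by side:
```
#include <ostream>
#include <string>

void Example(std::ostream &os, const std::string &s) {
  os << s.c_str(); // flagged: extra strlen() walk; embedded NULs truncate
  os << s;         // fix: stream the std::string object directly
}
```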
Caught by cppcheck -
lldb/tools/debugserver/source/JSON.cpp:398:19: performance: Passing the
result of c_str() to a stream is slow and redundant. [stlcstrStream]
lldb/tools/debugserver/source/JSON.cpp:408:64: performance: Passing the
result of c_str() to a stream is slow and redundant. [stlcstrStream]
lldb/tools/debugserver/source/JSON.cpp:420:54: performance: Passing the
result of c_str() to a stream is slow and redundant. [stlcstrStream]
lldb/tools/debugserver/source/JSON.cpp:46:13: performance: Passing the
result of c_str() to a stream is slow and redundant. [stlcstrStream]
Fix #91212
In DNBArchImplARM64.cpp I'm doing
```
#if __has_feature(ptrauth_calls) && defined(__LP64__)
```
And the preprocessor warns that this is not defined behavior. The new
code checks if ptrauth_calls is available and if this is being compiled
64-bit (i.e. arm64e), and defines a single DEBUGSERVER_IS_ARM64E when
both are true.
I did have to duplicate one DNBLogThreaded() call which itself is a
macro, and using an ifdef in the middle of macro arguments also got me a
warning from the preprocessor.
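A sketch of the resulting guard (clang-only preprocessor code, matching
how debugserver is built; the macro name is from the commit text):
```
// Compute the combined condition once, in separate well-defined
// directives, instead of mixing __has_feature() and defined() in one #if.
#if __has_feature(ptrauth_calls)
#if defined(__LP64__)
#define DEBUGSERVER_IS_ARM64E 1
#endif
#endif

// Every later use becomes a single, unambiguous check:
#if DEBUGSERVER_IS_ARM64E
// ... arm64e-only code paths ...
#endif
```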
While testing this for all the different targets, I found a DNBError
initialization that accepts a c-string, but I was passing in a
printf-style format c-string and an argument. I now create the string
before the call and pass in the constructed string.
rdar://127129242
The first half of this patch fixes a long-standing annoyance: if I
attach to debugserver with lldb while it is waiting for an lldb
connection, the syscall is interrupted and not retried, so debugserver
exits immediately.
The second half is a request from another tool that is communicating
with debugserver, that we retry reads on our sockets in the same way. I
haven't dug into the details of their communication that make this
necessary, but from what I've been able to find reading the POSIX API
docs, this is fine.
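The retry being applied is the classic EINTR loop; a generic sketch:
```
#include <cerrno>
#include <unistd.h>

ssize_t ReadWithRetry(int fd, void *buf, size_t len) {
  ssize_t n;
  do {
    n = ::read(fd, buf, len);
    // read() returns -1/EINTR when a signal (e.g. another debugger
    // attaching to debugserver itself) interrupts it before any data
    // arrives; that is not a fatal error, so simply retry.
  } while (n == -1 && errno == EINTR);
  return n;
}
```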
rdar://117113298
Pavel added an extension to lldb's gdb remote serial protocol that
allows the debug stub to append an error message (ascii hex encoded)
after an error response packet Exx. This was added in 2017 in
https://reviews.llvm.org/D34945 . lldb sends the
QErrorStringInPacketSupported packet and then the remote stub may add
these error strings.
debugserver has two bugs in its use of extended error messages: the
vAttach family would send the extended error string without checking if
the mode had been enabled, and qLaunchSuccess would not properly format
its error response packet (missing the hex digits, not asciihex
encoding the string).
There is also a bug in the HandlePacket_D (detach) packet where the
error packets did not include hex digits, but this one does not append
an error string.
I'm adding a new RNBRemote::SendErrorPacket() and routing all error
packet returns through this one method. It takes an optional second
string which is the longer error message; it now handles appending it to
the Exx response or not, depending on the QErrorStringInPacketSupported
state. I updated all packets to send their errors via this method.
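The shape of the new method, as a simplified sketch (the real signature
may differ):
```
#include <cstdint>
#include <cstdio>
#include <string>

// Build "Exx" or, when QErrorStringInPacketSupported was negotiated,
// "Exx;<ascii-hex message>". Centralizing this keeps every error path
// formatted consistently.
static std::string BuildErrorPacket(uint8_t err, const std::string &msg,
                                    bool error_strings_enabled) {
  char hdr[4];
  std::snprintf(hdr, sizeof(hdr), "E%2.2x", err); // always the hex digits
  std::string packet(hdr);
  if (error_strings_enabled && !msg.empty()) {
    packet += ';';
    for (unsigned char c : msg) { // ascii-hex encode the message
      char hex[3];
      std::snprintf(hex, sizeof(hex), "%2.2x", c);
      packet += hex;
    }
  }
  return packet;
}
```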
debugserver will not compress small packets; the overhead of the
compression header means compression only pays off for larger packets. I
default to
384 bytes in debugserver, and added an option for lldb to request a
different cutoff. This option has never been used in lldb in the past
nine years, so I don't think there's any point to keeping it around.
debugserver on arm64 devices can manage both Byte Address Select
watchpoints (1-8 bytes) and MASK watchpoints (8 bytes to 2 gigabytes). This
adds a SupportedWatchpointTypes key to the QSupported response from
debugserver with a list of these, so lldb can take full advantage of
them when creating larger regions with a single hardware watchpoint.
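Illustratively, the relevant fragment of the qSupported reply (value names
follow the lldb-gdb-remote.txt documentation; surrounding keys abbreviated):
```
send packet: $qSupported:...#xx
read packet: $...;SupportedWatchpointTypes=aarch64-mask,aarch64-bas;...#xx
```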
Also add documentation for this, and two other lldb extensions, to the
lldb-gdb-remote.txt documentation.
Re-enable TestLargeWatchpoint.py on Darwin systems when testing with the
in-tree built debugserver. I can remove the "in-tree built debugserver"
restriction in the future when this new key is handled by an Xcode
debugserver.
This patch is the next piece of work in my Large Watchpoint proposal,
https://discourse.llvm.org/t/rfc-large-watchpoint-support-in-lldb/72116
This patch breaks a user's watchpoint into one or more
WatchpointResources which reflect what the hardware registers can cover.
This means we can watch objects larger than 8 bytes, and we can watch
unaligned address ranges. On a typical 64-bit target with 4 watchpoint
registers you can watch 32 bytes of memory if the start address is
doubleword aligned.
Additionally, if the remote stub implements AArch64 MASK style
watchpoints (e.g. debugserver on Darwin), we can watch any power-of-2
size region of memory up to 2GB, aligned to that same size.
I updated the Watchpoint constructor and CommandObjectWatchpoint to
create a CompilerType of Array<UInt8> when the size of the watched
region is greater than pointer-size and we don't have a variable type to
use. For pointer-size and smaller, we can display the watched granule as
an integer value; for larger-than-pointer-size we will display as an
array of bytes.
I have `watchpoint list` now print the WatchpointResources used to
implement the watchpoint.
I added a WatchpointAlgorithm class which has a top-level static method
that takes an enum flag mask WatchpointHardwareFeature and a user
address and size, and returns a vector of WatchpointResources covering
the request. It does not take into account the number of watchpoint
registers the target has, or the number still available for use. Right
now there is only one algorithm, which monitors power-of-2 regions of
memory. For up to pointer-size, this is what Intel hardware supports.
AArch64 Byte Address Select watchpoints can watch any number of
contiguous bytes in a pointer-size memory granule; that is not currently
supported, so if you ask to watch bytes 3-5, the algorithm will watch
the entire doubleword (8 bytes). The newly default "modify" style means
we will silently ignore modifications to bytes outside the watched range.
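To make the power-of-2 idea concrete, here is a single-region
simplification of my own (the real WatchpointAlgorithms code returns a
vector of resources and can split a request across several, as the 0x12e0
example below shows):
```
#include <cstdint>

struct Region { uint64_t addr; uint64_t size; };

Region PowerOf2Cover(uint64_t user_addr, uint64_t user_size) {
  uint64_t size = 1;
  // Grow to the next power-of-2 size whose naturally aligned region
  // contains the entire requested range.
  while (size < user_size ||
         (user_addr & ~(size - 1)) + size < user_addr + user_size)
    size <<= 1;
  return {user_addr & ~(size - 1), size};
}
// e.g. watching bytes 3-5 (addr 3, size 3) yields {0, 8}: the whole
// doubleword, matching the behavior described above.
```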
I've temporarily skipped TestLargeWatchpoint.py for all targets. It was
only run on Darwin when using the in-tree debugserver, which was a proxy
for "debugserver supports MASK watchpoints". I'll be adding the
aforementioned feature flag from the stub and enabling full mask
watchpoints when a debugserver with that feature is enabled, and
re-enable this test.
I added a new TestUnalignedLargeWatchpoint.py which only has one test
but it's a great one, watching a 22-byte range that is unaligned and
requires four 8-byte watchpoints to cover.
I also added a unit test, WatchpointAlgorithmsTests, which has a number
of simple tests against WatchpointAlgorithms::PowerOf2Watchpoints. I
think there are interesting possible alternative approaches to how we
cover these; I note in the unit test that a user requesting a watch on
address 0x12e0 of 120 bytes will be covered by two watchpoints today, a
128-byte one at 0x1280 and another at 0x1300. But it could be done with
a 16-byte watchpoint at 0x12e0 and a 128-byte one at 0x1300, which would
have fewer false positives/private stops. As we try refining this one,
it's helpful
to have a collection of tests to make sure things don't regress.
I tested this on arm64 macOS, (genuine) x86_64 macOS, and AArch64
Ubuntu. I have not modified the Windows process plugins yet; I might try
that as a standalone patch. I'd be making the change blind, but the
necessary changes (see ProcessGDBRemote::EnableWatchpoint) are pretty
small, so it might be obvious enough that I can change it and see what
the Windows CI thinks.
There isn't yet a packet (or a qSupported feature query) for the gdb
remote serial protocol stub to communicate its watchpoint capabilities
to lldb. I'll be doing that in a patch right after this is landed,
having debugserver advertise its capability of AArch64 MASK watchpoints,
and have ProcessGDBRemote add eWatchpointHardwareArmMASK to
WatchpointAlgorithms so we can watch larger than 32-byte requests on
Darwin.
I haven't yet tackled WatchpointResource *sharing* by multiple
Watchpoints. This is all part of the goal: when we may be watching a
larger memory range than the user requested, and they then add another
watchpoint next to their first request, it may be covered by the same
WatchpointResource (hardware watchpoint register). Also, one "read"
watchpoint and one "write" watchpoint on the same memory granule need to
be handled, making the WatchpointResource cover all requests.
As WatchpointResources aren't shared among multiple Watchpoints yet,
there's no handling of running the conditions/commands/etc on multiple
Watchpoints when their shared WatchpointResource is hit. The goal beyond
"large watchpoint" is to unify (much more) the Watchpoint and Breakpoint
behavior and commands. I have a feeling I may be slowly chipping away at
this for a while.
Re-landing this patch after fixing two undefined behaviors in
WatchpointAlgorithms found by UBSan and by failures on different
CI bots.
rdar://108234227
There's a bad interaction between the macOS 14 dyld and the "dyld_sim"
shim that comes from older (iOS 15) simulator downloads that results in
dyld reporting some modules twice in the return from the dyld callback
to list modules. The records were identical, but lldb wasn't happy with
seeing the duplicates...
Since it's not possible to load two different modules at the same
address, this change just picks the first instance of any entries that
have the same load address.
There really isn't a good way to test this patch.
Currently when you interrupt a:
(lldb) process attach -w -n some_process
lldb just closes the connection to the stub and kills the
lldb_private::Process it made for the attach. The stub at the other end
notices the connection go down and exits because of that. But when
communication to a device is handled through some kind of proxy server
which isn't as well behaved as one would wish, that signal might not be
reliable, causing debugserver to persist on the machine, waiting to
steal the next instance of that process.
We can work around those failures by sending an explicit interrupt
before closing down the connection. The stub will also have to be
waiting for the interrupt for this to make any difference. I changed
debugserver to do that.
I didn't make the equivalent change in lldb-server. So long as you
aren't faced with a flaky connection, this should not be necessary.
MachProcess has a MachTask as an ivar. In the MachProcess dtor, we call
MachTask::Clear() to clear its state, before running the dtor of all our
ivars, including the MachTask one.
When we attach on darwin, MachProcess calls
MachTask::StartExceptionThread which does the task_for_pid and then
starts a thread to listen for mach messages. Then MachProcess calls
ptrace(PT_ATTACHEXC). If that ptrace() fails, MachProcess will call
MachTask::Clear. But the exception thread is now up & running and is not
stopped; its ivars will be reset by the Clear() method, and its object
will be freed after the dtor runs.
Actually eliciting a crash in this scenario is very timing sensitive; I
hand-modified debugserver to fail the PT_ATTACHEXC, trying to simulate
it on my desktop, and was unable to. But looking at the source, and at
an occasional crash report we've received, it's clear that this is
possible.
rdar://117521198
In https://reviews.llvm.org/D136620 I changed debugserver to stop using
the kernel-provided functions
arm_thread_state64_get_{pc,lr,sp,fp} to postprocess those four registers
on aarch64 systems after we thread_get_state() them. The kernel stores
these four registers with signing internally, either from the inferior
process' actual signing, or its own.
When a program crashed by doing an authenticated BL to an address with
improper signing, that improperly signed pc would be given to
debugserver via thread_get_state. debugserver would run it through
arm_thread_state64_get_pc() and then crash when authenticating &
stripping the value on newer Mac hardware.
To avoid debugserver crashing on a crashed inferior process, I switched
from using these system functions to strip the values, to simply
clearing the bits outright in debugserver.
However, lr is a special case where the inferior may have signed this
value (against the stack pointer value at the time). Or it may not yet
have any authentication bits, right after a BL. In the latter case, the
kernel will add its own auth bits while the value is stored inside the
kernel. In the case of a user lr value, we cannot authenticate it in
debugserver without knowing the sp value it was signed against (and the
way it is signed is not specified by the ABI) so an "improperly" signed
lr (whatever that means) won't cause debugserver to crash.
debugserver can thread_get_state the inferior's lr, run it through
arm_thread_state64_get_lr(), and get the actual signed 64-bit value that
the inferior process is using. And the specifics of how that lr is
signed may be important for debugging the process, instead of how I am
currently clearing the auth bits outright.
This patch reverts that change for lr only, and also adds new logging
to debugserver specifically for the four sp/fp/lr/pc values that
thread_get_state hands to us, before we process them at all.
The mach exception data may not be doubleword aligned when we receive
it. We use memcpy to align it later in this method when we save
the data, but for printing the value at the top, we need to do the
same or ubsan can trigger when LOG_EXCEPTIONS is enabled in
debugserver.
Differential Revision: https://reviews.llvm.org/D158312
On the Swift forums, people are disabling SIP in order to debug
processes that are missing the get-task-allow entitlement. Improve the
error to
give developers a hint at the potential issues.
rdar://113704200
Differential revision: https://reviews.llvm.org/D157640
Just about every file in the lldb project is built with C++ enabled.
Unless I've missed something, these macro guards don't really accomplish
very much.
Differential Revision: https://reviews.llvm.org/D157538
When we attach to a process, we task_for_pid(), ptrace(), and then
we try to halt execution of the process and time out after thirty
seconds if we cannot interrupt it. At this point, we must assume
we have no control of the inferior process and the attach has failed.
The error message we return currently is "operation timed out"; this
change improves on that error message to make it more clear what is
going on. Thanks to Jim for pointing this out.
rdar://101152233
When debugserver is attaching to a process, it first task_for_pid()'s
and then ptrace(PT_ATTACHEXC)'s. When that ptrace() fails to complete,
we are in a semi-attached state that we need to give up from, and
our error reporting isn't ideal -- we can even claim that the process
is already being debugged (by ourselves).
Differential Revision: https://reviews.llvm.org/D155037
rdar://101152233
Two new entitlements, com.apple.private.thread-set-state and
com.apple.private.set-exception-port should be included in the
debugserver entitlement plist when built internally at Apple.
The desktop builds of debugserver don't need these entitlements;
don't add them to debugserver-macosx-entitlements.plist.
rdar://108912676
rdar://103032208
DNBGetDeploymentInfo was calling GetPlatformString without checking that
the load command it was processing actually provided a platform string.
That caused a bunch of worrisome-looking error messages in the
debugserver log output.
Differential Revision: https://reviews.llvm.org/D151861