The first half of this patch fixes a long-standing annoyance: if I attach
to debugserver with lldb while it is waiting for an lldb connection, the
syscall is interrupted and it doesn't retry; debugserver exits
immediately.
The second half is a request from another tool that communicates with
debugserver: that we retry reads on our sockets in the same way. I
haven't dug into the details of how they're communicating that makes this
necessary, but from the best I've been able to find reading the POSIX API
docs, this is fine.
rdar://117113298
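For illustration, the POSIX retry idiom in question looks roughly like this (a generic sketch, not the actual debugserver change):
```
#include <cerrno>
#include <unistd.h>

// Retry a blocking read when it is interrupted by a signal (EINTR) instead of
// treating the interruption as a fatal error.
ssize_t ReadRetryingOnEINTR(int fd, void *buf, size_t len) {
  ssize_t n;
  do {
    n = ::read(fd, buf, len);
  } while (n == -1 && errno == EINTR);
  return n;
}
```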
When terminating the debugger, we wait for all background tasks to
complete. Given that there's no way to interrupt those threads, this can
take a while. When that happens, the debugger appears to hang at exit.
The above situation is unfortunately not uncommon when background
downloading of dSYMs is enabled (`symbols.auto-download background`).
Even when calling dsymForUUID with a reasonable timeout, it can take a
while to complete.
This patch improves the user experience by printing a message from the
driver when it takes more than one (1) second to terminate the debugger.
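A minimal sketch of the idea, with assumed names (whether the wait actually happens inside SBDebugger::Terminate() or elsewhere in the driver is an assumption here):
```
#include <chrono>
#include <cstdio>
#include <future>

#include "lldb/API/SBDebugger.h"

// Run termination asynchronously; if it has not finished after one second,
// tell the user why the debugger appears to hang. Helper name is hypothetical.
void TerminateDebuggerWithNotice() {
  std::future<void> done =
      std::async(std::launch::async, [] { lldb::SBDebugger::Terminate(); });
  if (done.wait_for(std::chrono::seconds(1)) == std::future_status::timeout)
    std::fputs("Waiting for background tasks to complete...\n", stderr);
  done.wait();
}
```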
Pavel added an extension to lldb's gdb remote serial protocol that
allows the debug stub to append an error message (ascii hex encoded)
after an error response packet Exx. This was added in 2017 in
https://reviews.llvm.org/D34945 . lldb sends the
QErrorStringInPacketSupported packet and then the remote stub may add
these error strings.
debugserver has two bugs in its use of extended error messages: the
vAttach family would send the extended error string without checking if
the mode had been enabled. And qLaunchSuccess would not properly format
its error response packet (missing the hex digits, did not asciihex
encode the string).
There is also a bug in the HandlePacket_D (detach) packet where the
error packets did not include hex digits, but this one does not append
an error string.
I'm adding a new RNBRemote::SendErrorPacket() and routing all error
packet returns through this one method. It takes an optional second
string which is the longer error message; it now handles appending it to
the Exx response or not, depending on the QErrorStringInPacketSupported
state. I updated all packets to send their errors via this method.
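A rough sketch of the shape of that method (the helpers below are illustrative, and the exact `Exx;<ascii-hex>` formatting should be checked against the lldb-gdb-remote documentation rather than taken from here):
```
#include <cstdint>
#include <cstdio>
#include <string>

// ASCII-hex encode an error message, one byte at a time.
static std::string AsciiHexEncode(const std::string &msg) {
  std::string out;
  char hex[3];
  for (unsigned char c : msg) {
    std::snprintf(hex, sizeof(hex), "%02x", c);
    out += hex;
  }
  return out;
}

// Build an error response: always "Exx" with two hex digits, and only append
// the encoded message when QErrorStringInPacketSupported was negotiated.
std::string MakeErrorPacket(uint8_t err, const std::string &msg,
                            bool error_strings_enabled) {
  char header[4];
  std::snprintf(header, sizeof(header), "E%02x", err);
  std::string packet = header;
  if (error_strings_enabled && !msg.empty())
    packet += ";" + AsciiHexEncode(msg);
  return packet;
}
```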
The -d(ebug) option broke 5 years ago when I migrated the driver to
libOption. Since then, we never checked whether the option was set. We were
incorrectly toggling the internal variable (m_debug_mode) based on
OPT_no_use_colors instead.
Given that the functionality doesn't seem particularly useful and nobody
noticed it has been broken for 5 years, I'm just removing the flag.
If a SetDataBreakpointsRequest contains a list of data breakpoints which
have duplicate starting addresses, the current behaviour is to return
`{verified: true}` for both watchpoints with the duplicated starting
address. This confuses the client, and what actually happens in lldb is
that the second one overwrites the first one.
This fixes it by letting the last watchpoint at a given address have
`{verified: true}`, while all previous watchpoints at the same address
get `{verified: false}` in the response.
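The rule, sketched with illustrative types (the real code works on the DAP request and response objects):
```
#include <cstddef>
#include <cstdint>
#include <set>
#include <vector>

struct DataBreakpointRequest { uint64_t addr; };
struct DataBreakpointResponse { uint64_t addr; bool verified; };

// Walk the requests backwards so only the *last* breakpoint at each starting
// address is verified; earlier duplicates are reported as unverified.
std::vector<DataBreakpointResponse>
VerifyDataBreakpoints(const std::vector<DataBreakpointRequest> &requests) {
  std::vector<DataBreakpointResponse> responses(requests.size());
  std::set<uint64_t> seen;
  for (std::size_t i = requests.size(); i-- > 0;) {
    bool last_at_addr = seen.insert(requests[i].addr).second;
    responses[i] = {requests[i].addr, last_at_addr};
  }
  return responses;
}
```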
The `EventThreadFunction` can end up calling `HandleCommand`
concurrently with the main request processing thread. The underlying API
does not appear to be thread safe, so add a narrowly scoped mutex lock
to prevent calling it in this place from more than one thread.
Fixes #81686. Prior to this, TestDAP_launch.py was 4% flaky. After, it
passes in 1000 runs.
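The shape of the fix (the mutex placement and names here are illustrative, not the actual lldb-dap code):
```
#include <mutex>

#include "lldb/API/SBCommandInterpreter.h"
#include "lldb/API/SBCommandReturnObject.h"

static std::mutex g_command_mutex; // assumed: one lock shared by both threads

// HandleCommand does not appear to be thread safe, so serialize the callers
// from the event thread and the request-processing thread.
void RunCommandLocked(lldb::SBCommandInterpreter interp, const char *command,
                      lldb::SBCommandReturnObject &result) {
  std::lock_guard<std::mutex> guard(g_command_mutex);
  interp.HandleCommand(command, result);
}
```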
This implements functionality to handle DataBreakpointInfo request and
SetDataBreakpoints request.
The previous commit, 8c56e78ec5, was reverted because setting a 1-byte
watchpoint failed in the new test on ARM64. So I changed the test to set
a 4-byte watchpoint instead, and hope this won't break it again. It also
adds the fixes from
https://github.com/llvm/llvm-project/pull/81680.
This adds a layer between `SourceBreakpoint`/`FunctionBreakpoint` and
`BreakpointBase` to have better separation and encapsulation so we are
not directly operating on `SBBreakpoint`.
I basically moved the `SBBreakpoint` and the methods that require it
from `BreakpointBase` to `Breakpoint`. This makes it easier to add
support for data watchpoints by sharing the logic inside `BreakpointBase`.
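Roughly, the layering now looks like this (members heavily abbreviated; a sketch of the shape, not the actual declarations):
```
#include <string>

#include "lldb/API/SBBreakpoint.h"

// Shared state and logic that does not need an SBBreakpoint, so it can also
// be reused by watchpoints later.
struct BreakpointBase {
  std::string condition;
  virtual ~BreakpointBase() = default;
};

// Everything that actually touches the SBBreakpoint lives one level down.
struct Breakpoint : public BreakpointBase {
  lldb::SBBreakpoint bp;
  void SetCondition() { bp.SetCondition(condition.c_str()); }
};

struct SourceBreakpoint : public Breakpoint { /* file/line specifics */ };
struct FunctionBreakpoint : public Breakpoint { /* function-name specifics */ };
```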
debugserver will not compress small packets; the overhead of the
compression header makes compression worthwhile only for larger packets.
I default to 384 bytes in debugserver, and added an option for lldb to
request a different cutoff. This option has never been used in lldb in
the past nine years, so I don't think there's any point in keeping it
around.
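The check itself is tiny; a sketch with illustrative names:
```
#include <cstddef>
#include <string>

// Below this size, the compression framing costs more than it saves, so the
// packet is sent uncompressed. 384 is debugserver's default cutoff.
constexpr std::size_t kMinCompressedPacketSize = 384;

bool ShouldCompressPacket(const std::string &payload) {
  return payload.size() >= kMinCompressedPacketSize;
}
```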
debugserver on arm64 devices can manage both Byte Address Select
watchpoints (1-8 bytes) and MASK watchpoints (8 bytes-2 gigabytes). This
adds a SupportedWatchpointTypes key to the QSupported response from
debugserver with a list of these, so lldb can take full advantage of
them when creating larger regions with a single hardware watchpoint.
Also add documentation for this, and two other lldb extensions, to the
lldb-gdb-remote.txt documentation.
Re-enable TestLargeWatchpoint.py on Darwin systems when testing with the
in-tree built debugserver. I can remove the "in-tree built debugserver"
restriction in the future when this new key is handled by an Xcode
debugserver.
This patch is the next piece of work in my Large Watchpoint proposal,
https://discourse.llvm.org/t/rfc-large-watchpoint-support-in-lldb/72116
This patch breaks a user's watchpoint into one or more
WatchpointResources which reflect what the hardware registers can cover.
This means we can watch objects larger than 8 bytes, and we can watch
unaligned address ranges. On a typical 64-bit target with 4 watchpoint
registers you can watch 32 bytes of memory if the start address is
doubleword aligned.
Additionally, if the remote stub implements AArch64 MASK style
watchpoints (e.g. debugserver on Darwin), we can watch any power-of-2
size region of memory up to 2GB, aligned to that same size.
I updated the Watchpoint constructor and CommandObjectWatchpoint to
create a CompilerType of Array<UInt8> when the size of the watched
region is greater than pointer-size and we don't have a variable type to
use. For pointer-size and smaller, we can display the watched granule as
an integer value; for larger-than-pointer-size we will display as an
array of bytes.
I have `watchpoint list` now print the WatchpointResources used to
implement the watchpoint.
I added a WatchpointAlgorithm class which has a top-level static method
that takes an enum flag mask WatchpointHardwareFeature and a user
address and size, and returns a vector of WatchpointResources covering
the request. It does not take into account the number of watchpoint
registers the target has, or the number still available for use. Right
now there is only one algorithm, which monitors power-of-2 regions of
memory. For up to pointer-size, this is what Intel hardware supports.
AArch64 Byte Address Select watchpoints can watch any number of
contiguous bytes in a pointer-size memory granule, but that is not
currently supported, so if you ask to watch bytes 3-5, the algorithm will watch the
entire doubleword (8 bytes). The newly default "modify" style means we
will silently ignore modifications to bytes outside the watched range.
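As a simplified illustration of the splitting (not the actual WatchpointAlgorithms code), covering an arbitrary request with doubleword-sized resources looks like this:
```
#include <cstdint>
#include <vector>

struct ResourceSpec {
  uint64_t addr;
  uint64_t size;
};

// Cover [user_addr, user_addr + user_size) with 8-byte aligned, 8-byte sized
// resources -- the rounding-out behavior described above when only
// doubleword-granule hardware watchpoints are used.
std::vector<ResourceSpec> CoverWithDoublewords(uint64_t user_addr,
                                               uint64_t user_size) {
  std::vector<ResourceSpec> resources;
  const uint64_t end = user_addr + user_size;
  for (uint64_t addr = user_addr & ~UINT64_C(7); addr < end; addr += 8)
    resources.push_back({addr, 8});
  return resources;
}
```
An unaligned 22-byte request, like the one in the TestUnalignedLargeWatchpoint.py test mentioned below, can land in four doublewords this way.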
I've temporarily skipped TestLargeWatchpoint.py for all targets. It was
only run on Darwin when using the in-tree debugserver, which was a proxy
for "debugserver supports MASK watchpoints". I'll be adding the
aforementioned feature flag from the stub and enabling full mask
watchpoints when a debugserver with that feature is enabled, and
re-enable this test.
I added a new TestUnalignedLargeWatchpoint.py which only has one test
but it's a great one, watching a 22-byte range that is unaligned and
requires four 8-byte watchpoints to cover.
I also added a unit test, WatchpointAlgorithmsTests, which has a number
of simple tests against WatchpointAlgorithms::PowerOf2Watchpoints. I
think there are interesting possible different approaches to how we cover
these; I note in the unit test that a user requesting a watch on address
0x12e0 of 120 bytes will be covered by two watchpoints today, a
128-byte watchpoint at 0x1280 and another at 0x1300. But it could be done
with a 32-byte watchpoint at 0x12e0 and a 128-byte watchpoint at 0x1300,
which would have fewer false positives/private stops. As we try refining
this, it's helpful to have a collection of tests to make sure things
don't regress.
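A quick standalone check of that covering arithmetic (a hypothetical snippet, not part of WatchpointAlgorithmsTests):
```
#include <cassert>
#include <cstdint>

int main() {
  const uint64_t req_lo = 0x12e0, req_hi = 0x12e0 + 120; // [0x12e0, 0x1358)
  // Today: two contiguous 128-byte power-of-2 resources cover the request.
  assert(0x1280 + 128 == 0x1300);
  assert(0x1280 <= req_lo && req_hi <= 0x1300 + 128);
  // Alternative: a 32-byte resource at 0x12e0 plus a 128-byte one at 0x1300
  // also cover it, while watching fewer bytes outside the requested range.
  assert(0x12e0 + 32 == 0x1300);
  assert(req_hi <= 0x1300 + 128);
  return 0;
}
```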
I tested this on arm64 macOS, (genuine) x86_64 macOS, and AArch64
Ubuntu. I have not modified the Windows process plugins yet; I might try
that as a standalone patch. I'd be making the change blind, but the
necessary changes (see ProcessGDBRemote::EnableWatchpoint) are pretty
small, so it might be obvious enough that I can change it and see what
the Windows CI thinks.
There isn't yet a packet (or a qSupported feature query) for the gdb
remote serial protocol stub to communicate its watchpoint capabilities
to lldb. I'll be doing that in a patch right after this is landed,
having debugserver advertise its capability of AArch64 MASK watchpoints,
and have ProcessGDBRemote add eWatchpointHardwareArmMASK to
WatchpointAlgorithms so we can watch larger than 32-byte requests on
Darwin.
I haven't yet tackled WatchpointResource *sharing* by multiple
Watchpoints. This is all part of the goal: especially when we may be
watching a larger memory range than the user requested, if they then add
another watchpoint next to their first request, it may be covered by the
same WatchpointResource (hardware watchpoint register). Also, one "read"
watchpoint and one "write" watchpoint on the same memory granule need to
be handled, making the WatchpointResource cover all requests.
As WatchpointResources aren't shared among multiple Watchpoints yet,
there's no handling of running the conditions/commands/etc on multiple
Watchpoints when their shared WatchpointResource is hit. The goal beyond
"large watchpoint" is to unify (much more) the Watchpoint and Breakpoint
behavior and commands. I have a feeling I may be slowly chipping away at
this for a while.
Re-landing this patch after fixing two undefined behaviors in
WatchpointAlgorithms found by UBSan and by failures on different
CI bots.
rdar://108234227
lldb-dap instances managed by other extensions benefit from having a
welcome message with, for example, a basic user guide or a
troubleshooting message.
This PR adds a cmake variable for defining such a message in a simple way.
This message appears upon initialization but before initCommands are
executed, as they might cause a failure and prevent the message from
being displayed.
The previous logic for determining if an expression was a command or
variable expression in the repl would incorrectly identify the context
in many common cases where a local variable name partially overlaps with
the repl input.
For example:
```
int foo() {
  int var = 1; // breakpoint here; evaluating "p var" previously emitted a warning
  return var;
}
```
Instead of checking potentially multiple conflicting values against the
expression input, I updated the heuristic to only consider the first
term. This is much more reliable at eliminating false positives when the
input does not actually hide a local variable.
Additionally, I updated the warning on conflicts to occur anytime the
conflict is detected since the specific conflict can change based on the
current input. This also includes additional details on how users can
change the behavior.
Example Debug Console output from
lldb/test/API/tools/lldb-dap/evaluate/main.cpp:11 breakpoint 3.
```
lldb-dap> var + 3
Warning: Expression 'var' is both an LLDB command and variable. It will be evaluated as a variable. To evaluate the expression as an LLDB command, use '`' as a prefix.
45
lldb-dap> var + 1
Warning: Expression 'var' is both an LLDB command and variable. It will be evaluated as a variable. To evaluate the expression as an LLDB command, use '`' as a prefix.
43
```
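A rough sketch of the heuristic (not the actual lldb-dap implementation): only the first whitespace-delimited term of the input is checked against the frame's variables.
```
#include "lldb/API/SBFrame.h"
#include "lldb/API/SBValue.h"
#include "llvm/ADT/StringRef.h"

// If the first term names a variable that is in scope, treat the whole input
// as a variable expression rather than an LLDB command.
bool FirstTermIsVariable(lldb::SBFrame frame, llvm::StringRef input) {
  llvm::StringRef first_term = input.trim().split(' ').first;
  return frame.FindVariable(first_term.str().c_str()).IsValid();
}
```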
There's a bad interaction between the macOS 14 dyld and the "dyld_sim"
shim that comes from older (iOS 15) simulator downloads that results in
dyld reporting some modules twice in the return from the dyld callback
to list modules. The records were identical, but lldb wasn't happy with
seeing the duplicates...
Since it's not possible to load two different modules at the same
address, this change just picks the first instance of any entries that
have the same load address.
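The dedup itself is simple; a sketch with illustrative types:
```
#include <cstdint>
#include <set>
#include <vector>

struct ImageEntry {
  uint64_t load_address;
  // path, uuid, etc. omitted
};

// Keep only the first entry reported at each load address; two different
// modules cannot be loaded at the same address anyway.
std::vector<ImageEntry>
DropDuplicateLoadAddresses(const std::vector<ImageEntry> &entries) {
  std::vector<ImageEntry> unique;
  std::set<uint64_t> seen;
  for (const ImageEntry &entry : entries)
    if (seen.insert(entry.load_address).second)
      unique.push_back(entry);
  return unique;
}
```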
There really isn't a good way to test this patch.
This allows release teams to customize the bug report url for lldb. It
also removes unnecessary constructions of
`llvm::PrettyStackTraceProgram` as it's already constructed inside
`llvm::InitLLVM`.
When generating a `display_value` for a variable the current approach
calls `SBValue::GetValue()` and `SBValue::GetSummary()` to generate a
`display_value` for the `SBValue`. However, there are cases where both
of these return an empty string and the fallback is to print a pointer
and type name instead (e.g. `FooBarType @ 0x00321`).
For swift types, lldb includes a language runtime plugin that can
generate a description of the object, but this is only used with
`SBValue::GetDescription()`.
For example:
```
$ lldb swift-binary
... stop at breakpoint ...
lldb> script
>>> event = lldb.frame.GetValueForVariablePath("event")
>>> print("Value", event.GetValue())
Value None
>>> print("Summary", event.GetSummary())
Summary None
>>> print("Description", event) # __str__ calls SBValue::GetDescription()
Description (main.Event) event = (name = "Greetings", time = 2024-01-04 23:38:06 UTC)
```
With this change, if GetValue and GetSummary return empty then we try
`SBValue::GetDescription()` as a fallback before using the previous
logic of printing `<type> @ <addr>`.
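The fallback order, sketched against the SB API (the helper name and the final formatting are illustrative):
```
#include <cstdio>
#include <string>

#include "lldb/API/SBStream.h"
#include "lldb/API/SBValue.h"

std::string GetDisplayValue(lldb::SBValue value) {
  if (const char *v = value.GetValue(); v && v[0])
    return v;
  if (const char *summary = value.GetSummary(); summary && summary[0])
    return summary;
  // New fallback: let the language runtime describe the value.
  lldb::SBStream description;
  if (value.GetDescription(description) && description.GetSize() > 0)
    return std::string(description.GetData(), description.GetSize());
  // Last resort, as before: "<type> @ <addr>".
  const char *type_name = value.GetTypeName();
  char buffer[128];
  std::snprintf(buffer, sizeof(buffer), "%s @ 0x%llx",
                type_name ? type_name : "<unknown>",
                static_cast<unsigned long long>(value.GetLoadAddress()));
  return buffer;
}
```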
There are two main motivations behind this:
- Allow different companies developing their own vscode extensions for
LLDB to have a single contribution point, thus sharing resources and
working as a virtual large team.
- Allow for visual ways to configure the debugger, which currently has
to be done through launch.json files.
In terms of implementation, this is very straightforward and these are
the most important details:
- All the cpp code has been moved to a subfolder for cleanliness. There's
a specific commit in the list of commits of this PR that just does that,
in case that helps with reviewing.
- A new folder `src-ts` has been created for the typescript code.
- The ts extension can be used in two ways: as a regular vscode
extension and as a library. The file `extension.ts` explains which
entry point to use.
- The README has been updated to mention how to install the extension,
which is simpler than before. There are two additional sections for
rebuilding and formatting.
- The ts code I added merely sets up the debug adapter using two
possible options: reading the lldb-dap path from vscode settings, or from
a config object passed by users when the extension is used as a library.
I did this to show how we can easily support both worlds.
…ntext
Following the specification chain seems to be clearly the expected
behavior of GetDeclContext(). Otherwise C++ methods have an empty
CompilerContext instead of being nested in their struct/class.
The primary motivation for this functionality is the Swift plugin. In
order to test the change I added a proof-of-concept implementation of a
Module::FindFunction() variant that takes a CompilerContext, exposed via
lldb-test.
rdar://120553412
In order to allow smarter vscode extensions, it's useful to send
additional structured information of SBValues to the client.
Specifically, I'm now sending error, summary, autoSummary and
inMemoryValue in addition to the existing properties being sent. This is
cheap because these properties have to be calculated anyway to generate
the display value of the variable, but they are now available for
extensions to better analyze variables. For example, if the error field
is not present, the extension might be able to provide additional
features; the current way to check for an error is to look for the
`"<error: "` prefix, which is error-prone.
This also incorporates a tiny feedback from
https://github.com/llvm/llvm-project/pull/74865#issuecomment-1850695477
This patch replaces uses of StringRef::{starts,ends}with with
StringRef::{starts,ends}_with for consistency with
std::{string,string_view}::{starts,ends}_with in C++20.
I'm planning to deprecate and eventually remove
StringRef::{starts,ends}with.
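For illustration, the mechanical change at a call site (the function here is made up):
```
#include "llvm/ADT/StringRef.h"

bool LooksLikeErrorLine(llvm::StringRef line) {
  // Before: return line.startswith("error:");
  return line.starts_with("error:"); // same spelling as std::string_view
}
```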
This adds support for optionally prefixing any command with `?` and/or
`!`.
- `?` prevents the output of a command from being printed to the console
unless it fails.
- `!` aborts the dap session if the command fails.
They come in handy when programmatically running commands on behalf of
the user without wanting them to know unless the commands fail, or when a
critical setup step is required as part of launchCommands and it's better
to abort on failure than to silently skip it.
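A rough sketch of how the prefixes could be peeled off (hypothetical helper, not the lldb-dap code):
```
#include "llvm/ADT/StringRef.h"

struct ParsedCommand {
  bool quiet = false;          // '?': only show output if the command fails
  bool abort_on_error = false; // '!': abort the dap session on failure
  llvm::StringRef text;        // the command with its prefixes removed
};

ParsedCommand ParseCommandDirectives(llvm::StringRef command) {
  ParsedCommand result;
  while (!command.empty() &&
         (command.front() == '?' || command.front() == '!')) {
    if (command.front() == '?')
      result.quiet = true;
    else
      result.abort_on_error = true;
    command = command.drop_front();
  }
  result.text = command;
  return result;
}
```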
This patch revives the effort to get this Phabricator patch into
upstream:
https://reviews.llvm.org/D137900
This patch was accepted before in Phabricator but I found some
-gsimple-template-names issues that are fixed in this patch.
A fixed-up version of the description from the original patch follows.
This patch started off trying to fix Module::FindFirstType() as it
sometimes didn't work. The issue was the SymbolFile plug-ins didn't do
any filtering of the matching types they produced, and they only looked
up types using the type basename. This means if you have two types with
the same basename, your type lookup can fail when only looking up a
single type. We would ask the Module::FindFirstType to lookup "Foo::Bar"
and it would ask the symbol file to find only 1 type matching the
basename "Bar", and then we would filter out any matches that didn't
match "Foo::Bar". So if the SymbolFile found "Foo::Bar" first, then it
would work, but if it found "Baz::Bar" first, it would return only that
type and it would be filtered out.
Discovering this issue led me to think of the patch Alex Langford did a
few months ago that was done for finding functions, where he allowed
SymbolFile objects to make sure something fully matched before parsing
the debug information into an AST type and other LLDB types. So this
patch aimed to allow type lookups to also be much more efficient.
As LLDB has been developed over the years, we added more ways to do type
lookups. These functions have lots of arguments. This patch aims to make
one API that needs to be implemented and that serves all previous lookups:
- Find a single type
- Find all types
- Find types in a namespace
This patch introduces a `TypeQuery` class that contains all of the state
needed to perform the lookup and is powerful enough to perform all of
the type searches that used to be in our API. It contains a vector of
CompilerContext objects that can fully or partially specify the lookup
that needs to take place.
If you just want to lookup all types with a matching basename,
regardless of the containing context, you can specify just a single
CompilerContext entry that has a name and a CompilerContextKind mask of
CompilerContextKind::AnyType.
Or you can fully specify the exact context to use when doing lookups,
like:
- CompilerContextKind::Namespace "std"
- CompilerContextKind::Class "foo"
- CompilerContextKind::Typedef "size_type"
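In code, such a fully-specified pattern is just a vector of CompilerContext entries; a sketch (the header path and the CompilerContext constructor spelling are assumptions, not verified against the tree):
```
#include <vector>

#include "lldb/Symbol/Type.h" // assumed home of CompilerContext/CompilerContextKind

using namespace lldb_private;

// Matches a type named size_type that is a typedef inside class foo inside
// namespace std.
std::vector<CompilerContext> MakeSizeTypePattern() {
  return {CompilerContext(CompilerContextKind::Namespace, ConstString("std")),
          CompilerContext(CompilerContextKind::Class, ConstString("foo")),
          CompilerContext(CompilerContextKind::Typedef, ConstString("size_type"))};
}
```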
This change expands on the clang modules code that already used a vector
of CompilerContext items, but it modifies it to work with expression type
lookups, which have contexts, and with user lookups where users query for
types. The clang modules type lookup is still an option that can be
enabled on the `TypeQuery` objects.
This mirrors the most recent addition of type lookups that took a
vector<CompilerContext> that allowed lookups to happen for the
expression parser in certain places.
Prior to this we had the following APIs in Module:
```
void Module::FindTypes(ConstString type_name, bool exact_match,
                       size_t max_matches,
                       llvm::DenseSet<lldb_private::SymbolFile *> &searched_symbol_files,
                       TypeList &types);

void Module::FindTypes(llvm::ArrayRef<CompilerContext> pattern,
                       LanguageSet languages,
                       llvm::DenseSet<lldb_private::SymbolFile *> &searched_symbol_files,
                       TypeMap &types);

void Module::FindTypesInNamespace(ConstString type_name,
                                  const CompilerDeclContext &parent_decl_ctx,
                                  size_t max_matches, TypeList &type_list);
```
The new Module API is much simpler. It gets rid of all three above
functions and replaces them with:
```
void FindTypes(const TypeQuery &query, TypeResults &results);
```
The `TypeQuery` class contains all of the needed settings:
- The vector<CompilerContext> that allows efficient lookups in the symbol
file classes, since they can look at basename matches and only realize
fully matching types. Before this, any basename that matched was fully
realized, only to be removed later by code outside of the SymbolFile
layer, which could cause many types to be realized when they didn't need
to be.
- Whether the lookup is exact or not. If not exact, the compiler context
only needs to match the bottom-most items of the type's context;
otherwise it must match exactly.
- Whether the compiler context match is for clang modules or not. Clang
modules matches include a Module compiler context kind that allows types
to be matched only from certain modules; these matches are not needed
when doing user type lookups.
- An optional list of languages to limit the search to only certain
languages.
The `TypeResults` object contains all state required to do the lookup
and store the results:
- The max number of matches
- The set of SymbolFile objects that have already been searched
- The matching type list for any matches that are found
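A usage sketch against the new API. Only the FindTypes signature above comes from this patch; the TypeQuery constructor, the TypeResults accessor, and the header paths shown here are assumptions:
```
#include "lldb/Core/Module.h"  // assumed header paths
#include "lldb/Symbol/Type.h"

// Look up a single type by fully qualified name through the new interface.
lldb::TypeSP FindFirstFooBar(lldb_private::Module &module) {
  lldb_private::TypeQuery query("Foo::Bar"); // assumed name-based constructor
  lldb_private::TypeResults results;
  module.FindTypes(query, results); // the one new API
  return results.GetFirstType();    // assumed accessor
}
```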
The benefits of this approach are:
- Simpler API, and only one API to implement in SymbolFile classes
- Replaces FindTypesInNamespace, which used a CompilerDeclContext as a
way to limit the search; that only worked if the TypeSystem matched the
current symbol file's type system, so you couldn't use it to look up a
type in another module.
- Fixes a serious bug in our FindFirstType functions where, if we were
searching for "foo::bar" and we found a "baz::bar" first, the basename
would match, we would only fetch 1 type using the basename, and we would
then drop it from the matching list and return no results.
This is an extension to the protocol that emits the declaration
information along with the metadata of each variable. This can be used
by vscode extensions to implement, for example, a "goToDefinition"
action in the debug tab, or for showing the value of a variable right
next to where it's declared during a debug session.
As this is cheap, I'm not gating this information under any setting.
Currently there's an issue in which `[opt]` might be emitted twice if
the frame format also asks for it. As a trivial fix, we should manually
emit `[opt]` only if a custom frame format is not specified.
Previously the type of the breakpoint id in the Stopped event was a
uint64_t; however, that's the wrong type for a breakpoint id, which can
cause encoding issues when internal breakpoints are hit.
Currently when you interrupt a:
(lldb) process attach -w -n some_process
lldb just closes the connection to the stub and kills the
lldb_private::Process it made for the attach. The stub at the other end
notices the connection go down and exits because of that. But when
communication to a device is handled through some kind of proxy server
which isn't as well behaved as one would wish, that signal might not be
reliable, causing debugserver to persist on the machine, waiting to
steal the next instance of that process.
We can work around those failures by sending an explicit interrupt
before closing down the connection. The stub will also have to be
waiting for the interrupt for this to make any difference. I changed
debugserver to do that.
I didn't make the equivalent change in lldb-server. So long as you
aren't faced with a flaky connection, this should not be necessary.
https://github.com/llvm/llvm-project/pull/69238 caused breakage in
VSCode debug console usage -- the user's input is always treated as
commands instead of expressions (the same behavior as if empty command
escape prefix is specified).
The bug is in one overload of `GetString()` which did not respect the
default value of "\`". But more importantly, I was puzzled about why the
regression was not caught by the lldb test (testdap_evaluate). It turns
out https://github.com/llvm/llvm-project/pull/69238 set the
commandEscapePrefix default value in the test framework to "\`", while
VS Code by default does not specify any commandEscapePrefix at all.
Changing it to None will fail `testdap_evaluate`. We should align the
default behavior between the DAP client and the testcase.
This patch fixes the bug in `GetString()` and changes the default value
of commandEscapePrefix in testcases to None (to be consistent with the
IDE).
Co-authored-by: jeffreytan81 <jeffreytan@fb.com>
When this option gets enabled, descriptions of threads will be generated
using the format provided in the launch configuration instead of
generating it manually in the dap code. This allows lldb-dap to show an
output similar to the one in the CLI.
This is very similar to https://github.com/llvm/llvm-project/pull/71843
When this option gets enabled, descriptions of stack frames will be
generated using the format provided in the launch configuration instead
of simply calling `SBFrame::GetDisplayFunctionName`. This allows
lldb-dap to show an output similar to the one in the CLI.
Currently VSCode logpoint uses `SBValue::GetValue` to get the value for
printing. This does not provide an intuitive result for std::string or
char * -- it shows the pointer value instead of the string content.
This patch improves this by preferring `SBValue::GetSummary()` over
`SBValue::GetValue()`.
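The preference, sketched with the SB API (the helper name is illustrative):
```
#include <string>

#include "lldb/API/SBValue.h"

// Prefer the summary (which holds the string contents for std::string and
// char *) and fall back to the raw value (e.g. the pointer) otherwise.
std::string GetLogpointValue(lldb::SBValue value) {
  if (const char *summary = value.GetSummary(); summary && summary[0])
    return summary;
  if (const char *v = value.GetValue(); v && v[0])
    return v;
  return "<unknown>";
}
```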
Co-authored-by: jeffreytan81 <jeffreytan@fb.com>
MachProcess has a MachTask as an ivar. In the MachProcess dtor, we call
MachTask::Clear() to clear its state, before running the dtor of all our
ivars, including the MachTask one.
When we attach on darwin, MachProcess calls
MachTask::StartExceptionThread which does the task_for_pid and then
starts a thread to listen for mach messages. Then MachProcess calls
ptrace(PT_ATTACHEXC). If that ptrace() fails, MachProcess will call
MachTask::Clear. But the exception thread is now up & running and is not
stopped; its ivars will be reset by the Clear() method, and its object
will be freed after the dtor runs.
Actually eliciting a crash in this scenario is very timing sensitive; I
hand-modified debugserver to fail the PT_ATTACHEXC, trying to simulate it
on my desktop, and was unable to. But looking at the source, and an
occasional crash report we've received, it's clear that this is
possible.
rdar://117521198
We've been using the backtick as our escape character; however, that
leads to a weird experience in VS Code, because on most hosts, as soon
as you type the backtick, the IDE will insert another backtick. As
changing the default escape character might be out of the question
because other plugins might rely on it, we can instead introduce an
option to change this variable upon lldb-vscode initialization.
FWIW, my users will be using : instead of the backtick.
There are some uses of "vscode" in
`lldb/packages/Python/lldbsuite/test/tools/lldb-dap/dap_server.py` where
I'm not sure if it's referring to the adapter or VS Code itself, so
those remain.
Rename lldb-vscode to lldb-dap. This change is largely mechanical. The
following substitutions cover the majority of the changes in this
commit:
s/VSCODE/DAP/
s/VSCode/DAP/
s/vscode/dap/
s/g_vsc/g_dap/
Discourse RFC:
https://discourse.llvm.org/t/rfc-rename-lldb-vscode-to-lldb-dap/74075/
This can be used to have VS Code debug various emulators, remote
systems, hardware probes, etc.
In my case I was doing this for the Gameboy Advance,
https://github.com/stuij/gba-llvm-devkit/blob/main/docs/Debugging.md#debugging-using-visual-studio-code.
It's not very complex if you know LLDB well, but when using another
plugin, CodeLLDB, I was very glad that they had an example for it. So we
should have one too.
lldb-vscode had installation instructions based on creating a folder
inside ~/.vscode/extensions, which no longer works. A different
installation mechanism is needed based on a VSCode command. More can be
read in the contents of this patch.
Closes https://github.com/llvm/llvm-project/issues/63655
In https://reviews.llvm.org/D136620 I changed debugserver to stop using
the kernel-provided functions
arm_thread_state64_get_{pc,lr,sp,fp} to postprocess those four registers
on aarch64 systems after we thread_get_state() them. The kernel stores
these four registers with signing internally, either from the inferior
process' actual signing, or its own.
When a program had crashed by doing an authenticated BL to an address
with improper signing, the inferior process would crash and that
improperly signed pc would be given to debugserver via thread_get_state.
debugserver would run that through arm_thread_state64_get_pc() and then
debugserver would crash when authenticating & stripping the value, on
newer Mac hardware.
To avoid debugserver crashing on a crashed inferior process, I switched
from using these system functions to strip the values, to simply
clearing the bits outright in debugserver.
However, lr is a special case where the inferior may have signed this
value (against the stack pointer value at the time), or it may not yet
have any authentication bits, right after a BL. In the latter case, the
kernel will add its own auth bits while the value is stored inside the
kernel. In the case of a user lr value, we cannot authenticate it in
debugserver without knowing the sp value it was signed against (and the
way it is signed is not specified by the ABI), so an "improperly" signed
lr (whatever that means) won't cause debugserver to crash.
debugserver can thread_get_state the inferior's lr, run it through
arm_thread_state64_get_lr(), and get the actual signed 64-bit value that
the inferior process is using. And the specifics of how that lr is
signed may be important for debugging the process, instead of how I am
currently clearing the auth bits outright.
This patch reverts that change for lr only, and also adds new logging
to debugserver specifically for the four sp/fp/lr/pc values that
thread_get_state hands to us, before we process them at all.