TestGlobalModuleCache.py, a recently added test, tries to update a
source file in the build directory, but it assumes the file is writable.
In our distributed build and test system, this is not always true, so
the test often fails with a write permissions error.
This change fixes that by setting the permissions on the file to be
writable before attempting to write to it.
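A minimal sketch of the fix in the test's own terms (Python; the path and helper below are illustrative, not the test's exact code):
```
import os
import stat

# The build/test system may check sources out read-only, so make the
# file writable before updating it.
path = os.path.join(self.getBuildDir(), "main.cpp")  # hypothetical path
st = os.stat(path)
os.chmod(path, st.st_mode | stat.S_IWUSR)
with open(path, "a") as f:
    f.write("// touch the file to force a rebuild\n")
```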
If adding a user command fails because a command with the same name
already exists, we only say that "force replace is not set" without
telling the user _how_ to set it. There are two ways to do so; this
commit changes the error message to mention both.
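For reference, a sketch of the two ways (the flag and setting names below are from memory and should be treated as assumptions, not quotes from the new message):
```
(lldb) command script add --overwrite -f mymodule.mycmd_impl mycmd
(lldb) settings set interpreter.require-overwrite false
```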
This reverts commit 01c4ecb7ae,
d14d52158b and
a756dc4724.
This removes the logging and workaround I added earlier,
and puts back the skip for Arm/AArch64 Linux.
I've not seen it fail on AArch64 since, but let's not create
more noise if it does.
I've written up the issue as https://github.com/llvm/llvm-project/issues/76057.
It's something to do with trying to destroy a process while
a thread is doing a single step. So my workaround wouldn't have
worked in any case. It needs a more involved fix.
This has been flaky for a while, for example
https://lab.llvm.org/buildbot/#/builders/96/builds/50350
```
Command Output (stdout):
--
lldb version 18.0.0git (https://github.com/llvm/llvm-project.git revision 3974d89bde)
clang revision 3974d89bde
llvm revision 3974d89bde
"can't evaluate expressions when the process is running."
```
```
PLEASE submit a bug report to https://github.com/llvm/llvm-project/issues/ and include the crash backtrace.
#0 0x0000ffffa46191a0 llvm::sys::PrintStackTrace(llvm::raw_ostream&, int) (/home/tcwg-buildbot/worker/lldb-aarch64-ubuntu/build/lib/python3.8/site-packages/lldb/_lldb.cpython-38-aarch64-linux-gnu.so+0x529a1a0)
#1 0x0000ffffa4617144 llvm::sys::RunSignalHandlers() (/home/tcwg-buildbot/worker/lldb-aarch64-ubuntu/build/lib/python3.8/site-packages/lldb/_lldb.cpython-38-aarch64-linux-gnu.so+0x5298144)
#2 0x0000ffffa46198d0 SignalHandler(int) (/home/tcwg-buildbot/worker/lldb-aarch64-ubuntu/build/lib/python3.8/site-packages/lldb/_lldb.cpython-38-aarch64-linux-gnu.so+0x529a8d0)
#3 0x0000ffffab25b7dc (linux-vdso.so.1+0x7dc)
#4 0x0000ffffab13d050 /build/glibc-Q8DG8B/glibc-2.31/string/../sysdeps/aarch64/multiarch/memcpy_advsimd.S:92:0
#5 0x0000ffffa446f420 lldb_private::process_gdb_remote::GDBRemoteRegisterContext::PrivateSetRegisterValue(unsigned int, llvm::ArrayRef<unsigned char>) (/home/tcwg-buildbot/worker/lldb-aarch64-ubuntu/build/lib/python3.8/site-packages/lldb/_lldb.cpython-38-aarch64-linux-gnu.so+0x50f0420)
#6 0x0000ffffa446f7b8 lldb_private::process_gdb_remote::GDBRemoteRegisterContext::GetPrimordialRegister(lldb_private::RegisterInfo const*, lldb_private::process_gdb_remote::GDBRemoteCommunicationClient&) (/home/tcwg-buildbot/worker/lldb-aarch64-ubuntu/build/lib/python3.8/site-packages/lldb/_lldb.cpython-38-aarch64-linux-gnu.so+0x50f07b8)
#7 0x0000ffffa446f308 lldb_private::process_gdb_remote::GDBRemoteRegisterContext::ReadRegisterBytes(lldb_private::RegisterInfo const*) (/home/tcwg-buildbot/worker/lldb-aarch64-ubuntu/build/lib/python3.8/site-packages/lldb/_lldb.cpython-38-aarch64-linux-gnu.so+0x50f0308)
#8 0x0000ffffa446ec1c lldb_private::process_gdb_remote::GDBRemoteRegisterContext::ReadRegister(lldb_private::RegisterInfo const*, lldb_private::RegisterValue&) (/home/tcwg-buildbot/worker/lldb-aarch64-ubuntu/build/lib/python3.8/site-packages/lldb/_lldb.cpython-38-aarch64-linux-gnu.so+0x50efc1c)
#9 0x0000ffffa412eaa4 lldb_private::RegisterContext::ReadRegisterAsUnsigned(lldb_private::RegisterInfo const*, unsigned long) (/home/tcwg-buildbot/worker/lldb-aarch64-ubuntu/build/lib/python3.8/site-packages/lldb/_lldb.cpython-38-aarch64-linux-gnu.so+0x4dafaa4)
#10 0x0000ffffa420861c ReadLinuxProcessAddressMask(std::shared_ptr<lldb_private::Process>, llvm::StringRef) (/home/tcwg-buildbot/worker/lldb-aarch64-ubuntu/build/lib/python3.8/site-packages/lldb/_lldb.cpython-38-aarch64-linux-gnu.so+0x4e8961c)
#11 0x0000ffffa4208430 ABISysV_arm64::FixCodeAddress(unsigned long) (/home/tcwg-buildbot/worker/lldb-aarch64-ubuntu/build/lib/python3.8/site-packages/lldb/_lldb.cpython-38-aarch64-linux-gnu.so+0x4e89430)
```
Judging by the backtrace, something is trying to read the pointer authentication address/code mask
registers. This explains why I've not seen this issue locally, as the buildbot runs on Graviton
3, which has the pointer authentication extension.
I will try to reproduce, fix and re-enable the test.
And remove the workaround I was trying, as this logging may prove what
the actual issue is.
Which I think is that the thread plan map in Process is cleared before
the threads are destroyed. So Thread::ShouldStop could be getting
the current plan, then the plan map is cleared, then Thread::ShouldStop
decides, based on that plan, to pop a plan from the now-empty stack.
The function was using the default version of ValueObject::Dump, which
by default uses the synthetic-ness of the top-level value to determine
whether to print _all_ values as synthetic. This resulted in
some unusual behavior, where e.g. a std::vector is stringified as
synthetic if it's dumped as the top-level object, but in its raw form if
it is a member of a struct without a pretty printer.
The SBValue class already has properties which determine whether one
should be looking at the synthetic view of the object (and also whether
to use dynamic types), so it seems more natural to use that.
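A minimal Python SB API sketch of the distinction (the frame and member names are hypothetical):
```
# Ask the value itself whether a synthetic (pretty-printed) view applies,
# rather than inheriting that decision from the top-level object.
val = frame.FindVariable("s").GetChildMemberWithName("vec")
if val.IsSynthetic():
    print(val)                         # synthetic, pretty-printed view
print(val.GetNonSyntheticValue())      # raw view, for comparison
```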
This patch fixes the SymbolFilePDBTests::TestMaxMatches(...) by making
it test what it was testing before, see comments in the test case for
details.
It also disables TestUniqueTypes4.py for now until we can debug or fix
why it isn't working.
This adds support for optionally prefixing any command with `?` and/or
`!`.
- `?` prevents the output of a command from being printed to the console
unless it fails.
- `!` aborts the dap if the command fails.
They come in handy when programmatically running commands on behalf of
the user without them seeing the output unless a command fails, or when
a critical setup step is part of launchCommands and it's better to
abort on failure than to skip it silently.
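A sketch of how this might look in a launch configuration (shown as a Python dict for illustration; the real file is JSON, and the commands themselves are hypothetical):
```
launch_config = {
    "program": "/tmp/a.out",
    "launchCommands": [
        # Silent unless it fails:
        "?settings set target.x86-disassembly-flavor intel",
        # Abort the DAP session if this critical setup step fails:
        "!target modules add /tmp/extra-symbols",
    ],
}
```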
This reverts commit 481bb62e50 and
71b4d7498f, along with the logging
and assert I had added to the test previously.
Now that I've caught it failing on Arm:
https://lab.llvm.org/buildbot/#/builders/17/builds/46598
Now I have enough to investigate; skip the test on the affected
platforms while I do that.
This reverts commit 35dacf2f51.
And relands the original change with two additions so I can debug the failure on Arm/AArch64:
* Enable lldb step logging in the tests.
* Assert that the current plan is not the base plan at the spot I believe is calling PopPlan.
These will be removed and replaced with a proper fix once I see some failures on the bots,
I couldn't reproduce it locally.
(also, no sign of it on the x86_64 bot)
This patch revives the effort to get this Phabricator patch into
upstream:
https://reviews.llvm.org/D137900
This patch was accepted before in Phabricator but I found some
-gsimple-template-names issues that are fixed in this patch.
A fixed-up version of the description from the original patch follows.
This patch started off trying to fix Module::FindFirstType() as it
sometimes didn't work. The issue was the SymbolFile plug-ins didn't do
any filtering of the matching types they produced, and they only looked
up types using the type basename. This means if you have two types with
the same basename, your type lookup can fail when only looking up a
single type. We would ask Module::FindFirstType to look up "Foo::Bar"
and it would ask the symbol file to find only 1 type matching the
basename "Bar", and then we would filter out any matches that didn't
match "Foo::Bar". So if the SymbolFile found "Foo::Bar" first, then it
would work, but if it found "Baz::Bar" first, it would return only that
type and it would be filtered out.
Discovering this issue led me to think of the patch Alex Langford did a
few months ago that was done for finding functions, where he allowed
SymbolFile objects to make sure something fully matched before parsing
the debug information into an AST type and other LLDB types. So this
patch aimed to allow type lookups to also be much more efficient.
As LLDB has been developed over the years, we added more ways to do type
lookups. These functions have lots of arguments. This patch aims to make
a single API to implement that serves all previous lookups:
- Find a single type
- Find all types
- Find types in a namespace
This patch introduces a `TypeQuery` class that contains all of the state
needed to perform the lookup which is powerful enough to perform all of
the type searches that used to be in our API. It contains a vector of
CompilerContext objects that can fully or partially specify the lookup
that needs to take place.
If you just want to lookup all types with a matching basename,
regardless of the containing context, you can specify just a single
CompilerContext entry that has a name and a CompilerContextKind mask of
CompilerContextKind::AnyType.
Or you can fully specify the exact context to use when doing lookups,
like:
- CompilerContextKind::Namespace "std"
- CompilerContextKind::Class "foo"
- CompilerContextKind::Typedef "size_type"
This change expands on the clang modules code that already used a
vector of CompilerContext items, but it modifies it to work with
expression type lookups which have contexts, or user lookups where users
query for types. The clang modules type lookup is still an option that
can be enabled on the `TypeQuery` objects.
This mirrors the most recent addition of type lookups that took a
vector<CompilerContext> that allowed lookups to happen for the
expression parser in certain places.
Prior to this we had the following APIs in Module:
```
void
Module::FindTypes(ConstString type_name, bool exact_match, size_t max_matches,
                  llvm::DenseSet<lldb_private::SymbolFile *> &searched_symbol_files,
                  TypeList &types);

void
Module::FindTypes(llvm::ArrayRef<CompilerContext> pattern, LanguageSet languages,
                  llvm::DenseSet<lldb_private::SymbolFile *> &searched_symbol_files,
                  TypeMap &types);

void Module::FindTypesInNamespace(ConstString type_name,
                                  const CompilerDeclContext &parent_decl_ctx,
                                  size_t max_matches, TypeList &type_list);
```
The new Module API is much simpler. It gets rid of all three above
functions and replaces them with:
```
void FindTypes(const TypeQuery &query, TypeResults &results);
```
The `TypeQuery` class contains all of the needed settings:
- The vector<CompilerContext>, which allows efficient lookups in the
symbol file classes, since they can check basename matches and fully
realize only the types that actually match. Before this, any basename
that matched was fully realized, only to be removed later by code
outside of the SymbolFile layer, which could cause many types to be
realized when they didn't need to be.
- Whether the lookup is exact or not. If not exact, the query's compiler
context only needs to match the bottom-most items of a type's context;
otherwise the full context must match exactly.
- Whether the compiler context match is for clang modules or not. Clang
module matches include a Module compiler context kind that allows types
to be matched only from certain modules; these matches are not needed
when doing user type lookups.
- An optional list of languages that limits the search to only those
languages.
The `TypeResults` object contains all state required to do the lookup
and store the results:
- The max number of matches
- The set of SymbolFile objects that have already been searched
- The matching type list for any matches that are found
The benefits of this approach are:
- Simpler API, and only one API to implement in SymbolFile classes
- Replaces FindTypesInNamespace, which used a CompilerDeclContext as a
way to limit the search; this only worked if the TypeSystem matched
the current symbol file's type system, so you couldn't use it to look
up a type in another module
- Fixes a serious bug in our FindFirstType functions where, if we were
searching for "foo::bar" and found a "baz::bar" first, the basename
would match, we would fetch only 1 type using the basename, then drop
it from the matching list and return no results
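A minimal Python SB API sketch of the user-visible bug this fixes (type names are hypothetical; assumes a target whose program defines both foo::bar and baz::bar):
```
# Before this patch, if the symbol file happened to produce baz::bar as
# the single basename match for "bar", it was filtered out afterwards and
# the lookup came back empty. After the patch this reliably finds foo::bar.
t = target.FindFirstType("foo::bar")
if t.IsValid():
    print(t.GetName())
```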
When you debug a binary, then change & rebuild it and rerun it in lldb
without quitting lldb, the Modules in the Global Module Cache for the
old binary (and its .o files, if used) are now "unreachable". Nothing in
lldb is holding them alive, and they've already been unlinked. lldb will
properly discard them if there's not another Target referencing them.
However, this only works in simple cases at present. If you have several
Targets that reference the same modules, it's pretty easy to end up
stranding Modules that are no longer reachable, and if you use a
sequence of SBDebuggers unreachable modules can also get stranded. If
you run a long-lived lldb process and are iteratively developing on a
large code base, lldb's memory gets filled with useless Modules.
This patch adds a test for the mode that currently works:
```
(lldb) target create foo
(lldb) run
<rebuild foo outside lldb>
(lldb) run
```
In that case, we do delete the unreachable Modules.
The next step will be to add tests for the cases where we fail to do
this, then see how to safely/efficiently evict unreachable modules in
those cases as well.
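A sketch of the accounting such a test can do (Python; assumes the SBModule statics below, which exist for exactly this kind of bookkeeping):
```
import lldb

# Count Modules alive in the Global Module Cache before and after a
# rebuild-and-rerun cycle; the old binary's Modules should be evicted.
before = lldb.SBModule.GetNumberAllocatedModules()
# ... rebuild the binary outside lldb, then "run" again ...
after = lldb.SBModule.GetNumberAllocatedModules()
print(before, after)  # 'after' should not grow by a module set per rerun
```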
This is an extension to the protocol that emits the declaration
information along with the metadata of each variable. This can be used
by vscode extensions to implement, for example, a "goToDefinition"
action in the debug tab, or for showing the value of a variable right
next to where it's declared during a debug session.
As this is cheap, I'm not gating this information under any setting.
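To illustrate the shape of the extra data, here is a hypothetical variable entry with declaration metadata attached (shown as a Python dict; the field names are illustrative only, not the exact protocol keys):
```
variable = {
    "name": "count",
    "value": "42",
    # Hypothetical extension fields carrying the declaration site:
    "declaration": {
        "path": "/home/user/project/main.cpp",
        "line": 12,
        "column": 7,
    },
}
```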
We need to generate events when finalizing, or we won't know that we
succeeded in stopping the process to detach/kill. Instead, we stall, and
then after our 20-second interrupt timeout, we kill the process (even if
we were supposed to detach) and exit.
OTOH, we have to not generate events when the Process is being
destructed because shared_from_this has already been torn down, and
using it will cause crashes.
This commit reverts the changes in
https://github.com/llvm/llvm-project/pull/71780 and all of its follow-up
patches.
We got reports of the `.debug_names/.debug_gnu_pubnames/gdb_index/etc.`
sections growing by a non-trivial amount for some large projects. While
GCC emits definitions for static data member constants into the Names
index, they do so *only* for explicitly `constexpr` members. We were
indexing *all* constant-initialized const-static members, which is
likely where the significant size difference comes from. However, only
emitting explicitly `constexpr` variables into the index doesn't seem
like a good way forward, since from clang's perspective `const`-static
integrals are `constexpr` too, and that shouldn't be any different in
the debug-info component. Also, as new code moves to `constexpr` instead
of `const` static for constants, such a solution would just delay the
growth of the Names index.
To prevent the size regression we revert to not emitting definitions for
static data-members that have no location.
To support access to such constants from LLDB we'll most likely have to
make LLDB find the constants by looking at the containing class first.
This patch is rearranging code a bit to add WatchpointResources to
Process. A WatchpointResource is meant to represent a hardware
watchpoint register in the inferior process. It has an address, a size,
a type, and a list of Watchpoints that are using this
WatchpointResource.
This current patch doesn't add any of the features of
WatchpointResources that make them interesting -- a user asking to watch
a 24 byte object could watch this with three 8 byte WatchpointResources.
Or a Watchpoint on 1 byte at 0x1002 and a second Watchpoint on 1 byte at
0x1003 must both be served by a single WatchpointResource covering the
doubleword at 0x1000 on a 64-bit target; if two hardware watchpoint
registers were used to track them separately, one of them may not be
hit. Or if you have one Watchpoint on a variable with a condition set,
and another Watchpoint on that same variable with a command defined or
different condition, or ignorecount, both of those Watchpoints need to
evaluate their criteria/commands when their WatchpointResource has been
hit.
There's a bit of code movement to rearrange things in the direction I'll
need for implementing this feature, so I want to start with reviewing &
landing this mostly NFC patch and we can focus on the algorithmic
choices about how WatchpointResources are shared and handled as they're
triggered, separately.
This patch also stops printing "Watchpoint <n> hit: old value: <x>, new
value: <y>" for Read watchpoints. I could make an argument for printing
"Watchpoint <n> hit: current value <x>", but the current output doesn't
make any sense, and the user can print the value if they are
particularly interested. Read watchpoints are used primarily to
understand what code is reading a variable.
This patch adds more fallbacks for how to print the objects being
watched if we have types, instead of assuming they are all integral
values, so a struct will print its elements. As large watchpoints are
added, we'll be doing a lot more of those.
To track the WatchpointSP in the WatchpointResources, I changed the
internal API which took a WatchpointSP and devolved it to a Watchpoint*,
which meant touching several different Process files. I removed the
watchpoint code in ProcessKDP which only reported that watchpoints
aren't supported, the base class does that already.
I haven't yet changed how we receive a watchpoint to identify the
WatchpointResource responsible for the trigger, and identify all
Watchpoints that are using this Resource to evaluate their conditions
etc. This is the same work that a BreakpointSite needs to do when it has
been triggered, where multiple Breakpoints may be at the same address.
There is not yet any printing of the Resources that a Watchpoint is
implemented in terms of ("watchpoint list", or
SBWatchpoint::GetDescription).
"watchpoint set var" and "watchpoint set expression" take a size
argument which was previously 1, 2, 4, or 8 (an enum). I've changed this
to an unsigned int. Most hardware implementations can only watch 1, 2,
4, 8 byte ranges, but with Resources we'll allow a user to ask for
different sized watchpoints and set them in hardware-expressible terms
soon.
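As a sketch of the direction (the option spelling below is from memory; whether a given size succeeds still depends on the target's hardware and the follow-up patches):
```
(lldb) watchpoint set variable --size 24 my_24_byte_object
```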
I've annotated areas where I know there is work still needed with
LWP_TODO that I'll be working on once this is landed.
I've tested this on aarch64 macOS, aarch64 Linux, and Intel macOS.
https://discourse.llvm.org/t/rfc-large-watchpoint-support-in-lldb/72116
(cherry picked from commit fc6b72523f)
Currently when you interrupt a:
(lldb) process attach -w -n some_process
lldb just closes the connection to the stub and kills the
lldb_private::Process it made for the attach. The stub at the other end
notices the connection go down and exits because of that. But when
communication to a device is handled through some kind of proxy server
which isn't as well behaved as one would wish, that signal might not be
reliable, causing debugserver to persist on the machine, waiting to
steal the next instance of that process.
We can work around those failures by sending an explicit interrupt
before closing down the connection. The stub will also have to be
waiting for the interrupt for this to make any difference. I changed
debugserver to do that.
I didn't make the equivalent change in lldb-server. So long as you
aren't faced with a flaky connection, this should not be necessary.
In https://github.com/llvm/llvm-project/pull/73626 we started attaching
`DW_AT_const_value`s on a static data-member's declaration again. In
DWARFv5, those static members are represented with a `DW_TAG_variable`.
When LLDB builds the `ManualDWARFIndex`, it simply iterates over all
DIEs in a CU and puts *any* `DW_TAG_variable` with a constant or
location into the index. So when using the manual index, we can end up
having 2 entries for a static data member in the index, one for the
declaration and one for the definition.
This caused a test failure on Linux (where DWARFv5 is the default and
the tests use the manual index).
This patch loosens the restriction that we find exactly 1 variable.
Part of fixes for #72913.
clang emits `DW_AT_alignment` attribute, however LLDB didn't respect it,
resulting in incorrect RecordDecls built by lldb.
This only fixes non-inheritance cases. The inheritance case will be
handled in a follow-up patch.
Small change to get `image dump separate-debug-info` working when using
`symbols.load-on-demand`.
Added tests to `TestDumpDwo`, and enabled the test for all platforms. If we fail to build, we skip the test, so this shouldn't cause the test to fail on unsupported platforms.
```
bin/lldb-dotest -p TestDumpDwo
```
It's easy to verify this manually by running
```
lldb --one-line-before-file "settings set symbols.load-on-demand true" <some_target>
(lldb) image dump separate-debug-info
...
```
Co-authored-by: Tom Yang <toyang@fb.com>
Add a new API in SBTarget to load a core from an SBFile.
This enables a target to load a core from a file descriptor, so that a
coredumper doesn't need to write the core file to disk; instead it can
pass the input file descriptor to lldb directly.
Test:
```
(lldb) script
Python Interactive Interpreter. To exit, type 'quit()', 'exit()' or Ctrl-D.
>>> file_object = open("/home/hyubo/210hda79ms32sr0h", "r")
>>> fd=file_object.fileno()
>>> file = lldb.SBFile(fd,'r', True)
>>> error = lldb.SBError()
>>> target = lldb.debugger.CreateTarget(None)
>>> target.LoadCore(file,error)
SBProcess: pid = 56415, state = stopped, threads = 1
```
These error messages are written in a way that makes sense to an lldb
developer, but not to an end user who asks lldb to run on a compressed
corefile or whatever. Simplify the messages.
The Watchpoint and Breakpoint objects try to track the hardware index
that was used for them, if they are hardware wp/bp's. The majority of
our debugging goes over the gdb remote serial protocol, and when we set
the watchpoint/breakpoint, there is no (standard) way for the remote
stub to communicate to lldb which hardware index was used. We have an
lldb-extension packet to query the total number of watchpoint registers.
When a watchpoint is hit, there is an lldb extension to the stop reply
packet (documented in lldb-gdb-remote.txt) to describe the watchpoint
including its actual hardware index,
<addr within wp range> <wp hw index> <actual accessed address>
(the third field is specifically needed for MIPS). At this point, if the
stub reported these three fields (the stub is only required to provide
the first), we can know the actual hardware index for this watchpoint.
Breakpoints are worse; there's never any way for us to be notified about
which hardware index was used. Breakpoints got this as a side effect of
inheriting from StoppointSite along with Watchpoints.
We expose the watchpoint hardware index through "watchpoint list -v" and
through SBWatchpoint::GetHardwareIndex.
With my large watchpoint support, there is no *single* hardware index
that may be used for a watchpoint; it may need multiple resources. Also
I don't see what a user is supposed to do with this information, or an
IDE. Knowing the total number of watchpoint registers on the target, and
knowing how many Watchpoint Resources are currently in use, is helpful.
Knowing how many Watchpoint Resources a single user-specified watchpoint
needed in order to be implemented is useful.
But knowing which registers were used is an implementation detail and
not available until we hit the watchpoint when using gdb remote serial
protocol.
So given all that, I'm removing watchpoint hardware index numbers. I'm
changing the SB API to always return -1.
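A minimal Python sketch of the SB API change (the address and watch parameters are hypothetical):
```
# Watch 4 bytes at some address; the hardware index accessor no longer
# reports an implementation detail.
error = lldb.SBError()
wp = target.WatchAddress(0x100004000, 4, False, True, error)
print(wp.GetHardwareIndex())  # now always returns -1
```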
When this option gets enabled, descriptions of threads will be generated
using the format provided in the launch configuration instead of
generating it manually in the dap code. This allows lldb-dap to show an
output similar to the one in the CLI.
This is very similar to https://github.com/llvm/llvm-project/pull/71843
`638a8393615e911b729d5662096f60ef49f1c65e` removed the `dsym`
condition for older compiler versions, which caused the `dwarf`
variant tests to XPASS. This patch reverts to only XFAIL-ing
the `dsym` variant.
`15c80852028ff4020b3f85ee13ad3a2ed4bce3be` added
`test_shadowed_static_inline_members` which isn't supported
on older compiler versions.
When this option gets enabled, descriptions of stack frames will be
generated using the format provided in the launch configuration instead
of simply calling `SBFrame::GetDisplayFunctionName`. This allows
lldb-dap to show an output similar to the one in the CLI.
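A sketch of enabling this from a launch configuration (shown as a Python dict; the key names are my best understanding of the new options and should be treated as assumptions):
```
launch_config = {
    "program": "/tmp/a.out",
    # Format strings use LLDB's frame/thread format syntax:
    "customFrameFormat": "${function.name}{ at ${line.file.basename}:${line.number}}",
    "customThreadFormat": "${thread.id}{ ${thread.name}}",
}
```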