This reverts commit 01c4ecb7ae,
d14d52158b and
a756dc4724.
This removes the logging and workaround I added earlier,
and puts back the skip for Arm/AArch64 Linux.
I've not seen it fail on AArch64 since, but let's not create
more noise if it does.
I've written up the issue as https://github.com/llvm/llvm-project/issues/76057.
It's something to do with trying to destroy a process while
a thread is doing a single step. So my workaround wouldn't have
worked in any case. It needs a more involved fix.
And remove the workaround I was trying, as this logging may prove what
the actual issue is.
Which I think is that the thread plan map in Process is cleared before
the threads are destroyed. So Thread::ShouldStop could be getting
the current plan, then the plan map is cleared, then Thread::ShouldStop
is deciding based on that plan to pop a plan from the now empty stack.
This patch replaces uses of StringRef::{starts,ends}with with
StringRef::{starts,ends}_with for consistency with
std::{string,string_view}::{starts,ends}_with in C++20.
I'm planning to deprecate and eventually remove
StringRef::{starts,ends}with.
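For illustration, call sites change mechanically like this (the helper function here is a made-up example, not from the patch):
```
#include "llvm/ADT/StringRef.h"

// Before: Name.startswith("__") && Name.endswith("_impl")
// After, matching std::string_view's C++20 spelling:
static bool isReservedImplName(llvm::StringRef Name) {
  return Name.starts_with("__") && Name.ends_with("_impl");
}
```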
This reverts commit 481bb62e50 and
71b4d7498f, along with the logging
and assert I had added to the test previously.
Now that I've caught it failing on Arm:
https://lab.llvm.org/buildbot/#/builders/17/builds/46598
Now I have enough to investigate, skip the test on the affected
platforms while I do that.
This is part of ongoing attempts to catch the test from
2684281d20 failing on Arm and AArch64.
I did get logs for the failure but only on Arm, where the backtrace is
truncated. So, let's do the assert that PopPlan was going to do,
before we call it.
Then I should know exactly which PopPlan is asserting.
Technically I should take a mutex here, but technically I shouldn't
be debugging via buildbot, so I'm going to take the risk temporarily.
This reverts commit 35dacf2f51.
And relands the original change with two additions so I can debug the failure on Arm/AArch64:
* Enable lldb step logging in the tests.
* Assert that the current plan is not the base plan at the spot I believe is calling PopPlan.
These will be removed and replaced with a proper fix once I see some failures on the bots;
I couldn't reproduce it locally.
(also, no sign of it on the x86_64 bot)
This patch revives the effort to get this Phabricator patch into
upstream:
https://reviews.llvm.org/D137900
This patch was accepted before in Phabricator but I found some
-gsimple-template-names issues that are fixed in this patch.
A fixed up version of the description from the original patch starts
now.
This patch started off trying to fix Module::FindFirstType() as it
sometimes didn't work. The issue was the SymbolFile plug-ins didn't do
any filtering of the matching types they produced, and they only looked
up types using the type basename. This means if you have two types with
the same basename, your type lookup can fail when only looking up a
single type. We would ask Module::FindFirstType to look up "Foo::Bar"
and it would ask the symbol file to find only 1 type matching the
basename "Bar", and then we would filter out any matches that didn't
match "Foo::Bar". So if the SymbolFile found "Foo::Bar" first, then it
would work, but if it found "Baz::Bar" first, it would return only that
type and it would be filtered out.
Discovering this issue led me to think of the patch Alex Langford did a
few months ago that was done for finding functions, where he allowed
SymbolFile objects to make sure something fully matched before parsing
the debug information into an AST type and other LLDB types. So this
patch aimed to allow type lookups to also be much more efficient.
As LLDB has been developed over the years, we added more ways to do type
lookups. These functions have lots of arguments. This patch aims to make
one API that needs to be implemented that serves all previous lookups:
- Find a single type
- Find all types
- Find types in a namespace
This patch introduces a `TypeQuery` class that contains all of the state
needed to perform the lookup which is powerful enough to perform all of
the type searches that used to be in our API. It contains a vector of
CompilerContext objects that can fully or partially specify the lookup
that needs to take place.
If you just want to lookup all types with a matching basename,
regardless of the containing context, you can specify just a single
CompilerContext entry that has a name and a CompilerContextKind mask of
CompilerContextKind::AnyType.
Or you can fully specify the exact context to use when doing lookups
like: CompilerContextKind::Namespace "std"
CompilerContextKind::Class "foo"
CompilerContextKind::Typedef "size_type"
This change expands on the clang modules code that already used a
vector of CompilerContext items, but it modifies it to work with
expression type lookups which have contexts, or user lookups where users
query for types. The clang modules type lookup is still an option that
can be enabled on the `TypeQuery` objects.
This mirrors the most recent addition of type lookups that took a
vector<CompilerContext> that allowed lookups to happen for the
expression parser in certain places.
Prior to this we had the following APIs in Module:
```
void Module::FindTypes(
    ConstString type_name, bool exact_match, size_t max_matches,
    llvm::DenseSet<lldb_private::SymbolFile *> &searched_symbol_files,
    TypeList &types);

void Module::FindTypes(
    llvm::ArrayRef<CompilerContext> pattern, LanguageSet languages,
    llvm::DenseSet<lldb_private::SymbolFile *> &searched_symbol_files,
    TypeMap &types);

void Module::FindTypesInNamespace(ConstString type_name,
                                  const CompilerDeclContext &parent_decl_ctx,
                                  size_t max_matches, TypeList &type_list);
```
The new Module API is much simpler. It gets rid of all three above
functions and replaces them with:
```
void FindTypes(const TypeQuery &query, TypeResults &results);
```
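A rough sketch of how a caller might drive the single entry point; the exact TypeQuery/TypeResults constructors and accessors shown here are assumptions for illustration, not the precise API from the patch:
```
// Hedged sketch: look up "std::foo::size_type" through the one new API.
TypeQuery query("std::foo::size_type");        // assumed name-based constructor
TypeResults results;
module_sp->FindTypes(query, results);          // single SymbolFile entry point
lldb::TypeSP type_sp = results.GetFirstType(); // assumed accessor
```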
The `TypeQuery` class contains all of the needed settings:
- The vector<CompilerContext> that allows efficient lookups in the symbol
file classes, since they can look at basename matches and only realize
fully matching types. Before this, any basename that matched was fully
realized, only to be removed later by code outside of the SymbolFile
layer, which could cause many types to be realized when they didn't need to be.
- Whether the lookup is exact or not. If not exact, the query only needs
to match the bottom-most items of the compiler context; otherwise the
context must match exactly.
- Whether the compiler context match is for clang modules or not. Clang
module matches include a Module compiler context kind that allows types
to be matched only from certain modules; these matches are not needed
when doing user type lookups.
- An optional list of languages to use to limit the search to only
certain languages
The `TypeResults` object contains all state required to do the lookup
and store the results:
- The max number of matches
- The set of SymbolFile objects that have already been searched
- The matching type list for any matches that are found
The benefits of this approach are:
- Simpler API, and only one API to implement in SymbolFile classes
- Replaces the FindTypesInNamespace that used a CompilerDeclContext as a
way to limit the search, but that only worked if the TypeSystem matched
the current symbol file's type system, so you couldn't use it to look up
a type in another module
- Fixes a serious bug in our FindFirstType functions where if we were
searching for "foo::bar" and found "baz::bar" first, the basename
would match and we would only fetch 1 type using the basename, only to
drop it from the matching list and return no results
This commit fixes a few subtle bugs where the method:
1. Declares a local `Status error` which eclipses the method's parameter
`Status &error`.
- The method then sets the error state to the local `error` and returns
without ever touching the parameter `&error`.
- This effectively traps the error state and its message, so they never
reach the caller (a reduced sketch of this pattern follows the list).
- I also threw in a null pointer check in case the callee doesn't set
its `Status` parameter but returns `0`/`nullptr`.
2. Declares a local `Status deref_error` (good), passes it to the
`Dereference` method (also good), but then checks the status of the
method's `Status &error` parameter (not good).
- The fix checks `deref_error` instead and also checks for a `nullptr`
return value.
- There's a good opportunity here for a future PR that changes the
`Dereference` method to fold an error state into the `ValueObject`
return value's `m_error` instead of using a parameter.
3. Declares another local `Status error`, which it doesn't pass to a
method (because there isn't a parameter for it), and then checks for an
error condition that never happens.
- The fix just checks the callee's return value, because that's all it
has to go on.
- This likely comes from a copy/paste from issue 1 above.
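Reduced to stand-in types and names, the shadowing pattern from issue 1 looks roughly like this (simplified, not the actual LLDB method):
```
#include <cstdint>
#include <string>

// Stand-in for lldb_private::Status, just enough for the sketch.
struct Status {
  bool Fail() const { return !msg.empty(); }
  std::string msg;
};

int64_t GetValueOrZero(Status &error) { // caller's `error`, never written
  {
    Status error; // inner-scope local eclipses the parameter
    error.msg = "could not resolve value"; // only the *local* is set
    if (error.Fail())
      return 0; // returns; the caller's `error` still reports success
  }
  return 42;
}
```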
rdar://119155810
We need to generate events when finalizing, or we won't know that we
succeeded in stopping the process to detach/kill. Instead, we stall and
then after our 20 interrupt timeout, we kill the process (even if we
were supposed to detach) and exit.
OTOH, we have to not generate events when the Process is being
destructed because shared_from_this has already been torn down, and
using it will cause crashes.
This patch is rearranging code a bit to add WatchpointResources to
Process. A WatchpointResource is meant to represent a hardware
watchpoint register in the inferior process. It has an address, a size,
a type, and a list of Watchpoints that are using this
WatchpointResource.
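As a rough sketch of the shape being added (member names here are guesses; the real class has more state and API):
```
#include <cstddef>
#include <cstdint>
#include <vector>

class Watchpoint; // stands in for lldb_private::Watchpoint

// Hedged sketch of a WatchpointResource: one hardware watchpoint register,
// possibly shared by several user-level Watchpoints.
class WatchpointResource {
public:
  WatchpointResource(uint64_t addr, size_t size, bool read, bool write)
      : m_addr(addr), m_size(size), m_read(read), m_write(write) {}

  void AddConstituent(Watchpoint *wp) { m_constituents.push_back(wp); }

private:
  uint64_t m_addr;  // watched address
  size_t m_size;    // watched byte range
  bool m_read;      // triggers on reads
  bool m_write;     // triggers on writes
  std::vector<Watchpoint *> m_constituents; // Watchpoints using this resource
};
```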
This current patch doesn't add any of the features of
WatchpointResources that make them interesting -- a user asking to watch
a 24 byte object could watch this with three 8 byte WatchpointResources.
Or a Watchpoint on 1 byte at 0x1002 and a second Watchpoint on 1 byte at
0x1003: on a 64-bit target these must both be served by a single
WatchpointResource on the doubleword at 0x1000, because if two hardware
watchpoint registers were used to track them separately, one of them might not be
hit. Or if you have one Watchpoint on a variable with a condition set,
and another Watchpoint on that same variable with a command defined or
different condition, or ignorecount, both of those Watchpoints need to
evaluate their criteria/commands when their WatchpointResource has been
hit.
There's a bit of code movement to rearrange things in the direction I'll
need for implementing this feature, so I want to start with reviewing &
landing this mostly NFC patch and we can focus on the algorithmic
choices about how WatchpointResources are shared and handled as they're
triggered, separately.
This patch also stops printing "Watchpoint <n> hit: old value: <x>, new
value: <y>" for Read watchpoints. I could make an argument for printing
"Watchpoint <n> hit: current value <x>", but the current output doesn't
make any sense, and the user can print the value if they are
particularly interested. Read watchpoints are used primarily to
understand what code is reading a variable.
This patch adds more fallbacks for how to print the objects being
watched if we have types, instead of assuming they are all integral
values, so a struct will print its elements. As large watchpoints are
added, we'll be doing a lot more of those.
To track the WatchpointSP in the WatchpointResources, I changed the
internal API which took a WatchpointSP and devolved it to a Watchpoint*,
which meant touching several different Process files. I removed the
watchpoint code in ProcessKDP which only reported that watchpoints
aren't supported; the base class does that already.
I haven't yet changed how we receive a watchpoint to identify the
WatchpointResource responsible for the trigger, and identify all
Watchpoints that are using this Resource to evaluate their conditions
etc. This is the same work that a BreakpointSite needs to do when it has
been triggered, where multiple Breakpoints may be at the same address.
There is not yet any printing of the Resources that a Watchpoint is
implemented in terms of ("watchpoint list", or
SBWatchpoint::GetDescription).
"watchpoint set var" and "watchpoint set expression" take a size
argument which was previously 1, 2, 4, or 8 (an enum). I've changed this
to an unsigned int. Most hardware implementations can only watch 1, 2,
4, 8 byte ranges, but with Resources we'll allow a user to ask for
different sized watchpoints and set them in hardware-expressible terms
soon.
I've annotated areas where I know there is work still needed with
LWP_TODO that I'll be working on once this is landed.
I've tested this on aarch64 macOS, aarch64 Linux, and Intel macOS.
https://discourse.llvm.org/t/rfc-large-watchpoint-support-in-lldb/72116
(cherry picked from commit fc6b72523f)
Currently when you interrupt a:
(lldb) process attach -w -n some_process
lldb just closes the connection to the stub and kills the
lldb_private::Process it made for the attach. The stub at the other end
notices the connection go down and exits because of that. But when
communication to a device is handled through some kind of proxy server
which isn't as well behaved as one would wish, that signal might not be
reliable, causing debugserver to persist on the machine, waiting to
steal the next instance of that process.
We can work around those failures by sending an explicit interrupt
before closing down the connection. The stub will also have to be
waiting for the interrupt for this to make any difference. I changed
debugserver to do that.
I didn't make the equivalent change in lldb-server. So long as you
aren't faced with a flakey connection, this should not be necessary.
This patch is rearranging code a bit to add WatchpointResources to
Process. A WatchpointResource is meant to represent a hardware
watchpoint register in the inferior process. It has an address, a size,
a type, and a list of Watchpoints that are using this
WatchpointResource.
This current patch doesn't add any of the features of
WatchpointResources that make them interesting -- a user asking to watch
a 24 byte object could watch this with three 8 byte WatchpointResources.
Or a Watchpoint on 1 byte at 0x1002 and a second Watchpoint on 1 byte at
0x1003: on a 64-bit target these must both be served by a single
WatchpointResource on the doubleword at 0x1000, because if two hardware
watchpoint registers were used to track them separately, one of them might not be
hit. Or if you have one Watchpoint on a variable with a condition set,
and another Watchpoint on that same variable with a command defined or
different condition, or ignorecount, both of those Watchpoints need to
evaluate their criteria/commands when their WatchpointResource has been
hit.
There's a bit of code movement to rearrange things in the direction I'll
need for implementing this feature, so I want to start with reviewing &
landing this mostly NFC patch and we can focus on the algorithmic
choices about how WatchpointResources are shared and handled as they're
triggered, separately.
This patch also stops printing "Watchpoint <n> hit: old value: <x>, new
value: <y>" for Read watchpoints. I could make an argument for printing
"Watchpoint <n> hit: current value <x>", but the current output doesn't
make any sense, and the user can print the value if they are
particularly interested. Read watchpoints are used primarily to
understand what code is reading a variable.
This patch adds more fallbacks for how to print the objects being
watched if we have types, instead of assuming they are all integral
values, so a struct will print its elements. As large watchpoints are
added, we'll be doing a lot more of those.
To track the WatchpointSP in the WatchpointResources, I changed the
internal API which took a WatchpointSP and devolved it to a Watchpoint*,
which meant touching several different Process files. I removed the
watchpoint code in ProcessKDP which only reported that watchpoints
aren't supported; the base class does that already.
I haven't yet changed how we receive a watchpoint to identify the
WatchpointResource responsible for the trigger, and identify all
Watchpoints that are using this Resource to evaluate their conditions
etc. This is the same work that a BreakpointSite needs to do when it has
been triggered, where multiple Breakpoints may be at the same address.
There is not yet any printing of the Resources that a Watchpoint is
implemented in terms of ("watchpoint list", or
SBWatchpoint::GetDescription).
"watchpoint set var" and "watchpoint set expression" take a size
argument which was previously 1, 2, 4, or 8 (an enum). I've changed this
to an unsigned int. Most hardware implementations can only watch 1, 2,
4, 8 byte ranges, but with Resources we'll allow a user to ask for
different sized watchpoints and set them in hardware-expressible terms
soon.
I've annotated areas where I know there is work still needed with
LWP_TODO that I'll be working on once this is landed.
I've tested this on aarch64 macOS, aarch64 Linux, and Intel macOS.
https://discourse.llvm.org/t/rfc-large-watchpoint-support-in-lldb/72116
The Watchpoint and Breakpoint objects try to track the hardware index
that was used for them, if they are hardware wp/bp's. The majority of
our debugging goes over the gdb remote serial protocol, and when we set
the watchpoint/breakpoint, there is no (standard) way for the remote
stub to communicate to lldb which hardware index was used. We have an
lldb-extension packet to query the total number of watchpoint registers.
When a watchpoint is hit, there is an lldb extension to the stop reply
packet (documented in lldb-gdb-remote.txt) to describe the watchpoint
including its actual hardware index,
<addr within wp range> <wp hw index> <actual accessed address>
(the third field is specifically needed for MIPS). At this point, if the
stub reported these three fields (the stub is only required to provide
the first), we can know the actual hardware index for this watchpoint.
Breakpoints are worse; there's never any way for us to be notified about
which hardware index was used. Breakpoints got this as a side effect of
inheriting from StoppointSite with Watchpoints.
We expose the watchpoint hardware index through "watchpoint list -v" and
through SBWatchpoint::GetHardwareIndex.
With my large watchpoint support, there is no *single* hardware index
that may be used for a watchpoint; it may need multiple resources. I
also don't see what a user, or an IDE, is supposed to do with this
information. Knowing the total number of watchpoint registers on the
target, and knowing how many WatchpointResources are currently in use,
is helpful. Knowing how many WatchpointResources a single user-specified
watchpoint needed in order to be implemented is also useful.
But knowing which registers were used is an implementation detail and
not available until we hit the watchpoint when using gdb remote serial
protocol.
So given all that, I'm removing watchpoint hardware index numbers. I'm
changing the SB API to always return -1.
When this option gets enabled, descriptions of threads will be generated
using the format provided in the launch configuration instead of
generating them manually in the DAP code. This allows lldb-dap to show an
output similar to the one in the CLI.
This is very similar to https://github.com/llvm/llvm-project/pull/71843
When this option gets enabled, descriptions of stack frames will be
generated using the format provided in the launch configuration instead
of simply calling `SBFrame::GetDisplayFunctionName`. This allows
lldb-dap to show an output similar to the one in the CLI.
Prior to this patch, each core file plugin (ObjectFileMachO.cpp and
ObjectFileMinidump.cpp) would calculate the address ranges to save in
different ways. This patch adds a new function to Process.h/.cpp:
```
Status Process::CalculateCoreFileSaveRanges(lldb::SaveCoreStyle core_style, CoreFileMemoryRanges &ranges);
```
The patch updates the ObjectFileMachO::SaveCore(...) and
ObjectFileMinidump::SaveCore(...) to use the same code. This will allow
core files to be consistent with the lldb::SaveCoreStyle across
different core file creators and will allow us to add new core file
saving features that do more complex things in future patches.
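A hedged sketch of how a SaveCore implementation might now share the range calculation; the iteration details of CoreFileMemoryRanges and the surrounding variables are assumed here:
```
// Sketch only: inside an ObjectFile::SaveCore(...) implementation.
CoreFileMemoryRanges ranges;
Status error = process_sp->CalculateCoreFileSaveRanges(core_style, ranges);
if (error.Fail())
  return error;
for (const auto &range : ranges) {
  // write this memory range's bytes into the core file...
}
```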
Similar to my previous patch (#71613) where I changed
`GetItemAtIndexAsString`, this patch makes the same change to
`GetItemAtIndexAsDictionary`.
`GetItemAtIndexAsDictionary` now returns a std::optional that is either
`std::nullopt` or is a valid pointer. Therefore, if the optional is
populated, we consider the pointer to always be valid (i.e. no need to
check pointer validity).
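Call sites then follow this shape (a sketch; the index and uses are illustrative):
```
// A populated optional implies a valid, non-null Dictionary pointer.
if (std::optional<StructuredData::Dictionary *> maybe_dict =
        array_sp->GetItemAtIndexAsDictionary(idx)) {
  StructuredData::Dictionary *dict = *maybe_dict; // no null check needed
  // ... read keys out of dict ...
}
```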
This patch changes the interface of
StructuredData::Array::GetItemAtIndexAsString to return a
`std::optional<llvm::StringRef>` instead of taking an out parameter.
More generally, this commit serves as proposal that we change all of the
sibling APIs (`GetItemAtIndexAs`) to do the same thing. The reason this
isn't one giant patch is because it is rather unwieldy changing just one
of these, so if this is approved, I will do all of the other ones as
individual follow-ups.
The contents of which are mostly SPSR_EL1 as shown in the Arm manual,
with a few adjustments for things Linux says userspace shouldn't concern
itself with.
```
(lldb) register read cpsr
cpsr = 0x80001000
= (N = 1, Z = 0, C = 0, V = 0, SS = 0, IL = 0, ...
```
Some fields are always present, some depend on extensions. I've checked
for those extensions using HWCAP and HWCAP2.
To provide this for core files and live processes I've added a new class
LinuxArm64RegisterFlags. This is a container for all the registers we'll
want to have fields for, and it handles detecting those fields and
updating register info.
This is used by the native process as follows:
* There is a global LinuxArm64RegisterFlags object.
* The first thread takes a mutex on it, and updates the fields.
* Subsequent threads see that detection is already done, and skip it.
* All threads then update their own copy of the register information
with pointers to the field information contained in the global object.
This means that even though every thread will have the same fields, we
only detect them once and have one copy of the information.
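A simplified sketch of that detect-once pattern (class and member names are illustrative, not the exact implementation):
```
#include <cstdint>
#include <mutex>

class LinuxArm64RegisterFlags {
public:
  void DetectFields(uint64_t hwcap, uint64_t hwcap2) {
    std::lock_guard<std::mutex> guard(m_mutex);
    if (m_detected)
      return; // another thread already filled in the field lists
    // ... inspect hwcap/hwcap2 and record which CPSR fields exist ...
    m_detected = true;
  }

private:
  std::mutex m_mutex;
  bool m_detected = false;
};

// One global instance; each thread's register info points at the field
// data stored here, so detection happens once per process.
LinuxArm64RegisterFlags g_register_flags;
```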
Core files instead have a LinuxArm64RegisterFlags as a member, because
each core file could have different saved capabilities. The logic from
there is the same, but we get HWCAP values from the corefile note.
This handler class is Linux specific right now, but it can easily be
made more generic if needed. For example by using LLVM's FeatureBitset
instead of HWCAPs.
Updating register info is done with string comparison, which isn't
ideal. For CPSR, we do know the register number ahead of time but we do
not for other registers in dynamic register sets. So in the interest of
consistency, I'm going to use string comparison for all registers
including cpsr.
I've added tests with a core file and a live process. They only check for
fields that are always present, to account for CPU variance.
This patch makes ScriptedThreadPlan conform to the ScriptedInterface
& ScriptedPythonInterface facilities by introducing two classes,
ScriptedThreadPlanInterface & ScriptedThreadPlanPythonInterface.
This allows us to get rid of every ScriptedThreadPlan-specific SWIG
method and re-use the same affordances as other scripted features,
like Scripted{Process,Thread,Platform} & OperatingSystem.
To do so, this adds new transformer methods for `ThreadPlan`, `Stream` &
`Event`, to allow the bijection between C++ objects and their Python
counterparts.
Signed-off-by: Med Ismail Bennani <ismail@bennani.ma>
This adds ToXML methods to encode RegisterFlags and its fields into XML
according to GDB's target XML format:
https://sourceware.org/gdb/onlinedocs/gdb/Target-Description-Format.html#Target-Description-Format
lldb-server does not use libXML to build XML, so this follows the
existing code that uses strings. Indentation is used so the result is
still human readable.
```
<flags id="Foo" size="4">
  <field name="abc" start="0" end="0"/>
</flags>
```
This is used by lldb-server when building target XML, though no one sets
any fields yet. That'll come in a later commit.
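Roughly, the string building looks like this (a self-contained sketch, not the exact RegisterFlags::ToXML code):
```
#include <sstream>
#include <string>
#include <vector>

struct Field {
  std::string name;
  unsigned start, end;
};

std::string FlagsToXML(const std::string &id, unsigned size,
                       const std::vector<Field> &fields) {
  std::ostringstream xml;
  xml << "<flags id=\"" << id << "\" size=\"" << size << "\">\n";
  for (const Field &f : fields)
    xml << "  <field name=\"" << f.name << "\" start=\"" << f.start
        << "\" end=\"" << f.end << "\"/>\n";
  xml << "</flags>\n";
  return xml.str();
}
```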
## Description
This pull request adds a new `stop-at-user-entry` option to LLDB
`process launch` command, allowing users to launch a process and pause
execution at the entry point of the program (for C-based languages,
`main` function).
## Motivation
This option provides a convenient way to begin debugging a program by
launching it and breaking at the desired entry point.
## Changes Made
- Added `stop-at-user-entry` option to `Options.td` and the
corresponding case in `CommandOptionsProcessLaunch.cpp` (short option is
'm')
- Implemented the `GetUserEntryPointName` method in the currently
available Language plugins (a small sketch follows the Usage section below).
- Declared the `CreateBreakpointAtUserEntry` method in the Target API.
- Created a shell test for the command:
`command-process-launch-user-entry.test`.
## Usage
`process launch --stop-at-user-entry` or `process launch -m` launches
the process and pauses execution at the entry point of the program.
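For illustration, the per-language hook amounts to something like this for C-family languages (a hypothetical shape; the concrete signature in the patch may differ):
```
#include "llvm/ADT/StringRef.h"

// Hypothetical stand-in for a Language plugin implementing the new hook.
struct CFamilyLanguage {
  llvm::StringRef GetUserEntryPointName() const { return "main"; }
};
```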
This patch implements thread-local storage support for Linux
(https://github.com/llvm/llvm-project/issues/28766).
The TLS feature was originally only implemented for Mac. With my previous
patch to enable `fs_base` register for Linux
(https://reviews.llvm.org/D155256), now it is feasible to implement this
feature for Linux.
The major changes are:
* Track the main module's link address during launch
* Fetch thread pointer from `fs_base` register
* Create register alias for thread pointer
* Read pthread metadata from target memory instead of the process, so
that it works for core dumps
With this patch the previously failing test now passes. Note: I am only
enabling this test for Mac and Linux because I do not have a machine to test on
FreeBSD/NetBSD.
---------
Co-authored-by: jeffreytan81 <jeffreytan@fb.com>
This reverts commit a7b78cac9a.
With updates to the tests.
TestWatchTaggedAddress.py: Updated the expected watchpoint types,
though I'm not sure there should be a different default for the two
ways of setting them, that needs to be confirmed.
TestStepOverWatchpoint.py: Skipped this everywhere because I think
what used to happen is you couldn't put 2 watchpoints on the same
address (after alignment). I guess that this is now allowed because
modify watchpoints aren't accounted for, but they likely should be.
Needs investigating.
Watchpoints in lldb can be either 'read', 'write', or 'read/write'. This
is exposing the actual behavior of hardware watchpoints. gdb has a
different behavior: a "write" type watchpoint only stops when the
watched memory region *changes*.
A user is using a watchpoint for one of three reasons:
1. Want to find what is changing/corrupting this memory.
2. Want to find what is writing to this memory.
3. Want to find what is reading from this memory.
I believe (1) is the most common use case for watchpoints, and it
currently can't be done in lldb -- the user has to manually continue
every time the same value is written to the watched memory. I think gdb's
behavior is the correct one. There are some use cases where a developer
wants to find every function that writes/reads to/from a memory region
regardless of value, and I want to still allow that functionality.
This is also a bit of groundwork for my large watchpoint support
proposal
https://discourse.llvm.org/t/rfc-large-watchpoint-support-in-lldb/72116
where I will be adding support for AArch64 MASK watchpoints which watch
power-of-2 memory regions. A user might ask to watch 24 bytes, and a
MASK watchpoint stub can do this with a 32-byte MASK watchpoint if it is
properly aligned. And we need to ignore writes to the final 8 bytes of
that watched region, and not show those hits to the user.
This patch adds a new 'modify' watchpoint type and it is the default.
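The core of the 'modify' behavior, reduced to a stand-in check: only report a stop when the bytes actually changed (the real logic lives in the watchpoint/stop-info code and also handles conditions and errors):
```
#include <cstring>
#include <vector>

bool ShouldReportModifyWatchpointHit(const std::vector<char> &old_bytes,
                                     const std::vector<char> &new_bytes) {
  if (old_bytes.size() != new_bytes.size())
    return true; // be conservative if the sizes differ
  // Same value written back: silently auto-continue instead of stopping.
  return std::memcmp(old_bytes.data(), new_bytes.data(),
                     old_bytes.size()) != 0;
}
```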
Re-landing this patch after addressing testsuite failures found in CI on
Linux, Intel machines, and Windows.
rdar://108234227
The problem is that when the "attach" command is initiated, the
ExecutionContext for the command has a process - it's the exited one
from the previous run. But the `attach wait` creates a new process for
the attach, and then errors out instead of interrupting when it finds
that its process and the one in the command's ExecutionContext don't
match.
This change checks that if we're returning a target from
GetExecutionContext, we fill the context with its current process, not
some historical one.
The size of ZA depends on the streaming vector length regardless
of the active mode. So in addition to vg (which reports the active
mode) we must send the client svg.
Otherwise the mechanics are the same as for non-streaming SVE.
Use the svg value to update the defined size of ZA, accounting
for the fact that ZA is not a single vector but a square matrix.
So if svg is 8, a single streaming vector would be 8*8 = 64 bytes.
ZA is that squared, so 64*64 = 4096 bytes.
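As code, the size calculation above is (assuming svg is reported in 8-byte granules, as vg is):
```
#include <cstdint>

// svg is in 8-byte granules, so the streaming vector length is svg * 8
// bytes, and ZA is a square matrix of that dimension.
constexpr uint64_t ZASizeInBytes(uint64_t svg) {
  uint64_t streaming_vl_bytes = svg * 8;           // svg = 8 -> 64 bytes
  return streaming_vl_bytes * streaming_vl_bytes;  // 64 * 64 = 4096 bytes
}
static_assert(ZASizeInBytes(8) == 4096, "matches the example above");
```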
Testing is included in a later patch.
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D159504
Watchpoints in lldb can be either 'read', 'write', or 'read/write'. This
is exposing the actual behavior of hardware watchpoints. gdb has a
different behavior: a "write" type watchpoint only stops when the
watched memory region *changes*.
A user is using a watchpoint for one of three reasons:
1. Want to find what is changing/corrupting this memory.
2. Want to find what is writing to this memory.
3. Want to find what is reading from this memory.
I believe (1) is the most common use case for watchpoints, and it
currently can't be done in lldb -- the user has to manually continue
every time the same value is written to the watched memory. I think gdb's
behavior is the correct one. There are some use cases where a developer
wants to find every function that writes/reads to/from a memory region
regardless of value, and I want to still allow that functionality.
This is also a bit of groundwork for my large watchpoint support
proposal
https://discourse.llvm.org/t/rfc-large-watchpoint-support-in-lldb/72116
where I will be adding support for AArch64 MASK watchpoints which watch
power-of-2 memory regions. A user might ask to watch 24 bytes, and a
MASK watchpoint stub can do this with a 32-byte MASK watchpoint if it is
properly aligned. And we need to ignore writes to the final 8 bytes of
that watched region, and not show those hits to the user.
This patch adds a new 'modify' watchpoint type and it is the default.
rdar://108234227
As suggested by Greg in https://github.com/llvm/llvm-project/pull/66534,
I'm adding a setting at the Target level that controls whether to show
leading zeroes in hex ValueObject values.
This has the benefit of reducing the amount of characters displayed in
certain interfaces, like VSCode.
The remaining use of ConstString in StructuredData is the Dictionary
class. Internally it's backed by a `std::map<ConstString, ObjectSP>`.
I propose that we replace it with a `llvm::StringMap<ObjectSP>`.
Many StructuredData::Dictionary objects are ephemeral and only exist for
a short amount of time. Many of these Dictionaries are only produced
once and are never used again. That leaves us with a lot of string data
in the ConstString StringPool that is sitting there never to be used
again. Even if the same string is used many times for keys of different
Dictionary objects, that is something we can measure and adjust for
instead of assuming that every key may be reused at some point in the
future.
Quick comparisons of key data is likely not a concern with Dictionary,
but the use of `llvm::StringMap` means that lookups should be fast with
its hashing strategy.
Switching to an llvm::StringMap meant that the iteration order may be
different. To account for this when serializing/dumping the dictionary,
I added some code to sort the output by key before emitting anything.
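The sorting looks roughly like this (a sketch with an int stand-in for ObjectSP):
```
#include <algorithm>
#include <vector>
#include "llvm/ADT/StringMap.h"
#include "llvm/ADT/StringRef.h"

void DumpSortedKeys(const llvm::StringMap<int> &dict) {
  std::vector<llvm::StringRef> keys;
  for (const auto &entry : dict)
    keys.push_back(entry.getKey());
  // StringMap iteration order is not stable, so sort before emitting.
  std::sort(keys.begin(), keys.end());
  for (llvm::StringRef key : keys) {
    // ... emit `key` and dict.lookup(key) ...
  }
}
```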
Differential Revision: https://reviews.llvm.org/D159313
The majority of UnixSignals strings are static in the sense that they do
not change. The overwhelming majority of these strings are string
literals. Using ConstString to manage their lifetime does not make
sense. The only exception to this is one of the subclasses of
UnixSignals, for which I have created a StringSet local to that file
which will guarantee the lifetimes of these StringRefs.
As for the other benefits of ConstString, string uniqueness is not a
concern (as many of them are already string literals) and comparing
signal names and aliases should not be a hot path.
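The lifetime trick can be sketched like this (simplified; names are illustrative):
```
#include "llvm/ADT/StringRef.h"
#include "llvm/ADT/StringSet.h"

// Any signal name that is not a string literal gets copied into this set,
// so the StringRefs handed out stay valid for the lifetime of the process.
static llvm::StringRef InternSignalName(llvm::StringRef name) {
  static llvm::StringSet<> g_signal_names;
  return g_signal_names.insert(name).first->getKey();
}
```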
Differential Revision: https://reviews.llvm.org/D159011
Some functions in Process were using LLDB_INVALID_ADDRESS instead of
LLDB_INVALID_TOKEN.
The only visible effect of this appears to be that "process unload
<tab>" would complete to 0 even after the image was unloaded. Since the
command is checking for LLDB_INVALID_TOKEN.
Everything else worked somehow. I've added a check to the existing load
unload tests anyway.
The tab completion cannot be checked as is, but it will be tested when I
make the completions more strict in a later patch.
D157648 broke the function because it put the blocking wait into a
critical section. This meant that, if m_iohandler_sync was not updated
before entering the function, no amount of waiting would help.
Fix that by restricting the scope of the critical section to the
iohandler check.
ConstString can be implicitly converted into a llvm::StringRef. This is
very useful in many places, but it also hides places where we are
creating a ConstString only to use it as a StringRef for the entire
lifespan of the ConstString object.
I locally removed the implicit conversion and found some of the places we
were doing this.
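A typical instance of the pattern being removed looks like this (illustrative function names):
```
#include "lldb/Utility/ConstString.h"
#include "llvm/ADT/StringRef.h"

void LogName(llvm::StringRef name); // hypothetical consumer

void Before(const char *cstr) {
  lldb_private::ConstString interned(cstr); // pays to pool the string...
  LogName(interned); // ...then implicitly converts to StringRef anyway
}

void After(const char *cstr) {
  LogName(cstr); // no ConstString needed for a transient use
}
```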
Differential Revision: https://reviews.llvm.org/D159237
This is the next step in removing ConstString from StructuredData. There
are StringRef overloads already, let's use those where we can.
Differential Revision: https://reviews.llvm.org/D152870