Similar to my previous patch (#71613) where I changed
`GetItemAtIndexAsString`, this patch makes the same change to
`GetItemAtIndexAsDictionary`.
`GetItemAtIndexAsDictionary` now returns a std::optional that is either
`std::nullopt` or is a valid pointer. Therefore, if the optional is
populated, we consider the pointer to always be valid (i.e. no need to
check pointer validity).
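To illustrate the resulting call pattern, here is a minimal standalone
sketch using simplified stand-in types (not the real StructuredData API):
```
#include <cstddef>
#include <cstdio>
#include <optional>

// Hypothetical stand-in for the new API: the return value is either
// std::nullopt or a pointer that is guaranteed to be valid.
struct Dictionary { int size = 0; };

std::optional<Dictionary *> GetItemAtIndexAsDictionary(size_t idx) {
  static Dictionary d;
  if (idx == 0)
    return &d;
  return std::nullopt;
}

int main() {
  // A populated optional implies a valid pointer, so no extra null check.
  if (auto dict = GetItemAtIndexAsDictionary(0))
    std::printf("size = %d\n", (*dict)->size);
}
```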
This register is a pseudo register but mirrors the architectural
register's contents. See
https://developer.arm.com/documentation/ddi0616/latest/
for the full details. Example output:
```
(lldb) register read svcr
svcr = 0x0000000000000002
= (ZA = 1, SM = 0)
```
This is a Linux pseudo register provided by the NT_ARM_TAGGED_ADDR_CTRL
register set. It reflects the value passed to prctl
PR_SET_TAGGED_ADDR_CTRL.
https://docs.kernel.org/arch/arm64/memory-tagging-extension.html
The fields are made from the #defines the kernel provides for setting
the value. Its contents are constant so no runtime detection is needed
(once we've decided we have this register in the first place).
The permitted generated tags field is technically a bitfield, but at this
time we don't have a way to mark a field as preferring hex formatting.
```
(lldb) register read mte_ctrl
mte_ctrl = 0x000000000007fffb
= (TAGS = 65535, TCF_ASYNC = 0, TCF_SYNC = 1, TAGGED_ADDR_ENABLE = 1)
```
(4-bit tags mean 16 possible tags, hence a 16-bit bitfield)
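As an illustration of the layout, this standalone sketch decodes the
example value above using the same bit positions as the kernel's prctl
defines (constants redefined locally so the example is self-contained;
this is not lldb's actual field code):
```
#include <cstdint>
#include <cstdio>

// Bit layout mirroring the kernel's <linux/prctl.h> values for
// PR_SET_TAGGED_ADDR_CTRL (redefined locally so the example is
// self-contained).
constexpr uint64_t TAGGED_ADDR_ENABLE = 1ULL << 0;
constexpr uint64_t MTE_TCF_SYNC = 1ULL << 1;
constexpr uint64_t MTE_TCF_ASYNC = 1ULL << 2;
constexpr uint64_t MTE_TAG_SHIFT = 3;
constexpr uint64_t MTE_TAG_MASK = 0xffffULL << MTE_TAG_SHIFT;

int main() {
  // The value from the example output above.
  const uint64_t mte_ctrl = 0x000000000007fffbULL;
  std::printf("TAGS               = %u\n",
              unsigned((mte_ctrl & MTE_TAG_MASK) >> MTE_TAG_SHIFT));
  std::printf("TCF_ASYNC          = %u\n",
              unsigned((mte_ctrl & MTE_TCF_ASYNC) != 0));
  std::printf("TCF_SYNC           = %u\n",
              unsigned((mte_ctrl & MTE_TCF_SYNC) != 0));
  std::printf("TAGGED_ADDR_ENABLE = %u\n",
              unsigned((mte_ctrl & TAGGED_ADDR_ENABLE) != 0));
}
```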
Testing has been added to TestMTECtrlRegister.py, which needed a more
granular way to check for XML support, so I've added hasXMLSupport that
can be used within a test case instead of skipping whole tests if XML
isn't supported.
Same for the core file tests.
Follows the format laid out in the Arm manual; AArch32-only fields are
ignored.
```
(lldb) register read fpcr
fpcr = 0x00000000
= (AHP = 0, DN = 0, FZ = 0, RMMode = 0, FZ16 = 0, IDE = 0, IXE = 0, UFE = 0, OFE = 0, DZE = 0, IOE = 0)
```
Tests use the first 4 fields that we know are always present.
Converted all the HWCAP defines to `UL` because I'm bound to
forget one if I don't do it now.
This one is easy because none of the fields depend on extensions. The only
thing to note is that I've ignored some AArch32 only fields.
```
(lldb) register read fpsr
fpsr = 0x00000000
= (QC = 0, IDC = 0, IXC = 0, UFC = 0, OFC = 0, DZC = 0, IOC = 0)
```
This patch should fix a test failure in
`Expr/TestIRMemoryMapWindows.test`:
https://lab.llvm.org/buildbot/#/builders/219/builds/6786
The problem here is that since 7991412 landed, all the
`ScriptInterpreter::CreateScripted*Interface` now return a `nullptr`
when using the base `ScriptInterpreter` instance, instead of
`ScriptInterpreterPython` for instance.
This nullptr is actually well handled in the various places where we
create a Scripted Interface. However, because of the way a process is
instantiated, the process plugin manager has to iterate over every process
plugin and call the `CreateInstance` static function that should
instantiate the right object.
So in the ScriptedProcess case, because we are getting a `nullptr` when
trying to create a `ScriptedProcessInterface`, we try to discard the
process object, which calls the Process destructor, which in turn calls
the `ScriptedProcess` plugin's `IsAlive` method. That method will fire an
assertion if the scripted interface pointer is not allocated.
This patch addresses that issue by setting a flag when destroying the
ScriptedProcess object, and checking that flag when calling `IsAlive`.
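A minimal sketch of that guard, using hypothetical member names
(`m_destroying`, `m_interface`) rather than the real ones:
```
#include <atomic>

// Hypothetical member names; the real ScriptedProcess code differs. This
// only shows the guard idea: mark that destruction has started and have
// IsAlive() bail out before touching a possibly-null scripted interface.
class ScriptedProcessSketch {
public:
  ~ScriptedProcessSketch() {
    m_destroying = true;
    // The real teardown path eventually calls IsAlive() again; the flag
    // stops that call from dereferencing m_interface.
  }

  bool IsAlive() const {
    if (m_destroying)
      return false;
    return m_interface != nullptr && m_interface->alive;
  }

private:
  struct Interface { bool alive = true; };
  Interface *m_interface = nullptr;
  std::atomic<bool> m_destroying{false};
};

int main() {
  ScriptedProcessSketch process;
  (void)process.IsAlive();
  return 0;
}
```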
Signed-off-by: Med Ismail Bennani <ismail@bennani.ma>
The contents of which are mostly SPSR_EL1 as shown in the Arm manual,
with a few adjustments for things Linux says userspace shouldn't concern
itself with.
```
(lldb) register read cpsr
cpsr = 0x80001000
= (N = 1, Z = 0, C = 0, V = 0, SS = 0, IL = 0, ...
```
Some fields are always present, some depend on extensions. I've checked
for those extensions using HWCAP and HWCAP2.
To provide this for core files and live processes I've added a new class
LinuxArm64RegisterFlags. This is a container for all the registers we'll
want to have fields, and it handles detecting the fields and updating the
register info.
This is used by the native process as follows:
* There is a global LinuxArm64RegisterFlags object.
* The first thread takes a mutex on it, and updates the fields.
* Subsequent threads see that detection is already done, and skip it.
* All threads then update their own copy of the register information
with pointers to the field information contained in the global object.
This means that even though every thread will have the same fields, we
only detect them once and have one copy of the information.
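Roughly, the detect-once flow looks like this (a simplified sketch with
hypothetical names, not the real LinuxArm64RegisterFlags code):
```
#include <cstdint>
#include <mutex>
#include <vector>

// Simplified sketch with hypothetical names; the real class is
// LinuxArm64RegisterFlags and its members differ.
class RegisterFieldsDetector {
public:
  // Called by every thread; only the first caller does the HWCAP based work.
  void DetectOnce(uint64_t hwcap, uint64_t hwcap2) {
    std::lock_guard<std::mutex> guard(m_mutex);
    if (m_detected)
      return; // An earlier thread already filled in the fields.
    m_fields = {hwcap, hwcap2}; // Stand-in for per-register field detection.
    m_detected = true;
  }

  // Every thread then points its own register info at this shared data.
  const std::vector<uint64_t> &Fields() const { return m_fields; }

private:
  std::mutex m_mutex;
  bool m_detected = false;
  std::vector<uint64_t> m_fields;
};

// One global instance shared by all threads of the native process.
RegisterFieldsDetector g_register_fields;

int main() {
  g_register_fields.DetectOnce(/*hwcap=*/0, /*hwcap2=*/0);
  return g_register_fields.Fields().size() == 2 ? 0 : 1;
}
```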
Core files instead have a LinuxArm64RegisterFlags as a member, because
each core file could have different saved capabilities. The logic from
there is the same, but we get the HWCAP values from the core file note.
This handler class is Linux specific right now, but it can easily be
made more generic if needed. For example by using LLVM's FeatureBitset
instead of HWCAPs.
Updating register info is done with string comparison, which isn't
ideal. For CPSR, we do know the register number ahead of time but we do
not for other registers in dynamic register sets. So in the interest of
consistency, I'm going to use string comparison for all registers
including cpsr.
I've added tests with a core file and live process. Only checking for
fields that are always present to account for CPU variance.
This removes AArch64 specific code from the GDB* classes.
To do this I've added 2 new methods to Architecture:
* RegisterWriteCausesReconfigure to check if what you are about to do
will trash the register info.
* ReconfigureRegisterInfo to do the reconfiguring. This tells you if
anything changed so that we only invalidate registers when needed.
So that ProcessGDBRemote can call ReconfigureRegisterInfo in
SetThreadStopInfo, I've added forwarding calls to GDBRemoteRegisterContext
and the base class RegisterContext (which removes a slightly sketchy
static cast as well).
RegisterContext defaults to doing nothing for both methods, so anything
other than GDBRemoteRegisterContext will do nothing.
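The rough shape of the two hooks, as a simplified sketch (parameter lists
differ from the real Architecture API):
```
// Simplified sketch; the parameter lists of the real Architecture methods
// differ, this only shows the contract between the two hooks.
class ArchitectureSketch {
public:
  virtual ~ArchitectureSketch() = default;

  // Would this register write (e.g. to vg) invalidate the current layout?
  virtual bool RegisterWriteCausesReconfigure(const char * /*reg_name*/) const {
    return false;
  }

  // Recompute sizes/offsets. Returns true if anything changed so callers
  // only invalidate cached register values when they really have to.
  virtual bool ReconfigureRegisterInfo() { return false; }
};

int main() {
  ArchitectureSketch arch;
  if (arch.RegisterWriteCausesReconfigure("vg") &&
      arch.ReconfigureRegisterInfo()) {
    // The caller would invalidate its cached register values here.
  }
  return 0;
}
```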
This completes the conversion of LocateSymbolFile into a SymbolLocator
plugin. The only remaining function is DownloadSymbolFileAsync which
doesn't really fit into the plugin model, and therefore moves into the
SymbolLocator class, while still relying on the plugins to do the
underlying work.
This builds on top of the work started in c3a302d to convert
LocateSymbolFile to a SymbolLocator plugin. This commit moves
DownloadObjectAndSymbolFile.
This commit contains the initial scaffolding to convert the
functionality currently implemented in LocateSymbolFile to a plugin
architecture. The plugin approach allows us to easily add new ways to
find symbols and fixes some issues with the current implementation.
For instance, currently we (ab)use the host OS to include support for
querying the DebugSymbols framework on macOS. The plugin approach
retains all the benefits (including the ability to compile this out on
other platforms) while maintaining a higher level of separation with the
platform independent code.
To limit the scope of this patch, I've only converted a single function:
LocateExecutableObjectFile. Future commits will convert the remaining
LocateSymbolFile functions and eventually remove LocateSymbolFile. To
make reviewing easier, that will be done as follow-ups.
The ZT0 register is always 64 bytes in size so it is a lot easier to
handle than ZA which is scalable. In addition, reading an inactive ZT0
via ptrace returns all 0s, unlike ZA which returns no register data.
This means that a corefile from a process where ZA and ZT0 were inactive
still contains an NT_ARM_ZT note and we can simply say that if it's
there, then we should be able to read from it.
Along the way I removed a redundant check on the size of the ZA note. If
that note's size is < the ZA header size, we do not have SME, and
therefore could not have SME2 either.
I have added ZT0 to the existing SME core files tests. This means that
you need an SME2 system to generate them (Arm's FVP at this point). I
think this is a fair tradeoff given that this is all running in
simulation anyway and separate ZT0 tests would be 99% identical copies
of the ZA only tests.
This removes explicit invalidation of vg and svg that was done in
`GDBRemoteRegisterContext::AArch64Reconfigure`. This was in fact
covering up a bug elsewhere.
Register information says that a write to vg also invalidates svg (it
does not unless you are in streaming mode, but we decided to keep it
simple and say it always does).
This invalidation was not being applied until *after* AArch64Reconfigure
was called. This meant that without those manual invalidates this
happened:
* vg is written
* svg is not invalidated
* Reconfigure uses the written vg value
* Reconfigure uses the *old* svg value
I have moved the AArch64Reconfigure call to after we've processed the
invalidations caused by the register write, so we no longer need the
manual invalidates in AArch64Reconfigure.
In addition I have changed the order in which expedited registers are
parsed. These registers come with a stop notification and include,
amongst others, vg and svg.
So now we:
* Parse them and update register values (including vg and svg)
* AArch64Reconfigure, which uses those values, and invalidates every
register, because offsets may have changed.
* Parse the expedited registers again, knowing that none of the values
will have changed due to the scaling.
This means we use the expedited registers during the reconfigure, but
the invalidate does not mean we throw all of them away.
The cost is we parse them twice client side, but this is cheap compared
to a network packet, and is limited to AArch64 targets only.
On a system with SVE and SME, these are the packets sent for a step:
```
(lldb) b-remote.async> < 803> read packet:
$T05thread:p1f80.1f80;name:main.o;threads:1f80;thread-pcs:000000000040056c<...>a1:0800000000000000;d9:0400000000000000;reason:trace;#fc
intern-state < 21> send packet: $xfffffffff200,200#5e
intern-state < 516> read packet:
$e4f2ffffffff000000<...>#71
intern-state < 15> send packet: $Z0,400568,4#4d
intern-state < 6> read packet: $OK#9a
dbg.evt-handler < 16> send packet: $jThreadsInfo#c1
dbg.evt-handler < 224> read packet:
$[{"name":"main.o","reason":"trace","registers":{"161":"0800000000000000",<...>}],"signal":5,"tid":8064}]]#73
```
You can see there are no extra register reads which means we're using
the expedited registers.
For a write to vg:
```
(lldb) register write vg 4
lldb < 37> send packet:
$Pa1=0400000000000000;thread:1f80;#4a
lldb < 6> read packet: $OK#9a
lldb < 20> send packet: $pa1;thread:1f80;#29
lldb < 20> read packet: $0400000000000000#04
lldb < 20> send packet: $pd9;thread:1f80;#34
lldb < 20> read packet: $0400000000000000#04
```
There is the initial P write, and lldb correctly assumes that SVG is
invalidated by this also so we read back the new vg and svg values
afterwards.
SME2 is documented as part of the main SME supplement:
https://developer.arm.com/documentation/ddi0616/latest/
The one change for debug is this new ZT0 register. This register
contains data to be used with new table lookup instructions.
Its size is always 512 bits (not scalable) and can be
interpreted in many different ways depending on the instructions
that use it.
The kernel has implemented this as a new register set containing
this single register. It always returns register data (with no header,
unlike ZA which does have a header).
https://docs.kernel.org/arch/arm64/sme.html
ZT0 is only active when ZA is active (when SVCR.ZA is 1). In the
inactive state the kernel returns 0s for its contents. Therefore
lldb doesn't need to create 0s like it does for ZA.
However, we will skip restoring the value of ZT0 if we know that
ZA is inactive, as writing to an inactive ZT0 sets SVCR.ZA to 1,
which is not desirable since it would activate ZA also. Whether
SVCR.ZA is set will be determined only by the ZA data we restore.
Due to this, I've added a new save/restore kind SME2. This is easier
than accounting for the variable length ZA in the SME data. We'll only
save an SME2 data block if ZA is active. If it's not we can get fresh
0s back from the kernel for ZT0 anyway so there's nothing for us to
restore.
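The save decision can be sketched roughly like this (hypothetical names,
not the actual expression save/restore code):
```
#include <vector>

// Hypothetical names; this only sketches the decision described above,
// not the actual save/restore implementation.
enum class SaveKind { SVE, SME, SME2 };

void SaveRegisterState(bool za_is_active, std::vector<SaveKind> &blocks) {
  blocks.push_back(SaveKind::SVE);
  if (za_is_active) {
    // ZA carries data, so save it along with ZT0 (the SME2 block).
    blocks.push_back(SaveKind::SME);
    blocks.push_back(SaveKind::SME2);
  }
  // If ZA is inactive we deliberately skip ZT0: writing it back would set
  // SVCR.ZA to 1, and the kernel hands back zeros for ZT0 anyway.
}

int main() {
  std::vector<SaveKind> blocks;
  SaveRegisterState(/*za_is_active=*/false, blocks);
  return blocks.size() == 1 ? 0 : 1;
}
```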
This new register will only show up if the system has SME2, therefore
the SME set presented to the user may change, and I've had to account
for that in a few places.
I've referred to it internally as simply "ZT" as the kernel does in
NT_ARM_ZT, but the architecture refers to the specific register as "ZT0"
so that's what you'll see in lldb.
```
(lldb) register read -s 6
Scalable Matrix Extension Registers:
svcr = 0x0000000000000000
svg = 0x0000000000000004
za = {0x00 <...> 0x00}
zt0 = {0x00 <...> 0x00}
```
On an SVE/SME system the register context configuration may change after
the inferior process has executed. This was handled via
https://reviews.llvm.org/D159504 but it is reconfiguring and clearing
the register context after we've parsed any expedited register values
from the stop reply packet. That results in lldb having to read each
register value one at a time while at that stop location, which will be
a performance problem on non-local debug setups.
The configuration & clearing needs to happen first. Also, update the
names of the local variables for a little clarity.
[lldb] Part 2 of 2 - Refactor `CommandObject::DoExecute(...)` to return
`void` instead of ~~`bool`~~
Justifications:
- The code doesn't ultimately apply the `true`/`false` return values.
- The methods already pass around a `CommandReturnObject`, typically
with a `result` parameter.
- Each command return object already contains:
- A more precise status
- The error code(s) that apply to that status
Part 1 refactors the `CommandObject::Execute(...)` method.
- See
[https://github.com/llvm/llvm-project/pull/69989](https://github.com/llvm/llvm-project/pull/69989)
rdar://117378957
For most register sets, if a set was enabled that meant you could use it;
it was present in the process. There was no present-but-turned-off state,
so "enabled" made sense.
Then ZA came along (and soon to be ZT0) where ZA can be present in the
hardware when you have SME, but ZA itself can be made inactive. This
means that "IsZAEnabled()" doesn't mean is it active, it means do you
have SME. Which is very confusing when we actually want to know if ZA is
active.
So instead say "IsZAPresent", to make these checks more specific. For
things that can't be made inactive, present will imply "active" as
they're never inactive.
This adds ToXML methods to encode RegisterFlags and its fields into XML
according to GDB's target XML format:
https://sourceware.org/gdb/onlinedocs/gdb/Target-Description-Format.html#Target-Description-Format
lldb-server does not use libXML to build XML, so this follows the
existing code that uses strings. Indentation is used so the result is
still human readable.
```
<flags id="Foo" size="4">
  <field name="abc" start="0" end="0"/>
</flags>
```
This is used by lldb-server when building target XML, though no one sets
any fields yet. That'll come in a later commit.
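A string-building sketch in the spirit of that code (illustrative only,
not the real RegisterFlags::ToXML):
```
#include <string>
#include <vector>

// Illustrative only; the real RegisterFlags::ToXML differs in detail.
// This just shows building the element with plain strings and indentation.
struct Field {
  std::string name;
  unsigned start;
  unsigned end;
};

std::string FlagsToXML(const std::string &id, unsigned size_bytes,
                       const std::vector<Field> &fields) {
  std::string xml = "<flags id=\"" + id + "\" size=\"" +
                    std::to_string(size_bytes) + "\">\n";
  for (const Field &f : fields)
    xml += "  <field name=\"" + f.name + "\" start=\"" +
           std::to_string(f.start) + "\" end=\"" + std::to_string(f.end) +
           "\"/>\n";
  xml += "</flags>\n";
  return xml;
}

int main() {
  const std::string xml = FlagsToXML("Foo", 4, {{"abc", 0, 0}});
  return xml.empty() ? 1 : 0;
}
```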
This patch should fix some assertions that started getting hit after f22d82c.
That commit changed the scripted object plugin creation to use
`llvm::Expected<T>` as a return type to enforce error handling, however
I forgot to handle the error which caused the assert.
The interesting part about this is that, since that assert was triggered
in the ScriptedProcess constructor (where the `llvm::Error` wasn't
handled), that impacted every test that launched any kind of process,
since the process plugin manager would eventually also iterate over the
`ScriptedProcess::Create` factory method.
This patch should fix the assertions by handling the errors.
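The error-handling contract boils down to something like this sketch
(hypothetical function and type names, not the real interface code):
```
#include "llvm/Support/Error.h"

// Hypothetical function and type names; this only sketches the contract:
// creation returns llvm::Expected and callers must consume the error
// instead of letting the "unhandled Expected" assertion fire.
struct ScriptObject {};

llvm::Expected<ScriptObject> CreatePluginObject(bool ok) {
  if (!ok)
    return llvm::createStringError(llvm::inconvertibleErrorCode(),
                                   "could not create the scripted object");
  return ScriptObject();
}

bool TryCreate() {
  llvm::Expected<ScriptObject> obj = CreatePluginObject(/*ok=*/false);
  if (!obj) {
    // Handling (here: consuming) the error avoids the assertion.
    llvm::consumeError(obj.takeError());
    return false;
  }
  return true;
}

int main() { return TryCreate() ? 0 : 1; }
```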
Signed-off-by: Med Ismail Bennani <ismail@bennani.ma>
This patch changes the way plugin objects used with Scripted Interfaces
are created.
Instead of implementing a different SWIG method to create the object for
every scripted interface, this patch makes the creation more generic by
re-using some of the ScriptedPythonInterface templated Dispatch code.
This patch also improves error handling of the object creation by
returning an `llvm::Expected`.
Signed-off-by: Med Ismail Bennani <ismail@bennani.ma>
Signed-off-by: Med Ismail Bennani <ismail@bennani.ma>
This register reports the configuration of the AArch64 Linux tagged
address ABI, part of which is the memory tagging (MTE) settings.
It will always be present in core files because even without MTE, there
are parts of the tagged address ABI that can be configured (these parts
use the Top Byte Ignore feature).
I missed adding this when I previously worked on MTE support. Until now
you could read memory tags from a core file but not this register.
This reverts commit 8d80a452b8.
The pointer to the invalidate lists needs to be non-const, though in this
case I don't think it's ever modified.
Also I realised that the invalidate list was being set on svg not vg.
Should be the other way around.
This fixes a bug where writing vg during streaming mode
could prevent you reading za directly afterwards.
vg is invalidated just prior to us reading it in AArch64Reconfigure,
but svg was not. This led to some situations where vg would be
updated or cleared and re-read, but svg would not be.
This meant it had some undefined value, which led to errors
that prevented us reading ZA. Likely we received a lot more
data than we were expecting.
There are at least 2 ways to get into this situation:
* Explicit write by the user to vg.
* We have just stopped and need to get the potentially new svg and vg.
The first is handled by invalidating svg client side before fetching the
new one. This also covers some but not all of the second scenario. For the
second, I've made writes to vg invalidate svg by noting this in the
register information.
Whichever one of those kicks in, we'll get the latest value of svg.
The bug may depend on timing, I could not find a consistent way
to trigger it. I originally found it when checking whether za
is disabled after a vg change, so I've added checks for that
to TestZAThreadedDynamic.
The SVE VG version of the bug did show up on the buildbot,
but not consistently. So it's possible that TestZAThreadedDynamic
does in fact cover this, but I haven't run it enough times to know.
As ReadRegister always read into a uint64_t, when it called operator=
with uint64_t it was setting the RegisterValue's type to eTypeUInt64
regardless of its size.
This mostly works because most registers are 64 bit, and very few bits
of code rely on the type being correct. However, cpsr, fpsr and fpcr are
in fact 32 bit, and my upcoming register fields code relies on this type
being correct.
Which is how I found this bug, and unfortunately it is the only way to test
it, as RegisterValue::Type never makes it out via the API anywhere. So
this change will be tested once I start adding register field
information.
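The gist of the fix, as a simplified sketch (not the actual RegisterValue
code):
```
#include <cstddef>
#include <cstdint>

// Hypothetical types/names; the real RegisterValue code differs. The point
// is that the value type comes from the register's byte size, not from the
// width of the temporary we read the data into.
enum class ValueType { UInt32, UInt64 };

struct Value {
  ValueType type;
  uint64_t bits;
};

Value MakeRegisterValue(uint64_t raw, size_t reg_byte_size) {
  if (reg_byte_size == 4) {
    // cpsr, fpsr and fpcr are 32 bit, so record them as such.
    return {ValueType::UInt32, static_cast<uint32_t>(raw)};
  }
  return {ValueType::UInt64, raw};
}

int main() {
  return MakeRegisterValue(0x80001000, 4).type == ValueType::UInt32 ? 0 : 1;
}
```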
The implementation supports parsing kernel modules for the FreeBSD kernel
and has been tested on x86-64 and arm64.
In summary, this class parses the linked list that resides in kernel
memory and records all kernel modules, and loads the debug symbol files to
facilitate the debugging process.
This patch implements the thread local storage support for linux
(https://github.com/llvm/llvm-project/issues/28766).
TLS feature is originally only implemented for Mac. With my previous
patch to enable `fs_base` register for Linux
(https://reviews.llvm.org/D155256), now it is feasible to implement this
feature for Linux.
The major changes are:
* Track the main module's link address during launch
* Fetch thread pointer from `fs_base` register
* Create register alias for thread pointer
* Read pthread metadata from target memory instead of the process so that
it works for core dumps
With the patch the failing test is passing now. Note: I am only enabling
this test for Mac and Linux because I do not have a machine to test
FreeBSD/NetBSD.
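An extremely simplified sketch of the general idea; a real implementation
walks the pthread/DTV metadata in target memory, and all names and offsets
here are placeholders:
```
#include <cstdint>

// All names and offsets are placeholders; a real implementation reads the
// pthread/DTV metadata from target memory (so it also works for core dumps).
struct TargetMemory {
  // Stand-in for a Process/Target memory read.
  uint64_t ReadPointer(uint64_t addr) const { return addr; }
};

uint64_t GetThreadLocalAddress(const TargetMemory &mem, uint64_t fs_base,
                               uint64_t metadata_offset,
                               uint64_t variable_offset) {
  // Start from the thread pointer (the fs_base register on x86-64 Linux),
  // read the module's TLS block pointer out of the thread's metadata, then
  // add the variable's offset within that block.
  const uint64_t tls_block = mem.ReadPointer(fs_base + metadata_offset);
  return tls_block + variable_offset;
}

int main() {
  TargetMemory mem;
  return GetThreadLocalAddress(mem, 0x1000, 0x8, 0x10) != 0 ? 0 : 1;
}
```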
---------
Co-authored-by: jeffreytan81 <jeffreytan@fb.com>
I just fixed a bug in the core file equivalent where this was the issue.
This class avoids the issue by setting m_sve_state early but should
still be fixed so it doesn't crop up in a later refactor.
This reverts commit 3fa5035823.
m_sve_state was not initialised which (I'm guessing) meant that
it could potentially be a value that matched a real SVE state.
Then we'd be acting as if we're streaming mode, for example,
without ever having the data required to back that up.
By sheer luck this only turned up on x86; AArch64 and ARM were fine.
It is UB regardless.
This adds the ability to read streaming SVE registers,
ZA, SVCR and SVG from core files.
Streaming SVE is in a new note NT_ARM_SSVE but otherwise
has the same format as SVE. So I've done the same as I
did for live processes and reused the existing SVE state
with an extra state for the mode variable.
ZA is in a note NT_ARM_ZA and again the handling matches
live processes. Except that it gets setup only once. A
disabled ZA reads as 0s as usual.
SVCR and SVG are pseudo registers, generated from the notes.
An important detail is that the notes represent what
you would have got if you read from ptrace at the time of
the crash.
This means that for a corefile in non-streaming mode,
there is still an NT_ARM_SSVE note and we check the header
flags to tell if it is active. We cannot just say if you
have the note you're in streaming mode.
The kernel does not provide register values for the inactive
mode and even if it did, they would be undefined, so if we find
streaming state, we ignore the non-streaming state.
Same for ZA, a disabled ZA still has the header in the note.
The tests do not cover all combinations but enough different
vector lengths, modes and ZA states to be confident.
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D158500
This reverts commit a7b78cac9a.
With updates to the tests.
TestWatchTaggedAddress.py: Updated the expected watchpoint types,
though I'm not sure there should be a different default for the two
ways of setting them; that needs to be confirmed.
TestStepOverWatchpoint.py: Skipped this everywhere because I think
what used to happen is you couldn't put 2 watchpoints on the same
address (after alignment). I guess that this is now allowed because
modify watchpoints aren't accounted for, but likely should be.
Needs investigating.
Watchpoints in lldb can be either 'read', 'write', or 'read/write'. This
is exposing the actual behavior of hardware watchpoints. gdb has a
different behavior: a "write" type watchpoint only stops when the
watched memory region *changes*.
A user is using a watchpoint for one of three reasons:
1. Want to find what is changing/corrupting this memory.
2. Want to find what is writing to this memory.
3. Want to find what is reading from this memory.
I believe (1) is the most common use case for watchpoints, and it
currently can't be done in lldb -- the user needs to continue every time
the same value is written to the watched-memory manually. I think gdb's
behavior is the correct one. There are some use cases where a developer
wants to find every function that writes/reads to/from a memory region,
regardless of value, I want to still allow that functionality.
This is also a bit of groundwork for my large watchpoint support
proposal
https://discourse.llvm.org/t/rfc-large-watchpoint-support-in-lldb/72116
where I will be adding support for AArch64 MASK watchpoints which watch
power-of-2 memory regions. A user might ask to watch 24 bytes, and a
MASK watchpoint stub can do this with a 32-byte MASK watchpoint if it is
properly aligned. And we need to ignore writes to the final 8 bytes of
that watched region, and not show those hits to the user.
This patch adds a new 'modify' watchpoint type and it is the default.
Re-landing this patch after addressing testsuite failures found in CI on
Linux, Intel machines, and windows.
rdar://108234227
This fixes 46b961f36b.
Prior to the SME changes the steps were:
* Invalidate all registers.
* Update value of VG and use that to reconfigure the registers.
* Invalidate all registers again.
With the changes for SME I removed the initial invalidate thinking
that it didn't make sense to do if we were going to invalidate them
all anyway after reconfiguring.
Well the reason it made sense was that it forced us to get the
latest value of vg which we needed to reconfigure properly.
Not doing so caused a test failure on our Graviton bot which has SVE
(https://lab.llvm.org/buildbot/#/builders/96/builds/45722). It was
flaky and looping it locally would always fail within a few minutes.
Presumably it was using an invalid value of vg, which caused some offsets
to be calculated incorrectly.
To fix this I've invalidated vg in AArch64Reconfigure just before we read
it. This is the same as the fix I have in review for SME's svg register.
Pushing this directly to fix the ongoing test failure.
Software can tell if it is in streaming SVE mode by checking
the Streaming Vector Control Register (SVCR).
"E3.1.9 SVCR, Streaming Vector Control Register" in
"Arm® Architecture Reference Manual Supplement, The Scalable Matrix
Extension (SME), for Armv9-A"
https://developer.arm.com/documentation/ddi0616/latest/
This is especially useful for debug because the names of the
SVE registers are the same between non-streaming and streaming mode.
The Linux Kernel chose to not put this register behind ptrace,
and it can be read from EL0. However, this would mean running code
in process to read it. That can be done but we already know all
the information just from ptrace.
So this is a pseudo register that matches the architectural
content. The name is just "svcr", which aligns with GDB's proposed naming,
and it's added to the existing SME register set.
The SVCR register contains two bits:
0 : Whether streaming SVE mode is enabled (SM)
1 : Whether the array storage is enabled (ZA)
Array storage can be active when streaming mode is not, so this register
can have any permutation of those bits.
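Composing the pseudo register's value from those two pieces of state is
then straightforward (a small sketch using the bit positions above):
```
#include <cstdint>
#include <cstdio>

// Bit positions as described above: SM is bit 0, ZA is bit 1.
constexpr uint64_t SVCR_SM = 1ULL << 0;
constexpr uint64_t SVCR_ZA = 1ULL << 1;

uint64_t MakeSVCR(bool streaming_mode, bool za_enabled) {
  uint64_t svcr = 0;
  if (streaming_mode)
    svcr |= SVCR_SM;
  if (za_enabled)
    svcr |= SVCR_ZA;
  return svcr;
}

int main() {
  // ZA active without streaming mode; any permutation is possible.
  std::printf("svcr = 0x%016llx\n",
              static_cast<unsigned long long>(MakeSVCR(false, true)));
}
```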
This register is currently read only. We can emulate the result of
writing to it, using ptrace. However at this point the utility of that
is not obvious.
Existing tests have been updated to check for appropriate SVCR values
at various points.
Given that this register is a read only pseudo, there is no need
to save and restore it around expressions.
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D154927
The size of ZA depends on the streaming vector length regardless
of the active mode. So in addition to vg (which reports the vector length
of the active mode) we must send the client svg.
Otherwise the mechanics are the same as for non-streaming SVE.
Use the svg value to update the defined size of ZA, accounting
for the fact that ZA is not a single vector but a square matrix.
So if svg is 8, a single streaming vector would be 8*8 = 64 bytes.
ZA is that squared, so 64*64 = 4096 bytes.
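As a tiny sketch of that arithmetic:
```
#include <cstdint>

// svg, like vg, is in units of 8 bytes. A streaming vector is svg * 8
// bytes and ZA is that squared.
constexpr uint64_t ZASizeFromSVG(uint64_t svg) {
  return (svg * 8) * (svg * 8);
}

static_assert(ZASizeFromSVG(8) == 4096,
              "svg of 8 -> 64 byte streaming vector -> 4096 byte ZA");

int main() { return 0; }
```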
Testing is included in a later patch.
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D159504
This adds a register "svg" which mirrors SVE's "vg" register.
This reports the streaming vector length at all times, read
from the ZA ptrace header.
This register is needed first to implement ZA resizing as
the streaming vector length changes. Like vg, svg will be
expedited to the client so it can reconfigure its register
definitions.
The other use is for users to be able to know the streaming
vector length without resorting to counting the (many, many)
bytes in ZA, or temporarily entering streaming mode (which
would be destructive).
Some refactoring has been done so we don't have to recalculate the
register offsets twice.
Testing for this will come in a later patch.
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D159503
Note: This requires later commits for ZA to function properly,
it is split for ease of review. Testing is also in a later patch.
The "Matrix" part of the Scalable Matrix Extension is a new register
"ZA". You can think of this as a square matrix made up of scalable rows,
where each row is one scalable vector long. However it is not made
of the existing scalable vector registers, it is its own register.
Meaning that the size of ZA is the vector length in bytes * the vector
length in bytes.
https://developer.arm.com/documentation/ddi0616/latest/
It uses the streaming vector length, even when streaming mode itself
is not active. For this reason, its register data header always
includes the streaming vector length.
Due to its size I've changed kMaxRegisterByteSize to the maximum
possible ZA size and kTypicalRegisterByteSize will be the maximum
possible scalable vector size. Therefore ZA transactions will cause heap
allocations, and non ZA registers will perform exactly as before.
ZA can be enabled and disabled independently of streaming mode. The way
this works in ptrace is different to SVE versus streaming SVE. Writing
NT_ARM_ZA without register data disables ZA, writing NT_ARM_ZA with
register data enables ZA (LLDB will only support the latter, and only
because it's convenient for us to do so).
https://kernel.org/doc/html/v6.2/arm64/sme.html
LLDB does not handle registers that can appear and disappear at
runtime. Rather than add complexity to implement that, LLDB will
show a block of 0s when ZA is disabled.
The alternative is not only updating the vector lengths every stop,
but every register definition. It's possible but I'm not sure it's worth
pursuing.
Users should refer to the SVCR register (added in later patches)
for the final word on whether ZA is active or not.
Writing to "VG" during streaming mode will change the size of the
streaming sve registers and ZA. LLDB will not attempt to preserve
register values in this case; we'll just read back the undefined
content the kernel shows. This is in line with the kernel ABI and with
what the prospective software ABIs look like.
ZA is defined as a vector of size SVL*SVL, so the display in lldb
is very basic. A giant block of values. This is no worse than SVE,
just larger. There is scope to improve this but that can wait
until we see some use cases.
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D159502
Watchpoints in lldb can be either 'read', 'write', or 'read/write'. This
is exposing the actual behavior of hardware watchpoints. gdb has a
different behavior: a "write" type watchpoint only stops when the
watched memory region *changes*.
A user is using a watchpoint for one of three reasons:
1. Want to find what is changing/corrupting this memory.
2. Want to find what is writing to this memory.
3. Want to find what is reading from this memory.
I believe (1) is the most common use case for watchpoints, and it
currently can't be done in lldb -- the user needs to continue every time
the same value is written to the watched-memory manually. I think gdb's
behavior is the correct one. There are some use cases where a developer
wants to find every function that writes/reads to/from a memory region,
regardless of value, I want to still allow that functionality.
This is also a bit of groundwork for my large watchpoint support
proposal
https://discourse.llvm.org/t/rfc-large-watchpoint-support-in-lldb/72116
where I will be adding support for AArch64 MASK watchpoints which watch
power-of-2 memory regions. A user might ask to watch 24 bytes, and a
MASK watchpoint stub can do this with a 32-byte MASK watchpoint if it is
properly aligned. And we need to ignore writes to the final 8 bytes of
that watched region, and not show those hits to the user.
This patch adds a new 'modify' watchpoint type and it is the default.
rdar://108234227
The remaining use of ConstString in StructuredData is the Dictionary
class. Internally it's backed by a `std::map<ConstString, ObjectSP>`.
I propose that we replace it with a `llvm::StringMap<ObjectSP>`.
Many StructuredData::Dictionary objects are ephemeral and only exist for
a short amount of time. Many of these Dictionaries are only produced
once and are never used again. That leaves us with a lot of string data
in the ConstString StringPool that is sitting there never to be used
again. Even if the same string is used many times for keys of different
Dictionary objects, that is something we can measure and adjust for
instead of assuming that every key may be reused at some point in the
future.
Quick comparisons of key data is likely not a concern with Dictionary,
but the use of `llvm::StringMap` means that lookups should be fast with
its hashing strategy.
Switching to a llvm::StringMap meant that the iteration order may be
different. To account for this when serializing/dumping the dictionary,
I added some code to sort the output by key before emitting anything.
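The sorted-dump idea looks roughly like this (illustrative only; the real
StructuredData::Dictionary code differs):
```
#include "llvm/ADT/StringMap.h"
#include <algorithm>
#include <string>
#include <vector>

// Illustrative only; the real StructuredData::Dictionary code differs.
// llvm::StringMap iteration order is not stable, so collect the keys,
// sort them, then emit in that order.
std::vector<std::string> SortedKeys(const llvm::StringMap<int> &dict) {
  std::vector<std::string> keys;
  keys.reserve(dict.size());
  for (const auto &entry : dict)
    keys.push_back(entry.getKey().str());
  std::sort(keys.begin(), keys.end());
  return keys;
}

int main() {
  llvm::StringMap<int> dict;
  dict["b"] = 2;
  dict["a"] = 1;
  return SortedKeys(dict).front() == "a" ? 0 : 1;
}
```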
Differential Revision: https://reviews.llvm.org/D159313