This patch replaces uses of StringRef::{starts,ends}with with
StringRef::{starts,ends}_with for consistency with
std::{string,string_view}::{starts,ends}_with in C++20.
I'm planning to deprecate and eventually remove
StringRef::{starts,ends}with.
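For illustration, a typical call-site change looks like this (a minimal sketch with an invented variable name):
```
#include "llvm/ADT/StringRef.h"

// Before (candidate for deprecation):
//   if (Name.startswith("__llvm_"))
// After, matching the C++20 std::string_view spelling:
static bool isReservedName(llvm::StringRef Name) {
  return Name.starts_with("__llvm_") || Name.ends_with(".llvm");
}
```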
We need to generate events when finalizing, or we won't know that we
succeeded in stopping the process to detach/kill. Instead, we stall and
then after our 20 second interrupt timeout, we kill the process (even if we
were supposed to detach) and exit.
OTOH, we have to not generate events when the Process is being
destructed because shared_from_this has already been torn down, and
using it will cause crashes.
This patch is rearranging code a bit to add WatchpointResources to
Process. A WatchpointResource is meant to represent a hardware
watchpoint register in the inferior process. It has an address, a size,
a type, and a list of Watchpoints that are using this
WatchpointResource.
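As a rough sketch (illustrative only, not the actual lldb_private declaration), a WatchpointResource carries state along these lines:
```
#include <cstddef>
#include <utility>
#include <vector>

#include "lldb/lldb-forward.h"
#include "lldb/lldb-types.h"

// Sketch only: names and members are illustrative.
class WatchpointResource {
public:
  WatchpointResource(lldb::addr_t addr, size_t size, bool read, bool write)
      : m_addr(addr), m_size(size), m_read(read), m_write(write) {}

  lldb::addr_t GetLoadAddress() const { return m_addr; }
  size_t GetByteSize() const { return m_size; }

  // Several Watchpoints may share one hardware watchpoint register.
  void AddConstituent(lldb::WatchpointSP wp_sp) {
    m_constituents.push_back(std::move(wp_sp));
  }

private:
  lldb::addr_t m_addr;  // hardware-aligned watched address
  size_t m_size;        // bytes covered by the hardware register
  bool m_read, m_write; // watch type
  std::vector<lldb::WatchpointSP> m_constituents;
};
```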
This current patch doesn't add any of the features of
WatchpointResources that make them interesting -- a user asking to watch
a 24 byte object could watch this with three 8 byte WatchpointResources.
Or a Watchpoint on 1 byte at 0x1002 and a second Watchpoint on 1 byte at
0x1003 must both be served by a single WatchpointResource on the
doubleword at 0x1000 on a 64-bit target; if two hardware watchpoint
registers were used to track these separately, one of them may not be
hit. Or if you have one Watchpoint on a variable with a condition set,
and another Watchpoint on that same variable with a command defined or
different condition, or ignorecount, both of those Watchpoints need to
evaluate their criteria/commands when their WatchpointResource has been
hit.
There's a bit of code movement to rearrange things in the direction I'll
need for implementing this feature, so I want to start with reviewing &
landing this mostly NFC patch and we can focus on the algorithmic
choices about how WatchpointResources are shared and handled as they're
triggered, separately.
This patch also stops printing "Watchpoint <n> hit: old value: <x>, new
value: <y>" for Read watchpoints. I could make an argument for printing
"Watchpoint <n> hit: current value <x>", but the current output doesn't
make any sense, and the user can print the value if they are
particularly interested. Read watchpoints are used primarily to
understand what code is reading a variable.
This patch adds more fallbacks for how to print the objects being
watched if we have types, instead of assuming they are all integral
values, so a struct will print its elements. As large watchpoints are
added, we'll be doing a lot more of those.
To track the WatchpointSP in the WatchpointResources, I changed the
internal API which took a WatchpointSP and devolved it to a Watchpoint*,
which meant touching several different Process files. I removed the
watchpoint code in ProcessKDP which only reported that watchpoints
aren't supported, the base class does that already.
I haven't yet changed how we receive a watchpoint to identify the
WatchpointResource responsible for the trigger, and identify all
Watchpoints that are using this Resource to evaluate their conditions
etc. This is the same work that a BreakpointSite needs to do when it has
been triggered, where multiple Breakpoints may be at the same address.
There is not yet any printing of the Resources that a Watchpoint is
implemented in terms of ("watchpoint list", or
SBWatchpoint::GetDescription).
"watchpoint set var" and "watchpoint set expression" take a size
argument which was previously 1, 2, 4, or 8 (an enum). I've changed this
to an unsigned int. Most hardware implementations can only watch 1, 2,
4, 8 byte ranges, but with Resources we'll allow a user to ask for
different sized watchpoints and set them in hardware-expressible terms
soon.
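For example, once larger sizes are allowed down to the hardware-expressible level, a request like this (hypothetical variable name) becomes reasonable to type:
```
(lldb) watchpoint set variable --size 24 my_buffer
```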
I've annotated areas where I know there is work still needed with
LWP_TODO that I'll be working on once this is landed.
I've tested this on aarch64 macOS, aarch64 Linux, and Intel macOS.
https://discourse.llvm.org/t/rfc-large-watchpoint-support-in-lldb/72116
(cherry picked from commit fc6b72523f)
Currently when you interrupt a:
(lldb) process attach -w -n some_process
lldb just closes the connection to the stub and kills the
lldb_private::Process it made for the attach. The stub at the other end
notices the connection go down and exits because of that. But when
communication to a device is handled through some kind of proxy server
which isn't as well behaved as one would wish, that signal might not be
reliable, causing debugserver to persist on the machine, waiting to
steal the next instance of that process.
We can work around those failures by sending an explicit interrupt
before closing down the connection. The stub will also have to be
waiting for the interrupt for this to make any difference. I changed
debugserver to do that.
I didn't make the equivalent change in lldb-server. So long as you
aren't faced with a flaky connection, this should not be necessary.
The Watchpoint and Breakpoint objects try to track the hardware index
that was used for them, if they are hardware wp/bp's. The majority of
our debugging goes over the gdb remote serial protocol, and when we set
the watchpoint/breakpoint, there is no (standard) way for the remote
stub to communicate to lldb which hardware index was used. We have an
lldb-extension packet to query the total number of watchpoint registers.
When a watchpoint is hit, there is an lldb extension to the stop reply
packet (documented in lldb-gdb-remote.txt) to describe the watchpoint
including its actual hardware index,
<addr within wp range> <wp hw index> <actual accessed address>
(the third field is specifically needed for MIPS). At this point, if the
stub reported these three fields (the stub is only required to provide
the first), we can know the actual hardware index for this watchpoint.
Breakpoints are worse; there's never any way for us to be notified about
which hardware index was used. Breakpoints got this as a side effect of
inheriting from StoppointSite along with Watchpoints.
We expose the watchpoint hardware index through "watchpoint list -v" and
through SBWatchpoint::GetHardwareIndex.
With my large watchpoint support, there is no *single* hardware index
that may be used for a watchpoint; it may need multiple resources. Also,
I don't see what a user or an IDE is supposed to do with this
information. Knowing the total number of watchpoint registers on the
target, and knowing how many Watchpoint Resources are currently in use,
is helpful. Knowing how many Watchpoint Resources a single
user-specified watchpoint needed in order to be implemented is useful.
But knowing which registers were used is an implementation detail and
not available until we hit the watchpoint when using gdb remote serial
protocol.
So given all that, I'm removing watchpoint hardware index numbers. I'm
changing the SB API to always return -1.
This patch adds support for saving minidumps with the arm64
architecture. It also will cause unsupported architectures to emit an
error where before this patch it would emit a minidump with partial
information. This new code is tested by the arm64 windows buildbot that
was failing:
https://lab.llvm.org/buildbot/#/builders/219/builds/6868
This is needed following this PR:
https://github.com/llvm/llvm-project/pull/71772
Similar to my previous patch (#71613) where I changed
`GetItemAtIndexAsString`, this patch makes the same change to
`GetItemAtIndexAsDictionary`.
`GetItemAtIndexAsDictionary` now returns a std::optional that is either
`std::nullopt` or is a valid pointer. Therefore, if the optional is
populated, we consider the pointer to always be valid (i.e. no need to
check pointer validity).
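A minimal usage sketch, assuming the new signature returns a std::optional wrapping the pointer:
```
#include <optional>

// Sketch: the exact return type is assumed here; the point is that a
// populated optional implies a valid pointer, so no nullptr check is needed.
std::optional<StructuredData::Dictionary *> maybe_dict =
    array->GetItemAtIndexAsDictionary(idx);
if (maybe_dict) {
  StructuredData::Dictionary *dict = *maybe_dict;
  // ... use dict directly ...
}
```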
This register is a pseudo register but mirrors the architectural
register's contents. See
https://developer.arm.com/documentation/ddi0616/latest/
for the full details. Example output:
```
(lldb) register read svcr
svcr = 0x0000000000000002
= (ZA = 1, SM = 0)
```
This is a Linux pseudo register provided by the NT_ARM_TAGGED_ADDR_CTRL
register set. It reflects the value passed to prctl
PR_SET_TAGGED_ADDR_CTRL.
https://docs.kernel.org/arch/arm64/memory-tagging-extension.html
The fields are made from the #defines the kernel provides for setting
the value. Its contents are constant so no runtime detection is needed
(once we've decided we have this register in the first place).
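For reference, the value mirrored by mte_ctrl is whatever the process last configured via prctl, along these lines (a sketch; the PR_* constants come from the kernel uapi headers and require a recent enough kernel/header set):
```
#include <sys/prctl.h>

// Sketch: enable the tagged address ABI with synchronous MTE faults and
// allow all 16 tag values to be generated.
int enable_mte(void) {
  unsigned long ctrl = PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC |
                       (0xffffUL << PR_MTE_TAG_SHIFT);
  return prctl(PR_SET_TAGGED_ADDR_CTRL, ctrl, 0, 0, 0);
}
```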
The permitted generated tags field is technically a bitfield, but at
this time we don't have a way to mark a field as preferring hex
formatting.
```
(lldb) register read mte_ctrl
mte_ctrl = 0x000000000007fffb
= (TAGS = 65535, TCF_ASYNC = 0, TCF_SYNC = 1, TAGGED_ADDR_ENABLE = 1)
```
(4-bit tags mean 16 possible tag values, hence the 16-bit bitfield)
Testing has been added to TestMTECtrlRegister.py, which needed a more
granular way to check for XML support, so I've added hasXMLSupport that
can be used within a test case instead of skipping whole tests if XML
isn't supported.
Same for the core file tests.
Follows the format laid out in the Arm manual; AArch32-only fields are
ignored.
```
(lldb) register read fpcr
fpcr = 0x00000000
= (AHP = 0, DN = 0, FZ = 0, RMMode = 0, FZ16 = 0, IDE = 0, IXE = 0, UFE = 0, OFE = 0, DZE = 0, IOE = 0)
```
Tests use the first 4 fields that we know are always present.
Converted all the HWCAP defines to `UL` because I'm bound to
forget one if I don't do it now.
This one is easy because none of the fields depend on extensions. The
only thing to note is that I've ignored some AArch32-only fields.
```
(lldb) register read fpsr
fpsr = 0x00000000
= (QC = 0, IDC = 0, IXC = 0, UFC = 0, OFC = 0, DZC = 0, IOC = 0)
```
This patch should fix a test failure in
`Expr/TestIRMemoryMapWindows.test`:
https://lab.llvm.org/buildbot/#/builders/219/builds/6786
The problem here is that since 7991412 landed, all the
`ScriptInterpreter::CreateScripted*Interface` now return a `nullptr`
when using the base `ScriptInterpreter` instance, instead of
`ScriptInterpreterPython` for instance.
This nullptr is actually well handled in the various places where we
create a Scripted Interface; however, because of the way a process is
instantiated, the process plugin manager has to iterate over every
process plugin and call the `CreateInstance` static function that should
instantiate the right object.
So in the ScriptedProcess case, because we are getting a `nullptr` when
trying to create a `ScriptedProcessInterface`, we try to discard the
process object, which calls the Process destructor, which in turn calls
the `ScriptedProcess` plugin `IsAlive` method. That method will fire an
assertion if the scripted interface pointer is not allocated.
This patch addresses that issue by setting a flag when destroying the
ScriptedProcess object and checking that flag when calling `IsAlive`.
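Roughly, the shape of the fix is (a sketch; member names are illustrative):
```
// Sketch only: names are illustrative, not the actual members.
ScriptedProcess::~ScriptedProcess() {
  m_destroying = true; // remember we are tearing the process down
  // ... the rest of destruction may end up calling IsAlive() ...
}

bool ScriptedProcess::IsAlive() {
  if (m_destroying)
    return false; // don't touch the scripted interface mid-destruction
  return GetInterface().IsAlive();
}
```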
Signed-off-by: Med Ismail Bennani <ismail@bennani.ma>
The contents of which are mostly SPSR_EL1 as shown in the Arm manual,
with a few adjustments for things Linux says userspace shouldn't concern
itself with.
```
(lldb) register read cpsr
cpsr = 0x80001000
= (N = 1, Z = 0, C = 0, V = 0, SS = 0, IL = 0, ...
```
Some fields are always present, some depend on extensions. I've checked
for those extensions using HWCAP and HWCAP2.
To provide this for core files and live processes I've added a new class
LinuxArm64RegisterFlags. This is a container for all the registers we'll
want to have fields and handles detecting fields and updating register
info.
This is used by the native process as follows:
* There is a global LinuxArm64RegisterFlags object.
* The first thread takes a mutex on it, and updates the fields.
* Subsequent threads see that detection is already done, and skip it.
* All threads then update their own copy of the register information
with pointers to the field information contained in the global object.
This means that even though every thread will have the same fields, we
only detect them once and have one copy of the information.
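The detect-once pattern is roughly the following (a sketch with invented names):
```
#include <cstdint>
#include <mutex>

// Sketch only: names are illustrative, not the actual lldb classes.
class LinuxArm64RegisterFlags {
public:
  void DetectFields(uint64_t hwcap, uint64_t hwcap2) {
    std::lock_guard<std::mutex> guard(m_mutex);
    if (m_detected)
      return; // another thread already did the work
    // ... build the field lists for cpsr, fpcr, fpsr, ... from the hwcaps ...
    m_detected = true;
  }

private:
  std::mutex m_mutex;
  bool m_detected = false;
};

// One global instance; every thread's register info points at its fields.
static LinuxArm64RegisterFlags g_register_flags;
```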
Core files instead have a LinuxArm64RegisterFlags as a member, because
each core file could have different saved capabilities. The logic from
there is the same, but we get HWCAP values from the corefile note.
This handler class is Linux specific right now, but it can easily be
made more generic if needed. For example by using LLVM's FeatureBitset
instead of HWCAPs.
Updating register info is done with string comparison, which isn't
ideal. For CPSR, we do know the register number ahead of time but we do
not for other registers in dynamic register sets. So in the interest of
consistency, I'm going to use string comparison for all registers
including cpsr.
I've added tests with a core file and live process. Only checking for
fields that are always present to account for CPU variance.
This removes AArch64 specific code from the GDB* classes.
To do this I've added 2 new methods to Architecture:
* RegisterWriteCausesReconfigure to check if what you are about to do
will trash the register info.
* ReconfigureRegisterInfo to do the reconfiguring. This tells you if
anything changed so that we only invalidate registers when needed.
So that ProcessGDBRemote can call ReconfigureRegisterInfo in
SetThreadStopInfo, I've added forwarding calls to
GDBRemoteRegisterContext and the base class RegisterContext (which
removes a slightly sketchy static cast as well).
RegisterContext defaults to doing nothing for both methods, so anything
other than GDBRemoteRegisterContext will do nothing.
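The shape of the two hooks is roughly (a sketch; the parameter lists here are assumptions, not the exact signatures):
```
// Sketch: default implementations on the Architecture plugin interface.
class Architecture {
public:
  // True if writing this register can change the register layout,
  // e.g. writing vg/svg on AArch64 SVE/SME targets.
  virtual bool RegisterWriteCausesReconfigure(llvm::StringRef reg_name) const {
    return false;
  }

  // Re-detect the layout and update the register info. Returns true if
  // anything changed, so callers only invalidate registers when needed.
  virtual bool ReconfigureRegisterInfo(DynamicRegisterInfo &reg_info,
                                       DataExtractor &reg_data) const {
    return false;
  }
};
```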
This completes the conversion of LocateSymbolFile into a SymbolLocator
plugin. The only remaining function is DownloadSymbolFileAsync which
doesn't really fit into the plugin model, and therefore moves into the
SymbolLocator class, while still relying on the plugins to do the
underlying work.
This builds on top of the work started in c3a302d to convert
LocateSymbolFile to a SymbolLocator plugin. This commit moves
DownloadObjectAndSymbolFile.
This commit contains the initial scaffolding to convert the
functionality currently implemented in LocateSymbolFile to a plugin
architecture. The plugin approach allows us to easily add new ways to
find symbols and fixes some issues with the current implementation.
For instance, currently we (ab)use the host OS to include support for
querying the DebugSymbols framework on macOS. The plugin approach
retains all the benefits (including the ability to compile this out on
other platforms) while maintaining a higher level of separation with the
platform independent code.
To limit the scope of this patch, I've only converted a single function:
LocateExecutableObjectFile. Future commits will convert the remaining
LocateSymbolFile functions and eventually remove LocateSymbolFile. To
make reviewing easier, that will be done as follow-ups.
The ZT0 register is always 64 bytes in size so it is a lot easier to
handle than ZA which is scalable. In addition, reading an inactive ZT0
via ptrace returns all 0s, unlike ZA which returns no register data.
This means that a corefile from a process where ZA and ZT0 were inactive
still contains an NT_ARM_ZT note and we can simply say that if it's
there, then we should be able to read from it.
Along the way I removed a redundant check on the size of the ZA note. If
that note's size is < the ZA header size, we do not have SME, and
therefore could not have SME2 either.
I have added ZT0 to the existing SME core files tests. This means that
you need an SME2 system to generate them (Arm's FVP at this point). I
think this is a fair tradeoff given that this is all running in
simulation anyway, and separate ZT0 tests would be 99% identical copies
of the ZA only tests.
This removes explicit invalidation of vg and svg that was done in
`GDBRemoteRegisterContext::AArch64Reconfigure`. This was in fact
covering up a bug elsewhere.
Register information says that a write to vg also invalidates svg (it
does not unless you are in streaming mode, but we decided to keep it
simple and say it always does).
This invalidation was not being applied until *after* AArch64Reconfigure
was called. This meant that without those manual invalidates this
happened:
* vg is written
* svg is not invalidated
* Reconfigure uses the written vg value
* Reconfigure uses the *old* svg value
I have moved the AArch64Reconfigure call to after we've processed the
invalidations caused by the register write, so we no longer need the
manual invalidates in AArch64Reconfigure.
In addition, I have changed the order in which expedited registers are
parsed. These registers come with a stop notification and include,
amongst others, vg and svg.
So now we:
* Parse them and update register values (including vg and svg)
* AArch64Reconfigure, which uses those values, and invalidates every
register, because offsets may have changed.
* Parse the expedited registers again, knowing that none of the values
will have changed due to the scaling.
This means we use the expedited registers during the reconfigure, but
the invalidate does not mean we throw all of them away.
The cost is we parse them twice client side, but this is cheap compared
to a network packet, and is limited to AArch64 targets only.
On a system with SVE and SME, these are the packets sent for a step:
```
(lldb) b-remote.async> < 803> read packet:
$T05thread:p1f80.1f80;name:main.o;threads:1f80;thread-pcs:000000000040056c<...>a1:0800000000000000;d9:0400000000000000;reason:trace;#fc
intern-state < 21> send packet: $xfffffffff200,200#5e
intern-state < 516> read packet:
$e4f2ffffffff000000<...>#71
intern-state < 15> send packet: $Z0,400568,4#4d
intern-state < 6> read packet: $OK#9a
dbg.evt-handler < 16> send packet: $jThreadsInfo#c1
dbg.evt-handler < 224> read packet:
$[{"name":"main.o","reason":"trace","registers":{"161":"0800000000000000",<...>}],"signal":5,"tid":8064}]]#73
```
You can see there are no extra register reads which means we're using
the expedited registers.
For a write to vg:
```
(lldb) register write vg 4
lldb < 37> send packet:
$Pa1=0400000000000000;thread:1f80;#4a
lldb < 6> read packet: $OK#9a
lldb < 20> send packet: $pa1;thread:1f80;#29
lldb < 20> read packet: $0400000000000000#04
lldb < 20> send packet: $pd9;thread:1f80;#34
lldb < 20> read packet: $0400000000000000#04
```
There is the initial P write, and lldb correctly assumes that SVG is
invalidated by this also so we read back the new vg and svg values
afterwards.
SME2 is documented as part of the main SME supplement:
https://developer.arm.com/documentation/ddi0616/latest/
The one change for debug is this new ZT0 register. This register
contains data to be used with new table lookup instructions.
Its size is always 512 bits (not scalable) and it can be
interpreted in many different ways depending on the instructions
that use it.
The kernel has implemented this as a new register set containing
this single register. It always returns register data (with no header,
unlike ZA which does have a header).
https://docs.kernel.org/arch/arm64/sme.html
ZT0 is only active when ZA is active (when SVCR.ZA is 1). In the
inactive state the kernel returns 0s for its contents. Therefore
lldb doesn't need to create 0s like it does for ZA.
However, we will skip restoring the value of ZT0 if we know that
ZA is inactive, as writing to an inactive ZT0 sets SVCR.ZA to 1,
which is not desirable because it would activate ZA also. Whether
SVCR.ZA is set will be determined only by the ZA data we restore.
Due to this, I've added a new save/restore kind SME2. This is easier
than accounting for the variable length ZA in the SME data. We'll only
save an SME2 data block if ZA is active. If it's not we can get fresh
0s back from the kernel for ZT0 anyway so there's nothing for us to
restore.
This new register will only show up if the system has SME2, therefore
the SME set presented to the user may change, and I've had to account
for that in a few places.
I've referred to it internally as simply "ZT" as the kernel does in
NT_ARM_ZT, but the architecture refers to the specific register as "ZT0"
so that's what you'll see in lldb.
```
(lldb) register read -s 6
Scalable Matrix Extension Registers:
svcr = 0x0000000000000000
svg = 0x0000000000000004
za = {0x00 <...> 0x00}
zt0 = {0x00 <...> 0x00}
```
On an SVE/SME system, the register context configuration may change
after the inferior process has executed. This was handled via
https://reviews.llvm.org/D159504, but that change reconfigures and
clears the register context after we've parsed any expedited register values
from the stop reply packet. That results in lldb having to read each
register value one at a time while at that stop location, which will be
a performance problem on non-local debug setups.
The configuration & clearing needs to happen first. Also, update the
names of the local variables for a little clarity.
[lldb] Part 2 of 2 - Refactor `CommandObject::DoExecute(...)` to return
`void` instead of ~~`bool`~~
Justifications:
- The code doesn't ultimately apply the `true`/`false` return values.
- The methods already pass around a `CommandReturnObject`, typically
with a `result` parameter.
- Each command return object already contains:
- A more precise status
- The error code(s) that apply to that status
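The change is of this shape (sketch):
```
// Before: the bool return duplicated what the result object already carries.
//   bool DoExecute(Args &command, CommandReturnObject &result) {
//     ...
//     result.SetStatus(eReturnStatusSuccessFinishResult);
//     return result.Succeeded();
//   }
//
// After: status and errors are carried solely by the CommandReturnObject.
void DoExecute(Args &command, CommandReturnObject &result) {
  // ... do the work ...
  result.SetStatus(eReturnStatusSuccessFinishResult);
}
```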
Part 1 refactors the `CommandObject::Execute(...)` method.
- See
[https://github.com/llvm/llvm-project/pull/69989](https://github.com/llvm/llvm-project/pull/69989)
rdar://117378957
For most register sets, if it was enabled this meant you could use it;
it was present in the process. There was no "present but turned off"
state, so "enabled" made sense.
Then ZA came along (and soon to be ZT0) where ZA can be present in the
hardware when you have SME, but ZA itself can be made inactive. This
means that "IsZAEnabled()" doesn't mean is it active, it means do you
have SME. Which is very confusing when we actually want to know if ZA is
active.
So instead say "IsZAPresent", to make these checks more specific. For
things that can't be made inactive, present will imply "active" as
they're never inactive.
This adds ToXML methods to encode RegisterFlags and its fields into XML
according to GDB's target XML format:
https://sourceware.org/gdb/onlinedocs/gdb/Target-Description-Format.html#Target-Description-Format
lldb-server does not use libXML to build XML, so this follows the
existing code that uses strings. Indentation is used so the result is
still human readable.
```
<flags id=\"Foo\" size=\"4\">
<field name=\"abc\" start=\"0\" end=\"0\"/>
</flags>
```
This is used by lldb-server when building target XML, though no one sets
any fields yet. That'll come in a later commit.
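The encoding itself is plain string building, roughly like this (a sketch, not the real method bodies):
```
#include <string>

// Sketch: the <flags> element is assembled by concatenating strings, in the
// same spirit as the existing target XML code, rather than via an XML library.
std::string FlagsToXML(const std::string &id, unsigned size_bytes) {
  std::string xml;
  xml += "<flags id=\"" + id + "\" size=\"" + std::to_string(size_bytes) + "\">\n";
  // One <field> element per bitfield; start/end are bit positions.
  xml += "  <field name=\"abc\" start=\"0\" end=\"0\"/>\n";
  xml += "</flags>\n";
  return xml;
}
```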
This patch should fix an assertion that started getting hit after f22d82c.
That commit changed the scripted object plugin creation to use
`llvm::Expected<T>` as a return type to enforce error handling, however
I forgot to handle the error which caused the assert.
The interesting part is that since the assert was triggered
in the ScriptedProcess constructor (where the `llvm::Error` wasn't
handled), it impacted every test that launched any kind of process,
since the process plugin manager would eventually also iterate over the
`ScriptedProcess::Create` factory method.
This patch should fix the assertions by handling the errors.
Signed-off-by: Med Ismail Bennani <ismail@bennani.ma>
This patch changes the way plugin objects used with Scripted Interfaces
are created.
Instead of implementing a different SWIG method to create the object for
every scripted interface, this patch makes the creation more generic by
re-using some of the ScriptedPythonInterface templated Dispatch code.
This patch also improves error handling of the object creation by
returning an `llvm::Expected`.
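Callers then have to handle creation failure explicitly, along these lines (a sketch; the creation call shown is hypothetical):
```
// Sketch of the calling pattern with the new llvm::Expected return type.
llvm::Expected<StructuredData::GenericSP> obj_or_err =
    CreateScriptedObject(/*...*/); // hypothetical creation call
if (!obj_or_err) {
  // llvm::Expected requires the error to be handled before destruction.
  llvm::consumeError(obj_or_err.takeError());
  return {};
}
StructuredData::GenericSP obj_sp = *obj_or_err;
```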
Signed-off-by: Med Ismail Bennani <ismail@bennani.ma>
This register reports the configuration of the AArch64 Linux tagged
address ABI, part of which is the memory tagging (MTE) settings.
It will always be present in core files because even without MTE, there
are parts of the tagged address ABI that can be configured (these parts
use the Top Byte Ignore feature).
I missed adding this when I previously worked on MTE support. Until now
you could read memory tags from a core file but not this register.
This reverts commit 8d80a452b8.
The pointer to the invalidate list needs to be non-const, though in
this case I don't think it's ever modified.
Also, I realised that the invalidate list was being set on svg, not vg.
It should be the other way around.
This fixes a bug where writing vg during streaming mode
could prevent you reading za directly afterwards.
vg is invalidated just prior to us reading it in AArch64Reconfigure,
but svg was not. This led to some situations where vg would be
updated or cleared and re-read, but svg would not be.
This meant it had some undefined value, which led to errors
that prevented us from reading ZA. Likely we received a lot more
data than we were expecting.
There are at least 2 ways to get into this situation:
* Explicit write by the user to vg.
* We have just stopped and need to get the potentially new svg and vg.
The first is handled by invalidating svg client side before fetching
the new one. This also covers some, but not all, of the second scenario.
For the second, I've made writes to vg invalidate svg by noting this in
the register information.
Whichever one of those kicks in, we'll get the latest value of svg.
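Noting the dependency in the register information amounts to putting svg on vg's invalidation list, roughly (illustrative register numbers):
```
// Sketch: RegisterInfo lets a register list others that become stale when it
// is written. The svg register number here is illustrative only.
static uint32_t g_vg_invalidates[] = {svg_regnum, LLDB_INVALID_REGNUM};

RegisterInfo vg_info = {/* ... */};
vg_info.invalidate_regs = g_vg_invalidates; // writing vg invalidates svg
```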
The bug may depend on timing; I could not find a consistent way
to trigger it. I originally found it when checking whether za
is disabled after a vg change, so I've added checks for that
to TestZAThreadedDynamic.
The SVE VG version of the bug did show up on the buildbot,
but not consistently. So it's possible that TestZAThreadedDynamic
does in fact cover this, but I haven't run it enough times to know.
As ReadRegister always reads into a uint64_t, when it called operator=
with a uint64_t it set the RegisterValue's type to eTypeUInt64
regardless of the register's size.
This mostly works because most registers are 64 bit, and very few bits
of code rely on the type being correct. However, cpsr, fpsr and fpcr are
in fact 32 bit, and my upcoming register fields code relies on this type
being correct.
This is how I found the bug, and unfortunately it is the only way to
test it, as RegisterValue::Type never makes it out via the API anywhere.
So this change will be tested once I start adding register field
information.
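The fix amounts to preserving the register's byte size when storing the value, something like (sketch):
```
// Sketch: assigning a plain uint64_t forces the type to eTypeUInt64:
//   value = raw_value;                      // loses the real register width
// Passing the byte size keeps a 4-byte register as a 32-bit value:
value.SetUInt(raw_value, reg_info.byte_size);
```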
The implementation supports parsing kernel modules for the FreeBSD
kernel and has been tested on x86-64 and arm64.
In summary, this class parses the linked list residing in kernel memory
that records all kernel modules, and loads the debug symbol files to
facilitate the debugging process.
This patch implements the thread local storage support for linux
(https://github.com/llvm/llvm-project/issues/28766).
The TLS feature was originally only implemented for Mac. With my previous
patch to enable `fs_base` register for Linux
(https://reviews.llvm.org/D155256), now it is feasible to implement this
feature for Linux.
The major changes are:
* Track the main module's link address during launch
* Fetch thread pointer from `fs_base` register
* Create register alias for thread pointer
* Read pthread metadata from target memory instead of the process, so
that it works for coredumps
With the patch the failing test is passing now. Note: I am only enabling
this test for Mac and Linux because I do not have machine to test for
FreeBSD/NetBSD.
---------
Co-authored-by: jeffreytan81 <jeffreytan@fb.com>
I just fixed a bug in the core file equivalent where this was the issue.
This class avoids the issue by setting m_sve_state early, but should
still be fixed so it doesn't crop up in a later refactor.