This uses [teyit](https://pypi.org/project/teyit/) to modernize asserts,
as recommended by the [unittest release
notes](https://docs.python.org/3.12/whatsnew/3.12.html#id3).
For example, `assertTrue(a == b)` is replaced with `assertEqual(a, b)`.
This produces better error messages, e.g. `error: unexpectedly found 1
and 2 to be different` instead of `error: False`.
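To illustrate, a minimal before/after sketch of the rewrite in a test case:
```python
# Before: a failure only reports "error: False"
self.assertTrue(a == b)
# After the teyit rewrite: a failure reports both values
self.assertEqual(a, b)
```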
assertRegexpMatches is a deprecated alias for assertRegex and has been
removed in Python 3.12. This wasn't an issue previously because we used
a vendored version of the unittest module. Now that we use the built-in
version, it gets updated along with the Python version used to run
the test suite.
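The update itself is a straight rename in the tests; a hedged sketch (the regex and variable are illustrative):
```python
# Before: assertRegexpMatches was removed in Python 3.12
self.assertRegexpMatches(output, r"stop reason = .*")
# After: the non-deprecated name
self.assertRegex(output, r"stop reason = .*")
```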
This register is a pseudo register but mirrors the architectural
register's contents. See
https://developer.arm.com/documentation/ddi0616/latest/
for the full details. Example output:
```
(lldb) register read svcr
svcr = 0x0000000000000002
= (ZA = 1, SM = 0)
```
This is a Linux pseudo register provided by the NT_ARM_TAGGED_ADDR_CTRL
register set. It reflects the value passed to prctl
PR_SET_TAGGED_ADDR_CTRL.
https://docs.kernel.org/arch/arm64/memory-tagging-extension.html
The fields are made from the #defines the kernel provides for setting
the value. Its contents are constant so no runtime detection is needed
(once we've decided we have this register in the first place).
The permitted generated tags field (TAGS) is technically a bitfield, but
at this time we don't have a way to mark a field as preferring hex
formatting.
```
(lldb) register read mte_ctrl
mte_ctrl = 0x000000000007fffb
= (TAGS = 65535, TCF_ASYNC = 0, TCF_SYNC = 1, TAGGED_ADDR_ENABLE = 1)
```
(4-bit tags mean 16 possible tag values, hence a 16-bit bitfield)
Testing has been added to TestMTECtrlRegister.py, which needed a more
granular way to check for XML support, so I've added hasXMLSupport that
can be used within a test case instead of skipping whole tests if XML
isn't supported.
Same for the core file tests.
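A rough sketch of how such an in-test check with hasXMLSupport might look (the exact usage in the tests may differ, and the decoded field output requires target XML):
```python
# Hypothetical sketch: gate only the XML-dependent checks instead of
# skipping the whole test when target XML isn't supported.
if self.hasXMLSupport():
    # Field names come from the XML target description.
    self.expect("register read mte_ctrl",
                substrs=["TAGGED_ADDR_ENABLE = 1"])
else:
    self.expect("register read mte_ctrl", substrs=["mte_ctrl = 0x"])
```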
The ZT0 register is always 64 bytes in size so it is a lot easier to
handle than ZA which is scalable. In addition, reading an inactive ZT0
via ptrace returns all 0s, unlike ZA which returns no register data.
This means that a corefile from a process where ZA and ZT0 were inactive
still contains an NT_ARM_ZT note and we can simply say that if it's
there, then we should be able to read from it.
Along the way I removed a redundant check on the size of the ZA note. If
that note's size is < the ZA header size, we do not have SME, and
therefore could not have SME2 either.
I have added ZT0 to the existing SME core files tests. This means that
you need an SME2 system to generate them (Arm's FVP at this point). I
think this is a fair tradeoff given that this is all running in
simulation anyway and separate ZT0 tests would be 99% identical copies
of the ZA-only tests.
This register reports the configuration of the AArch64 Linux tagged
address ABI, part of which is the memory tagging (MTE) settings.
It will always be present in core files because even without MTE, there
are parts of the tagged address ABI that can be configured (these parts
use the Top Byte Ignore feature).
I missed adding this when I previously worked on MTE support. Until now
you could read memory tags from a core file but not this register.
This reverts commit 3fa5035823.
m_sve_state was not initialised, which (I'm guessing) meant that
it could potentially hold a value that matched a real SVE state.
Then we'd be acting as if we're in streaming mode, for example,
without ever having the data required to back that up.
By sheer luck this only turned up on x86; AArch64 and ARM were fine.
It is UB regardless.
This adds the ability to read streaming SVE registers,
ZA, SVCR and SVG from core files.
Streaming SVE is in a new note NT_ARM_SSVE but otherwise
has the same format as SVE. So I've done the same as I
did for live processes and reused the existing SVE state
with an extra state for the mode variable.
ZA is in a note NT_ARM_ZA and again the handling matches
live processes, except that it gets set up only once. A
disabled ZA reads as 0s as usual.
SVCR and SVG are pseudo registers, generated from the notes.
An important detail is that the notes represent what
you would have got if you read from ptrace at the time of
the crash.
This means that for a corefile in non-streaming mode,
there is still an NT_ARM_SSVE note and we check the header
flags to tell if it is active. We cannot simply say that
having the note means you're in streaming mode.
The kernel does not provide register values for the inactive
mode and even if it did, they would be undefined, so if we find
streaming state, we ignore the non-streaming state.
Same for ZA, a disabled ZA still has the header in the note.
The tests do not cover all combinations but enough different
vector lengths, modes and ZA states to be confident.
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D158500
Previously lldb was storing the TLS registers but not restoring them, meaning that this function:
```
void expr(uint64_t value) {
__asm__ volatile("msr tpidr_el0, %0" ::"r"(value));
}
```
When run from lldb:
```
(lldb) expression expr()
```
Would leave tpidr as `value` instead of the original value of the register.
A check for this scenario has been added to TestAArch64LinuxTLSRegisters.py,
which covers tpidr and the SME-exclusive tpidr2 register when it's present.
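A minimal sketch of what such a check can look like in an API test (the written value is illustrative):
```python
# Hypothetical sketch: the expression writes tpidr, but lldb should
# restore the original value once the expression finishes.
before = self.frame().FindRegister("tpidr").GetValueAsUnsigned()
self.runCmd("expression expr(0x1122334455667788)")
after = self.frame().FindRegister("tpidr").GetValueAsUnsigned()
self.assertEqual(before, after)
```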
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D156512
This changes the TLS regset to not only be dynamic in the sense that it
could exist or not (though it always does), but also dynamically sized.
If SME is present then the regset is 16 bytes and contains both tpidr
and tpidr2.
Testing is the same as tpidr. Write from assembly, read from lldb and
vice versa since we have no way to predict what its value should be
by just running a program.
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D154930
This register is used as the pointer to the current thread
local storage block and is read from NT_ARM_TLS on Linux.
Though tpidr will be present on all AArch64 Linux, I am soon
going to add a second register tpidr2 to this set.
tpidr2 is only present when SME is implemented, therefore the
NT_ARM_TLS set will change size. This is why I've added this
as a dynamic register set, to save changes later.
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D152516
This is an ongoing series of commits that are reformatting our Python
code. Reformatting is done with `black` (23.1.0).
If you end up having problems merging this commit because you have made
changes to a python file, the best way to handle that is to run `git
checkout --ours <yourfile>` and then reformat it with black.
RFC: https://discourse.llvm.org/t/rfc-document-and-standardize-python-code-style
Differential revision: https://reviews.llvm.org/D151460
Previously we only looked at the si_signo field, so you got:
```
(lldb) bt
* thread #1, name = 'a.out.mte', stop reason = signal SIGSEGV
* frame #0: 0x00000000004007f4
```
This patch adds si_code so we can show:
```
(lldb) bt
* thread #1, name = 'a.out.mte', stop reason = signal SIGSEGV: sync tag check fault
* frame #0: 0x00000000004007f4
```
The order of errno and code was incorrect in ElfLinuxSigInfo::Parse.
It was the order that a "swapped" siginfo arch would use, which for Linux,
is only MIPS. We removed MIPS Linux support some time ago.
See:
fe15c26ee2/include/uapi/asm-generic/siginfo.h (L121)
A test is added using memory tagging faults, which were the original
motivation for these changes.
Reviewed By: JDevlieghere
Differential Revision: https://reviews.llvm.org/D146045
This is a follow up to https://reviews.llvm.org/D141629
and applies the change it made to all paths through ToAddress
(now DoToAddress).
I have included the test from my previous attempt
https://reviews.llvm.org/D136938.
The initial change only applied fixing to addresses that
would parse as integers, so my test case failed. Since
ToAddress has multiple exit points, I've wrapped it into
a new method DoToAddress.
Now when you call ToAddress it will call DoToAddress, and
no matter which path is taken, the address will be fixed.
For the memory tagging commands we actually want the full
address (to work out mismatches). So I added ToRawAddress
for that.
I have tested this on a QEMU AArch64 Linux system with
Memory Tagging, Pointer Authentication and Top Byte Ignore
enabled. By running the new test and all other tests in
API/linux/aarch64.
Some commands have had calls to the ABI plugin removed
as ToAddress now does this for them.
The "memory region" command still needs to use the ABI plugin
to detect the end of memory when there are non-address bits.
Reviewed By: jasonmolenda
Differential Revision: https://reviews.llvm.org/D142715
In API tests, replace use of the `p` alias with the `expression` command.
To avoid conflating tests of the alias with tests of the expression command,
this patch canonicalizes on the use of `expression`.
Differential Revision: https://reviews.llvm.org/D141539
This teaches ProcessElfCore to recognise the MTE tag segments.
https://www.kernel.org/doc/html/latest/arm64/memory-tagging-extension.html#core-dump-support
These segments contain all the tags for a matching memory segment
which will have the same size in virtual address terms. In real terms
it's 2 tags per byte so the data in the segment is much smaller.
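As a rough sketch of that packing (assuming the lower nibble holds the tag for the first granule):
```python
# Conceptual sketch only: each byte in the tag segment packs two 4-bit
# MTE tags, one per 16-byte granule of the matching memory segment.
def unpack_core_file_tags(packed_bytes):
    tags = []
    for b in packed_bytes:
        tags.append(b & 0x0F)         # assumed: first granule in the low nibble
        tags.append((b >> 4) & 0x0F)  # second granule in the high nibble
    return tags
```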
Since MTE is the only tag type supported I have hardcoded some
things to those values. We could and should support more formats
as they appear but doing so now would leave code untested until that
happens.
A few things to note:
* /proc/pid/smaps is not in the core file, only the details you have
in "maps", meaning we mark a region as tagged only if it has a tag segment.
* A core file supports memory tagging if it has at least 1 memory
tag segment; there is no other flag we can check to tell if memory
tagging was enabled. (unlike a live process that can support memory
tagging even if there are currently no tagged memory regions)
Tests have been added at the commands level for a core file with
mte and without.
There is a lot of overlap between the "memory tag read" tests here and the unit tests for
MemoryTagManagerAArch64MTE::UnpackTagsFromCoreFileSegment, but I think it's
worth keeping to check ProcessElfCore doesn't cause an assert.
Depends on D129487
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D129489
Eliminate the boilerplate of having each test manually assign to `mydir`
by calling `compute_mydir` in lldbtest.py.
Differential Revision: https://reviews.llvm.org/D128077
Add a function to make it easier to debug a test failure caused by an
unexpected state.
Currently, tests are using assertEqual which results in a cryptic error
message: "AssertionError: 5 != 10". Even when a test provides a message
to make it clear why a particular state is expected, you still have to
figure out which of the two was the expected state, and what the other
value corresponds to.
We have a function in lldbutil that helps you convert the state number
into a user readable string. This patch adds a wrapper around
assertEqual specifically for comparing states and reporting better error
messages.
The aforementioned error message now looks like this: "AssertionError:
stopped (5) != exited (10)". If the user provided a message, that
continues to get printed as well.
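A minimal sketch of such a wrapper, assuming lldbutil's state-to-string helper (the real implementation in lldbtest.py may differ):
```python
def assertState(self, first, second, msg=None):
    if first != second:
        error = "{} ({}) != {} ({})".format(
            lldbutil.state_type_to_str(first), first,
            lldbutil.state_type_to_str(second), second)
        self.fail(self._formatMessage(msg, error))
```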
Differential revision: https://reviews.llvm.org/D127355
This is off by default. When --show-tags is given and the memory in a
result has memory tags, you'll see the tags inline with the memory
content.
```
(lldb) memory read mte_buf mte_buf+64 --show-tags
<...>
0xfffff7ff8020: 00 00 00 00 00 00 00 00 0d f0 fe ca 00 00 00 00 ................ (tag: 0x2)
<...>
(lldb) memory find -e 0xcafef00d mte_buf mte_buf+64 --show-tags
data found at location: 0xfffff7ff8028
0xfffff7ff8028: 0d f0 fe ca 00 00 00 00 00 00 00 00 00 00 00 00 ................ (tags: 0x2 0x3)
0xfffff7ff8038: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ (tags: 0x3 0x4)
```
The logic for handling alignment is the same as for memory read,
so in the above example, because the line starts misaligned to the
granule, it covers 2 granules.
Depends on D125089
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D125090
This reverts commit 3e928c4b9d.
This fixes an issue seen on Windows where we did not properly
get the section names of regions if they overlapped. Windows
has regions like:
[0x00007fff928db000-0x00007fff949a0000) ---
[0x00007fff949a0000-0x00007fff949a1000) r-- PECOFF header
[0x00007fff949a0000-0x00007fff94a3d000) r-x .hexpthk
[0x00007fff949a0000-0x00007fff94a85000) r-- .rdata
[0x00007fff949a0000-0x00007fff94a88000) rw- .data
[0x00007fff949a0000-0x00007fff94a94000) r-- .pdata
[0x00007fff94a94000-0x00007fff95250000) ---
I assumed that you could just resolve the address and get the section
name using the start of the region but here you'd always get
"PECOFF header" because they all have the same start point.
The usual command repeating loop used the end address of the previous
region when requesting the next, or getting the section name.
So I've matched this in the --all scenario.
In the example above, somehow asking for the region at
0x00007fff949a1000 would get you a region that starts at
0x00007fff949a0000 but has a different end point. Using the load
address you get (what I assume is) the correct section name.
This does 2 things:
* Moves it after the short options, which makes sense given it's
a niche, default-off option.
(if 2 files for one option seems a bit much, I am going to reuse
them for "memory find" later)
* Fixes the use of repeated commands. For example:
memory read buf --show-tags
<shows tags>
memory read
<shows tags>
Added tests for the repetition and updated existing help tests.
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D125089
Previously if you read a code/data mask before there was a valid thread
you would get the top byte mask. This meant the value was "valid" as in,
don't read it again.
When using a corefile we ask for the data mask very early on and this
meant that later once you did have a thread it wouldn't read the
register to get the rest of the mask.
This fixes that and adds a corefile test generated from the same program
as in my previous change on this theme.
Depends on D118794
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D122411
Non-address bits are not part of the virtual address in a pointer.
So they must be removed before passing to interfaces like ptrace.
Some of them we get away with not removing, like AArch64's top byte.
However this is only because of a hardware feature that ignores them.
This change updates all the Process/Target Read/Write memory methods
to remove non-address bits before using addresses.
Doing it in this way keeps lldb-server simple and also fixes the
memory caching when differently tagged pointers for the same location
are read.
Removing the bits is done at the ReadMemory level, not DoReadMemory,
because, particularly for Process, many subclasses override DoReadMemory.
Tests have been added for read/write at the command and API level,
for process and target, including variants like Read<sometype>FromMemory,
to make sure we remove the bits at both levels.
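As a sketch of the kind of API-level check this enables (the buffer address and tag value are illustrative):
```python
# Hypothetical sketch: reading through two differently tagged pointers to
# the same location should return identical bytes (and share a cache entry)
# now that non-address bits are removed at the ReadMemory level.
error = lldb.SBError()
plain = process.ReadMemory(buf_addr, 16, error)
self.assertSuccess(error)
tagged = process.ReadMemory(buf_addr | (0x34 << 56), 16, error)
self.assertSuccess(error)
self.assertEqual(plain, tagged)
```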
"memory find" is not included because:
* There is no API for it.
* It already has its own address handling tests.
Software breakpoints do use these methods but they are not tested
here because there are bigger issues to fix with those. This will
happen in another change.
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D118794
This adds an option to the memory region command
to print all regions at once, like you can do by
starting at address 0 and repeating the command
manually.
memory region [-a] [<address-expression>]
(lldb) memory region --all
[0x0000000000000000-0x0000000000400000) ---
[0x0000000000400000-0x0000000000401000) r-x <...>/a.out PT_LOAD[0]
<...>
[0x0000fffffffdf000-0x0001000000000000) rw- [stack]
[0x0001000000000000-0xffffffffffffffff) ---
The output matches exactly what you'd get from
repeating the command, including the unmapped areas
between the mapped regions.
(This is why Process GetMemoryRegions is not
used; it skips unmapped areas.)
Help text has been updated to show that you can have
an address or --all but not both.
Reviewed By: JDevlieghere
Differential Revision: https://reviews.llvm.org/D111791
Replace forms of `assertTrue(err.Success())` with `assertSuccess(err)` (added in D82759).
* `assertSuccess` prints out the error's message
* `assertSuccess` expresses explicit higher level semantics, both to the reader and for test failure output
* `assertSuccess` seems not to be well known, using it where possible will help spread knowledge
* `assertSuccess` statements are more succinct
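A minimal sketch of what the helper looks like, assuming an SBError-like object (the real implementation lives in lldbtest.py and may differ):
```python
def assertSuccess(self, obj, msg=None):
    if not obj.Success():
        error = obj.GetCString()
        self.fail(self._formatMessage(msg, "'{}' is not success".format(error)))
```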
Differential Revision: https://reviews.llvm.org/D119616
This reverts commit 0df522969a.
Additional checks are added to fix the detection of the last memory region
in GetMemoryRegions or repeating the "memory region" command when the
target has non-address bits.
Normally you keep reading from address 0, looking up each region's end
address until you get LLDB_INVALID_ADDR as the region end address.
(0xffffffffffffffff)
This is what the remote will return once you go beyond the last mapped region:
[0x0000fffffffdf000-0x0001000000000000) rw- [stack]
[0x0001000000000000-0xffffffffffffffff) ---
Problem is that when we "fix" the lookup address, we remove some bits
from it. On an AArch64 system we have 48 bit virtual addresses, so when
we fix the end address of the [stack] region the result is 0.
So we loop back to the start.
[0x0000fffffffdf000-0x0001000000000000) rw- [stack]
[0x0000000000000000-0x0000000000400000) ---
To fix this I added an additional check for the last region:
if the end address of the region is different once you apply
FixDataAddress, we are at the last region. This is because the end
of the last region will be the last valid mappable address plus 1,
and that 1 will be removed by the ABI plugin.
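Conceptually, the command repeat loop's stop condition becomes something like this (a pseudocode sketch, not the actual C++; helper names are hypothetical):
```python
# Pseudocode sketch of the loop's termination check.
addr = 0
while True:
    region = get_region_info(addr)       # hypothetical region lookup
    print_region(region)
    end = region.end
    if end == LLDB_INVALID_ADDRESS:
        break                            # ran off the end of memory
    if fix_data_address(end) != end:
        break                            # end has non-address bits: last region
    addr = end
```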
The only side effect is that on systems with non-address bits, you
won't get that last catch all unmapped region from the max virtual
address up to 0xf...f.
[0x0000fffff8000000-0x0000fffffffdf000) ---
[0x0000fffffffdf000-0x0001000000000000) rw- [stack]
<ends here>
Though in some way this is more correct because that region is not
just unmapped, it's not mappable at all.
No extra testing is needed because this is already covered by
TestMemoryRegion.py; I simply forgot to run it on a system that had
both top byte ignore and pointer authentication.
This change has been tested on a qemu VM with top byte ignore,
memory tagging and pointer authentication enabled.
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D115508
This adds an option --show-tags to "memory read".
(lldb) memory read mte_buf mte_buf+32 -f "x" -s8 --show-tags
0x900fffff7ff8000: 0x0000000000000000 0x0000000000000000 (tag: 0x0)
0x900fffff7ff8010: 0x0000000000000000 0x0000000000000000 (tag: 0x1)
Tags are printed at the end of each line, if that
line has any tags associated with it, meaning that
untagged memory output is unchanged.
Tags are printed based on the granule(s) of memory that
a line covers. So you may have lines with 1 tag, with many
tags, no tags or partially tagged lines.
In the case of partially tagged lines, untagged granules
will show "<no tag>" so that the ordering is obvious.
For example, a line that covers 2 granules where the first
is not tagged:
(lldb) memory read mte_buf-16 mte_buf+16 -l32 -f"x" --show-tags
0x900fffff7ff7ff0: 0x00000000 <...> (tags: <no tag> 0x0)
Untagged lines will just not have the "(tags: ..." at all.
Though they may be part of a larger output that does have
some tagged lines.
To do this I've extended DumpDataExtractor to also print
memory tags where it has a valid execution context and
is asked to print them.
There are no special alignment requirements, simply
use "memory read" as usual. All alignment is handled
in DumpDataExtractor.
We use MakeTaggedRanges to find all the tagged memory
in the current dump, then read all that into a MemoryTagMap.
The tag map is populated once in DumpDataExtractor and re-used
for each subsequently printed line (or recursive call of
DumpDataExtractor, which some formats do).
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D107140
This removes the non-address bits before we try to use
the addresses.
This means that when results are shown, those results won't
show non-address bits either. This follows what "memory read"
has done, on the grounds that non-address bits are a property
of a pointer, not of the memory pointed to.
I've added testing and merged the find and read tests into one
file.
Note that there are no API side changes because "memory find"
does not have an equivalent API call.
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D117299
Although the memory tag commands use a memory tag manager to handle
addresses, that only removes the top byte.
That top byte is 4 bits of memory tag and 4 free bits, which is more
than it should strictly remove but that's how it is for now.
There are other non-address bit uses like pointer authentication.
To ensure the memory tag manager only has to deal with memory tags,
use the ABI plugin to remove the rest.
The tag access test has been updated to sign all the relevant pointers
and require that we're running on a system with pointer authentication
in addition to memory tagging.
The pointers will look like:
<4 bit user tag><4 bit memory tag><signature><bit virtual address>
Note that there is currently no API for reading memory tags; when one
is added it will also have to consider this.
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D117672
This was left over from when I had used some pointer authentication
instructions to sign the pointer. Then I realised that simply setting
the top byte is enough to prove the ABI plugin is being called.
Top byte ignore is a feature of the armv8-a architecture and doesn't
need any extra compiler flags.
Due to a missing cast the << 60 always resulted in zero leaving
the top nibble empty. So we weren't actually testing that lldb
ignores those bits in addition to the tag bits.
Correct that and also set the top nibbles to ascending values
so that we can catch if lldb only removes one of the tag bits
and top nibble, but not both.
In future the tag manager will likely only remove the tag bits
and leave non-address bits to the ABI plugin but for now make
sure we're testing what we claim to implement.
Addresses on AArch64 can have top byte tags, memory tags and pointer
authentication signatures in the upper bits.
While testing memory tagging I found that memory read couldn't
read a range if the two addresses had different tags. The same
could apply to signed pointers given the right circumstance.
(lldb) memory read mte_buf_alt_tag mte_buf+16
error: end address (0x900fffff7ff8010) must be greater than the start
address (0xa00fffff7ff8000).
Or it would try to read a lot more memory than expected.
(lldb) memory read mte_buf mte_buf_alt_tag+16
error: Normally, 'memory read' will not read over 1024 bytes of data.
error: Please use --force to override this restriction just once.
error: or set target.max-memory-read-size if you will often need a
larger limit.
Fix this by removing non address bits before we calculate the read
range. A test is added for AArch64 Linux that confirms this by using
the top byte ignore feature.
This means that if you do read with a tagged pointer the output
does not include those tags. This is potentially confusing but I think
overall it's better that we don't pretend that we're reading memory
from a range that the process is unable to map.
(lldb) p ptr1
(char *) $4 = 0x3400fffffffff140 "\x80\xf1\xff\xff\xff\xff"
(lldb) p ptr2
(char *) $5 = 0x5600fffffffff140 "\x80\xf1\xff\xff\xff\xff"
(lldb) memory read ptr1 ptr2+16
0xfffffffff140: 80 f1 ff ff ff ff 00 00 38 70 bc f7 ff ff 00 00 ........8p......
Reviewed By: omjavaid, danielkiss
Differential Revision: https://reviews.llvm.org/D103626
This reverts commit 640beb38e7.
That commit caused a performance degradation in the Quicksilver test QS:sGPU and a functional test failure in rocPRIM (rocprim.device_segmented_radix_sort).
Reverting until we have a better solution for s_cselect_b64 codegen cleanup.
Change-Id: Ibf8e397df94001f248fba609f072088a46abae08
Reviewed By: kzhuravl
Differential Revision: https://reviews.llvm.org/D115960
Change-Id: Id169459ce4dfffa857d5645a0af50b0063ce1105
It was being used only in some very old tests (which pass even without
it) and its implementation is highly questionable.
These days we have different mechanisms for requesting a build with a
particular kind of c++ library (USE_LIB(STD)CPP in the makefile).
This reverts commit fac3f20de5.
I found this has broken how we detect the last memory region in
GetMemoryRegions/"memory region" command.
When you're debugging an AArch64 system with pointer authentication,
the ABI plugin will remove the top bit from the end address of the last
user mapped area.
(lldb)
[0x0000fffffffdf000-0x0001000000000000) rw- [stack]
ABI plugin removes anything above the 48th bit (48 bit virtual addresses
by default on AArch64), leaving an address of 0.
(lldb)
[0x0000000000000000-0x0000000000400000) ---
You get back a mapping for 0 and get into an infinite loop.
This adds a specific unwind plan for AArch64 Linux sigreturn frames.
Previously we assumed that the fp would be valid here but it is not.
https://github.com/torvalds/linux/blob/master/arch/arm64/kernel/vdso/sigreturn.S
On Ubuntu Bionic it happened to point to an old frame info which meant
you got what looked like a correct backtrace. On Focal, the info is
completely invalid. (probably due to some code shuffling in libc)
This adds an UnwindPlan that knows that the sp in a sigreturn frame
points to an rt_sigframe from which we can offset to get saved
sp and pc values to backtrace correctly.
Based on LibUnwind's change: https://reviews.llvm.org/D90898
A new test is added that compares the frames from the initial
signal catch to the handler break, ensuring that the stack/frame pointer,
function name and register values match.
(this test is AArch64 Linux specific because it's the only one
with a specific unwind plan for this situation)
Fixes https://bugs.llvm.org/show_bug.cgi?id=52165
Reviewed By: omjavaid, labath
Differential Revision: https://reviews.llvm.org/D112069
This reverts commit 5fbcf67734.
ProcessDebugger is used in ProcessWindows and NativeProcessWindows.
I thought I was simplifying things by renaming to DoGetMemoryRegionInfo
in ProcessDebugger but the Native process side expects "GetMemoryRegionInfo".
Follow the pattern that WriteMemory uses. So:
* ProcessWindows::DoGetMemoryRegionInfo calls ProcessDebugger::GetMemoryRegionInfo
* NativeProcessWindows::GetMemoryRegionInfo does the same
On AArch64 we have various things using the non-address bits
of pointers. This means when you look up their containing region
you won't find it if you don't remove them.
This changes Process GetMemoryRegionInfo to a non virtual method
that uses the current ABI plugin to remove those bits. Then it
calls DoGetMemoryRegionInfo.
That function does the actual work and is virtual to be overridden
by Process implementations.
A test case is added that runs on AArch64 Linux using the top
byte ignore feature.
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D102757
The "memory tag read" command will now tell you
when the allocation tag read does not match the logical
tag.
(lldb) memory tag read mte_buf+(8*16) mte_buf+(8*16)+48
Logical tag: 0x9
Allocation tags:
[0xfffff7ff7080, 0xfffff7ff7090): 0x8 (mismatch)
[0xfffff7ff7090, 0xfffff7ff70a0): 0x9
[0xfffff7ff70a0, 0xfffff7ff70b0): 0xa (mismatch)
The logical tag will be taken from the start address
so the end could have a different tag. You could for example
read from ptr_to_array_1 to ptr_to_array_2, where the latter
is tagged differently to prevent buffer overflows.
The existing command will read 1 granule if you leave
off the end address. So you can also use it as a quick way
to check a single location.
(lldb) memory tag read mte_buf
Logical tag: 0x9
Allocation tags:
[0xfffff7ff7000, 0xfffff7ff7010): 0x0 (mismatch)
This avoids the need for a separate "memory tag check" command.
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D106880
In the latest Linux kernels synchronous tag faults
include the tag bits in their address.
This change adds logical and allocation tags to the
description of synchronous tag faults.
(asynchronous faults have no address)
Process 1626 stopped
* thread #1, name = 'a.out', stop reason = signal SIGSEGV: sync tag check fault (fault address: 0x900fffff7ff9010 logical tag: 0x9 allocation tag: 0x0)
This extends the existing description and will
show as much as it can on the rare occasion something
fails.
This change supports AArch64 MTE only but other
architectures could be added by extending the
switch at the start of AnnotateSyncTagCheckFault.
The rest of the function is generic code.
Tests have been added for synchronous and asynchronous
MTE faults.
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D105178
The default mode of "memory tag write" is to calculate the
range from the start address and the number of tags given.
(just like "memory write" does)
(lldb) memory tag write mte_buf 1 2
(lldb) memory tag read mte_buf mte_buf+48
Logical tag: 0x0
Allocation tags:
[0xfffff7ff9000, 0xfffff7ff9010): 0x1
[0xfffff7ff9010, 0xfffff7ff9020): 0x2
[0xfffff7ff9020, 0xfffff7ff9030): 0x0
This new option allows you to set an end address and have
the tags repeat until that point.
(lldb) memory tag write mte_buf 1 2 --end-addr mte_buf+64
(lldb) memory tag read mte_buf mte_buf+80
Logical tag: 0x0
Allocation tags:
[0xfffff7ff9000, 0xfffff7ff9010): 0x1
[0xfffff7ff9010, 0xfffff7ff9020): 0x2
[0xfffff7ff9020, 0xfffff7ff9030): 0x1
[0xfffff7ff9030, 0xfffff7ff9040): 0x2
[0xfffff7ff9040, 0xfffff7ff9050): 0x0
This is implemented using the QMemTags packet previously
added. We skip validating the number of tags in lldb and send
them on to lldb-server, which repeats them as needed.
Apart from the number of tags, all the other client side checks
remain. Tag values, memory range must be tagged, etc.
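Conceptually, the server-side expansion is just the following (a sketch, one tag per 16-byte granule):
```python
# Conceptual sketch: repeat the given tags until the requested range,
# rounded to 16-byte granules, is covered.
def repeat_tags(tags, start_addr, end_addr, granule_size=16):
    num_granules = (end_addr - start_addr) // granule_size
    return [tags[i % len(tags)] for i in range(num_granules)]

# e.g. repeat_tags([1, 2], 0, 80) -> [1, 2, 1, 2, 1]
```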
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D105183