With this commit, we also hide the implementation details of
`std::invoke`. To do so, the `LibCXXFrameRecognizer` got a couple more
regular expressions.
The regular expression passed into `AddRecognizer` became problematic,
as it was evaluated on the demangled name. Those names also included
result types for C++ symbols. For `std::__invoke` the return type is a
huge `decltype(...)`, making the regular expression really hard to
write.
Instead, I added support to `AddRecognizer` for matching on the
demangled names without the result and argument types.
Hiding the implementation details of `invoke` also makes the backtraces
for `std::function` even nicer, because `std::function` uses `__invoke`
internally.
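As a rough illustration of why matching without the result type helps (the strings and the pattern below are illustrative, not the exact ones used by `LibCXXFrameRecognizer`):
```
import re

# Demangled name including the huge decltype(...) result type:
full_demangled = (
    "decltype(std::declval<int (*&)(int, int)>()(std::declval<int>(), std::declval<int>())) "
    "std::__1::__invoke[abi:se200000]<int (*&)(int, int), int, int>(int (*&)(int, int), int&&, int&&)"
)
# Demangled name without result and argument types:
name_only = "std::__1::__invoke[abi:se200000]<int (*&)(int, int), int, int>"

pattern = re.compile(r"^std::__[0-9]*::__invoke")
print(bool(pattern.match(name_only)))       # True: easy to anchor at the start of the name
print(bool(pattern.match(full_demangled)))  # False: the decltype(...) return type gets in the way
```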
Co-authored-by: Adrian Prantl <aprantl@apple.com>
Compilers and language runtimes often use helper functions that are
fundamentally uninteresting when debugging anything but the
compiler/runtime itself. This patch introduces a user-extensible
mechanism that allows for these frames to be hidden from backtraces and
automatically skipped over when navigating the stack with `up` and
`down`.
This does not affect the numbering of frames, so `f <N>` will still
provide access to the hidden frames. The `bt` output will also print a
hint that frames have been hidden.
My primary motivation for this feature is to hide thunks in the Swift
programming language, but I'm including an example recognizer for
`std::function::operator()` that I wished for myself many times while
debugging LLDB.
rdar://126629381
Example output. (My proof-of-concept recognizer could hide even more
frames if we had a method that returned the function name without the
return type, or if it used something that isn't based on regexes, but
it's really only meant as an example.)
before:
```
(lldb) thread backtrace --filtered=false
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
* frame #0: 0x0000000100001f04 a.out`foo(x=1, y=1) at main.cpp:4:10
frame #1: 0x0000000100003a00 a.out`decltype(std::declval<int (*&)(int, int)>()(std::declval<int>(), std::declval<int>())) std::__1::__invoke[abi:se200000]<int (*&)(int, int), int, int>(__f=0x000000016fdff280, __args=0x000000016fdff224, __args=0x000000016fdff220) at invoke.h:149:25
frame #2: 0x000000010000399c a.out`int std::__1::__invoke_void_return_wrapper<int, false>::__call[abi:se200000]<int (*&)(int, int), int, int>(__args=0x000000016fdff280, __args=0x000000016fdff224, __args=0x000000016fdff220) at invoke.h:216:12
frame #3: 0x0000000100003968 a.out`std::__1::__function::__alloc_func<int (*)(int, int), std::__1::allocator<int (*)(int, int)>, int (int, int)>::operator()[abi:se200000](this=0x000000016fdff280, __arg=0x000000016fdff224, __arg=0x000000016fdff220) at function.h:171:12
frame #4: 0x00000001000026bc a.out`std::__1::__function::__func<int (*)(int, int), std::__1::allocator<int (*)(int, int)>, int (int, int)>::operator()(this=0x000000016fdff278, __arg=0x000000016fdff224, __arg=0x000000016fdff220) at function.h:313:10
frame #5: 0x0000000100003c38 a.out`std::__1::__function::__value_func<int (int, int)>::operator()[abi:se200000](this=0x000000016fdff278, __args=0x000000016fdff224, __args=0x000000016fdff220) const at function.h:430:12
frame #6: 0x0000000100002038 a.out`std::__1::function<int (int, int)>::operator()(this= Function = foo(int, int) , __arg=1, __arg=1) const at function.h:989:10
frame #7: 0x0000000100001f64 a.out`main(argc=1, argv=0x000000016fdff4f8) at main.cpp:9:10
frame #8: 0x0000000183cdf154 dyld`start + 2476
(lldb)
```
after:
```
(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
* frame #0: 0x0000000100001f04 a.out`foo(x=1, y=1) at main.cpp:4:10
frame #1: 0x0000000100003a00 a.out`decltype(std::declval<int (*&)(int, int)>()(std::declval<int>(), std::declval<int>())) std::__1::__invoke[abi:se200000]<int (*&)(int, int), int, int>(__f=0x000000016fdff280, __args=0x000000016fdff224, __args=0x000000016fdff220) at invoke.h:149:25
frame #2: 0x000000010000399c a.out`int std::__1::__invoke_void_return_wrapper<int, false>::__call[abi:se200000]<int (*&)(int, int), int, int>(__args=0x000000016fdff280, __args=0x000000016fdff224, __args=0x000000016fdff220) at invoke.h:216:12
frame #6: 0x0000000100002038 a.out`std::__1::function<int (int, int)>::operator()(this= Function = foo(int, int) , __arg=1, __arg=1) const at function.h:989:10
frame #7: 0x0000000100001f64 a.out`main(argc=1, argv=0x000000016fdff4f8) at main.cpp:9:10
frame #8: 0x0000000183cdf154 dyld`start + 2476
Note: Some frames were hidden by frame recognizers
```
This patch renames the `scripting template` subcommand to `scripting
extension`, since that makes more sense for upcoming commands.
Signed-off-by: Med Ismail Bennani <ismail@bennani.ma>
This patch introduces a new `template` multiword sub-command to the
`scripting` top-level command. As the name suggests, this sub-command
operates on scripting templates, and currently has the ability to
automatically discover the various scripting extensions that lldb
supports.
This was previously reviewed in #97273.
Signed-off-by: Med Ismail Bennani <ismail@bennani.ma>
This patch introduces a new `template` multiword sub-command to the
`scripting` top-level command. As the name suggests, this sub-command
operates on scripting templates, and currently has the ability to
automatically discover the various scripting extensions that lldb
supports.
Signed-off-by: Med Ismail Bennani <ismail@bennani.ma>
This patch introduces a new top-level `scripting` command with a `run`
sub-command, which basically replaces the `script` raw command.
To avoid breaking existing uses of the `script` command, this patch also
adds a `script` alias for the `scripting run` sub-command.
The reason behind this change is to have a top-level command that will
cover scripting-related subcommands.
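For illustration (assuming Python is the active script language; the printed value is simply what the interpreter outputs), the alias keeps existing workflows intact:
```
(lldb) scripting run print(40 + 2)
42
(lldb) script print(40 + 2)
42
```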
Signed-off-by: Med Ismail Bennani <ismail@bennani.ma>
# Added/changed options
The following options are **added** to the `statistics dump` command:
* `--targets=bool`: Boolean. Dumps the `targets` section.
* `--modules=bool`: Boolean. Dumps the `modules` section.
When both options are given, the field `moduleIdentifiers` will be
dumped for each target in the `targets` section.
The following options are **changed**:
* `--transcript=bool`: Changed to a boolean. Dumps the `transcript`
section.
# Behavior of `statistics dump` with various options
The behavior is **backward compatible**:
- When no options are provided, `statistics dump` dumps all sections.
- When `--summary` is provided, only dumps the summary info.
**New** behavior:
- `--targets=bool`, `--modules=bool`, `--transcript=bool` override the
above default.
For **example**:
- `statistics dump --modules=false` dumps summary + targets +
transcript. No modules.
- `statistics dump --summary --targets=true --transcript=true` dumps
summary + targets (in summary mode) + transcript.
# Added options to the public API
In `SBStatisticsOptions`, add (a usage sketch follows this list):
* `Set/GetIncludeTargets`
* `Set/GetIncludeModules`
* `Set/GetIncludeTranscript`
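A minimal sketch of how these accessors might be used from Python (everything except the three new `SetInclude*` calls is standard, pre-existing SB API):
```
import lldb

# Assumes an interactive `script` session with a target already created.
target = lldb.debugger.GetSelectedTarget()

options = lldb.SBStatisticsOptions()
options.SetIncludeTargets(True)      # keep the `targets` section
options.SetIncludeModules(False)     # drop the `modules` section
options.SetIncludeTranscript(False)  # drop the `transcript` section

stats = target.GetStatistics(options)  # returns SBStructuredData
stream = lldb.SBStream()
stats.GetAsJSON(stream)
print(stream.GetData())
```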
**Alternative considered**: I thought about adding
`Set/GetIncludeSections(string sections_spec)`, which would receive a
comma-separated list of section names to include ("targets", "modules",
"transcript"). The **benefit** of this approach is that the API is more
future-proof if section names are added or changed later. **However**, I
feel the section names are likely to remain unchanged for a while - it's
not like we plan to make big changes to the output of `statistics dump`
any time soon. The **downsides** of this approach are: (1) the
readability of the API is worse (it requires reading the docs to know
which strings are accepted), and (2) string inputs are more prone to
human error (e.g. typing "target" instead of the expected "targets").
# Tests
```
bin/llvm-lit -sv ../external/llvm-project/lldb/test/API/commands/statistics/basic/TestStats.py
```
```
./tools/lldb/unittests/Interpreter/InterpreterTests
```
New test cases have been added to verify:
* Different sections are dumped/not dumped when different
`StatisticsOptions` are given through command line (CLI or
`HandleCommand`; see `test_sections_existence_through_command`) or API
(see `test_sections_existence_through_api`).
* The order in which the options are given on the command line does not
matter (see `test_order_of_options_do_not_matter`).
---------
Co-authored-by: Roy Shi <royshi@meta.com>
Very minor change to the help message for `process save-core`. Adds a
space between two sentences explaining the `-p` option:
"Specify a plugin name to create the core file.This allows core files to
be saved in different formats."
-->
"Specify a plugin name to create the core file. This allows core files
to be saved in different formats."
Before:
```
(lldb) help process save-core
Save the current process as a core file using an appropriate file type.
Syntax: process save-core [-s corefile-style -p plugin-name] FILE
Command Options Usage:
process save-core [-p[<plugin>]] [-s <corefile-style>] <path>
-p[<plugin>] ( --plugin-name=[<plugin>] )
Specify a plugin name to create the core file.This allows core files to be saved in different formats.
-s <corefile-style> ( --style <corefile-style> )
Request a specific style of corefile to be saved.
Values: full | modified-memory | stack
This command takes options and free-form arguments. If your arguments resemble option specifiers (i.e., they start with a -
or --), you must use ' -- ' between the end of the command options and the beginning of the arguments.
```
After:
```
michristensen@devbig356 build/Debug » $HOME/llvm-sand/build/Debug/bin/lldb -x
(lldb) help process save-core
Save the current process as a core file using an appropriate file type.
Syntax: process save-core [-s corefile-style -p plugin-name] FILE
Command Options Usage:
process save-core [-p[<plugin>]] [-s <corefile-style>] <path>
-p[<plugin>] ( --plugin-name=[<plugin>] )
Specify a plugin name to create the core file. This allows core files to be saved in different formats.
-s <corefile-style> ( --style <corefile-style> )
Request a specific style of corefile to be saved.
Values: full | modified-memory | stack
This command takes options and free-form arguments. If your arguments resemble option specifiers (i.e., they start with a -
or --), you must use ' -- ' between the end of the command options and the beginning of the arguments.
```
# Changes
1. Changes to the structured transcript.
1. Add fields `commandName` and `commandArguments`. They will hold the
name and the arguments string of the expanded/executed command (e.g.
`breakpoint set` and `-f main.cpp -l 4`). This is not to be confused
with the `command` field, which holds the user input (e.g. `br s -f
main.cpp -l 4`).
2. Add field `timestampInEpochSeconds`. It will hold the timestamp when
the command is executed.
3. Rename field `seconds` to `durationInSeconds`, to improve
readability, especially since `timestampInEpochSeconds` is added.
2. When the transcript is available and the newly added option
`--transcript` is present, add the transcript to the output of
`statistics dump`, as a JSON array under a new field `transcript` (a
sketch of one entry follows this list).
3. A few test name and comment changes.
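A rough sketch of what a single transcript entry might look like with these fields (the values are made up for illustration; only the field names come from this change):
```
{
  "command": "br s -f main.cpp -l 4",
  "commandName": "breakpoint set",
  "commandArguments": "-f main.cpp -l 4",
  "timestampInEpochSeconds": 1717000000,
  "durationInSeconds": 0.0023
}
```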
The help output for `thread backtrace` specifies that you can pass -1 to
`--count` to display all the frames.
```
-c <count> ( --count <count> )
How many frames to display (-1 for all)
```
However, that doesn't work:
```
(lldb) thread backtrace --count -1
error: invalid integer value for option 'c'
```
The problem is that we store the option value as an unsigned and the
code that parses the string correctly rejects a negative value. There
are two ways to fix this:
1. Make `m_count` a signed value so that it accepts negative values and
appeases the parser. The function that prints the frames takes an
unsigned, so a negative value just becomes a really large positive
value, which is what the current implementation relies on.
2. Keep `m_count` unsigned and instead use 0 as the magic value to show
all frames. I don't really see a point in not showing any frames at all,
plus that's already broken (`error: error displaying backtrace for
thread: "0x0001"`).
This patch implements (2) and at the same time improves the error
reporting so that we print the invalid value when we cannot parse it.
rdar://123881767
The help for the `-r` option to `image list` says:
-r[<width>] ( --ref-count=[<width>] )
Display the reference count if the module is still in the shared module
cache.
but that's not what it actually does. It unconditionally shows the
use_count for all Module shared pointers, regardless of whether they are
still in the shared module cache or whether they are just in the
ModuleCollection and other entities are keeping them alive. That seems
like a more useful behavior, but then it is also useful to know what's
in the shared cache, so I changed this to:
-r[<width>] ( --ref-count=[<width>] )
Display whether the module is still in the shared module cache (Y/N),
and its shared pointer use_count.
So instead of just `{5}` you will see `{Y 5}` if it is in the shared
cache and `{N 5}` if not.
I didn't add tests for this because I'm not sure how much we want to fix
shared cache behavior in the testsuite.
Updates:
- The previous patch changed the default behavior to not load dwos in
`DWARFUnit`
~~`SymbolFileDWARFDwo *GetDwoSymbolFile(bool load_all_debug_info =
false);`~~
`SymbolFileDWARFDwo *GetDwoSymbolFile(bool load_all_debug_info = true);`
- This broke some lldb-shell tests (see
https://green.lab.llvm.org/green/view/LLDB/job/as-lldb-cmake/16273/)
- TestDebugInfoSize.py
- with symbol on-demand, by default statistics dump only reports
skeleton debug info size
- `statistics dump -f` will load all dwos. debug info = skeleton debug
info + all dwo debug info
Currently, running `statistics dump` will trigger lldb to load debug
info that's not yet loaded (e.g. dwo files). This results in a delay
before the command returns, which can be disruptive.
This patch also adds a new option, `--load-all-debug-info`, which asks
`statistics dump` to report all possible debug info, forcing any debug
info that is not yet loaded to be loaded.
This allows you to specify options and arguments and their definitions
and then have lldb handle the completions, help, etc. in the same way
that lldb does for its parsed commands internally.
This feature has some design considerations as well as the code, so I've
also set up an RFC, but I did this one first and will put the RFC
address in here once I've pushed it...
Note, the lldb "ParsedCommand interface" doesn't actually do all the
work that it should. For instance, specifying the type of an option that
has a completer doesn't automatically hook up the completer, and ditto
for argument values. We also do almost no work to verify that the arguments
match their definition, or do auto-completion for them. This patch
allows you to make a command that's bug-for-bug compatible with built-in
ones, but I didn't want to stall it on getting the auto-command checking
to work all the way correctly.
As an overall design note, my primary goal here was to make an interface
that worked well in the script language. For that I needed, for
instance, to have a property-based way to get all the option values that
were specified. It was much more convenient to do that by making a
fairly bare-bones C interface to define the options and arguments of a
command, and set their values, and then wrap that in a Python class
(installed along with the other bits of the lldb python module) which
you can then derive from to make your new command. This approach will
also make it easier to experiment.
See the file test_commands.py in the test case for examples of how this
works.
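A heavily simplified sketch of the kind of class this enables (the base-class location, the parser methods, and the parameter names below are assumptions based on the description above; test_commands.py is the authoritative reference):
```
# frame_count_cmd.py (hypothetical module; registration is omitted here)
import lldb
from lldb.plugins.parsed_cmd import ParsedCommand

class FrameCountCommand(ParsedCommand):
    def setup_command_definition(self):
        parser = self.get_parser()
        # Declare one option; lldb then handles help and completion for it.
        parser.add_option("v", "verbose", help="print extra detail",
                          value_type=lldb.eArgTypeBoolean, dest="verbose", default=False)

    def get_short_help(self):
        return "Print the number of frames in the current thread."

    def __call__(self, debugger, args_array, exe_ctx, result):
        thread = exe_ctx.GetThread()
        # Option values are exposed property-style on the parser object.
        if self.get_parser().verbose:
            result.AppendMessage("counting frames for thread %d" % thread.GetIndexID())
        result.AppendMessage(str(thread.GetNumFrames()))
```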
Often, we only care about the split-dwarf files that have failed to
load. This can be useful when diagnosing binaries with many separate
debug info files where only some have errors.
```
(lldb) help image dump separate-debug-info
List the separate debug info symbol files for one or more target modules.
Syntax: target modules dump separate-debug-info <cmd-options> [<filename> [<filename> [...]]]
Command Options Usage:
target modules dump separate-debug-info [-ej] [<filename> [<filename> [...]]]
-e ( --errors-only )
Filter to show only debug info files with errors.
-j ( --json )
Output the details in JSON format.
This command takes options and free-form arguments. If your arguments
resemble option specifiers (i.e., they start with a - or --), you must use
' -- ' between the end of the command options and the beginning of the
arguments.
'image' is an abbreviation for 'target modules'
```
I updated the following tests
```
# on Linux
bin/lldb-dotest -p TestDumpDwo
# on Mac
bin/lldb-dotest -p TestDumpOso
```
This change applies to both the table and JSON outputs.
---------
Co-authored-by: Tom Yang <toyang@fb.com>
## Description
This pull request adds a new `stop-at-user-entry` option to the LLDB
`process launch` command, allowing users to launch a process and pause
execution at the entry point of the program (for C-based languages, the
`main` function).
## Motivation
This option provides a convenient way to begin debugging a program by
launching it and breaking at the desired entry point.
## Changes Made
- Added `stop-at-user-entry` option to `Options.td` and the
corresponding case in `CommandOptionsProcessLaunch.cpp` (short option is
'm')
- Implemented the `GetUserEntryPointName` method in the currently
available Language plugins.
- Declared the `CreateBreakpointAtUserEntry` method in the Target API.
- Created a Shell test for the command:
`command-process-launch-user-entry.test`.
## Usage
`process launch --stop-at-user-entry` or `process launch -m` launches
the process and pauses execution at the entry point of the program.
This patch should allow the user to set a specific auto-completion type
for their custom commands.
To do so, we had to hoist the `CompletionType` enum so the user can
access it, and add a new completion-type flag to the CommandScriptAdd
command object.
So now, the user can specify which completion type will be used with
their custom command when they register it.
This also makes the `crashlog` custom commands use the disk-file
completion type, so users can browse the file system and load a report.
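As a sketch of the intended workflow (the flag spelling and the completion-type value below are assumptions):
```
(lldb) command script add --completion-type disk-file -f my_module.load_report load-report
(lldb) load-report /tmp/cra<TAB>   # now completes file paths from disk
```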
Differential Revision: https://reviews.llvm.org/D152011
Signed-off-by: Med Ismail Bennani <ismail@bennani.ma>
Context: The `expression` command uses artificial variables to store the expression
result. This result variable is unconditionally kept around after the expression command
has completed. These variables are known as persistent results. These are the variables
`$0`, `$1`, etc, that are displayed when running `p` or `expression`.
This change allows users to control whether result variables are persisted, by
introducing a `--persistent-result` flag.
This change keeps the current default behavior: persistent results are created by
default. It gives users the ability to opt out by re-aliasing `p`. For example:
```
command unalias p
command alias p expression --persistent-result false --
```
For consistency, this flag is also adopted by `dwim-print`. Of note, if asked,
`dwim-print` will create a persistent result even for frame variables.
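For illustration (output formatting is approximate), the difference is whether the result gets a `$N` name that can be referred to later:
```
(lldb) expression --persistent-result true -- 40 + 2
(int) $0 = 42
(lldb) expression --persistent-result false -- 40 + 2
(int) 42
```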
Differential Revision: https://reviews.llvm.org/D144230
This change adds a `--recognizer-function` (`-R`) option to `type
summary add` and `type synth add` that allows users to specify that the
names in the command are not type names but Python function names.
It also adds an example to lldb/examples, and a section in the data
formatters documentation on how to use recognizer functions.
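A rough sketch of what a recognizer function and its registration might look like (the function signatures and the exact command invocation are assumptions; see the example under lldb/examples and the data formatters documentation for the authoritative form):
```
# my_formatters.py (hypothetical)
import lldb

def is_interesting(sbtype, internal_dict):
    # Recognizer: return True when this SBType should use the summary below.
    return sbtype.GetName().startswith("MyProject::")

def brief_summary(valobj, internal_dict):
    # Ordinary summary provider.
    return "num_children=%d" % valobj.GetNumChildren()
```
and then, roughly:
```
(lldb) command script import my_formatters.py
(lldb) type summary add --recognizer-function -F my_formatters.brief_summary my_formatters.is_interesting
```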
Differential Revision: https://reviews.llvm.org/D137000
Add a "diagnostics dump" command to, as the name implies, dump the
diagnostics to disk. The goal of this command is to let the user
generate the diagnostics in case of an issue that doesn't cause the
debugger to crash.
This command is also critical for testing, where we don't want to cause
a crash to emit the diagnostics.
Differential revision: https://reviews.llvm.org/D135622
The command is `thread trace dump function-calls` and, at a minimum, it
will support printing to a file in JSON and non-JSON formats.
I added a test.
Differential Revision: https://reviews.llvm.org/D135521
The per-PSB packet decoding logic was wrong because it was assuming that pt_insn_get_sync_offset was being updated after every PSB. Silly me, that is not true. It returns the offset of the PSB packet after invoking pt_insn_sync_forward regardless of how many PSBs are visited later. Instead, I'm now following the approach described in https://github.com/intel/libipt/blob/master/doc/howto_libipt.md#parallel-decode for parallel decoding, which is basically what we need.
A nasty error that happened because of this is that when we had two PSBs (A and B), the following was happening
1. PSB A was processed all the way up to the end of the trace, which includes PSB B.
2. PSB B was then processed until the end of the trace.
The instructions emitted by step 2 were also emitted as part of step 1, so our trace had duplicated chunks. This problem becomes worse when you have many PSBs.
As part of making sure this diff is correct, I added some other features that are very useful.
- Added a "synchronization point" event to the TraceCursor, so we can inspect when PSBs are emitted.
- Removed the single-thread decoder. Now the per-cpu decoder and single-thread decoder use the same code paths.
- Use the query decoder to fetch PSBs and timestamps. It turns out that the pt_insn_sync_forward of the instruction decoder can move past several PSBs (this means that we could skip some TSCs). On the other hand, the pt_query_sync_forward method doesn't skip PSBs, so we can get more accurate sync events and timing information.
- Turned LibiptDecoder into PSBBlockDecoder, which decodes single PSB blocks. It is the fundamental processing unit for decoding.
- Added many comments, asserts and improved error handling for clarity.
- Improved DecodeSystemWideTraceForThread so that a TSC is emitted always before a cpu change event. This was a bug that was annoying me before.
- SplitTraceInContinuousExecutions and FindLowestTSCInTrace are now using the query decoder, which can identify precisely each PSB along with their TSCs.
- Added an "only-events" option to the trace dumper to inspect only events.
I did extensive testing and I think we should have an in-house testing CI. The LLVM buildbots are not capable of supporting testing post-mortem traces of hundreds of megabytes. I'll leave that for later, but at least for now the current tests were able to catch most of the issues I encountered when doing this task.
A sample output of a program that I was single stepping is the following. You can see that only one PSB is emitted even though stepping happened!
```
thread #1: tid = 3578223
0: (event) trace synchronization point [offset = 0x0xef0]
a.out`main + 20 at main.cpp:29:20
1: 0x0000000000402479 leaq -0x1210(%rbp), %rax
2: (event) software disabled tracing
3: 0x0000000000402480 movq %rax, %rdi
4: (event) software disabled tracing
5: (event) software disabled tracing
6: 0x0000000000402483 callq 0x403bd4 ; std::vector<int, std::allocator<int>>::vector at stl_vector.h:391:7
7: (event) software disabled tracing
a.out`std::vector<int, std::allocator<int>>::vector() at stl_vector.h:391:7
8: 0x0000000000403bd4 pushq %rbp
9: (event) software disabled tracing
10: 0x0000000000403bd5 movq %rsp, %rbp
11: (event) software disabled tracing
```
This is another trace of a long program with a few PSBs.
```
(lldb) thread trace dump instructions -E -f
thread #1: tid = 3603082
0: (event) trace synchronization point [offset = 0x0x80]
47417: (event) software disabled tracing
129231: (event) trace synchronization point [offset = 0x0x800]
146747: (event) software disabled tracing
246076: (event) software disabled tracing
259068: (event) trace synchronization point [offset = 0x0xf78]
259276: (event) software disabled tracing
259278: (event) software disabled tracing
no more data
```
Differential Revision: https://reviews.llvm.org/D131630
Refactor the command option enum values and the command argument table
to connect the two. This has two benefits:
- We guarantee that two options that use the same argument type have
the same accepted values.
- We can print the enum values and their description in the help
output. (D129707)
Differential revision: https://reviews.llvm.org/D129703
Thanks to ymeng@fb.com for coming up with this change.
`thread trace dump info` can dump some metrics that can be useful for
analyzing the performance and quality of a trace. This diff adds a --json
option for dumping this information in JSON format that can be easily
understood by machines.
Differential Revision: https://reviews.llvm.org/D129332
A trace bundle contains many trace files, and, in the case of intel pt, the
largest files are often the context switch traces because they are not
compressed by default. As a way to improve this, I'm adding a --compact option
to the `trace save` command that filters out unwanted processes from the
context switch traces. Eventually we can do the same for intel pt traces as
well.
Differential Revision: https://reviews.llvm.org/D129239
We want to include events with metadata, like context switches, and this
requires the API to handle events with payloads (e.g. information about
such context switches). Besides this, we want to support multiple
similar events between two consecutive instructions, like multiple
context switches. However, the current implementation is not good for this because
we are defining events as bitmask enums associated with specific
instructions. Thus, we need to decouple instructions from events and
make events actual items in the trace, just like instructions and
errors.
- Add accessors in the TraceCursor to know if an item is an event or not.
- Modify everything from the TraceDumper all the way to DecodedThread to support event items. Fortunately this simplified many things.
- Renamed the paused event to disabled.
- Improved the TSC handling logic. I was using an API for getting the TSC from libipt, but that was overkill best reserved for when events aren't processed manually; since we are already processing events, we can more easily get the TSCs.
- As part of this refactor, I also fixed a long-standing issue, which is that some non-decoding errors were being inserted in the decoded thread. I changed this so that TraceIntelPT::Decode returns an error if the decoder couldn't be set up properly. Then, errors within a trace are actual anomalies found in between instructions.
All tests pass.
Differential Revision: https://reviews.llvm.org/D128576
Add a log dump command to dump logs to a file. This only works for
channels that have a log handler associated that supports dumping. For
now that's limited to the circular log handler, but more could be added
in the future.
Differential revision: https://reviews.llvm.org/D128557
This patch adds a new flag to `log enable`, allowing the user to specify
a custom log handler. In addition to the default (stream) handler, this
allows using the circular log handler (which logs to a fixed size,
in-memory circular buffer) as well as the system log handler (which logs
to the operating system log).
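A sketch of how this might combine with the `log dump` command described above (the flag spellings and the handler name are assumptions):
```
(lldb) log enable --handler circular lldb commands   # keep messages in the in-memory circular buffer
(lldb) log dump lldb -f /tmp/lldb-commands.log       # later, write the buffered messages to a file
```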
Differential revision: https://reviews.llvm.org/D128323
Drop the thread-safe flag and make the locking strategy the
responsibility of the individual log handler.
Previously we got away with a non-thread safe mode because we were using
unbuffered streams that rely on the underlying syscalls/OS for
synchronization. With the introduction of log handlers, we can have
arbitrary logic involved in writing out the logs. With this patch the
log handlers can pick the most appropriate locking strategy for their
particular implementation.
Differential revision: https://reviews.llvm.org/D127922
This patch adds a buffered logging mode to lldb. A buffer size can be
passed to `log enable` with the -b flag. If no buffer size is specified,
logging is unbuffered.
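A minimal sketch of the intended usage (whether the size is interpreted as bytes is an assumption):
```
(lldb) log enable -b 1048576 -f /tmp/lldb.log lldb commands
```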
Differential revision: https://reviews.llvm.org/D127986
In order to provide simple scripting support on top of instruction traces, a simple solution is to enhance the `dump instructions` command and allow printing in json and directly to a file. The format is verbose and not space efficient, but it's not supposed to be used for really large traces, in which case the TraceCursor API is the way to go.
- add a -j option for printing the dump in json
- add a -J option for pretty printing the json output
- add a -F option for specifying an output file
- add a -a option for dumping all the instructions available starting at the initial point configured with the other flags
- add tests for all cases
- refactored the instruction dumper and abstracted the actual "printing" logic. There are two writer implementations: CLI and JSON. This made the dumper itself much more readable and maintainable.
sample output:
```
(lldb) thread trace dump instructions -t -a --id 100 -J
[
{
"id": 100,
"tsc": "43591204528448966"
"loadAddress": "0x407a91",
"module": "a.out",
"symbol": "void std::deque<Foo, std::allocator<Foo>>::_M_push_back_aux<Foo>(Foo&&)",
"mnemonic": "movq",
"source": "/usr/include/c++/8/bits/deque.tcc",
"line": 492,
"column": 30
},
...
```
Differential Revision: https://reviews.llvm.org/D128316