[llvm][docs] Use default checkout location in test suite guide (#139264)

Step 2 tells you to check out "llvm-test-suite" into "test-suite", but I
don't see a particular reason to use a non-default path.

If you follow the instructions exactly, it all works, but if you run
that step on autopilot, it is surprising later when things do not work.

It's not hard for an individual to fix later, but we should suggest the
least surprising thing where we can.
commit d05854dfc8
parent 28b7d6621a
Author: David Spickett
Date: 2025-05-13 10:58:29 +01:00
Committed by: GitHub


@@ -41,7 +41,7 @@ Quickstart
 2. Check out the `test-suite` module with:
 ```bash
-% git clone https://github.com/llvm/llvm-test-suite.git test-suite
+% git clone https://github.com/llvm/llvm-test-suite.git
 ```
 3. Create a build directory and use CMake to configure the suite. Use the
@@ -53,8 +53,8 @@ Quickstart
 % mkdir test-suite-build
 % cd test-suite-build
 % cmake -DCMAKE_C_COMPILER=<path to llvm build>/bin/clang \
-        -C../test-suite/cmake/caches/O3.cmake \
-        ../test-suite
+        -C../llvm-test-suite/cmake/caches/O3.cmake \
+        ../llvm-test-suite
 ```
 **NOTE!** if you are using your built clang, and you want to build and run the
@@ -102,9 +102,9 @@ MicroBenchmarks/XRay microbenchmarks, you need to add `compiler-rt` to your
 # Make sure pandas and scipy are installed. Prepend `sudo` if necessary.
 % pip install pandas scipy
 # Show a single result file:
-% test-suite/utils/compare.py results.json
+% llvm-test-suite/utils/compare.py results.json
 # Compare two result files:
-% test-suite/utils/compare.py results_a.json results_b.json
+% llvm-test-suite/utils/compare.py results_a.json results_b.json
 ```
@@ -287,7 +287,7 @@ benchmarks. CMake can print a list of them:
 Generate build files for the ninja build tool.
-- `-Ctest-suite/cmake/caches/<cachefile.cmake>`
+- `-Cllvm-test-suite/cmake/caches/<cachefile.cmake>`
 Use a CMake cache. The test-suite comes with several CMake caches which
 predefine common or tricky build configurations.
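
For orientation, the cache files mentioned here can simply be listed from the checkout; a minimal sketch, assuming the default clone location used in the quickstart:

```bash
# Show the CMake cache files that ship with the test-suite.
% ls llvm-test-suite/cmake/caches/
```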
@@ -304,7 +304,7 @@ Example usage:
 - Basic Usage:
 ```text
-% test-suite/utils/compare.py baseline.json
+% llvm-test-suite/utils/compare.py baseline.json
 Warning: 'test-suite :: External/SPEC/CINT2006/403.gcc/403.gcc.test' has No metrics!
 Tests: 508
 Metric: exec_time
@@ -328,14 +328,14 @@ Example usage:
 - Show compile_time or text segment size metrics:
 ```bash
-% test-suite/utils/compare.py -m compile_time baseline.json
-% test-suite/utils/compare.py -m size.__text baseline.json
+% llvm-test-suite/utils/compare.py -m compile_time baseline.json
+% llvm-test-suite/utils/compare.py -m size.__text baseline.json
 ```
 - Compare two result files and filter short running tests:
 ```bash
-% test-suite/utils/compare.py --filter-short baseline.json experiment.json
+% llvm-test-suite/utils/compare.py --filter-short baseline.json experiment.json
 ...
 Program baseline experiment diff
@@ -349,7 +349,7 @@ Example usage:
 runtime each:
 ```bash
-% test-suite/utils/compare.py base0.json base1.json base2.json vs exp0.json exp1.json exp2.json
+% llvm-test-suite/utils/compare.py base0.json base1.json base2.json vs exp0.json exp1.json exp2.json
 ```
 ### Continuous Tracking with LNT
@@ -365,11 +365,11 @@ External Suites
 External suites such as SPEC can be enabled by either
-- placing (or linking) them into the `test-suite/test-suite-externals/xxx` directory (example: `test-suite/test-suite-externals/speccpu2000`)
+- placing (or linking) them into the `llvm-test-suite/test-suite-externals/xxx` directory (example: `llvm-test-suite/test-suite-externals/speccpu2000`)
 - using a configuration option such as `-D TEST_SUITE_SPEC2000_ROOT=path/to/speccpu2000`
 You can find further information in the respective README files such as
-`test-suite/External/SPEC/README`.
+`llvm-test-suite/External/SPEC/README`.
 For the SPEC benchmarks you can switch between the `test`, `train` and
 `ref` input datasets via the `TEST_SUITE_RUN_TYPE` configuration option.
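
As a usage sketch for the `TEST_SUITE_RUN_TYPE` option mentioned above, assuming an already-configured build directory and `ref` as the chosen dataset:

```bash
# Reconfigure an existing build to use the SPEC 'ref' input dataset,
# then rebuild and rerun the benchmarks.
% cmake -D TEST_SUITE_RUN_TYPE=ref .
% make
% llvm-lit -o results.json .
```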
@@ -398,7 +398,7 @@ picked up automatically if placed into a subdirectory of the test-suite or when
 setting the `TEST_SUITE_SUBDIRS` variable:
 ```bash
-% cmake -DTEST_SUITE_SUBDIRS=path/to/my/benchmark-suite ../test-suite
+% cmake -DTEST_SUITE_SUBDIRS=path/to/my/benchmark-suite ../llvm-test-suite
 ```
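
`TEST_SUITE_SUBDIRS` takes a CMake list, so several out-of-tree suites can be registered at once; a sketch with hypothetical paths:

```bash
# Entries in a CMake list are separated by semicolons; quote the value
# so the shell does not interpret them.
% cmake -DTEST_SUITE_SUBDIRS="path/to/suite-a;path/to/suite-b" ../llvm-test-suite
```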
@@ -416,7 +416,7 @@ Example:
 % cmake -DTEST_SUITE_PROFILE_GENERATE=ON \
         -DTEST_SUITE_USE_IR_PGO=ON \
         -DTEST_SUITE_RUN_TYPE=train \
-        ../test-suite
+        ../llvm-test-suite
 % make
 % llvm-lit .
 # Use the profile data for compilation and actual benchmark run:
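
The hunk ends just before the profile-use step; for context, that step looks roughly like the sketch below (the guide's exact flags may differ; `TEST_SUITE_PROFILE_USE` is the test-suite option for consuming the generated profiles):

```bash
# Reconfigure to compile with the collected profile data, then run the
# actual benchmark with the reference dataset.
% cmake -DTEST_SUITE_PROFILE_GENERATE=OFF \
        -DTEST_SUITE_PROFILE_USE=ON \
        -DTEST_SUITE_RUN_TYPE=ref \
        ../llvm-test-suite
% make
% llvm-lit -o result.json .
```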
@@ -446,7 +446,7 @@ information can be found here:
 - [https://cmake.org/cmake/help/latest/manual/cmake-toolchains.7.html](https://cmake.org/cmake/help/latest/manual/cmake-toolchains.7.html)
 Cross compilation from macOS to iOS is possible with the
-`test-suite/cmake/caches/target-target-*-iphoneos-internal.cmake` CMake cache
+`llvm-test-suite/cmake/caches/target-target-*-iphoneos-internal.cmake` CMake cache
 files; this requires an internal iOS SDK.
 ### Running
@@ -464,10 +464,10 @@ There are two ways to run the tests in a cross compilation setting:
 ```bash
 % cmake -G Ninja -D CMAKE_C_COMPILER=path/to/clang \
-        -C ../test-suite/cmake/caches/target-arm64-iphoneos-internal.cmake \
+        -C ../llvm-test-suite/cmake/caches/target-arm64-iphoneos-internal.cmake \
         -D CMAKE_BUILD_TYPE=Release \
         -D TEST_SUITE_REMOTE_HOST=mydevice \
-        ../test-suite
+        ../llvm-test-suite
 % ninja
 % ninja rsync
 % llvm-lit -j1 -o result.json .