Extraction is the process of reading the test results from the user’s test logs and saving them into an XML file. This is done using the PinDown extract command. The XML file is subsequently read by the PinDown read_results command, after which PinDown can start analyzing the results. Setting up the extraction is an important step in integrating PinDown into the user’s test system.
This user guide helps the user set up the extraction by providing examples for common scenarios.
1 Functional Verification
1.1 Single Build, Multiple Tests
There are multiple tests and only one build.
The test names form part of the file names or file paths.
The test names are found inside log files together with the test results. It is not possible to get the test names from the file paths or file names. There is either one log file per test, or a single log file containing the results of all tests run in sequence.
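When both the names and the results live inside the log, each result line must be matched individually; a single log containing several tests then yields several result entries. The log format below (`TEST <name>: PASS|FAIL`) is an invented example, not a format PinDown prescribes.

```python
import re

# Hypothetical log: one result line per test, all tests run in sequence.
LOG = """\
TEST uart_smoke: PASS
TEST axi_stress: FAIL
"""

def parse_log(text):
    """Return a list of (test_name, passed) tuples found in the log text."""
    return [(m.group(1), m.group(2) == "PASS")
            for m in re.finditer(r"^TEST (\S+): (PASS|FAIL)$", text, re.M)]

print(parse_log(LOG))  # [('uart_smoke', True), ('axi_stress', False)]
```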
The test results are available in a summary file generated by the user. Each row contains the result from one test.
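A user-generated summary with one row per test is the simplest case to extract from. The CSV layout below (columns `test_name,result`) is an assumed example of such a summary, shown only to make the one-row-per-test idea concrete.

```python
import csv
import io

# Hypothetical user-generated summary file, one row per test.
SUMMARY = "test_name,result\nuart_smoke,pass\naxi_stress,fail\n"

def parse_summary(text):
    """Map each test name to a pass/fail boolean."""
    return {row["test_name"]: row["result"] == "pass"
            for row in csv.DictReader(io.StringIO(text))}

print(parse_summary(SUMMARY))  # {'uart_smoke': True, 'axi_stress': False}
```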
1.2 Multiple Builds, Multiple Tests
The results consist of both configurations and tests. Each configuration has its own build results and a list of tests associated with it.
Both the configuration names and the test names form part of the file names or file paths.
The configuration names and the test names are found inside log files together with the build and test results. It is not possible to get the configuration or test names from the file paths or file names.
Both the build results and the test results are available in a summary file generated by the user. Each row contains the results from one test and its associated compilation results.
The builds are compiled in build steps. At the first build step there is no concept of configurations: it is just one initial compile that must succeed before the subsequent build steps can be built. The later build steps perform one compile per configuration. The test results are available in test log files, which also contain the name of the configuration that each test belongs to.
The builds are compiled in build steps as above. The tests are also compiled in steps: the first step compiles test libraries used by all tests (there are no test names at this point), the second step compiles each individual test, and the third step actually runs the test.
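In the scenarios above, each test log carries the name of the configuration the test belongs to, so extraction must pick up both. The header line (`CONFIG: <name>`) and result lines below are hypothetical formats used only to illustrate pairing tests with their configuration.

```python
import re

# Hypothetical test log: a configuration header followed by test results.
LOG = """\
CONFIG: chip_top_fast
TEST axi_stress: FAIL
"""

def parse_config_log(text):
    """Return (configuration_name, [(test_name, passed), ...])."""
    config = re.search(r"^CONFIG: (\S+)$", text, re.M).group(1)
    tests = re.findall(r"^TEST (\S+): (PASS|FAIL)$", text, re.M)
    return config, [(name, res == "PASS") for name, res in tests]

print(parse_config_log(LOG))  # ('chip_top_fast', [('axi_stress', False)])
```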
An index file contains a list of result logs from which the results should be extracted.
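When an index file lists the result logs, extraction first reads the index and then visits each listed log in turn. The one-path-per-line layout below is an assumed format for such an index, not something the source specifies.

```python
# Hypothetical index file: one result-log path per line; blank lines ignored.
INDEX = """\
results/uart_smoke/run.log

results/axi_stress/run.log
"""

def logs_from_index(text):
    """Return the list of result-log paths named in the index."""
    return [line.strip() for line in text.splitlines() if line.strip()]

print(logs_from_index(INDEX))
```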
2 Non-Functional Verification
A test log contains a metric, in this case a performance metric, which must be less than, greater than, or equal to a certain limit for the test to be considered passed. The same method can be used to extract any metric, e.g. synthesis area, max critical path, or functional coverage.
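Turning a metric into a pass/fail verdict means extracting the value and comparing it to the limit. The log line and metric name (`throughput`) below are invented for illustration; the comparison direction depends on the metric (a minimum throughput versus, say, a maximum area).

```python
import re

# Hypothetical log line carrying a performance metric.
LOG = "throughput: 812.5 MB/s"

def metric_passes(text, limit, mode="min"):
    """Extract the metric value and compare it against a limit.

    mode 'min': pass if value >= limit (e.g. throughput).
    mode 'max': pass if value <= limit (e.g. area, critical path).
    """
    value = float(re.search(r"throughput: ([\d.]+)", text).group(1))
    return value >= limit if mode == "min" else value <= limit

print(metric_passes(LOG, 800.0))  # True
print(metric_passes(LOG, 900.0))  # False
```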
Extraction of linting results is special in two ways: 1) there are no pass messages (no news is good news) and 2) there are no test names. The test names are instead set to a combination of the file name and the error ID number, both taken from the lint error message. This allows multiple lint issues to be debugged per file.
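Because lint has no pass messages or test names, each issue can be given a synthetic test name built from the file name and the error ID, so that multiple issues per file are debugged separately. The lint message format and the `__` separator below are hypothetical, chosen only to illustrate the combination.

```python
import re

# Hypothetical lint message: "<file>:<line>: error <ID>: <description>".
MSG = "uart_tx.v:42: error W240: signal never used"

def lint_test_name(message):
    """Combine file name and error ID into a synthetic test name."""
    m = re.match(r"([^:]+):\d+: error (\w+):", message)
    return f"{m.group(1)}__{m.group(2)}"

print(lint_test_name(MSG))  # uart_tx.v__W240
```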