Commit d13b91c: Update Readme (unicode-org#126)
Authored by echeran, Nov 18, 2023
1 parent 750f684; 1 changed file (README.md) with 50 additions and 24 deletions
Conceptually, there are three main functional units of the DDT implementation:

![Conceptual model of Data Driven Testing](./ddt_concept_model.png)

## Data generation

Utilizes Unicode (UTS-35) specifications, CLDR data, and existing ICU test
data. Existing ICU test data has the advantage of being already structured
parameters to be set for computing a result.
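For a concrete feel of such test data, here is a hedged sketch of a single test-case entry; every field name below is an illustrative assumption, not the project's actual schema:

```python
import json

# Hypothetical shape of one generated test case: a label, an input string,
# and the parameters needed to compute a result. Field names are for
# illustration only; consult the real testData files for the actual schema.
test_case = {
    "label": "0001",
    "input": "Abc",
    "parameters": {"locale": "en-US", "ignorePunctuation": True},
}

print(json.dumps(test_case, indent=2))
```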

## Test Execution

Test execution consists of a Test Driver script and implementation-specific
executables. The test driver executes each of the configured test
is executed. Results are saved to a JSON output file.

See [executors/README](./executors/README.md) for more details.
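As an illustrative sketch of the driver-to-executor flow (the line-per-test JSON protocol here is an assumption, not the project's actual interface):

```python
import json
import subprocess
import sys

def run_executor(command, test_cases):
    """Send each JSON test case to a hypothetical executor on stdin and
    collect one JSON result per line from stdout. The line-per-test
    protocol is an illustrative assumption, not the project's actual API."""
    payload = "\n".join(json.dumps(tc) for tc in test_cases)
    proc = subprocess.run(command, input=payload, capture_output=True, text=True)
    return [json.loads(line) for line in proc.stdout.splitlines() if line]

# Example: use the Python interpreter itself as a stand-in "executor"
# that echoes each test case back with a dummy result field.
echo_executor = [
    sys.executable, "-c",
    "import sys, json\n"
    "for line in sys.stdin:\n"
    "    tc = json.loads(line)\n"
    "    tc['result'] = tc['input'].upper()\n"
    "    print(json.dumps(tc))",
]
results = run_executor(echo_executor, [{"label": "0001", "input": "abc"}])
print(results)
```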

## Verification

Each test is matched with the corresponding data from the required test
results. A report of the test results is generated. Several kinds of status
values are possible for each test item:

* **Success**: the actual result agrees with the expected result
* **Failure**: a result is generated, but it is not the same as the expected
  value
* **No test run**: the test was not executed by the test implementation for the
  data item
* **Error**: the test resulted in an exception or other behavior not anticipated
  for the test case
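A minimal sketch of this four-way classification (the record shapes and field names here are assumptions, not the verifier's actual format):

```python
def verify(expected_by_label, actual_results):
    """Assign one of the four status values described above to each
    expected test. A hedged sketch only: the real verifier's record
    format and comparison rules are richer than this."""
    actual_by_label = {r["label"]: r for r in actual_results}
    report = {}
    for label, expected in expected_by_label.items():
        actual = actual_by_label.get(label)
        if actual is None:
            report[label] = "no_test_run"   # executor never ran this item
        elif "error" in actual:
            report[label] = "error"         # exception / unanticipated behavior
        elif actual.get("result") == expected:
            report[label] = "success"       # actual agrees with expected
        else:
            report[label] = "failure"       # ran, but wrong result
    return report

statuses = verify(
    {"a": "X", "b": "Y", "c": "Z", "d": "W"},
    [
        {"label": "a", "result": "X"},
        {"label": "b", "result": "Q"},
        {"label": "c", "error": "unsupported option"},
    ],
)
print(statuses)
```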

### Open questions for the verifier
* What should be done if the test driver fails to complete? How can this be
determined?

* Proposal: each test execution shall output a completion message,
indicating that the test driver finished its execution normally, i.e., did not
crash.
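That proposal could be checked along these lines; the `#COMPLETE` sentinel below is purely hypothetical, standing in for whatever completion message is adopted:

```python
def driver_completed(output_lines):
    """Sketch of the proposal above: treat a run as complete only if the
    final output line is an explicit completion message. The '#COMPLETE'
    sentinel is hypothetical, not the project's actual convention."""
    return bool(output_lines) and output_lines[-1].strip() == "#COMPLETE"

print(driver_completed(["{...result 1...}", "#COMPLETE"]))  # finished normally
print(driver_completed(["{...result 1...}"]))               # likely crashed
```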

# How to use DDT

In its first implementation, Data Driven Test uses data files formatted with
JSON structures describing tests and parameters. The data directory structure is
set up as follows:

## A directory `testData` containing
* Test data files for each type of test, e.g., collation, numberformat,
displaynames, etc. Each file contains tests with a label, input, and
parameters.
* Verify files for each test type. Each contains a list of test labels and
expected results from the corresponding tests.

## Directory `testOutput`

This contains a subdirectory for each executor. The output file from each test
is stored in the appropriate subdirectory. Each test result contains the label
string.

The results file contains information identifying the test environment as well
as the result from each test. As an example, collation test results from the
`testOutput/node` file are shown here:

```
{
...
}
}
```
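As a hedged sketch of the kind of record such a results file might hold, environment metadata is paired with one entry per test; all field names here are illustrative assumptions rather than the actual output schema:

```python
import json

# Hypothetical shape of a testOutput results file: information identifying
# the test environment, plus one record per executed test.
results_file = {
    "platform": {"executor": "node", "icuVersion": "icu73"},
    "tests": [
        {"label": "0001", "result": "a<b"},
        {"label": "0002", "result": "b<c"},
    ],
}
print(json.dumps(results_file, indent=2))
```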

## Directory `testReports`
This directory stores summary results from verifying the tests performed by each executor. Included in the `testReports` directory are:

* `index.html`: shows all tests run and verified for all executors and versions. Requires a webserver to display this properly.

* `exec_summary.json`: contains summarized results for each pair (executor, icu version) in a graphical form. Contains links to details for each test pair.

* subdirectory for each executor, each containing verification of the tested icu versions, e.g., `node/`, `rust/`, etc.

Under each executor, one or more ICU version files are created, each containing:

* `verifier_test_report.html` - for showing results to a user via a web server

* `verifier_test_report.json` - containing verifier output for programmatic use

* `failing_tests.json` - a list of all failing tests with input values
* `pass.json` - list of test cases that match their expected results
* `test_errors.json` - list of test cases where the executor reported an error
* `unsupported.json` - list of test cases that are not expected to be supported in this version
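The fan-out into these per-category files can be sketched roughly as follows; the record shape and its `status` field are assumptions for illustration, not the verifier's actual data model:

```python
import json
from pathlib import Path

def write_category_files(report_dir, verified):
    """Split verified test records into the per-category JSON files listed
    above. `verified` maps each test label to a record carrying a 'status'
    field; that record shape is an illustrative assumption."""
    buckets = {
        "failure": "failing_tests.json",
        "success": "pass.json",
        "error": "test_errors.json",
        "unsupported": "unsupported.json",
    }
    out = Path(report_dir)
    out.mkdir(parents=True, exist_ok=True)
    for status, filename in buckets.items():
        records = [r for r in verified.values() if r.get("status") == status]
        (out / filename).write_text(json.dumps(records, indent=2))

# Example run against a hypothetical executor/version subdirectory.
write_category_files(
    "testReports/node/icu73",
    {
        "0001": {"label": "0001", "status": "success"},
        "0002": {"label": "0002", "status": "failure", "input": "abc"},
    },
)
```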

The `verifier_test_report.json` file contains information on tests run and comparison with the expected results. At a minimum, each report contains:

* The executor and test type
* Date and time of the test
differences such as missing or extra characters or substitutions found in
output data.

## Contributor setup

Requirements to run Data Driven Testing code locally:

- Install the Python package `jsonschema`
* In a standard Python environment, you can run
```
pip install jsonschema
```
  * Some operating systems (e.g., Debian) may prefer that you install
    the OS package that encapsulates the Python package
```
sudo apt-get install python-jsonschema
```
- Install the minimum Rust version supported by ICU4X
  * The minimum supported Rust version ("MSRV") can be found in the
[`rust-toolchain.toml` file](https://github.com/unicode-org/icu4x/blob/main/rust-toolchain.toml)
* To view your current default Rust version (and other locally installed Rust versions):
```
rustup show
```
* To update to the latest Rust version:
```
rustup update
```
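Once installed, `jsonschema` can be exercised with a small validation check; the schema below is a toy for illustration, not one of the project's actual schema files:

```python
from jsonschema import ValidationError, validate

# Toy schema loosely resembling a test-data entry; illustration only.
schema = {
    "type": "object",
    "properties": {
        "label": {"type": "string"},
        "input": {"type": "string"},
    },
    "required": ["label", "input"],
}

validate({"label": "0001", "input": "abc"}, schema)  # passes silently

try:
    validate({"label": "0001"}, schema)  # missing required "input"
except ValidationError as e:
    print("invalid:", e.message)
```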

# History
Data Driven Test was initiated in 2022 at Google. The first release of the
