Writing regression tests

RegtestData

A class that holds state information that allows a test to communicate with bytesalad (Artifactory). Attributes point to where the input and truth data files live locally and on Artifactory, and where outputs from the test live locally.

RegtestData.input
RegtestData.input_remote
RegtestData.output
RegtestData.truth
RegtestData.truth_remote

It also provides convenience methods to get the input and truth data from Artifactory into the local directory where the test is running.

RegtestData.get_data()
RegtestData.get_truth()
RegtestData.get_asn()
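
For illustration, these attributes and methods fit together roughly as follows (a minimal sketch; the file names and paths here are hypothetical):

# Download an input file from Artifactory; rtdata.input then points at the local copy
rtdata.get_data("miri/image/some_input_rate.fits")
print(rtdata.input)          # local path of the downloaded input file
print(rtdata.input_remote)   # Artifactory location of that input file

# Tell RegtestData which file the test produced, and fetch the matching truth file
rtdata.output = "some_input_cal.fits"
rtdata.get_truth("truth/test_example/some_input_cal.fits")
print(rtdata.truth)          # local path of the downloaded truth file
print(rtdata.truth_remote)   # Artifactory location of the truth file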

Fixtures that return a RegtestData instance to the test:

  • rtdata returns an instance of RegtestData directly to a test function.

  • rtdata_module returns an instance of RegtestData to another fixture (which can be function- or module-scoped). That fixture runs a pipeline or step, and its results are then supplied to individual tests, each of which checks the pipeline or step output in a different way (see the sketch below).
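
A minimal sketch of that pattern (the run_image2_pipeline fixture name, the truth directory, and the i2d product are hypothetical placeholders; rtdata_module and fitsdiff_default_kwargs are the fixtures described on this page):

import pytest
from astropy.io.fits.diff import FITSDiff

from jwst.pipeline.collect_pipeline_cfgs import collect_pipeline_cfgs
from jwst.stpipe import Step

@pytest.fixture(scope="module")
def run_image2_pipeline(rtdata_module):
    # Run the pipeline once per module; the tests below each check one of its outputs
    rtdata = rtdata_module
    rtdata.get_data("miri/image/jw00001001001_01101_00001_mirimage_rate.fits")
    collect_pipeline_cfgs("config")
    Step.from_cmdline(["config/calwebb_image2.cfg", rtdata.input])
    return rtdata

@pytest.mark.bigdata
@pytest.mark.parametrize("suffix", ["cal", "i2d"])
def test_miri_image2_products(run_image2_pipeline, fitsdiff_default_kwargs, suffix):
    # Compare one pipeline product against its truth file
    rtdata = run_image2_pipeline
    output = f"jw00001001001_01101_00001_mirimage_{suffix}.fits"
    rtdata.output = output
    rtdata.get_truth(f"truth/test_miri_image2/{output}")
    diff = FITSDiff(rtdata.output, rtdata.truth, **fitsdiff_default_kwargs)
    assert diff.identical, diff.report()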

JSON breadcrumbs get left behind on failure

  • As in our current system, if a test fails, a JSON specfile gets left behind, and Jenkins slurps these up and uploads the failed outputs to Artifactory.
  • Additionally, a JSON specfile that allows OKification gets left behind. We will have to write a small script that uses these files to okify results.

An example test using the rtdata fixture:

import pytest
from astropy.io.fits.diff import FITSDiff

from jwst.pipeline.collect_pipeline_cfgs import collect_pipeline_cfgs
from jwst.stpipe import Step

@pytest.mark.bigdata
def test_miri_image2_cal(_jail, rtdata, fitsdiff_default_kwargs):
    # Download the input data from Artifactory into the test directory
    rtdata.get_data("miri/image/jw00001001001_01101_00001_mirimage_rate.fits")

    # Run the calwebb_image2 pipeline on the input rate file
    collect_pipeline_cfgs("config")
    args = ["config/calwebb_image2.cfg", rtdata.input]
    Step.from_cmdline(args)
    rtdata.output = "jw00001001001_01101_00001_mirimage_cal.fits"

    # Download the truth file from Artifactory
    rtdata.get_truth("truth/test_miri_image2_cal/jw00001001001_01101_00001_mirimage_cal.fits")

    # Compare the pipeline output with the truth file
    fitsdiff_default_kwargs["rtol"] = 0.0001
    diff = FITSDiff(rtdata.output, rtdata.truth, **fitsdiff_default_kwargs)
    assert diff.identical, diff.report()

Instructions for writing new regression tests

These are instructions for writing regression tests for the new regression test software.

Note: To run the new regression tests, the argument '--env=newdev' needs to be added to the pytest command.

  1. Identify the test that needs to be created. If a Jira ticket in the JP project does not exist, then create one.

  2. If necessary, determine the data that is needed to create the test. This could be data that already exists in the older regression tests, data that exists in the SDP testing suite, or data that needs to be simulated.

  3. If truth data was not obtained at the same time, run the input data through the code to produce the expected output data. Check to make sure that the output data is as expected.

  4. Upload the data to Artifactory. The input data should be uploaded to a directory under jwst-pipeline/newdev/[ins]/[exp_type]. The truth files should be uploaded to a directory under jwst-pipeline/newdev/truth/[name of test]. One possible way to do the upload is shown in the sketch after this list.

  5. Write the test. These should be in the jwst/regtest directory in the code repository. There are examples of how to write tests for a step or a pipeline, whether they have a single output or multiple outputs.

  6. Run the test locally. This can be done by running pytest --env=newdev --bigdata jwst/regtest/ -k [name of test]

  7. Make a pull request to the main repository.
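
One possible way to do the upload in step 4 is through Artifactory's REST deploy API from Python. This is a minimal sketch, not the sanctioned procedure: the server URL, the environment variable holding the API key, and the file names are assumptions, so check with the team for the actual upload workflow.

import os
import requests

# Assumed Artifactory server URL for bytesalad; adjust to the real one
ARTIFACTORY_URL = "https://bytesalad.stsci.edu/artifactory"

# Destination path follows the layout in step 4: jwst-pipeline/newdev/[ins]/[exp_type]
remote_path = "jwst-pipeline/newdev/miri/image/jw00001001001_01101_00001_mirimage_rate.fits"
local_file = "jw00001001001_01101_00001_mirimage_rate.fits"

with open(local_file, "rb") as f:
    response = requests.put(
        f"{ARTIFACTORY_URL}/{remote_path}",
        data=f,
        # Assumed authentication via an Artifactory API key stored in the environment
        headers={"X-JFrog-Art-Api": os.environ["ARTIFACTORY_API_KEY"]},
    )
response.raise_for_status()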
