Python Unit Testing

Note

Changes to this document must be approved by the System Architect (RFC-24). To request changes to these policies, please file an RFC.

This page provides technical guidance to developers writing unit tests for DM’s Python code base. See Software Unit Test Policy for an overview of LSST Stack testing. LSST tests must support being run using the pytest test runner and may use its features. They can be, and traditionally have been, written using the unittest framework, with default test discovery. The use of this framework is described below. If you want to jump straight to a full example of the standard LSST Python testing boilerplate without reading the background, see the section on file descriptor leak testing later in this document.

Introduction to unittest

This document will not attempt to explain full details of how to use unittest but instead shows common scenarios encountered in the LSST codebase.

A simple unittest example is shown below:

 1import unittest
 2import math
 3
 4
 5class DemoTestCase1(unittest.TestCase):
 6    """Demo test case 1."""
 7
 8    def testDemo(self):
 9        self.assertGreater(10, 5)
10        with self.assertRaisesRegex(TypeError, "unsupported operand"):
11            1 + "2"
12
13
14class DemoTestCase2(unittest.TestCase):
15    """Demo test case 2."""
16
17    def testDemo1(self):
18        self.assertNotEqual("string1", "string2")
19
20    def testDemo2(self):
21        self.assertAlmostEqual(3.14, math.pi, places=2)
22
23
24if __name__ == "__main__":
25    unittest.main()

The important things to note in this example are:

  • Run a given test file manually via pytest -sv tests/test_Example.py; see Useful pytest options below for more details. Python tests should not contain a shebang line (#!/usr/bin/env python) and should not be executable (so they cannot be run directly with ./test_Example.py). This avoids problems encountered when running tests on macOS and helps ensure consistency in the way that tests are executed.

  • Test file names must begin with test_ to allow pytest to automatically detect them without requiring an explicit test list.

  • Test classes must, ultimately, inherit from unittest.TestCase in order to be discovered. It is recommended that lsst.utils.tests.TestCase be used as the base class when afw objects are involved; this adds several useful test assert methods.

  • The tests themselves must be methods of the test class with names that begin with test. All other methods and classes will be ignored by the test system but can be used by tests.

  • Specific test asserts, such as assertGreater, assertIsNone or assertIn, should be used wherever possible. It is always better to use a specific assert because the error message will contain more useful detail and the intent is more obvious to someone reading the code. Only use assertTrue or assertFalse if you are checking a boolean value, or a complex statement that is unsupported by other asserts.

  • When testing that an exception is raised always use assertRaisesRegex or assertRaises as a context manager, as shown in line 10 of the above example. Test against the specific exception, do not just check for the generic Exception.

  • If a test method completes, the test passes; if it raises an uncaught exception, the test fails.

We write test files to allow them to be run by pytest rather than simply python, as the former provides more flexibility and enhanced reporting when running tests (such as specifying that only certain tests run).

Supporting Pytest

Note

pytest and its plugins are standard stack EUPS packages and do not have to be installed separately.

All LSST products that are built using scons will execute Python tests using pytest so all tests should be written using it. pytest provides a much richer execution and reporting environment for tests and can be used to run multiple test files together.

The pytest scheme for discovering tests inside Python modules is much more flexible than that provided by unittest. In particular, care must be taken not to have free functions that use a test prefix or non-TestCase test classes that are named with a Test prefix in the test files.
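For example, a module-level helper whose name starts with test would be collected by pytest as a test function, and a helper class named with a Test prefix (but not derived from TestCase) would produce a collection warning. A minimal sketch, with purely illustrative names, of keeping helpers out of discovery:

import unittest


def make_data():
    # Helper function: the name deliberately does not start with "test",
    # so pytest will not try to collect it as a test function.
    return [1, 2, 3]


class DataHelper:
    # Helper class: no "Test" prefix and no TestCase inheritance,
    # so pytest will not attempt to collect tests from it.
    pass


class DataTestCase(unittest.TestCase):
    def testData(self):
        self.assertEqual(len(make_data()), 3)


if __name__ == "__main__":
    unittest.main()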

Note

When pytest is run by scons, full warnings are reported, including DeprecationWarning. Previously these warnings were hidden in the test output, but now they are more obvious, allowing you to fix any problems early.

Useful pytest options

pytest options that are useful when running tests manually:

  • Run with --pdb to drop into the Python debugger on test failures; install pdbpp to get an IPython-like interface in that debugger.

  • Use pytest -k name test.py to only run tests that match the name pattern. This is generally easier and more flexible than specifying individual tests via ClassName.testName in the unittest style.

  • Run with -sv to get extended output during the run (normally pytest grabs all of stdout).

  • Run with -r to get a summary of successes and failures at the end that can be useful when you have many failing tests.

  • Run with -n X to run your tests with X processes (provided by the pytest-xdist plugin) to speed things up (like scons -jX).

  • Run with --durations=N to get a list of the top N longest-running tests.

  • Run with --log-cli-level=INFO -sv to print log messages as they are emitted. See the note about pytest capturing log messages for more details.
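Several of these options can be combined in a single invocation; for example (the test file name and keyword pattern below are hypothetical):

$ pytest -v -n 4 --durations=5 -k Demo tests/test_example.py

This runs only the tests matching "Demo" in the given file across four processes and reports the five slowest tests at the end.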

The tests/SConscript file

The behavior of pytest when invoked by scons is controlled by the tests/SConscript file. At minimum this file should contain the following to enable testing with automated test discovery:

from lsst.sconsUtils import scripts
scripts.BasicSConscript.tests(pyList=[])

pyList is used to specify which Python test files to run. Here the empty list is interpreted as “allow pytest to automatically discover tests”: pytest will scan the directory tree itself to find tests and will run them all together, using the number of subprocesses matching the -j argument given to scons. For this mode to work, all test files must be named test_*.py.

If pyList=None (the historical default) is used, the scons tests target will be used to locate test files using a glob for *.py in the tests directory. This list will then be passed explicitly to the pytest command, bypassing its automatic test discovery.

Automatic test discovery is preferred as this ensures that there is no difference between running the tests with scons and running them with pytest without arguments, and it enables the possibility of adjusting pytest test discovery to add additional testing of all Python files in the package.

If there is pybind11 wrapper code in tests/ that must be compiled for the Python tests to run (for example, a test C++ library that must be loaded by the Python tests), there must be a BasicSConscript.pybind11() entry before the BasicSConscript.tests() entry in the tests/SConscript. Having the pybind11() call come first ensures the necessary code is compiled before any tests are loaded and run.
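A minimal sketch of such a tests/SConscript, assuming a hypothetical compiled test module named testLib (check sconsUtils for the exact arguments your package needs):

from lsst.sconsUtils import scripts

# Build the pybind11 test helper first so that the compiled module exists
# before pytest collects and runs the Python tests.
scripts.BasicSConscript.pybind11(["testLib"])  # "testLib" is a hypothetical name
scripts.BasicSConscript.tests(pyList=[])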

Running tests standalone

pySingles is an optional argument to the tests method that can be used for the rare cases where a test must be run standalone and not with other test files in a shared process.

scripts.BasicSConscript.tests(pyList=[], pySingles=["testSingle.py"])

The tests are still run using pytest but are executed one at a time, without any multi-process execution. Use of this option should be extremely rare. In the base package one test file is used to confirm that the LSST import code is working, and this can only be tested if we know that it hasn’t previously been imported as part of another test. The other reason, so far, to run a test standalone is for test classes that dynamically generate large amounts of test data during the set-up phase. Until it is possible to pin test classes to a particular process with pytest-xdist, such tests interact badly when test methods within the class are allocated to different subprocesses, since each subprocess will generate its own copy of the test files. This can use significantly more disk space and CPU when the tests run, and can even cause Jenkins to fail. Ensure that any files listed in pySingles are named such that they will not be discovered by pytest; the convention is to name these files test*.py, without the underscore.

Where does the output go?

When scons runs any tests, the output from those tests is written to the tests/.tests directory, and a file is created for each test that is executed. For the usual case where pytest is running on multiple test files at once, there is a single file created, pytest-*.out, in that directory, along with an XML file containing the test output in JUnit format. If a test command fails, that output is renamed to have a .failed extension and is reported by scons.

For convenience the output from the main pytest run is also written to standard output so it is visible in the log or in the shell along with other scons output.

Our default logging configuration results in pytest automatically capturing log message output and reporting it separately from output sent to stdout/stderr; successful tests don’t show any log output, while the logs from failed tests are collated at the end. To override this and get log messages printed directly (for example to see logs from successful tests, or to see log messages while you are stepping through a debugger), include --log-cli-level=INFO -sv in your pytest command when running your tests. The first option sets the log level that pytest will send directly to stderr (in this case, INFO), while the -sv options get pytest to show which tests it is executing (-v) and to print all output as it appears (-s).

Common Issues

This section describes some common problems that are encountered when using pytest.

Testing global state

pytest can run tests from more than one file in a single invocation, and this can be used to verify that no global state leaks from one test file into later ones. Run the pytest executable with no arguments:

$ pytest

to run all files in the tests directory named test_*.py. To check that the order of test execution does not matter, it is sometimes useful to run the tests in reverse order by listing the test files explicitly:

$ pytest `ls -r tests/test_*.py`

Note

pytest plugins are usually all enabled by default. In particular, if you install the otherwise excellent pytest-random-order plugin to randomize your test order, it will most likely break your builds because it interacts badly with the pytest-xdist plugin that scons uses when -j is specified. You can install it temporarily for investigative purposes, as long as it is uninstalled afterwards.

Test Skipping and Expected Failures

When writing tests it is important that tests are skipped using the pytest skipping framework or the unittest skipping framework rather than by returning from the test early. Both pytest and unittest support skipping individual tests and entire classes using decorators or skip exceptions; LSST code sometimes raises skip exceptions in setUp or setUpClass methods. It is also possible to indicate that a particular test is expected to fail, in which case it is reported as an error if it unexpectedly passes. Expected failures can be used to write test code that triggers a reported bug before the fix has been implemented, without causing the continuous integration system to fail. One of the primary advantages of using a modern test runner such as pytest is that it is very easy to generate machine-readable pass/fail/skip/xfail statistics to see how the system is evolving over time, and it is also easy to enable code coverage. Jenkins now provides test result information.
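A short sketch of the unittest-style mechanisms (the skip conditions and file name below are purely illustrative):

import os
import sys
import unittest


class SkippingDemoTestCase(unittest.TestCase):
    """Demonstrate skip decorators, skip exceptions, and expected failures."""

    @unittest.skipIf(sys.platform == "darwin", "not supported on macOS")
    def testNotOnMacOS(self):
        self.assertEqual(2 + 2, 4)

    def testNeedsDataFile(self):
        # Skipping with an exception; this also works from setUp or
        # setUpClass. "data.fits" is a hypothetical input file.
        if not os.path.exists("data.fits"):
            raise unittest.SkipTest("data.fits not available")
        self.assertGreater(os.path.getsize("data.fits"), 0)

    @unittest.expectedFailure
    def testKnownBug(self):
        # Exercises a reported bug; reported as an expected failure until
        # the bug is fixed, and flagged if it unexpectedly passes.
        self.assertEqual("1", 1)


if __name__ == "__main__":
    unittest.main()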

Enabling additional Pytest options: flake8

As described in Code MAY be validated with flake8, Python packages can be configured using the setup.cfg file. This configuration is supported by pytest and can be used to enable additional testing or tuning on a per-package basis; pytest uses the [tool:pytest] block in the configuration file. The flake8 block described below is automatically added to any package created via our package template system; the information is included here for reference but generally will not need to be modified from what is in an existing package, or one created via the template.

To enable automatic flake8 testing as part of the normal test execution the following can be added to the setup.cfg file:

[tool:pytest]
addopts = --flake8
flake8-ignore = E133 E226 E228 N802 N803 N806 N812 N813 N815 N816 W503

The addopts parameter adds additional command-line options to the pytest command when it is run either from the command-line or from scons. A wrinkle with the configuration of the pytest-flake8 plugin is that it inherits the max-line-length and exclude settings from the [flake8] section of setup.cfg but you are required to explicitly list the codes to ignore when running within pytest by using the flake8-ignore parameter. One advantage of this approach is that you can ignore error codes from specific files such that the unit tests will pass, but running flake8 from the command line will remind you there is an outstanding issue. This feature should be used sparingly, but can be useful when you wish to enable code linting for the bulk of the project but have some issues preventing full compliance. For example, at the time of writing this is an extract from the setup.cfg file for the lsst.meas.base package:

[flake8]
max-line-length = 110
max-doc-length = 79
ignore = E133, E226, E228, E266, N802, N803, N806, N812, N815, N816, W503
exclude = __init__.py, tests/testLib.py

[tool:pytest]
addopts = --flake8
flake8-ignore = E133 E226 E228 N802 N803 N806 N812 N815 N816 W503
    # TODO: remove E266 lines when Task documentation is converted to rst
    baseMeasurement.py E266
    forcedMeasurement.py E266

Here the E266 error is ignored for two specific files because Doxygen syntax sometimes requires comment code that is not flake8-compliant.

Note

With this configuration each Python file tested by pytest will have flake8 run on it. If scons has not been configured to use pytest in automatic test discovery mode, you will discover that flake8 is only being run on the test files themselves rather than all the Python files in the package.

Using a shared base class

For some tests it is helpful to provide a base class and then share it amongst multiple test classes that are configured with different attributes. If this is required, be careful not to name helper functions with a test prefix, do not give the base class a Test prefix, and ensure the base class does not inherit from TestCase; if it does, pytest will attempt to find tests inside it and will issue a warning if none can be found. Historically LSST code dealt with this by creating a test suite that only included the classes to be tested, omitting the base class; this does not work in a pytest environment.

Consider the following test code:

import unittest


class BaseClass:
    def testParam(self):
        self.assertLess(self.PARAM, 5)


class ThisIsTest1(BaseClass, unittest.TestCase):
    PARAM = 3


if __name__ == "__main__":
    unittest.main()

Here ThisIsTest1 inherits from both the helper class and unittest.TestCase, and pytest runs its single test without attempting to run any tests in BaseClass itself:

$ pytest -v python/examples/test_baseclass.py
======================================= test session starts ========================================
platform darwin -- Python 3.4.3, pytest-3.2.1, py-1.4.30, pluggy-0.3.1 -- /usr/local/bin/python3.4
cachedir: python/examples/.cache
rootdir: python/examples, inifile:
collected 1 items

python/examples/test_baseclass.py::ThisIsTest1::testParam PASSED

===================================== 1 passed in 0.02 seconds =====================================

LSST Utility Test Support Classes

lsst.utils.tests provides several helpful functions and classes for writing Python tests that developers should make use of.

Special Asserts

Inheriting from lsst.utils.tests.TestCase rather than unittest.TestCase enables new asserts that are useful for doing element-wise comparison of two floating-point numpy-like arrays or scalars.

lsst.utils.tests.TestCase.assertFloatsAlmostEqual

Asserts that floating point scalars and/or arrays are equal within the specified tolerance. The default tolerance is significantly tighter than the tolerance used by unittest.TestCase.assertAlmostEqual or numpy.testing.assert_almost_equal; if you are replacing either of those methods you may have to specify rtol and/or atol to prevent failing asserts.

lsst.utils.tests.TestCase.assertFloatsEqual

Asserts that floating point scalars and/or arrays are identically equal.

lsst.utils.tests.TestCase.assertFloatsNotEqual

Asserts that floating point scalars and/or arrays are not equal.
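A brief sketch of these asserts in use (the array values and tolerances are illustrative):

import unittest

import numpy as np

import lsst.utils.tests


class FloatsAssertDemo(lsst.utils.tests.TestCase):
    def testFloatComparisons(self):
        a = np.array([1.0, 2.0, 3.0])
        b = a + 1.0e-8

        # Element-wise comparison with an explicit relative tolerance;
        # the default tolerance is much tighter than assertAlmostEqual.
        self.assertFloatsAlmostEqual(a, b, rtol=1.0e-6)

        # Exact element-wise comparisons.
        self.assertFloatsEqual(a, a.copy())
        self.assertFloatsNotEqual(a, b)


if __name__ == "__main__":
    unittest.main()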

In addition, lsst.geom, lsst.afw.geom, and lsst.afw.image provide further asserts that are loaded into lsst.utils.tests.TestCase when the associated module is imported. These include methods for geometry objects (SpherePoints, Angles, Pairs, Boxes) and for images, such as:

assertSpherePointsAlmostEqual

Assert that two sphere points (SpherePoint) are nearly equal (provided by lsst.geom.testUtils).

assertAnglesAlmostEqual

Assert that two angles (Angle) are nearly equal, ignoring wrap differences by default (provided by lsst.geom.testUtils).

assertPairsAlmostEqual

Assert that two planar pairs (e.g. Point2D or Extent2D) are nearly equal (provided by lsst.geom.testUtils).

assertBoxesAlmostEqual

Assert that two boxes (Box2D or Box2I) are nearly equal (provided by lsst.geom.testUtils).

assertWcsAlmostEqualOverBBox

Compare pixelToSky and skyToPixel for two WCS over a rectangular grid of pixel positions (provided by lsst.afw.geom.utils).

assertImagesAlmostEqual

Assert that two images are nearly equal, including non-finite values (provided by lsst.afw.image.testUtils).

assertMasksEqual

Assert that two masks are equal (provided by lsst.afw.image.testUtils).

assertMaskedImagesAlmostEqual

Assert that two masked images are nearly equal, including non-finite values (provided by lsst.afw.image.testUtils).

Testing Executables

In some cases the test to be executed is a shell script or a compiled binary executable. In order for the test running environment to be aware of these tests, a Python test file must be present that can be run by pytest. If none of the tests require special arguments and all the files with the executable bit set are to be run, this can be achieved by copying the file $UTILS_DIR/tests/test_executables.py to the relevant tests directory. The file is reproduced here:

import unittest
import lsst.utils.tests


class UtilsBinaryTester(lsst.utils.tests.ExecutablesTestCase):
    pass

EXECUTABLES = None
UtilsBinaryTester.create_executable_tests(__file__, EXECUTABLES)

if __name__ == "__main__":
    unittest.main()

The EXECUTABLES variable can be a tuple containing the names of the executables to be run (relative to the directory containing the test file). None indicates that the test script should discover the executables in the same directory as that containing the test file. The call to create_executable_tests initiates executable discovery and creates a test for each executable that is found.

In some cases an explicit test has to be written either because some precondition has to be met before the test will stand a chance of running or because some arguments have to be passed to the executable. To support this the assertExecutable method is available:

def testBinary(self):
    self.assertExecutable("binary1", args=None,
                          root_dir=os.path.dirname(__file__))

where binary1 is the name of the executable relative to the root directory specified in the root_dir optional argument. Arguments can be provided to the args keyword parameter in the form of a sequence of arguments in a list or tuple.

Note

The LSST codebase is currently in transition such that sconsUtils will run executables itself as well as running Python test scripts that run executables. Do not worry about this duplication of test running. When the codebase has migrated to consistently use the testing scheme described in this section sconsUtils will be modified to disable the duplicate testing.

File descriptor leak testing

lsst.utils.tests.MemoryTestCase, despite its name, is used to detect leaks of file descriptors. (It is named that way because it was also used to detect some types of memory leaks in C++ code, and it may be used for similar functionality in the future.) MemoryTestCase should be used in all test files where utils is in the dependency chain.

This example shows the basic structure of an LSST Python unit test module, including MemoryTestCase; the MemoryTester class and the setup_module function are the leak-testing additions:

import unittest
import lsst.utils.tests


class DemoTestCase(lsst.utils.tests.TestCase):
    """Demo test case."""

    def testDemo(self):
        self.assertNotIn("i", "team")


class MemoryTester(lsst.utils.tests.MemoryTestCase):
    pass


def setup_module(module):
    lsst.utils.tests.init()


if __name__ == "__main__":
    lsst.utils.tests.init()
    unittest.main()

which ends up running the single specified test plus the file descriptor leak test:

$ pytest -v test_runner_example.py
============================= test session starts ==============================
platform darwin -- Python 3.6.2, pytest-3.2.1, py-1.4.31, pluggy-0.3.1 -- ~/lsstsw/miniconda/bin/python
cachedir: .cache
rootdir: .../python/examples, inifile:
collected 2 items

test_runner_example.py::DemoTestCase::testDemo PASSED
test_runner_example.py::MemoryTester::testFileDescriptorLeaks <- .../lsstsw/stack/DarwinX86/utils/12.0.rc1+f79d1f7db4/python/lsst/utils/tests.py PASSED


=========================== 2 passed in 0.28 seconds ===========================

Note that MemoryTestCase must always be the final test class in the file. For the file descriptor test to function properly, the lsst.utils.tests.init function must be invoked before any of the tests in the class are executed. Since LSST test scripts are required to run properly when invoked through pytest, the init call belongs in the setup_module function, which pytest calls whenever it loads a test module. Calling init again just before unittest.main is no longer required, but it is useful to keep if the file is also meant to be runnable directly with python. If you see strange failures in the file descriptor leak check when tests are run in parallel, check that lsst.utils.tests.init is being called correctly.

Decorators for iteration

It can be useful to parametrize a class or test function to execute with different combinations of variables. pytest has parametrizing decorators to enable this.
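For example, a plain (non-TestCase) test function can be parametrized with pytest.mark.parametrize; note that this decorator does not work on unittest.TestCase methods, which is where the LSST decorators described below come in:

import pytest


@pytest.mark.parametrize("value, expected", [(2, 4), (3, 9)])
def test_square(value, expected):
    # Runs once per (value, expected) pair.
    assert value**2 == expected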

In addition, we have custom decorators that have been used to provide similar functionality but should generally be avoided in new code. lsst.utils.tests.classParameters is a class decorator for generating classes with different combinations of class variables. This is useful for when the setUp method generates the object being tested: placing the decorator on the class allows generating that object with different values. The decorator takes multiple lists of named parameters (which must have the same length) and iterates over the combinations. For example:

@classParameters(foo=[1, 2], bar=[3, 4])
class MyTestCase(unittest.TestCase):
    ...

will generate two classes, as if you wrote:

class MyTestCase_1_3(unittest.TestCase):
    foo = 1
    bar = 3
    ...

class MyTestCase_2_4(unittest.TestCase):
    foo = 2
    bar = 4
    ...

Note that the values are embedded in the class name, which allows identification of the particular class in the event of a test failure.

lsst.utils.tests.methodParameters is a method decorator for running a test method with different value combinations. This is useful for when you want an individual test to iterate over multiple values. As for classParameters, the decorator takes multiple lists of named parameters (which must have the same length) and iterates over the combinations. For example:

class MyTestCase(unittest.TestCase):
    @methodParameters(foo=[1, 2], bar=[3, 4])
    def testSomething(self, foo, bar):
        ...

will run tests:

testSomething(foo=1, bar=3)
testSomething(foo=2, bar=4)

Note that the method being decorated must be within a subclass of unittest.TestCase, since it relies on the existence of the subTest method for identifying the individual iterations. This use of subTest also means that all iterations will be executed, not stopping at the first failure.

Unicode

It is now commonplace for Unicode to be used in Python code and the LSST test cases should reflect this situation. In particular file paths, externally supplied strings and strings originating from third party software packages may well include code points outside of US-ASCII. LSST tests should ensure that these cases are handled by explicitly including strings that include code points outside of this range. For example,

  • file paths should be generated that include spaces as well as international characters,

  • accented characters should be included for name strings, and

  • unit strings should include µm where appropriate.
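A minimal sketch of a test exercising non-ASCII paths and strings (the specific characters used are illustrative):

import os.path
import tempfile
import unittest


class UnicodePathTestCase(unittest.TestCase):
    def testUnicodePath(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            # Path containing a space and accented characters.
            path = os.path.join(tmpdir, "donnée µm test.txt")
            with open(path, "w", encoding="utf-8") as fh:
                fh.write("flux in µJy at Ångström wavelengths")
            self.assertTrue(os.path.exists(path))


if __name__ == "__main__":
    unittest.main()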

Legacy Test Code

If you have legacy DM unittest suite-based code (code that sets up a unittest.TestSuite object by listing specific test classes and that uses lsst.utils.tests.run rather than unittest.main), please refer to tech note SQR-012 for porting instructions.