Basic testing with pytest

In Python, several libraries facilitate the writing of tests. You have seen in the documentation section how to use doctest to write simple tests attached to the documentation of a method or a class. While doctest provides a first layer of tests, called unit tests, it is not well suited to more complex tests that go beyond the scope of a single object.

Another widely used test framework in Python is pytest. It is a powerful tool for discovering and running tests, and it offers a number of useful features, such as the ability to rerun only the tests that failed during the last run and support for running tests in parallel.
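
For example, these two features map to the following command-line invocations (the parallel run assumes the pytest-xdist plugin is installed, which is an extra dependency):

# rerun only the tests that failed during the last run
pytest --last-failed

# run tests in parallel, using as many workers as available cores
pytest -n auto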

Basic usage

You will find at ~/robust-programming/tests/test_addition.py a simple test for the add function:

def add(a: int, b: int) -> int:
    """ Add two integers

    Parameters
    ----------
    a: int
        First integer
    b: int
        Second integer

    Returns
    -------
    out: int
        Sum of `a` and `b`

    Examples
    --------
    >>> add(2, 3)
    5
    """
    return a + b

def test_addition():
    assert add(2, 3) == 5
    assert add(-2, 3) == 1
    assert add(2, -3) == -1
    assert add(0, 0) == 0

In this script we define the function together with its test suite, although in practice you will decouple the code and the tests into different files (or even different directories) and import the necessary modules in the test files. Execute this test and observe the result:

pytest ~/robust-programming/tests/test_addition.py


============================ test session starts =============================
platform linux -- Python 3.9.12, pytest-7.1.0, pluggy-1.0.0
Using --randomly-seed=1793550101
rootdir: /home/peloton/codes/robust-programming/practicals
plugins: asdf-2.14.3, typeguard-4.1.5, anyio-3.5.0, cov-4.0.0, factoryboy-2.5.1, rerunfailures-13.0, randomly-3.15.0
collected 1 item                                                             

code/tests/test_addition.py .                                          [100%]

============================= 1 passed in 0.16s ==============================

The report contains several pieces of information:

  • platform: the platform on which the tests ran (linux here), along with the Python and pytest versions used. Report this information in bug reports or merge requests, for example, to ease the reproducibility of results.
  • seed: the initial seed used when running the tests (useful if your program generates random numbers, to allow test runs to be reproduced).
  • plugins: additional plugins loaded (they are not necessarily used by the tests).
  • collected items: the number of tests run. Notice that we check 4 assertions, but the number of collected items is 1: pytest counts test functions, not assertions.
  • passed items: a summary of successes and failures. In this case, our test passes.

Exercise: Change one value of a test to fail it on purpose, re-run pytest, and observe the report.

Use with doctest

You can also run doctest within pytest. Just add the option --doctest-modules when running the test:

pytest --doctest-modules ~/robust-programming/tests/test_addition.py

Note that in this case, pytest collected 2 items (our manually defined test plus the doctest attached to the function add).
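
If you want doctests to be collected automatically without typing the flag each time, one option is to set it in a pytest configuration file (a minimal sketch, assuming a pytest.ini at the root of your project):

[pytest]
addopts = --doctest-modules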

Parametrize tests

Our current way of defining tests is not very compact (a series of copy/pasted assertions with different values). You can instead parametrize your test by templating the operation to perform and specifying the list of input/output values to test.
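
To give an idea of the decorator syntax, here is a minimal sketch on a hypothetical multiply function (deliberately not the add function, so the exercise below stays open):

import pytest

def multiply(a: int, b: int) -> int:
    return a * b

@pytest.mark.parametrize(
    "a, b, expected",
    [(2, 3, 6), (-2, 3, -6), (0, 5, 0)],
)
def test_multiply(a, b, expected):
    # each tuple above becomes one test case
    assert multiply(a, b) == expected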

Exercise: Rewrite the test by using the pytest.mark.parametrize decorator.

Handle exceptions

Being aware of the limitations of your code is as important as knowing what your code does under normal circumstances. Hence it is a good practice to identify and test exceptions raised by your code.

The pytest.raises function allows you to test that a specific exception is raised when certain code is run. This can be useful for verifying that your code is handling errors and exceptions as expected.
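
As an illustration, a minimal sketch using pytest.raises as a context manager (on an unrelated example, so the exercise below stays open):

import pytest

def test_division_by_zero():
    # the test passes only if the expected exception is raised inside the block
    with pytest.raises(ZeroDivisionError):
        1 / 0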

Exercise: Implement a test to check that the function add raises a TypeError if one of the two arguments is a string.

Coverage

One of the advantages of using a tool such as pytest is to compute the coverage of our test suite, that is, the percentage of our code that is exercised by the tests. To enable the coverage report, you can run pytest with the options --cov and --cov-report:

pytest --cov=. --cov-report=term ~/robust-programming/tests/test_addition.py

---------- coverage: platform linux, python 3.9.12-final-0 -----------
Name                          Stmts   Miss  Cover
-------------------------------------------------
code/tests/test_addition.py       7      0   100%
-------------------------------------------------
TOTAL                             7      0   100%

Covered in this context means being hit during the execution of the test suite – which does not necessarily mean it is tested meaningfully.

Note that if you want to inspect in detail which lines of your code are covered or not, you will prefer the argument --cov-report=html. pytest will create a folder htmlcov containing the source of a web page with the line-by-line coverage report for each module.
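
For example (opening the report with firefox is just one possibility; use any browser available on your system):

pytest --cov=. --cov-report=html ~/robust-programming/tests/test_addition.py
firefox htmlcov/index.html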

Free practice

Write tests for your code, and execute them with pytest. You might want to explore other functionalities of pytest, such as fixtures or mocking (for instance via the built-in monkeypatch fixture).
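
As a starting point, here is a minimal sketch using the built-in monkeypatch fixture (current_user is a made-up function for the sake of the example):

import os

def current_user() -> str:
    """Return the name of the current user from the environment."""
    return os.environ.get("USER", "unknown")

def test_current_user(monkeypatch):
    # monkeypatch is a built-in pytest fixture; the change is undone after the test
    monkeypatch.setenv("USER", "alice")
    assert current_user() == "alice"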