Testing

Traditional Organizations

  • Months of planning
  • Many months of development
  • Many months of testing / QA
  • Release once every few months or once a year
  • (Waterfall)

Quality Assurance

  • Nightly build
  • Testing new features
  • Testing bug fixes
  • Maybe testing critical features again and again...
  • ...or maybe not.
  • Regression testing?
  • Testing / QA has a huge, boring, repetitive part.
  • It is also very slow and expensive.

Web-age Organizations

  • Very frequent releases (20-30 / day!)
  • Very little time for manual testing
  • CI - Continuous Integration
  • CD - Continuous Delivery
  • CD - Continuous Deployment

TDD vs Testing as an Afterthought

  • TDD - Test Driven Development.

  • Testing as an afterthought:

  • Existing product

  • Mostly works

  • Hard to test

Why test?

  • Business Value
  • Avoid regression
  • Better Software Design (TDD)
  • Your Sanity

Testing Modes

  • Functional testing
  • Unit testing
  • Integration testing
  • Acceptance testing (BDD Behavior-driven development?)
  • White box
  • Black box
  • Regression testing
  • Usability testing
  • Performance testing
  • Load testing
  • Security testing
  • ...

Testing Applications

  • Web site
  • Web application
  • Web API / Microservice (JSON, XML)
  • Mobile Application
  • Desktop Application (GUI)
  • Command-line tool (CLI)
  • Batch process

Testing - What to test?

  • How would you check that they work as expected?
  • What if they get invalid input?
  • Edge cases? (e.g. 0, -1, 131314134141)
  • A value that is too big or too small.
  • Invalid or no response from third-party system.
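
As a tiny illustration of these questions, a hypothetical div helper could be probed with a normal case, an edge case, and invalid input (the helper is made up for this sketch):

```python
def div(x, y):
    # hypothetical helper, used only for this illustration
    return x / y

# normal case
assert div(6, 2) == 3

# edge case: zero numerator
assert div(0, 5) == 0

# invalid input: division by zero should raise
try:
    div(1, 0)
except ZeroDivisionError:
    pass
else:
    raise AssertionError("expected ZeroDivisionError")
```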

Testing in Python

  • Doctest
  • Unittest
  • Pytest
  • Nose
  • Nimoy
  • Hypothesis
  • Selenium
  • Tox

Testing Environment

  • Git (or other VCS)
  • Virtualenv
  • Docker
  • ...

Testing Setup - Fixture

  • Web server
  • Databases
  • Other machines
  • Devices
  • External services

Testing Resources

  • AB Testing - Alan and Brent talk about Modern Testing

Testing with unittest

Use a module

We have a module called mymath that has two methods: add and div.

import mymath
print( mymath.add(2, 3) )
print( mymath.div(6, 2) )

import mymath
import sys

if len(sys.argv) != 4:
    exit("Usage: {} [add|div] INT INT".format(sys.argv[0]))

if sys.argv[1] == 'add':
    print(mymath.add(int(sys.argv[2]), int(sys.argv[3])))
if sys.argv[1] == 'div':
    print(mymath.div(int(sys.argv[2]), int(sys.argv[3])))

Test a module

import unittest
import mymath

class TestMath(unittest.TestCase):

    def test_match(self):
        self.assertEqual(mymath.add(2, 3), 5)
        self.assertEqual(mymath.div(6, 3), 2)
        self.assertEqual(mymath.div(42, 1), 42)
        self.assertEqual(mymath.add(-1, 1), 0)

if __name__ == '__main__':
    unittest.main()

The tested module


def add(x, y):
    """Adding two numbers

    >>> add(2, 3)
    5

    """
    return x + y

def div(x, y):
    """Dividing two numbers

    >>> div(8, 2)
    4.0
    >>> div(8, 0)
    Traceback (most recent call last):
    ...
    ZeroDivisionError: division by zero

    """
    return x / y


#print add(2, 3, 4)
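
The doctests embedded above can be run with python -m doctest mymath.py -v, or programmatically. A self-contained sketch (the function is inlined here instead of being imported from mymath):

```python
import doctest

def div(x, y):
    """Dividing two numbers

    >>> div(8, 2)
    4.0
    """
    return x / y

# Run the examples found in div's docstring.
# On success this prints nothing; failures are reported on stdout.
doctest.run_docstring_examples(div, {"div": div}, verbose=False)
```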

Testing - skeleton

import unittest

def add(x, y):
    return x+y

class Something(unittest.TestCase):

    def setUp(self):
        pass
        #print("setup")

    def tearDown(self):
        pass
        #print("teardown")

    def test_something(self):
        self.assertEqual(add(2, 3), 5)
        self.assertEqual(add(0, 3), 3)
        self.assertEqual(add(1, 1), 2)


    def test_other(self):
        self.assertEqual(add(-3, 3), 0)
        self.assertEqual(add(-3, 2), -1)

    
if __name__ == '__main__':
    unittest.main()

Testing

import unittest

class TestReg(unittest.TestCase):

    def setUp(self):
        self.str_number = "123"
        self.str_not_number = "12x"

    def test_match1(self):
        self.assertEqual(1, 1)
        self.assertRegex(self.str_number, r'^\d+$')

    def test_match2(self):
        self.assertEqual(1, 1)
        self.assertRegex(self.str_not_number, r'^\d+$')

if __name__ == '__main__':
    unittest.main()
 

Test examples

Testing with PyTest

Pytest features

  • Organize and run test per directory (test discovery)
  • Run tests by name matching
  • Run tests by mark (smoke, integration, db)
  • Run tests in parallel with the xdist plugin.
  • Create your own fixtures and distribute them.
  • Create your own plugins and distribute them.

Test methods

  • Functional tests

  • Unit test

  • Integration test

  • Acceptance test

  • Regression test

  • Code quality tests

  • Load test

  • Stress test

  • Performance test

  • Endurance test

Pytest setup

Python 2

virtualenv venv2
source venv2/bin/activate
pip install pytest

Python 3

virtualenv venv3 -p python3
source venv3/bin/activate
pip install pytest

Python 3 Debian/Ubuntu

apt-get install python3-pytest

Python 3 RedHat/CentOS

yum install python3-pytest

Pytest - AUT - Application Under Test

This is a simple "application" and even that has a bug. Later we'll discuss much more complex cases, but for understanding the Pytest testing framework this simple one will do.


def add(x, y):
    return x * y

How to use the module?

Before we try to test this function, let's see how we could use it.

There is nothing special here; I just wanted to show it, because testing works basically the same way.

import mymath

print(mymath.add(2, 2))

from mymath import add

print(add(2, 2))

Pytest - simple passing test

We don't need much to test such code. Just the following things:

  • The filename starts with test_
  • A function that starts with test_
  • Call the test function with some parameters and check if the results are as expected.

Specifically, Python's assert statement expects to receive a True (or False) value. If it is True, the code keeps running as if nothing had happened.

If it is False, an exception is raised.
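
The behavior of the bare assert statement can be demonstrated without pytest:

```python
# True: execution continues as if nothing happened
assert 1 + 1 == 2

# False: an AssertionError is raised
try:
    assert 1 + 1 == 3
except AssertionError:
    print("assert raised AssertionError")
```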

import mymath

def test_add():
    assert mymath.add(2, 2) == 4

We can run the tests in two different ways. The regular would be to type in pytest and the name of the test file. In some setups this might not work and then we can also run python -m pytest and the name of the test file.

pytest test_mymath.py
python -m pytest test_mymath.py
============================= test session starts ==============================
platform linux -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /home/gabor/work/slides/python/examples/pytest/math
plugins: flake8-1.0.6, dash-1.17.0
collected 1 item

test_mymath.py .                                                         [100%]

============================== 1 passed in 0.00s ===============================

The top of the output shows some information about the environment, (version numbers, plugins) then "collected" tells us how many test-cases were found by pytest. Each test function is one test case.

Then we see the name of the test file and a single dot indicating that there was one test-case and it was successful.

After the test run we could also see the exit code of the program by typing in echo $? on Linux or Mac or echo %ERRORLEVEL% on Windows.

$ echo $?
0
> echo %ERRORLEVEL%
0

Pytest failing test in one function

Once we had that passing test we might have shared our code, just to receive complaints that it does not always work properly. One user might complain that passing in 2 and 3 does not give the expected 5.

So to investigate, the first thing you need to do is write a test case for the reported input, expecting it to pass. So you add a second assertion.


import mymath

def test_add():
    assert mymath.add(2, 2) == 4
    assert mymath.add(2, 3) == 5

To your surprise the tests fails with the following output:

============================= test session starts ==============================
platform linux -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /home/gabor/work/slides/python/examples/pytest/math
plugins: flake8-1.0.6, dash-1.17.0
collected 1 item

test_mymath_more.py F                                                    [100%]

=================================== FAILURES ===================================
___________________________________ test_add ___________________________________

    def test_add():
        assert mymath.add(2, 2) == 4
>       assert mymath.add(2, 3) == 5
E       assert 6 == 5
E        +  where 6 = <function add at 0x7f6bc3c63160>(2, 3)
E        +    where <function add at 0x7f6bc3c63160> = mymath.add

test_mymath_more.py:6: AssertionError
=========================== short test summary info ============================
FAILED test_mymath_more.py::test_add - assert 6 == 5
============================== 1 failed in 0.02s ===============================

We see the collected 1 item because we still only have one test function.

Then next to the test file we see the letter F indicating that we had a single test failure.

Then we can see the details of the test failure. Among other things we can see the actual value returned by the add function and the expected value.

Knowing that assert only receives the True or False value of the comparison, you might wonder how this happened. This is part of the magic of pytest: it uses introspection to see what expression was passed to assert, and it can print out the details, showing both the expected and the actual value. This helps in understanding the real problem behind the scenes.

You can also check the exit code; it will be something different from 0, indicating that something did not work. The exit code is used by CI systems to see which test runs were successful and which failed.

$ echo $?
1
> echo %ERRORLEVEL%
1

One big disadvantage of having two asserts in the same test function is that we don't have a clear indication that the first assert was successful. Moreover, if the first assert fails then the second is not even executed, so we would not know the status of that case.

Pytest failing test separated

Instead of putting the two asserts in the same test function we could also put them in separate ones, as in this example.


import mymath

def test_add():
    assert mymath.add(2, 2) == 4

def test_again():
    assert mymath.add(2, 3) == 5

The result of running this test file shows that it collected 2 items as there were two test functions.

Then next to the test file we see a dot indicating the successful test case and an F indicating the failed test. The more detailed test report helps.

At the bottom of the report you can also see that now it indicates 1 failed and 1 passed test.

============================= test session starts ==============================
platform linux -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /home/gabor/work/slides/python/examples/pytest/math
plugins: flake8-1.0.6, dash-1.17.0
collected 2 items

test_mymath_more_separate.py .F                                          [100%]

=================================== FAILURES ===================================
__________________________________ test_again __________________________________

    def test_again():
>       assert mymath.add(2, 3) == 5
E       assert 6 == 5
E        +  where 6 = <function add at 0x7f4bfffa2c10>(2, 3)
E        +    where <function add at 0x7f4bfffa2c10> = mymath.add

test_mymath_more_separate.py:8: AssertionError
=========================== short test summary info ============================
FAILED test_mymath_more_separate.py::test_again - assert 6 == 5
========================= 1 failed, 1 passed in 0.03s ==========================
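
A third option, not used in these slides, is pytest's parametrize marker: each parameter tuple becomes a separate, individually reported test case. A sketch, using a correct local add function so the example is self-contained:

```python
import pytest

def add(x, y):
    # correct stand-in for mymath.add, defined locally for this sketch
    return x + y

@pytest.mark.parametrize("x, y, expected", [
    (2, 2, 4),
    (2, 3, 5),
    (-1, 1, 0),
])
def test_add(x, y, expected):
    assert add(x, y) == expected
```

Running pytest on such a file reports three test cases, so a failure in one input does not hide the results of the others.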

Pytest run all the test files

  • In the math directory run pytest and let it find all the test files and all the test functions.
pytest

Exercise: test simple module

  • Take the standard math library and write tests for some of the functions.

Pytest expected exception

  • What if raising an exception is part of the specification of a function?
  • That given certain (incorrect) input it will raise a certain exception?
  • How can we test that we get the right exception. The expected exception?

Pytest a nice Fibonacci example

This is a nice implementation of the Fibonacci function. If we look at the way we can use it we see that it works well for 10.

def fib(n):
    a, b = 1, 1
    for _ in range(1, n):
        a, b = b, a+b
    return a

from fibonacci import fib

print(fib(10))

Output:

55

Pytest testing Fibonacci

from fibonacci import fib

def test_fib():
    assert fib(10) == 55

Output:

============================= test session starts ==============================
platform linux -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /home/gabor/work/slides/python/examples/pytest/fib1
plugins: flake8-1.0.6, dash-1.17.0
collected 1 item

test_fibonacci.py .                                                      [100%]

============================== 1 passed in 0.00s ===============================

  • What if the user calls it with -3? We get 1 as the result. We don't want that.

Pytest expected exception

def fib(n):
    if n < 1:
        raise ValueError(f'Invalid parameter {n}')
    a, b = 1, 1
    for _ in range(1, n):
        a, b = b, a+b
    return a

from fibonacci import fib

print(fib(10))
print(fib(-1))

Output:

55
Traceback (most recent call last):
  File "use_fib.py", line 4, in <module>
    print(fib(-1))
  File "fibonacci.py", line 3, in fib
    raise ValueError(f'Invalid parameter {n}')
ValueError: Invalid parameter -1

Pytest testing expected exception

import pytest
from fibonacci import fib

def test_fib():
    assert fib(10) == 55

def test_fib_negative():
    with pytest.raises(Exception) as err:
        fib(-1)
    assert err.type == ValueError
    assert str(err.value) == 'Invalid parameter -1'

def test_fib_negative_again():
    with pytest.raises(ValueError) as err:
        fib(-1)
    assert str(err.value) == 'Invalid parameter -1'

Output:

============================= test session starts ==============================
platform linux -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /home/gabor/work/slides/python/examples/pytest/fib2
plugins: flake8-1.0.6, dash-1.17.0
collected 3 items

test_fibonacci.py ...                                                    [100%]

============================== 3 passed in 0.01s ===============================

Pytest Change the text of the exception

def fib(n):
    if n < 1:
        raise ValueError(f'Invalid parameter was given {n}')
    a, b = 1, 1
    for _ in range(1, n):
        a, b = b, a+b
    return a

import pytest
from fibonacci import fib

def test_fib():
    assert fib(10) == 55

def test_fib_negative():
    with pytest.raises(Exception) as err:
        fib(-1)
    assert err.type == ValueError
    assert str(err.value) == 'Invalid parameter -1'

def test_fib_negative_again():
    with pytest.raises(ValueError) as err:
        fib(-1)
    assert str(err.value) == 'Invalid parameter -1'

Output:

============================= test session starts ==============================
platform linux -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /home/gabor/work/slides/python/examples/pytest/fib3
plugins: flake8-1.0.6, dash-1.17.0
collected 3 items

test_fibonacci.py .FF                                                    [100%]

=================================== FAILURES ===================================
______________________________ test_fib_negative _______________________________

    def test_fib_negative():
        with pytest.raises(Exception) as err:
            fib(-1)
        assert err.type == ValueError
>       assert str(err.value) == 'Invalid parameter -1'
E       AssertionError: assert 'Invalid para... was given -1' == 'Invalid parameter -1'
E         - Invalid parameter -1
E         + Invalid parameter was given -1
E         ?                   ++++++++++

test_fibonacci.py:11: AssertionError
___________________________ test_fib_negative_again ____________________________

    def test_fib_negative_again():
        with pytest.raises(ValueError) as err:
            fib(-1)
>       assert str(err.value) == 'Invalid parameter -1'
E       AssertionError: assert 'Invalid para... was given -1' == 'Invalid parameter -1'
E         - Invalid parameter -1
E         + Invalid parameter was given -1
E         ?                   ++++++++++

test_fibonacci.py:16: AssertionError
=========================== short test summary info ============================
FAILED test_fibonacci.py::test_fib_negative - AssertionError: assert 'Invalid...
FAILED test_fibonacci.py::test_fib_negative_again - AssertionError: assert 'I...
========================= 2 failed, 1 passed in 0.03s ==========================

Pytest Missing exception

def fib(n):
#    if n < 1:
#        raise ValueError(f'Invalid parameter {n}')
    a, b = 1, 1
    for _ in range(1, n):
        a, b = b, a+b
    return a

import pytest
from fibonacci import fib

def test_fib():
    assert fib(10) == 55

def test_fib_negative():
    with pytest.raises(Exception) as err:
        fib(-1)
    assert err.type == ValueError
    assert str(err.value) == 'Invalid parameter -1'

def test_fib_negative_again():
    with pytest.raises(ValueError) as err:
        fib(-1)
    assert str(err.value) == 'Invalid parameter -1'

Output:

============================= test session starts ==============================
platform linux -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /home/gabor/work/slides/python/examples/pytest/fib4
plugins: flake8-1.0.6, dash-1.17.0
collected 3 items

test_fibonacci.py .FF                                                    [100%]

=================================== FAILURES ===================================
______________________________ test_fib_negative _______________________________

    def test_fib_negative():
        with pytest.raises(Exception) as err:
>           fib(-1)
E           Failed: DID NOT RAISE <class 'Exception'>

test_fibonacci.py:9: Failed
___________________________ test_fib_negative_again ____________________________

    def test_fib_negative_again():
        with pytest.raises(ValueError) as err:
>           fib(-1)
E           Failed: DID NOT RAISE <class 'ValueError'>

test_fibonacci.py:15: Failed
=========================== short test summary info ============================
FAILED test_fibonacci.py::test_fib_negative - Failed: DID NOT RAISE <class 'E...
FAILED test_fibonacci.py::test_fib_negative_again - Failed: DID NOT RAISE <cl...
========================= 2 failed, 1 passed in 0.03s ==========================

Pytest Other exception is raised

def fib(n):
    if n < 1:
        raise Exception(f'Invalid parameter {n}')
    a, b = 1, 1
    for _ in range(1, n):
        a, b = b, a+b
    return a

import pytest
from fibonacci import fib

def test_fib():
    assert fib(10) == 55

def test_fib_negative():
    with pytest.raises(Exception) as err:
        fib(-1)
    assert err.type == ValueError
    assert str(err.value) == 'Invalid parameter -1'

def test_fib_negative_again():
    with pytest.raises(ValueError) as err:
        fib(-1)
    assert str(err.value) == 'Invalid parameter -1'

Output:

============================= test session starts ==============================
platform linux -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /home/gabor/work/slides/python/examples/pytest/fib5
plugins: flake8-1.0.6, dash-1.17.0
collected 3 items

test_fibonacci.py .FF                                                    [100%]

=================================== FAILURES ===================================
______________________________ test_fib_negative _______________________________

    def test_fib_negative():
        with pytest.raises(Exception) as err:
            fib(-1)
>       assert err.type == ValueError
E       AssertionError: assert <class 'Exception'> == ValueError
E        +  where <class 'Exception'> = <ExceptionInfo Exception('Invalid parameter -1') tblen=2>.type

test_fibonacci.py:10: AssertionError
___________________________ test_fib_negative_again ____________________________

    def test_fib_negative_again():
        with pytest.raises(ValueError) as err:
>           fib(-1)

test_fibonacci.py:15: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

n = -1

    def fib(n):
        if n < 1:
>           raise Exception(f'Invalid parameter {n}')
E           Exception: Invalid parameter -1

fibonacci.py:3: Exception
=========================== short test summary info ============================
FAILED test_fibonacci.py::test_fib_negative - AssertionError: assert <class '...
FAILED test_fibonacci.py::test_fib_negative_again - Exception: Invalid parame...
========================= 2 failed, 1 passed in 0.03s ==========================

Pytest No exception is raised

def fib(n):
    if n < 1:
        return None
    a, b = 1, 1
    for _ in range(1, n):
        a, b = b, a+b
    return a

import pytest
from fibonacci import fib

def test_fib():
    assert fib(10) == 55

def test_fib_negative():
    with pytest.raises(Exception) as err:
        fib(-1)
    assert err.type == ValueError
    assert str(err.value) == 'Invalid parameter -1'

def test_fib_negative_again():
    with pytest.raises(ValueError) as err:
        fib(-1)
    assert str(err.value) == 'Invalid parameter -1'

Output:

============================= test session starts ==============================
platform linux -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /home/gabor/work/slides/python/examples/pytest/fib6
plugins: flake8-1.0.6, dash-1.17.0
collected 3 items

test_fibonacci.py .FF                                                    [100%]

=================================== FAILURES ===================================
______________________________ test_fib_negative _______________________________

    def test_fib_negative():
        with pytest.raises(Exception) as err:
>           fib(-1)
E           Failed: DID NOT RAISE <class 'Exception'>

test_fibonacci.py:9: Failed
___________________________ test_fib_negative_again ____________________________

    def test_fib_negative_again():
        with pytest.raises(ValueError) as err:
>           fib(-1)
E           Failed: DID NOT RAISE <class 'ValueError'>

test_fibonacci.py:15: Failed
=========================== short test summary info ============================
FAILED test_fibonacci.py::test_fib_negative - Failed: DID NOT RAISE <class 'E...
FAILED test_fibonacci.py::test_fib_negative_again - Failed: DID NOT RAISE <cl...
========================= 2 failed, 1 passed in 0.03s ==========================

Exercise: test more exceptions

  • Find another case that will break the code.
  • Then change the code so that it will not break.
  • Then write a test to verify it.

Solution: test more exceptions

def fib(n):
    if n < 1 or int(n) != n:
        raise ValueError(f'Invalid parameter {n}')
    a, b = 1, 1
    for _ in range(1, n):
        a, b = b, a+b
    return a

import pytest
from fibonacci import fib

def test_fib():
    assert fib(10) == 55

def test_fib_negative():
    with pytest.raises(Exception) as err:
        fib(-1)
    assert err.type == ValueError
    assert str(err.value) == 'Invalid parameter -1'

def test_fib_negative_again():
    with pytest.raises(ValueError) as err:
        fib(-1)
    assert str(err.value) == 'Invalid parameter -1'

def test_fib_float():
    with pytest.raises(ValueError) as err:
        fib(3.5)
    assert str(err.value) == 'Invalid parameter 3.5'

Output:

============================= test session starts ==============================
platform linux -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /home/gabor/work/slides/python/examples/pytest/fib7
plugins: flake8-1.0.6, dash-1.17.0
collected 4 items

test_fibonacci.py ....                                                   [100%]

============================== 4 passed in 0.01s ===============================

PyTest: Multiple Failures

def test_one():
    assert True
    print('one')

def test_two():
    assert False
    print('two')

def test_three():
    assert True
    print('three')

def test_four():
    assert False
    print('four')

def test_five():
    assert True
    print('five')

PyTest: Multiple Failures output

test_failures.py .F.F.
$ pytest -v test_failures.py

test_failures.py::test_one PASSED
test_failures.py::test_two FAILED
test_failures.py::test_three PASSED
test_failures.py::test_four FAILED
test_failures.py::test_five PASSED
$ pytest -s test_failures.py

one
three
five

PyTest Selective running of test functions

pytest test_failures.py::test_one

pytest test_failures.py::test_two

PyTest: stop on first failure

  • --maxfail
  • -x
pytest -x
pytest --maxfail 42

Pytest: expect a test to fail (xfail or TODO tests)

  • xfail

Use the @pytest.mark.xfail decorator to mark the test.

from mymod_1 import is_anagram
import pytest

def test_anagram():
   assert is_anagram("abc", "acb")
   assert is_anagram("silent", "listen")
   assert not is_anagram("one", "two")

@pytest.mark.xfail(reason = "Bug #42")
def test_multiword_anagram():
   assert is_anagram("ana gram", "naga ram")
   assert is_anagram("anagram", "nag a ram")
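
The is_anagram function under test is not shown in these slides; a minimal implementation consistent with the tests above might look like this (an assumption, not the original module):

```python
def is_anagram(a, b):
    # Two strings are anagrams if they contain exactly the same
    # characters with the same counts. Whitespace is counted too,
    # which is why the "nag a ram" case fails (Bug #42).
    return sorted(a) == sorted(b)
```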

Pytest: expect a test to fail (xfail or TODO tests)

$ pytest test_mymod_3.py
======= test session starts =======
platform darwin -- Python 3.5.2, pytest-3.0.7, py-1.4.33, pluggy-0.4.0
Using --random-order-bucket=module
Using --random-order-seed=557111

rootdir: /Users/gabor/work/training/python/examples/pytest, inifile:
plugins: xdist-1.16.0, random-order-0.5.4
collected 2 items

test_mymod_3.py .x

===== 1 passed, 1 xfailed in 0.08 seconds =====

PyTest: show xfailed tests with -rx

  • -rx
$ pytest -rx test_mymod_3.py
======= test session starts =======
platform darwin -- Python 3.5.2, pytest-3.0.7, py-1.4.33, pluggy-0.4.0
Using --random-order-bucket=module
Using --random-order-seed=557111

rootdir: /Users/gabor/work/training/python/examples/pytest, inifile:
plugins: xdist-1.16.0, random-order-0.5.4
collected 2 items

test_mymod_3.py .x

===== short test summary info =====
XFAIL test_mymod_3.py::test_multiword_anagram
  Bug #42

===== 1 passed, 1 xfailed in 0.08 seconds =====

Pytest: skipping tests

import sys
import pytest

@pytest.mark.skipif(sys.platform != 'darwin', reason="Mac tests")
def test_mac():
    assert True

@pytest.mark.skipif(sys.platform != 'linux', reason="Linux tests")
def test_linux():
    assert True

@pytest.mark.skipif(sys.platform != 'win32', reason="Windows tests")
def test_windows():
    assert True

@pytest.mark.skip(reason="To show we can skip tests without any condition.")
def test_any():
    assert True

pytest test_on_condition.py
collected 4 items

test_on_condition.py ss.s

==== 1 passed, 3 skipped in 0.02 seconds ====

Pytest: show skipped tests with -rs

  • -rs
$ pytest -rs test_on_condition.py
collected 4 items

test_on_condition.py s.ss

===== short test summary info =====
SKIP [1] test_on_condition.py:15: To show we can skip tests without any condition.
SKIP [1] test_on_condition.py:7: Linux tests
SKIP [1] test_on_condition.py:11: Windows tests

==== 1 passed, 3 skipped in 0.03 seconds ====

Pytest: show extra test summary info with -r

  • -r

  • -ra

  • f - failed

  • E - error

  • s - skipped

  • x - xfailed

  • X - xpassed

  • p - passed

  • P - passed with output

  • a - all except pP

pytest -rx  - xfail, expected to fail
pytest -rs  - skipped
pytest -ra  - all the special cases

import pytest

def test_pass():
    assert True

def test_fail():
    assert False

@pytest.mark.skip(reason="Unconditional skip")
def test_with_skip():
    assert True

@pytest.mark.skipif(True, reason="Conditional skip")
def test_with_skipif():
    assert True

@pytest.mark.skipif(False, reason="Conditional skip")
def test_with_skipif_but_run():
    assert True


@pytest.mark.xfail(reason = "Expect to fail and failed")
def test_with_xfail_and_fail():
   assert False

@pytest.mark.xfail(reason = "Expect to fail but passed")
def test_with_xfail_but_pass():
   assert True

pytest -h

Pytest: skipping tests output in verbose mode

$ pytest -v test_on_condition.py

test_on_condition.py::test_mac PASSED
test_on_condition.py::test_any SKIPPED
test_on_condition.py::test_windows SKIPPED
test_on_condition.py::test_linux SKIPPED

==== 1 passed, 3 skipped in 0.01 seconds ======

Pytest verbose mode

  • -v
$ pytest -v test_mymod_1.py

test_mymod_1.py::test_anagram PASSED
$ pytest -v test_mymod_2.py

test_mymod_2.py::test_anagram PASSED
test_mymod_2.py::test_multiword_anagram FAILED

Pytest quiet mode

  • -q
$ pytest -q test_mymod_1.py
.
1 passed in 0.01 seconds
$ pytest -q test_mymod_2.py

.F
=========================== FAILURES ===========================
____________________ test_multiword_anagram ____________________

    def test_multiword_anagram():
       assert is_anagram("ana gram", "naga ram")
>      assert is_anagram("anagram", "nag a ram")
E      AssertionError: assert False
E       +  where False = is_anagram('anagram', 'nag a ram')

test_mymod_2.py:10: AssertionError
1 failed, 1 passed in 0.09 seconds

PyTest print STDOUT and STDERR using -s

  • -s
  • -q
import sys

def test_hello():
    print("hello testing")
    print("stderr during testing", file=sys.stderr)
    assert True

$ pytest -s -q test_stdout_stderr.py
hello testing
stderr during testing
.
1 passed in 0.01 seconds

Exercise: test math functions

  • Test methods of the math module.
  • ceil
  • factorial
  • gcd

Exercise: test this app

Write tests for the swap and average functions of the app module. Can you find a bug?


def swap(txt):
    '''
    >>> swap("abcd")
    'cdab'
    '''
    return txt[int(len(txt)/2):] + txt[:int(len(txt)/2)]

def average(*numbers):
    '''
    >>> average(2, 4, 6)
    4
    '''
    s = 0
    c = 0
    for n in numbers:
        s += n
        c += 1
    return s/c
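
When a function such as average returns floating-point numbers, exact == comparisons can be fragile; pytest provides pytest.approx for this. A self-contained sketch (average is inlined rather than imported from the app module):

```python
import pytest

def average(*numbers):
    # same logic as the average above, inlined for this sketch
    return sum(numbers) / len(numbers)

def test_average_floats():
    # 0.1 + 0.2 is not exactly 0.3 in binary floating point,
    # so compare with pytest.approx instead of ==
    assert average(0.1, 0.2) == pytest.approx(0.15)
```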

Exercise: test the csv module

  • csv
  • Create a CSV file, read it and check if the results are as expected!
  • Test creating a CSV file?
  • Test round trip?

Solution: Pytest test math functions

import math

def test_gcd():
    assert math.gcd(6, 9) == 3
    assert math.gcd(17, 9) == 1

def test_ceil():
    assert math.ceil(0) == 0
    assert math.ceil(0.1) == 1
    assert math.ceil(-0.1) == 0

def test_factorial():
    assert math.factorial(0) == 1
    assert math.factorial(1) == 1
    assert math.factorial(2) == 2
    assert math.factorial(3) == 6

import math
import pytest

def test_math():
    with pytest.raises(Exception) as exinfo:
        math.factorial(-1)
    assert exinfo.type == ValueError
    assert str(exinfo.value) == 'factorial() not defined for negative values'


    with pytest.raises(Exception) as exinfo:
        math.factorial(1.2)
    assert exinfo.type == ValueError
    assert str(exinfo.value) == 'factorial() only accepts integral values'


Solution: Pytest test this app

import app

def test_swap():
    assert app.swap("abcd") == "cdab"
    assert app.swap("abc") == "bca"
    assert app.swap("abcde") == "cdeab"
    assert app.swap("a") == "a"
    assert app.swap("") == ""

def test_average():
    assert app.average(2, 4) == 3
    assert app.average(2, 3) == 2.5
    assert app.average(42) == 42
    #assert app.average() == 0

Solution: test the csv module


import csv


def test_csv():
    filename = '../../examples/csv/process_csv_file_newline.csv'
    with open(filename) as fh:
        rd = csv.reader(fh, delimiter=';')
        assert next(rd) == ['Tudor', 'Vidor', '10', 'Hapci']
        assert next(rd) == ['Szundi', 'Morgo', '7', 'Szende']
        assert next(rd) == ['Kuka', 'Hofeherke; \nalma', '100', 'Kiralyno']
        assert next(rd) == ['Boszorkany', 'Herceg', '9', 'Meselo']
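The round-trip part of the exercise can be tested by writing a file with csv.writer and reading it back, using the tmp_path fixture so no garbage files are left behind (a sketch):

```python
import csv

def test_csv_round_trip(tmp_path):
    rows = [["Tudor", "Vidor", "10"], ["Szundi", "Morgo", "7"]]
    filename = tmp_path / "roundtrip.csv"
    # newline="" is what the csv module documentation requires for files
    with open(filename, "w", newline="") as fh:
        csv.writer(fh, delimiter=";").writerows(rows)
    with open(filename, newline="") as fh:
        assert list(csv.reader(fh, delimiter=";")) == rows
```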

PyTest using classes

class TestClass():
    def test_one(self):
        print("one")
        assert True
        print("one after")

    def test_two(self):
        print("two")
        assert False
        print("two after")

class TestBad():
    def test_three(self):
        print("three")
        assert False
        print("three after")


Output:

============================= test session starts ==============================
platform linux -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /home/gabor/work/slides/python/examples/pytest
plugins: flake8-1.0.6, dash-1.17.0
collected 3 items

test_with_class.py .FF                                                   [100%]

=================================== FAILURES ===================================
______________________________ TestClass.test_two ______________________________

self = <test_with_class.TestClass object at 0x7fac08abdbe0>

    def test_two(self):
        print("two")
>       assert False
E       assert False

test_with_class.py:9: AssertionError
----------------------------- Captured stdout call -----------------------------
two
______________________________ TestBad.test_three ______________________________

self = <test_with_class.TestBad object at 0x7fac08a606a0>

    def test_three(self):
        print("three")
>       assert False
E       assert False

test_with_class.py:15: AssertionError
----------------------------- Captured stdout call -----------------------------
three
=========================== short test summary info ============================
FAILED test_with_class.py::TestClass::test_two - assert False
FAILED test_with_class.py::TestBad::test_three - assert False
========================= 2 failed, 1 passed in 0.03s ==========================

Exercise: module

Pick one of the modules and write a test for it.

Exercise: Open Source

  • Visit the stats on PyDigger.com
  • List the packages that have GitHub but no Travis-CI.
  • Pick one that sounds simple. Visit its GitHub page and check if it has tests.
  • If it does not, write one.
  • Send a Pull Request.

Parametrize PyTest with pytest.mark.parametrize

  • @pytest.mark.parametrize
  • mark
  • parametrize

import pytest

@pytest.mark.parametrize("name", ["Foo", "Bar"])
def test_cases(name):
    print(f"name={name}")
    assert len(name) == 3

Output:

======== test session starts ========
platform linux -- Python 3.7.3, pytest-5.3.2, py-1.8.0, pluggy-0.13.0
rootdir: /home/gabor/work/slides/python-programming/examples/pytest
plugins: flake8-1.0.4
collected 2 items

test_with_param.py name=Foo
.name=Bar
.

=========== 2 passed in 0.00s ========

Parametrize PyTest with multiple parameters

import pytest

@pytest.mark.parametrize("name,email", [
    ("Foo", "foo@email.com"),
    ("Bar", "bar@email.com"),
])
def test_cases(name, email):
    print(f"name={name}  email={email}")
    assert email.lower().startswith(name.lower())

Output:

========= test session starts =======
platform linux -- Python 3.7.3, pytest-5.3.2, py-1.8.0, pluggy-0.13.0
rootdir: /home/gabor/work/slides/python-programming/examples/pytest
plugins: flake8-1.0.4
collected 2 items

test_with_params.py name=Foo  email=foo@email.com
.name=Bar  email=bar@email.com
.

========== 2 passed in 0.01s =======

Pytest and forking

  • This test passes and generates two reports.
  • I could not find a way yet to avoid the reporting in the child process. Maybe we need to run this with a special runner that will fork and run the test on our behalf.

import os

def func(x, y):
    pid = os.fork()
    if pid == 0:
        print(f"Child {os.getpid()}")
        #raise Exception("hello")
        exit()
    print(f"Parent {os.getpid()} The child is {pid}")
    os.wait()
    #exit()
    #raise Exception("hello")
    return x+y
  

if __name__ == '__main__':
    func(2, 3)

import app
import os

#def test_func():
#   assert app.func(2, 3) == 5


def test_func():
    pid = os.getpid()
    try:
        res = app.func(2, 3)
        assert res == 5
    except SystemExit as ex:
        assert str(ex) == 'None'
        #SystemExit(None)
        # or ex == 0
        if pid == os.getpid():
            raise ex

import logging

def add(x, y):
#    logger = logging.getLogger("mytest")
    logging.basicConfig(level=logging.INFO)
    logging.info("Just some info log")
    return x + y

def test_one():
    assert add(2, 2) == 4

Exercise: Write tests for script combining files

  • This is a solution for one of the earlier exercises in which we had to combine two files, adding the numbers of the vegetables together.
  • Many things could be improved, but before doing that, write a test (or two) to check this code. Without changing it.

c = {}
with open('examples/files/a.txt') as fh:
    for line in fh:
        k, v = line.rstrip("\n").split("=")
        if k in c:
            c[k] += int(v)
        else:
            c[k] = int(v)

with open('examples/files/b.txt') as fh:
    for line in fh:
        k, v = line.rstrip("\n").split("=")
        if k in c:
            c[k] += int(v)
        else:
            c[k] = int(v)


with open('out.txt', 'w') as fh:
    for k in sorted(c.keys()):
        fh.write("{}={}\n".format(k, c[k]))

Data Files:

Tomato=78
Avocado=23
Pumpkin=100
Cucumber=17
Avocado=10
Cucumber=10

Solution: Write tests for script combining files

  • TBD
  • Because the script uses hard-coded paths, we have to create a similar directory structure in a temporary location.
  • Run the script and compare the results to some expected file.
  • Then start refactoring the code.
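A sketch of such a test, assuming the script was saved as combine.py (a hypothetical name). Its hard-coded relative paths are kept intact by running the unchanged script with a temporary directory as the working directory:

```python
import os
import subprocess
import sys
import tempfile

def test_combine():
    with tempfile.TemporaryDirectory() as tmpdir:
        # Recreate the directory structure the script expects
        files_dir = os.path.join(tmpdir, "examples", "files")
        os.makedirs(files_dir)
        with open(os.path.join(files_dir, "a.txt"), "w") as fh:
            fh.write("Tomato=78\nAvocado=23\nPumpkin=100\n")
        with open(os.path.join(files_dir, "b.txt"), "w") as fh:
            fh.write("Cucumber=17\nAvocado=10\nCucumber=10\n")

        # Run the unchanged script from the temporary directory
        subprocess.check_call(
            [sys.executable, os.path.abspath("combine.py")], cwd=tmpdir)

        # Compare the result to the expected, already combined content
        with open(os.path.join(tmpdir, "out.txt")) as fh:
            assert fh.read() == "Avocado=33\nCucumber=27\nPumpkin=100\nTomato=78\n"
```

Once such a test passes, the refactoring can start with a safety net in place.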

Pytest: Flask echo

from flask import Flask, request
eapp = Flask(__name__)

@eapp.route("/")
def hello():
    return '''
<form action="/echo" method="GET">
<input name="text">
<input type="submit" value="Echo">
</form>
'''

@eapp.route("/echo")
def echo():
    answer = request.args.get('text')
    if answer:
        return "You said: " + answer
    else:
        return "Nothing to say?"


if __name__ == "__main__":
    eapp.run()

Pytest: testing Flask echo

import flask_echo

class TestEcho:
    def setup_method(self):
        self.app = flask_echo.eapp.test_client()
        print("setup")

    def test_main(self):
        rv = self.app.get('/')
        assert rv.status == '200 OK'
        assert b'<form action="/echo" method="GET">' in rv.data

    def test_echo(self):
        rv = self.app.get('/echo?text=Hello')
        assert rv.status == '200 OK'
        assert b'You said: Hello' in rv.data

    def test_empty_echo(self):
        rv = self.app.get('/echo')
        assert rv.status == '200 OK'
        assert b'Nothing to say?' in rv.data

Pytest resources

Anagram on the command line

from mymod_1 import is_anagram
import sys

if len(sys.argv) != 3:
    exit("Usage {} STR STR".format(sys.argv[0]))

print(is_anagram(sys.argv[1], sys.argv[2]))

PyTest testing CLI

import subprocess

def capture(command):
    proc = subprocess.Popen(command,
        stdout = subprocess.PIPE,
        stderr = subprocess.PIPE,
    )
    out,err = proc.communicate()
    return out, err, proc.returncode


def test_anagram_no_param():
    command = ["python3", "examples/pytest/anagram.py"]
    out, err, exitcode = capture(command)
    assert exitcode == 1
    assert out == b''
    assert err == b'Usage examples/pytest/anagram.py STR STR\n'

def test_anagram():
    command = ["python3", "examples/pytest/anagram.py", "abc", "cba"]
    out, err, exitcode = capture(command)
    assert exitcode == 0
    assert out == b'True\n'
    assert err == b''

def test_no_anagram():
    command = ["python3", "examples/pytest/anagram.py", "abc", "def"]
    out, err, exitcode = capture(command)
    assert exitcode == 0
    assert out == b'False\n'
    assert err == b''

Pytest assert

PyTest failure reports

  • Reporting success is boring
  • Reporting failure can be interesting: assert + introspection

PyTest compare numbers

def double(n):
    #return 2*n
    return 2+n

def test_string_equal():
    assert double(2) == 4
    assert double(21) == 42

$ pytest test_number_equal.py

Output:

    def test_string_equal():
        assert double(2) == 4
>       assert double(21) == 42
E       assert 23 == 42
E        +  where 23 = double(21)

PyTest compare numbers relatively

def get_number():
    return 23

def test_string_equal():
    assert get_number() < 0

$ pytest test_number_less_than.py

Output:

    def test_string_equal():
>       assert get_number() < 0
E       assert 23 < 0
E        +  where 23 = get_number()

PyTest compare strings

def get_string():
    return "abc"

def test_string_equal():
    assert get_string() == "abd"

$ pytest test_string_equal.py

Output:

    def test_string_equal():
>       assert get_string() == "abd"
E       AssertionError: assert 'abc' == 'abd'
E         - abc
E         + abd

PyTest compare long strings

import string

def get_string(s):
    return string.printable + s + string.printable

def test_long_strings():
    assert get_string('a') == get_string('b')

$ pytest test_long_strings.py

Output:

    def test_long_strings():
>       assert get_string('a') == get_string('b')
E       AssertionError: assert '0123456789ab...t\n\r\x0b\x0c' == '0123456789abc...t\n\r\x0b\x0c'
E         Skipping 90 identical leading characters in diff, use -v to show
E         Skipping 91 identical trailing characters in diff, use -v to show
E           {|}~
E
E         - a012345678
E         ? ^
E         + b012345678
E         ? ^

PyTest is one string in another strings

PyTest shows only about 250 characters of a long string:

import string

def get_string():
    return string.printable * 30

def test_long_strings():
    assert 'hello' in get_string()


    def test_long_strings():
>       assert 'hello' in get_string()
E       assert 'hello' in '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r\x0b\x0c012345...x0b\x0c0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r\x0b\x0c'
E        +  where '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r\x0b\x0c012345...x0b\x0c0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r\x0b\x0c' = get_string()

PyTest test any expression

def test_expression_equal():
    a = 3
    assert a % 2 == 0

$ pytest test_expression_equal.py

Output:

    def test_expression_equal():
        a = 3
>       assert a % 2 == 0
E       assert (3 % 2) == 0

PyTest element in list

def get_list():
    return ["monkey", "cat"]

def test_in_list():
    assert "dog" in get_list()

$ pytest test_in_list.py

    def test_in_list():
>       assert "dog" in get_list()
E       AssertionError: assert 'dog' in ['monkey', 'cat']
E        +  where ['monkey', 'cat'] = get_list()

PyTest compare short lists

import string
import re

def get_lista():
    return 'a', 'b', 'c'
def get_listx():
    return 'x', 'b', 'y'

def test_short_lists():
    assert get_lista() == get_listx()

$ pytest test_short_lists.py

Output:

    def test_short_lists():
>       assert get_lista() == get_listx()
E       AssertionError: assert ('a', 'b', 'c') == ('x', 'b', 'y')
E         At index 0 diff: 'a' != 'x'
E         Use -v to get the full diff

PyTest compare short lists - verbose output

$ pytest -v test_short_lists.py

Output:

    def test_short_lists():
>       assert get_lista() == get_listx()
E       AssertionError: assert ('a', 'b', 'c') == ('x', 'b', 'y')
E         At index 0 diff: 'a' != 'x'
E         Full diff:
E         - ('a', 'b', 'c')
E         ?   ^         ^
E         + ('x', 'b', 'y')
E         ?   ^         ^

PyTest compare lists

import string
import re

def get_list(s):
    return list(string.printable + s + string.printable)

def test_long_lists():
    assert get_list('a') == get_list('b')

$ pytest test_lists.py

Output:

    def test_long_lists():
>       assert get_list('a') == get_list('b')
E       AssertionError: assert ['0', '1', '2...'4', '5', ...] == ['0', '1', '2'...'4', '5', ...]
E         At index 100 diff: 'a' != 'b'
E         Use -v to get the full diff

PyTest compare dictionaries - different values

def test_different_value():
    a = {
        "name" : "Whale",
        "location": "Ocean",
        "size": "huge",
    }
    b = {
        "name" : "Whale",
        "location": "Water",
        "size": "huge",
    }
    assert a == b


Output:

============================= test session starts ==============================
platform linux -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /home/gabor/work/slides/python/examples/pytest
plugins: flake8-1.0.6, dash-1.17.0
collected 1 item

test_dictionaries.py F                                                   [100%]

=================================== FAILURES ===================================
_____________________________ test_different_value _____________________________

    def test_different_value():
        a = {
            "name" : "Whale",
            "location": "Ocean",
            "size": "huge",
        }
        b = {
            "name" : "Whale",
            "location": "Water",
            "size": "huge",
        }
>       assert a == b
E       AssertionError: assert {'location': ...size': 'huge'} == {'location': ...size': 'huge'}
E         Omitting 2 identical items, use -vv to show
E         Differing items:
E         {'location': 'Ocean'} != {'location': 'Water'}
E         Use -v to get the full diff

test_dictionaries.py:12: AssertionError
=========================== short test summary info ============================
FAILED test_dictionaries.py::test_different_value - AssertionError: assert {'...
============================== 1 failed in 0.03s ===============================

PyTest compare dictionaries - missing-keys

def test_missing_key():
    a = {
        "name" : "Whale",
        "size": "huge",
    }
    b = {
        "name" : "Whale",
        "location": "Water",
    }
    assert a == b

Output:

============================= test session starts ==============================
platform linux -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /home/gabor/work/slides/python/examples/pytest
plugins: flake8-1.0.6, dash-1.17.0
collected 1 item

test_dictionaries_missing_keys.py F                                      [100%]

=================================== FAILURES ===================================
_______________________________ test_missing_key _______________________________

    def test_missing_key():
        a = {
            "name" : "Whale",
            "size": "huge",
        }
        b = {
            "name" : "Whale",
            "location": "Water",
        }
>       assert a == b
E       AssertionError: assert {'name': 'Wha...size': 'huge'} == {'location': ...ame': 'Whale'}
E         Omitting 1 identical items, use -vv to show
E         Left contains 1 more item:
E         {'size': 'huge'}
E         Right contains 1 more item:
E         {'location': 'Water'}
E         Use -v to get the full diff

test_dictionaries_missing_keys.py:10: AssertionError
=========================== short test summary info ============================
FAILED test_dictionaries_missing_keys.py::test_missing_key - AssertionError: ...
============================== 1 failed in 0.03s ===============================

PyTest Fixtures

PyTest: What are Fixtures?

  • In general, the test fixture is the environment in which a test is expected to run.
  • Pytest uses the same word for a more generic concept: all the techniques that make it easy to set up the environment and to tear it down after the tests.

General examples:

  • Setting up a database server - then removing it to clean the machine.
  • Maybe filling the database with some data - then emptying it afterwards.

Specific examples:

  • If I'd like to test the login mechanism, then before the test starts running we need a verified account in the system.
  • If I test the 3rd element of a pipeline, I need the results of the 2nd element to get started, and after the test runs I need to remove all those files.

PyTest: test with functions

If we don't have any of the fixture services we need to write a lot of code:

  • We need to call setup_db() in every test.
  • We need to call teardown_db() in every test - and it is still not executed when the test fails.
  • What if there is some work that needs to be done only once and not for every test?

import tempfile

def test_one():
    db_server = setup_db_server()
    db = setup_db()
    print(f"    test_one         {db}")
    assert True
    print("    test_one after")
    teardown_db(db)
    # teardown_db_server(db_server)

def test_two():
    db_server = setup_db_server()
    db = setup_db()
    print(f"    test_two         {db}")
    assert False
    print("    test_two after")
    teardown_db(db)
    # teardown_db_server(db_server)

def test_three():
    db_server = setup_db_server()
    db = setup_db()
    print(f"    test_three       {db}")
    assert True
    print("    test_three after")
    teardown_db(db)
    # teardown_db_server(db_server)

def setup_db():
    db = tempfile.TemporaryDirectory()
    ...
    print(f"setup_db             {db}")
    return db

def teardown_db(db):
    ...
    print(f"teardown_db          {db}")


def setup_db_server():
    print("setup db_server")
    if 'db_server' not in setup_db_server.__dict__:
        print("new db_server environment")
        setup_db_server.db_server = tempfile.TemporaryDirectory()
    return setup_db_server.db_server

def teardown_db_server(db_server):
    print("teardown_db_server")

Output:

$ pytest -qqs test_functions.py
setup db_server
new db_server environment
setup_db             <TemporaryDirectory '/tmp/tmpvct7ng6r'>
    test_one         <TemporaryDirectory '/tmp/tmpvct7ng6r'>
    test_one after
teardown_db          <TemporaryDirectory '/tmp/tmpvct7ng6r'>
setup db_server
setup_db             <TemporaryDirectory '/tmp/tmptmaql_a_'>
    test_two         <TemporaryDirectory '/tmp/tmptmaql_a_'>
setup db_server
setup_db             <TemporaryDirectory '/tmp/tmpwb18nwil'>
    test_three       <TemporaryDirectory '/tmp/tmpwb18nwil'>
    test_three after
teardown_db          <TemporaryDirectory '/tmp/tmpwb18nwil'>

PyTest Fixture setup and teardown xUnit style

  • setup_function
  • teardown_function
  • setup_module
  • teardown_module

There are two mechanisms in PyTest to set up and tear down fixtures. One of them is the xUnit-style system that is also available in other languages such as Java and C#.

In this example there are 3 tests: 3 functions called test_SOMETHING. There are also two pairs of functions to set up and tear down the fixtures on a per-function and a per-module level.

Before starting to run the tests of this file PyTest will run the setup_module function, and after it is done running the tests PyTest will run the teardown_module function. This happens even if one or more of the tests failed. These functions are called once, regardless of the number of tests in the module.

Before every test function PyTest will run the setup_function, and after the test has finished it will run the teardown_function - regardless of the success or failure of the test.

So in our case, where all 4 of the fixture functions are implemented and there are 3 test functions, the order can be seen on the next page.

In this example we also see one of the major issues of this style: the db_server and db variables set in setup_module and setup_function must be marked as global in order to be accessible in the test functions and in the teardown functions.

import tempfile

def setup_module():
    global db_server
    db_server = tempfile.TemporaryDirectory()
    print(f"setup_module:         {db_server}")

def teardown_module():
    print(f"teardown_module       {db_server}")


def setup_function():
    global db
    db = tempfile.TemporaryDirectory()
    print(f"  setup_function                                              {db}")

def teardown_function():
    print(f"  teardown_function                                          {db}")


def test_one():
    print(f"    test_one          {db_server} {db}")
    assert True
    print("    test_one after")

def test_two():
    print(f"    test_two          {db_server} {db}")
    assert False
    print("    test_two after")

def test_three():
    print(f"    test_three        {db_server} {db}")
    assert True
    print("    test_three after")

See next slide for the output.

PyTest Fixture setup and teardown output

test_fixture.py .F.
$ pytest -sq test_fixture.py

Output:

setup_module:         <TemporaryDirectory '/tmp/tmpaq1r7lnj'>
  setup_function                                              <TemporaryDirectory '/tmp/tmpvynb1e5h'>
    test_one          <TemporaryDirectory '/tmp/tmpaq1r7lnj'> <TemporaryDirectory '/tmp/tmpvynb1e5h'>
    test_one after
  teardown_function                                           <TemporaryDirectory '/tmp/tmpvynb1e5h'>
  setup_function                                              <TemporaryDirectory '/tmp/tmp6cman2br'>
    test_two          <TemporaryDirectory '/tmp/tmpaq1r7lnj'> <TemporaryDirectory '/tmp/tmp6cman2br'>
  teardown_function                                           <TemporaryDirectory '/tmp/tmp6cman2br'>
  setup_function                                              <TemporaryDirectory '/tmp/tmpbi9pwo3j'>
    test_three        <TemporaryDirectory '/tmp/tmpaq1r7lnj'> <TemporaryDirectory '/tmp/tmpbi9pwo3j'>
    test_three after
.  teardown_function                                          <TemporaryDirectory '/tmp/tmpbi9pwo3j'>
teardown_module       <TemporaryDirectory '/tmp/tmpaq1r7lnj'>

Note, the teardown_function is executed even after failed tests.

PyTest: Fixture Class setup and teardown

  • setup_class
  • teardown_class
  • setup_method
  • teardown_method

In case you are using test classes you can use another 2 pairs of functions - well, actually methods - to set up and tear down the environment. In this case it is much easier to pass values from the setup to the test functions and to the teardown function, but we need to write the whole thing in OOP style.

Also note, the test functions are independent. They all see the attributes set in setup_class, but the test functions cannot pass values to each other.

class TestClass():
    def setup_class(self):
        print("setup_class called once for the class")
        print(self)
        self.db = "mydb"
        self.test_counter = 0

    def teardown_class(self):
        print(f"teardown_class called once for the class {self.db}")

    def setup_method(self):
        self.test_counter += 1
        print(f"  setup_method called for every method {self.db} {self.test_counter}")
        print(self)

    def teardown_method(self):
        print(f"  teardown_method called for every method {self.test_counter}")


    def test_one(self):
        print("    one")
        assert True
        print("    one after")

    def test_two(self):
        print("    two")
        assert False
        print("    two after")

    def test_three(self):
        print("    three")
        assert True
        print("    three after")

PyTest: Fixture Class setup and teardown output

$ pytest -sq test_class.py

Output:

setup_class called once for the class
<class 'test_class.TestClass'>
  setup_method called for every method mydb 1
<test_class.TestClass object at 0x7d5621762960>
    one
    one after
.  teardown_method called for every method 1
  setup_method called for every method mydb 1
<test_class.TestClass object at 0x7d5620ee6060>
    two
F  teardown_method called for every method 1
  setup_method called for every method mydb 1
<test_class.TestClass object at 0x7d5620c1c830>
    three
    three after
.  teardown_method called for every method 1
teardown_class called once for the class mydb

What is Dependency injection?

def serve_bolognese(pasta, sauce):
    dish = mix(pasta, sauce)
    return dish

This is what a dependency injection framework - such as PyTest with its fixtures - does:

  1. Find the function.
  2. Check the parameters of the function.
  3. Prepare the appropriate objects.
  4. Call the function passing these objects.
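These steps can be sketched as a toy injector (the registry and dish names are made up for illustration; pytest's real fixture machinery is far more elaborate):

```python
import inspect

# A registry of things that can be injected (hypothetical "fixtures")
REGISTRY = {
    "pasta": lambda: "spaghetti",
    "sauce": lambda: "bolognese",
}

def serve_bolognese(pasta, sauce):
    return f"{pasta} with {sauce} sauce"

def inject_and_call(func):
    # Check the parameters of the function
    params = inspect.signature(func).parameters
    # Prepare the appropriate objects
    kwargs = {name: REGISTRY[name]() for name in params}
    # Call the function passing these objects
    return func(**kwargs)

print(inject_and_call(serve_bolognese))  # spaghetti with bolognese sauce
```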

Pytest fixture - tmpdir

  • tmpdir

  • Probably the simplest fixture that PyTest provides is tmpdir.

  • Pytest will prepare a temporary directory and call the test function passing the path to the tmpdir.

  • PyTest will also clean up the temporary folder, though it will keep the 3 most recent ones. (this is configurable)

import os


def test_something(tmpdir):
    print(tmpdir)      # /private/var/folders/ry/z60xxmw0000gn/T/pytest-of-gabor/pytest-14/test_read0

    d = tmpdir.mkdir("subdir")
    fh = d.join("config.ini")
    fh.write("Some text")

    filename = os.path.join( fh.dirname, fh.basename )

    temp_dir = str(tmpdir)

    # ...
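Newer pytest releases also provide the tmp_path fixture, the pathlib.Path based successor of tmpdir (a sketch):

```python
def test_with_tmp_path(tmp_path):
    # tmp_path is a pathlib.Path pointing to a fresh temporary directory
    config = tmp_path / "subdir" / "config.ini"
    config.parent.mkdir()
    config.write_text("Some text")
    assert config.read_text() == "Some text"
```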

Pytest and tempdir

  • tmpdir

  • This is a simple application that reads and writes config files (ini file).

  • We can test the parse_file by preparing some input files and check if we get the expected data structure.

  • In order to test the save_file we need to be able to save a file somewhere.

  • Saving it in the current folder would create garbage files (and the folder might be read-only in some environments).

  • For each test we'll have to come up with a separate filename so they won't collide.

  • Using a tmpdir solves this problem.

import re

def parse_file(filename):
    data = {}
    with open(filename) as fh:
        for row in fh:
            row = row.rstrip("\n")
            if re.search(r'=', row):
                k, v = re.split(r'\s*=\s*', row)
                data[k] = v
            else:
                pass # error reporting?
    return data

def save_file(filename, data):
    with open(filename, 'w') as fh:
        for k in data:
            fh.write("{}={}\n".format(k, data[k]))

if __name__ == '__main__':
    print(parse_file('a.cfg'))

{% embed include file="src/examples/pytest/a.cfg" %}

import mycfg
import os

def test_parse():
    data = mycfg.parse_file('a.cfg')
    assert data == {
        'name'  : 'Foo Bar',
        'email' : 'foo@bar.com',
    }

def test_example(tmpdir):
    original = {
        'name'  : 'My Name',
        'email' : 'me@home.com',
        'home'  : '127.0.0.1',
    }
    filename = str(tmpdir.join('abc.cfg'))
    assert not os.path.exists(filename)
    mycfg.save_file(filename, original)
    assert os.path.exists(filename)
    new = mycfg.parse_file(filename)
    assert new == original

Pytest CLI key-value store

  • This is a similar application - a file-based key-value store - where the name of the data file is computed from the location of the program: store.json.
  • Running two tests in parallel will make them collide by using the same data file.

import os
import json

def set(key, value):
    data = _read_data()
    data[key] = value
    _save_data(data)

def get(key):
    data = _read_data()
    return data.get(key)

def _save_data(data):
    filename = _get_filename()
    with open(filename, 'w') as fh:
        json.dump(data, fh, sort_keys=True, indent=4)


def _read_data():
    filename = _get_filename()
    data = {}
    if os.path.exists(filename):
        with open(filename) as fh:
            data = json.load(fh)
    return data

def _get_filename():
    path = os.path.dirname(os.path.abspath(__file__))
    filename = os.path.join(path, 'store.json')
    return filename


if __name__ == '__main__':
    import sys
    if len(sys.argv) == 3:
        cmd, key = sys.argv[1:]
        if cmd == 'get':
            print(get(key))
            exit(0)

    if len(sys.argv) == 4:
        cmd, key, value = sys.argv[1:]
        if cmd == 'set':
            set(key, value)
            print('SET')
            exit(0)
    print(f"""Usage:
           {sys.argv[0]} set key value
           {sys.argv[0]} get key
    """)

import store

def test_store():
    store.set('color', 'Blue')
    assert store.get('color') == 'Blue'

    store.set('color', 'Red')
    assert store.get('color') == 'Red'

    store.set('size', '42')
    assert store.get('size') == '42'
    assert store.get('color') == 'Red'

Pytest testing key-value store - environment variable

  • We need to be able to set the name of the data file externally. e.g. Using an environment variable.

import os
import json

def set(key, value):
    data = _read_data()
    data[key] = value
    _save_data(data)

def get(key):
    data = _read_data()
    return data.get(key)

def _save_data(data):
    filename = _get_filename()
    with open(filename, 'w') as fh:
        json.dump(data, fh, sort_keys=True, indent=4)


def _read_data():
    filename = _get_filename()
    data = {}
    if os.path.exists(filename):
        with open(filename) as fh:
            data = json.load(fh)
    return data

def _get_filename():
    path = os.environ.get('STORE_DIR', os.path.dirname(os.path.abspath(__file__)))
    filename = os.path.join(path, 'store.json')
    return filename


if __name__ == '__main__':
    import sys
    if len(sys.argv) == 3:
        cmd, key = sys.argv[1:]
        if cmd == 'get':
            print(get(key))
            exit(0)

    if len(sys.argv) == 4:
        cmd, key, value = sys.argv[1:]
        if cmd == 'set':
            set(key, value)
            print('SET')
            exit(0)
    print(f"""Usage:
           {sys.argv[0]} set key value
           {sys.argv[0]} get key
    """)

import store
import os

def test_store(tmpdir):
    os.environ['STORE_DIR'] = str(tmpdir) # str expected, not LocalPath
    print(tmpdir)
    store.set('color', 'Blue')
    assert store.get('color') == 'Blue'

    store.set('color', 'Red')
    assert store.get('color') == 'Red'

    store.set('size', '42')
    assert store.get('size') == '42'
    assert store.get('color') == 'Red'
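Setting os.environ directly leaks the change into later tests; pytest's monkeypatch fixture undoes the change automatically when the test finishes. A minimal sketch of the mechanism (without the store module):

```python
import os

def test_store_env(tmp_path, monkeypatch):
    # monkeypatch restores (or removes) STORE_DIR after the test
    monkeypatch.setenv('STORE_DIR', str(tmp_path))
    assert os.environ['STORE_DIR'] == str(tmp_path)
```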

Pytest testing key-value store - environment variable (outside)

import os
import json

path = os.environ.get('STORE_DIR', os.path.dirname(os.path.abspath(__file__)))
filename = os.path.join(path, 'store.json')

def set(key, value):
    data = _read_data()
    data[key] = value
    _save_data(data)

def get(key):
    data = _read_data()
    return data.get(key)

def _save_data(data):
    with open(filename, 'w') as fh:
        json.dump(data, fh, sort_keys=True, indent=4)


def _read_data():
    data = {}
    if os.path.exists(filename):
        with open(filename) as fh:
            data = json.load(fh)
    return data



if __name__ == '__main__':
    import sys
    if len(sys.argv) == 3:
        cmd, key = sys.argv[1:]
        if cmd == 'get':
            print(get(key))
            exit(0)

    if len(sys.argv) == 4:
        cmd, key, value = sys.argv[1:]
        if cmd == 'set':
            set(key, value)
            print('SET')
            exit(0)
    print(f"""Usage:
           {sys.argv[0]} set key value
           {sys.argv[0]} get key
    """)

import os
import store

def test_store(tmpdir):
    os.environ['STORE_DIR'] = str(tmpdir) # str expected, not LocalPath
    print(os.environ['STORE_DIR'])
    store.set('color', 'Blue')
    assert store.get('color') == 'Blue'

    store.set('color', 'Red')
    assert store.get('color') == 'Red'

    store.set('size', '42')
    assert store.get('size') == '42'
    assert store.get('color') == 'Red'

Application that prints to STDOUT and STDERR

import sys

def welcome(to_out, to_err=None):
    print(f"STDOUT: {to_out}")
    if to_err:
        print(f"STDERR: {to_err}", file=sys.stderr)

from greet import welcome

welcome("Jane", "Joe")
print('---')
welcome("Becky")

Output:

STDERR: Joe
STDOUT: Jane
---
STDOUT: Becky

Pytest capture STDOUT and STDERR with capsys

  • capsys

Captures everything that is printed to STDOUT and STDERR so we can compare that to the expected output and error.

from greet import welcome

def test_myoutput(capsys):
    welcome("hello", "world")
    out, err = capsys.readouterr()
    assert out == "STDOUT: hello\n"
    assert err == "STDERR: world\n"

    welcome("next")
    out, err = capsys.readouterr()
    assert out == "STDOUT: next\n"
    assert err == ""
pytest test_greet.py
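
capsys only sees text that goes through Python's sys.stdout and sys.stderr objects. Its sibling fixture capfd captures at the file-descriptor level as well, which matters when the tested code spawns subprocesses or writes with os.write. A minimal sketch (the test function name is made up):

```python
import os

def test_fd_level(capfd):
    # os.write goes straight to file descriptor 1, bypassing sys.stdout
    os.write(1, b"raw\n")
    print("buffered")
    out, err = capfd.readouterr()
    assert "raw" in out
    assert "buffered" in out
    assert err == ""
```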

PyTest - write your own fixture

  • tmpdir and capsys are nice to have, but we will need more complex setup and teardown.

  • We can turn any function into a fixture; we only need to decorate it with @pytest.fixture.

  • We can implement fixture functions to act like the xUnit fixtures we saw earlier, or to use dependency injection the way tmpdir and capsys do.
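
As a minimal sketch of the dependency-injection style (all names here are made up for illustration): a function decorated with @pytest.fixture is injected into any test that lists its name as a parameter.

```python
import pytest

@pytest.fixture()
def numbers():
    # Setup: provide the data the tests will use
    return [1, 2, 3]

def test_sum(numbers):
    # pytest matches the parameter name to the fixture
    # and passes in its return value
    assert sum(numbers) == 6
```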

Pytest Fixture - autouse fixtures

  • yield

  • Similar to setup_function, teardown_function, setup_module, teardown_module

import pytest
import time

@pytest.fixture(autouse = True, scope="module")
def fix_module():
    answer = 42
    print(f"Module setup {answer}")
    yield
    print(f"Module teardown {answer}")


@pytest.fixture(autouse = True, scope="function")
def fix_function():
    start = time.time()
    print(f"  Function setup {start}")
    yield
    print(f"  Function teardown {start}")


def test_one():
    print("    Test one")
    assert True
    print("    Test one - 2nd part")

def test_two():
    print("    Test two")
    assert False
    print("    Test two - 2nd part")

Output:

Module setup 42
  Function setup 1612427609.9726396
    Test one
    Test one - 2nd part
  Function teardown 1612427609.9726396
  Function setup 1612427609.9741583
    Test two
  Function teardown 1612427609.9741583
Module teardown 42

Share fixtures among test files: conftest.py

  • conftest.py
import pytest

@pytest.fixture(autouse = True, scope="session")
def fix_session():
    print("\nSession setup")
    yield
    print("\nSession teardown")


@pytest.fixture(autouse = True, scope="module")
def fix_module():
    print("\n  Module setup")
    yield
    print("\n  Module teardown")


@pytest.fixture(autouse = True, scope="function")
def fix_function():
    print("\n    Function setup")
    yield
    print("\n    Function teardown")
def test_one():
    print("      Test Blue one")
    assert True


def test_two():
    print("      Test Blue two")
    assert False
def test_three():
    print("      Test Green Three")
    assert True
pytest -qs

Output:


Session setup

  Module setup
    Function setup
      Test Blue one
    Function teardown

    Function setup
      Test Blue two
    Function teardown
  Module teardown

  Module setup
    Function setup
      Test Green Three
    Function teardown
  Module teardown

Session teardown

Manual fixtures (dependency injection)

import pytest

@pytest.fixture()
def blue():
   print("Blue setup")
   yield
   print("Blue teardown")

@pytest.fixture()
def green():
   print("Green setup")
   yield
   print("Green teardown")

#def test_try(yellow):
#    print("yellow")

def test_one(blue, green):
   print("    Test one")


def test_two(green, blue):
   print("    Test two")
   assert False

Output:

Blue setup
Green setup
    Test one
Green teardown
Blue teardown

Green setup
Blue setup
    Test two
Blue teardown
Green teardown

  • We can't attach fixtures to test functions as decorators (as was the case in nose); we need to use dependency injection.

Pytest Fixture providing value

import pytest
import application


@pytest.fixture()
def app():
    print('app starts')
    myapp = application.App()
    return myapp


def test_add_user_foo(app):
    app.add_user("Foo")
    assert app.get_user() == 'Foo'

def test_add_user_bar(app):
    app.add_user("Bar")
    assert app.get_user() == 'Bar'

class App:
    def __init__(self):
        self.pi = 3.14
        # .. set up database
        print("__init__ of App")


    def add_user(self, name):
        print("Working on add_user({})".format(name))
        self.name = name

    def get_user(self):
        return self.name
$ pytest -sq

Output:

app starts
__init__ of App
Working on add_user(Foo)

app starts
__init__ of App
Working on add_user(Bar)
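
Because the fixture above uses the default function scope, each test gets a fresh App. Changing the scope shares one instance across the whole module; a sketch (the class is a made-up stand-in, and note that sharing state like this couples the tests to each other):

```python
import pytest

class App:
    def __init__(self):
        self.users = []

    def add_user(self, name):
        self.users.append(name)

@pytest.fixture(scope="module")
def app():
    # Created once per test module instead of once per test function
    return App()

def test_foo(app):
    app.add_user("Foo")
    assert "Foo" in app.users

def test_bar(app):
    # Same App instance as in test_foo, thanks to the module scope
    app.add_user("Bar")
    assert app.users == ["Foo", "Bar"]
```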

Pytest Fixture providing value with teardown

import pytest
import application


@pytest.fixture()
def myapp():
    print('myapp starts')
    app = application.App()

    yield app

    app.shutdown()
    print('myapp ends')

def test_add_user_foo(myapp):
    myapp.add_user("Foo")
    assert myapp.get_user() == 'Foo'

def test_add_user_bar(myapp):
    myapp.add_user("Bar")
    assert myapp.get_user() == 'Bar'

class App:
    def __init__(self):
        self.pi = 3.14
        # .. set up database
        print("__init__ of App")


    def shutdown(self):
        print("shutdown of App cleaning up database")


    def add_user(self, name):
        print("Working on add_user({})".format(name))
        self.name = name

    def get_user(self):
        return self.name
$ pytest -sq

Output:

myapp starts
__init__ of App
Working on add_user(Foo)
shutdown of App cleaning up database
myapp ends


myapp starts
__init__ of App
Working on add_user(Bar)
shutdown of App cleaning up database
myapp ends

Pytest create fixture with file(s) - app and test

import json
import os


def _get_config_file():
    return os.environ.get('APP_CONFIG_FILE', 'config.json')

def _read_config():
    config_file = _get_config_file()
    with open(config_file) as fh:
        return json.load(fh)

def app(protocol):
    config = _read_config()
    # ... do stuff based on the config
    address = protocol + '://' + config['host'] + ':' + config['port']

    path = os.path.dirname(_get_config_file())
    outfile = os.path.join(path, 'out.txt')
    with open(outfile, 'w') as fh:
        fh.write(address)

    return address

{% embed include file="src/examples/pytest/configfile/config.json" %}

from myapp import app
result = app('https')
print(result)
https://szabgab.com:80
  • Test application
import json
import os

from myapp import app

def test_app_one(tmpdir):
    config_file = os.path.join(str(tmpdir), 'conf.json')
    with open(config_file, 'w') as fh:
        json.dump({'host' : 'code-maven.com', 'port' : '443'}, fh)
    os.environ['APP_CONFIG_FILE'] = config_file

    result = app('https')

    assert result == 'https://code-maven.com:443'
    outfile = os.path.join(str(tmpdir), 'out.txt')
    with open(outfile) as fh:
        output_in_file = fh.read()
    assert output_in_file == 'https://code-maven.com:443'


def test_app_two(tmpdir):
    config_file = os.path.join(str(tmpdir), 'conf.json')
    with open(config_file, 'w') as fh:
        json.dump({'host' : 'code-maven.com', 'port' : '443'}, fh)
    os.environ['APP_CONFIG_FILE'] = config_file

    result = app('http')

    assert result == 'http://code-maven.com:443'
    outfile = os.path.join(str(tmpdir), 'out.txt')
    with open(outfile) as fh:
        output_in_file = fh.read()
    assert output_in_file == 'http://code-maven.com:443'

Pytest create fixture with file(s) - helper function

import json
import os

from myapp import app

def test_app_one(tmpdir):
    setup_config(tmpdir)

    result = app('https')

    assert result == 'https://code-maven.com:443'
    output_in_file = read_file(tmpdir)
    assert output_in_file == 'https://code-maven.com:443'

def test_app_two(tmpdir):
    setup_config(tmpdir)

    result = app('http')

    assert result == 'http://code-maven.com:443'
    output_in_file = read_file(tmpdir)
    assert output_in_file == 'http://code-maven.com:443'

def setup_config(tmpdir):
    config_file = os.path.join(str(tmpdir), 'conf.json')
    with open(config_file, 'w') as fh:
        json.dump({'host' : 'code-maven.com', 'port' : '443'}, fh)
    os.environ['APP_CONFIG_FILE'] = config_file


def read_file(tmpdir):
    outfile = os.path.join(str(tmpdir), 'out.txt')
    with open(outfile) as fh:
        output_in_file = fh.read()
    return output_in_file

Pytest create fixture with file(s) - fixture

import pytest
import json
import os

from myapp import app

def test_app_one(outfile):
    result = app('https')

    assert result == 'https://code-maven.com:443'
    output_in_file = read_file(outfile)
    assert output_in_file == 'https://code-maven.com:443'

def test_app_two(outfile):
    result = app('http')

    assert result == 'http://code-maven.com:443'
    output_in_file = read_file(outfile)
    assert output_in_file == 'http://code-maven.com:443'


@pytest.fixture()
def outfile(tmpdir):
    #print(tmpdir)
    config_file = os.path.join(str(tmpdir), 'conf.json')
    with open(config_file, 'w') as fh:
        json.dump({'host' : 'code-maven.com', 'port' : '443'}, fh)
    os.environ['APP_CONFIG_FILE'] = config_file
    return os.path.join(str(tmpdir), 'out.txt')


def read_file(outfile):
    with open(outfile) as fh:
        output_in_file = fh.read()
    return output_in_file
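
The tests above assign to os.environ directly, so APP_CONFIG_FILE leaks into every test that runs afterwards. The built-in monkeypatch fixture can make the same change and undo it automatically when the test ends; a sketch using the pathlib-based tmp_path fixture:

```python
import os

def test_config(monkeypatch, tmp_path):
    config_file = tmp_path / "conf.json"
    config_file.write_text('{"host": "code-maven.com", "port": "443"}')
    # monkeypatch removes (or restores) the variable after the test
    monkeypatch.setenv("APP_CONFIG_FILE", str(config_file))
    assert os.environ["APP_CONFIG_FILE"] == str(config_file)
```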

Pytest with Docker - application

from flask import Flask, jsonify, request
import time
calcapp = Flask(__name__)

@calcapp.route("/")
def main():
    return 'Post JSON to /api/calc'

@calcapp.route("/api/calc")
def add():
    a = int(request.args.get('a', 0))
    b = int(request.args.get('b', 0))
    return jsonify({
        "a"        :  a,
        "b"        :  b,
        "add"      :  a+b,
    })

import app


def test_app():
    web = app.calcapp.test_client()

    rv = web.get('/')
    assert rv.status == '200 OK'
    assert b'Post JSON to /api/calc' == rv.data

def test_calc():
    web = app.calcapp.test_client()

    rv = web.get('/api/calc?a=10&b=2')
    assert rv.status == '200 OK'
    assert rv.headers['Content-Type'] == 'application/json'
    resp = rv.json
    assert resp == {
        "a"        :  10,
        "b"        :  2,
        "add"      :  12,
    }


Pytest with Docker - test

{% embed include file="src/examples/pytest/docker/Dockerfile" %}

{% embed include file="src/examples/pytest/docker/.dockerignore" %}

import requests
import time
import os

def setup_module():
    global image_name
    image_name = 'test_image_' + str(int(time.time()*1000))
    print(f"image: {image_name}")
    print("setup_module ", os.system(f"docker build -t {image_name} ."))

def teardown_module():
    print("teardown_module ", os.system(f"docker rmi -f {image_name}"))


def setup_function():
    global port
    global container_name

    port = '5001'
    container_name = 'test_container_' + str(int(time.time()*1000))
    print(f"container: {container_name}")
    print("setup_function ", os.system(f"docker run --rm -d -v$(pwd):/workdir -p{port}:5000 --name {container_name} {image_name}"))
    time.sleep(1) # Let the Docker container start

def teardown_function():
    print("teardown_function ", os.system(f"docker stop -t 0 {container_name}"))

def test_app():
    url = f"http://localhost:{port}/api/calc?a=3&b=10"
    print(url)
    res = requests.get(url)
    assert res.status_code == 200
    assert res.json() == {'a': 3, 'add': 13, 'b': 10}

Pytest with Docker - improved

import pytest
import requests
import time
import os

@pytest.fixture(autouse = True, scope="module")
def image():
    image_name = 'test_image_' + str(int(time.time()*1000))
    print(f"image: {image_name}")
    print("setup_module ", os.system(f"docker build -t {image_name} ."))

    yield image_name

    print("teardown_module ", os.system(f"docker rmi -f {image_name}"))


@pytest.fixture()
def myport(image):
    port = '5001'
    container_name = 'test_container_' + str(int(time.time()*1000))
    print(f"container: {container_name}")
    print("setup_function ", os.system(f"docker run --rm -d -v$(pwd):/workdir -p{port}:5000 --name {container_name} {image}"))
    time.sleep(1) # Let the Docker container start

    yield port

    print("teardown_function ", os.system(f"docker stop -t 0 {container_name}"))

def test_app(myport):
    url = f"http://localhost:{myport}/api/calc?a=3&b=10"
    print(url)
    res = requests.get(url)
    assert res.status_code == 200
    assert res.json() == {'a': 3, 'add': 13, 'b': 10}

def test_app_again(myport):
    url = f"http://localhost:{myport}/api/calc?a=-1&b=1"
    print(url)
    res = requests.get(url)
    assert res.status_code == 200
    assert res.json() == {'a': -1, 'add': 0, 'b': 1}
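
One weakness of these fixtures: os.system only returns the exit status, so a failed docker build goes unnoticed until the HTTP requests start failing. A hypothetical helper that aborts setup early:

```python
import os

def run_or_fail(cmd):
    # os.system returns the command's exit status; raise on any failure
    status = os.system(cmd)
    if status != 0:
        raise RuntimeError(f"command failed with status {status}: {cmd}")
```

Inside the fixtures this would wrap the docker build / docker run / docker stop calls, so a broken image aborts the setup with a clear error instead of a confusing connection failure in the tests.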

Pytest fixture inject data

import pytest

@pytest.fixture()
def config():
    return {
       'name'  : 'Foo Bar',
       'email' : 'foo@bar.com',
    }

def test_some_data(config):
    assert True
    print(config)

Pytest fixture for MongoDB

# conftest.py

import pytest
import os, time

from app.common import get_mongo

@pytest.fixture(autouse = True)
def configuration():
    dbname = 'test_app_' + str(int(time.time()))
    os.environ['APP_DB_NAME'] = dbname

    yield

    get_mongo().drop_database(dbname)

Pytest parameters

import mymath
import pytest

def test_add_all():
    assert mymath.add(2, 3)  == 5
    assert mymath.add(-1, 1)  == 0

def test_add_1():
    assert mymath.add(2, 3)  == 5

def test_add_2():
    assert mymath.add(-1, 1)  == 0

@pytest.mark.parametrize("a,b,result", [
    (2, 3, 5),
    (-1, 1, 0),
])
def test_add(a, b, result):
    assert mymath.add(a, b)  == result


def test_div():
    assert mymath.div(6, 3)  == 2
    assert mymath.div(42, 1) == 42
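
Parametrization combines well with pytest.raises for the error path; a sketch using a local div function as a stand-in for mymath.div:

```python
import pytest

def div(a, b):
    # stand-in for mymath.div from the examples above
    return a / b

@pytest.mark.parametrize("a,b,result", [
    (6, 3, 2),
    (42, 1, 42),
])
def test_div(a, b, result):
    assert div(a, b) == result

def test_div_by_zero():
    # the error case gets its own test
    with pytest.raises(ZeroDivisionError):
        div(1, 0)
```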

Pytest parametrized fixture

Sometimes we would like to pass parameters to the fixture. In the following example the fixture receives the value that @pytest.mark.parametrize supplies to each test.

import os
import pathlib
import time
import pytest


@pytest.fixture(autouse = True, scope="function", params=["name"])
def generate(name):
    print(f"Fixture before test using {name}")
    yield
    print(f"Fixture after test using {name}")

@pytest.mark.parametrize("name", ["apple"])
def test_with_param(name):
    print(f"Test using {name}")

@pytest.mark.parametrize("name", ["banana"])
def test_without_param():
    print(f"Test not using param")

Output:

Fixture before test using apple
Test using apple
Fixture after test using apple

Fixture before test using banana
Test not using param
Fixture after test using banana

Pytest parametrized fixture with dependency injection

import os
import pathlib
import time
import pytest


@pytest.fixture(params=["name"])
def generate(name):
    print(f"Fixture before test using {name}")
    yield
    print(f"Fixture after test using {name}")

@pytest.mark.parametrize("name", ["apple"])
def test_with_param(name, generate):
    print(f"Test using {name}")

@pytest.mark.parametrize("name", ["banana"])
def test_without_param(generate):
    print(f"Test not using param")

Output:

Fixture before test using apple
Test using apple
Fixture after test using apple
Fixture before test using banana
Test not using param
Fixture after test using banana

Pytest parametrized fixture to use Docker

I created a GitHub Action for the OSDC site generator which runs inside a Docker container. At one point I wanted the whole process of building the image and running it to be part of the test.

import os
import pathlib
import time
import pytest


@pytest.fixture(autouse = True, scope="function", params=["name"])
def generate(name):
    image = f"osdc-test-{str(time.time())}"
    os.system(f'docker build -t {image} .')
    os.system(f'docker run --rm -w /data -v{os.getcwd()}/{name}:/data  {image}')
    yield
    os.system(f'docker rmi {image}')

@pytest.mark.parametrize("name", ["test1"])
def test_one(name):
    root = pathlib.Path(name)
    site = root.joinpath('_site')
    assert site.exists()
    assert site.joinpath('index.html').exists()
    pages = site.joinpath('osdc-skeleton')
    assert pages.exists()

    with pages.joinpath('about.html').open() as fh:
        html = fh.read()
    assert '<title>About</title>' in html


Pytest Mocking

Pytest: Mocking - why?

  • Testing environment that does not depend on external systems.

  • Faster tests (mock remote calls, mock whole databases, etc.)

  • Fake some code/application/API that does not exist yet.

  • Test error conditions in a system not under our control.

  • TDD, unit tests

  • Spaghetti code

  • Simulate hard to replicate cases

  • 3rd party APIs or applications

Pytest: Mocking - what?

  • Hard-coded path in code.
  • STDIN/STDOUT/STDERR.
  • External dependency (e.g. an API).
  • Random values.
  • Methods accessing a database.
  • Time.

Pytest: What is Mocking? - Test Doubles

Pytest: Monkey Patching

Pytest: Hard-coded path

import json

data_file = "/corporate/fixed/path/data.json"

def do_something():
    print(data_file)
    #with open(data_file) as fh:
    #    data = json.load(fh)
    #    ...

Pytest: Hard-coded path - manually replace attribute

import app

def test_app():
    app.data_file = 'test_1.json'    # manually overwrite

    res = app.do_something()       # it is now test_1.json
    ...

def test_again():
    res = app.do_something()      # it is still test_1.json
    ...

Pytest: Hard-coded path - monkeypatch attribute

  • monkeypatch
  • setattr
import app

def test_sum(monkeypatch):
    monkeypatch.setattr(app, 'data_file', 'test_1.json')

    res = app.do_something()    # It is now test_1.json
    ...


def test_again():
    res = app.do_something() # back to the original value
    ...

Pytest: Hard-coded path - monkeypatch attribute - tempdir

  • monkeypatch
  • setattr
  • tmpdir
import app

def test_sum(monkeypatch, tmpdir):
    fake_file = tmpdir.join('test_1.json')
    monkeypatch.setattr(app, 'data_file', fake_file)

    res = app.do_something()
    ...

def test_again():
    res = app.do_something()    # back to the original value
    ...

Pytest: Mocking slow external API call

import externalapi

def compute(x, y):
    xx = externalapi.remote_compute(x)
    yy = externalapi.remote_compute(y)
    result = (xx+yy) ** 0.5
    return result
import time

def remote_compute(x):
    time.sleep(5) # to imitate long running process
    return x*x

import mymath

print(mymath.compute(3, 4))

Pytest: Mocking slow external API call - manually replace function

import mymath

def mocked_remote_compute(x):
    print(f"mocked received {x}")
    if x == 3:
        return 9
    if x == 4:
        return 16

mymath.externalapi.remote_compute = mocked_remote_compute

def test_compute():
    assert mymath.compute(3, 4) == 5


def test_other():
    res = mymath.compute(2, 7)
    ...

Pytest: Mocking slow external API call - manually replace function - broken remote

import mymath

def mocked_remote_compute(x):
    if x == 3:
        return '9'
    if x == 4:
        return 16

mymath.externalapi.remote_compute = mocked_remote_compute

# What should we really expect here?
# I don't want to see a Python-level exception
# Maybe an exception of our own.
def test_compute_breaks():
    assert mymath.compute(3, 4) == 5

Pytest: Mocking slow external API call - monkeypatch

import mymath

def mocked_remote_compute(x):
    print(f"mocked received {x}")
    if x == 3:
        return 9
    if x == 4:
        return 16


def test_compute(monkeypatch):
    monkeypatch.setattr(mymath.externalapi, 'remote_compute', mocked_remote_compute)
    assert mymath.compute(3, 4) == 5
    ...

def test_other(monkeypatch):
    def mocked_remote_compute(x):
        print(f"other mocked received {x}")
        if x == 6:
            return 36
        if x == 8:
            return 64
    monkeypatch.setattr(mymath.externalapi, 'remote_compute', mocked_remote_compute)
    assert mymath.compute(6, 8) == 10
    ...

Pytest: Mocking STDIN

def ask_one():
    name = input("Please enter your name: ")
    print(f"Your name is {name}")

def ask_two():
    width = float(input("Please enter width: "))
    length = float(input("Please enter length: "))
    result = width * length
    print(f"{width}*{length} is {result}")
import app

app.ask_one()
#app.ask_two()

Pytest: Mocking STDIN manually mocking

import app
import io
import sys

def test_app(capsys):
    sys.stdin = io.StringIO('Foo')
    app.ask_one()
    out, err = capsys.readouterr()
    assert err == ''
    #print(out)
    assert out == 'Please enter your name: Your name is Foo\n'

def test_app_again(capsys):
    ...   # still the same handle
import app
import io
import sys

def test_app(capsys):
    sys.stdin = io.StringIO('3\n4')
    app.ask_two()
    out, err = capsys.readouterr()
    assert err == ''
    #print(out)
    assert out == 'Please enter width: Please enter length: 3.0*4.0 is 12.0\n'

Pytest: Mocking STDIN - monkeypatch

import app
import io
import sys

def test_one(capsys, monkeypatch):
    monkeypatch.setattr(sys, 'stdin', io.StringIO('Foo'))
    app.ask_one()
    out, err = capsys.readouterr()
    assert err == ''
    #print(out)
    assert out == 'Please enter your name: Your name is Foo\n'

def test_two(monkeypatch, capsys):
    monkeypatch.setattr(sys, 'stdin', io.StringIO('3\n4'))
    app.ask_two()
    out, err = capsys.readouterr()
    assert err == ''
    #print(out)
    assert out == 'Please enter width: Please enter length: 3.0*4.0 is 12.0\n'
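
Instead of replacing sys.stdin, monkeypatch can replace the input builtin itself, which sidesteps newline handling entirely. A sketch with a made-up function standing in for app.ask_one:

```python
def ask_name():
    # stand-in for app.ask_one above
    return input("Please enter your name: ")

def test_ask_name(monkeypatch):
    # patch the builtin by its dotted name; restored after the test
    monkeypatch.setattr("builtins.input", lambda prompt="": "Foo")
    assert ask_name() == "Foo"
```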

Pytest: Mocking random numbers - the application

import random

class Game():
    def __init__(self):
        self.hidden = random.randint(1, 200)
        #print(hidden)

    def guess(self, guessed_number):
        if self.hidden == guessed_number:
            return 'match'
        if guessed_number < self.hidden:
            return 'too small'
        return 'too big'

Pytest: Mocking random numbers

import app

def test_game(monkeypatch):
    monkeypatch.setattr(app.random, 'randint', lambda x, y: 70)
    game = app.Game()
    print(game.hidden)
    response = game.guess(100)
    assert response == 'too big'

    response = game.guess(50)
    assert response == 'too small'

    response = game.guess(70)
    assert response == 'match'

Pytest: Mocking multiple random numbers

import random

def random_sum(n):
    total = 0
    for _ in range(n):
        current = random.randint(0, 10)
        #print(current)
        total += current
    return total

import app

result = app.random_sum(3)
print(result)
import app

def test_random_sum(monkeypatch):
    values = [2, 3, 4]
    monkeypatch.setattr(app.random, 'randint', lambda x, y: values.pop(0))
    result = app.random_sum(3)
    assert result == 9
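
An equivalent sketch feeds the values through an iterator instead of popping from a list; next() fails loudly if the code under test asks for more values than we prepared:

```python
import random

def random_sum(n):
    # same shape as app.random_sum above
    return sum(random.randint(0, 10) for _ in range(n))

def test_random_sum(monkeypatch):
    values = iter([2, 3, 4])
    # each call to the fake randint consumes the next prepared value
    monkeypatch.setattr(random, "randint", lambda x, y: next(values))
    assert random_sum(3) == 9
```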

Pytest: Mocking environment variables

import subprocess

def get_python_version():
    proc = subprocess.Popen(['python', '-V'],
        stdout = subprocess.PIPE,
        stderr = subprocess.PIPE,
    )

    out,err = proc.communicate()
    if proc.returncode:
        raise Exception(f"Error exit {proc.returncode}")
    #if err:
    #    raise Exception(f"Error {err}")
    if out:
        return out.decode('utf8') # In Python 3.8.6
    else:
        return err.decode('utf8') # In Python 2.7.18
import app
import os

def test_python():
    out = app.get_python_version()
    assert out == 'Python 3.8.6\n'

def test_in_path(monkeypatch):
    monkeypatch.setenv('PATH', '/usr/bin')
    out = app.get_python_version()
    assert out == 'Python 2.7.18\n'
    print(os.environ['PATH'])
    print()

def test_other():
    print(os.environ['PATH'])
    print()

def test_keep(monkeypatch):
    monkeypatch.setenv('PATH', '/usr/bin' + os.pathsep + os.environ['PATH'])
    print(os.environ['PATH'])

Pytest: Mocking time

There are several different problems with time:

  • A login that should expire after 24 hours. We don't want to wait 24 hours.
  • Some code that must be executed on certain dates. (e.g. January 1st every year)

Pytest: Mocking time (test expiration)

In this application the user can "login" by providing their name and then call the access_page method within session_length seconds.

Because we know it internally uses the time.time function to retrieve the current time (in seconds since the epoch), we can replace that function with one of our own that fakes a time in the future.

import time

class App():
    session_length = 10

    def login(self, username):
        self.username = username
        self.start = time.time()

    def access_page(self, username):
        if self.username == username and self.start + self.session_length > time.time():
            return 'approved'
        else:
            return 'expired'

import app
import time


def test_app(monkeypatch):
    user = app.App()
    user.login('foo')
    assert user.access_page('foo') == 'approved'
    current = time.time()
    print(current)

    monkeypatch.setattr(app.time, 'time', lambda : current + 9)
    assert user.access_page('foo') == 'approved'

    monkeypatch.setattr(app.time, 'time', lambda : current + 11)
    assert user.access_page('foo') == 'expired'

Pytest: mocking specific timestamp with datetime

This function will return one of 3 strings based on the date: new_year on January 1st, leap_day on February 29, and regular on every other day. How can we test it?

import datetime

def daily_task():
    now = datetime.datetime.now()
    print(now)
    if now.month == 1 and now.day == 1:
        return 'new_year'
    if now.month == 2 and now.day == 29:
        return 'leap_day'
    return 'regular'
import app

task_name = app.daily_task()
print(task_name)

Pytest: mocking specific timestamp with datetime

import app
import datetime

def test_new_year(monkeypatch):
    mydt = datetime.datetime
    class MyDatetime():
        def now():
            return mydt(2000, 1, 1)

    monkeypatch.setattr(app.datetime, 'datetime', MyDatetime)
    task_name = app.daily_task()
    print(task_name)
    assert task_name == 'new_year'


def test_leap_year(monkeypatch):
    mydt = datetime.datetime
    class MyDatetime():
        def now():
            return mydt(2004, 2, 29)

    monkeypatch.setattr(app.datetime, 'datetime', MyDatetime)
    task_name = app.daily_task()
    print(task_name)
    assert task_name == 'leap_day'


def test_today(monkeypatch):
    mydt = datetime.datetime
    class MyDatetime():
        def now():
            return mydt(2004, 2, 28)

    monkeypatch.setattr(app.datetime, 'datetime', MyDatetime)
    task_name = app.daily_task()
    print(task_name)
    assert task_name == 'regular'

Pytest: mocking datetime.date.today

The datetime class has other methods to retrieve the date (and I could not find how to mock the function deep inside).

import datetime

def get_today():
    return datetime.date.today()

import app

today = app.get_today()
print(type(today))
print(today)
import app
import datetime

def test_new_year(monkeypatch):
    mydt = datetime.date
    class MyDate():
        def today():
            return mydt(2000, 1, 1)

    monkeypatch.setattr(app.datetime, 'date', MyDate)
    today = app.get_today()
    #print(today)
    assert str(today) == '2000-01-01'

def test_leap_year(monkeypatch):
    mydt = datetime.date
    class MyDate():
        def today():
            return mydt(2004, 2, 29)

    monkeypatch.setattr(app.datetime, 'date', MyDate)
    today = app.get_today()
    #print(today)
    assert str(today) == '2004-02-29'


Pytest: mocking datetime date

import datetime
import math

def get_years_passed_category(date_string):
    date = datetime.date.fromisoformat(date_string)
    time_passed = datetime.date.today() - date
    years_passed = time_passed.days // 365
    years_passed_start_and_end_range_tuples_to_category = {
        (0, 1):    "Less than 1 year",
        (1, 5):    "1 - 5 years",
        (5, 10):   "5 - 10 years",
        (10, 20):  "10 - 20 years",
        (20, 30):  "20 - 30 years",
        (30, math.inf): "More than 30 years"
    }
    for (start, end), category in years_passed_start_and_end_range_tuples_to_category.items():
        if start <= years_passed < end:
            return category

    raise ValueError(f"Could not find a years_passed_category for '{date_string}'")
import app

for date in ['2000-01-01', '1990-06-02', '2020-01-01']:
    cat = app.get_years_passed_category(date)
    print(f"{date} : {cat}")

import app
import datetime
import pytest

def test_app(monkeypatch):
    mydt = datetime.date
    class MyDate():
        def today():
            return mydt(2021, 2, 15)
        def fromisoformat(date_str):
            return mydt.fromisoformat(date_str)

    monkeypatch.setattr(app.datetime, 'date', MyDate)

    assert app.get_years_passed_category('1990-06-02') == 'More than 30 years'
    assert app.get_years_passed_category('2000-01-01') == '20 - 30 years'
    assert app.get_years_passed_category('2011-01-01') == '10 - 20 years'
    assert app.get_years_passed_category('2016-01-01') == '5 - 10 years'
    assert app.get_years_passed_category('2020-01-01') == '1 - 5 years'
    assert app.get_years_passed_category('2021-02-14') == 'Less than 1 year'
    assert app.get_years_passed_category('2021-02-15') == 'Less than 1 year'


    with pytest.raises(Exception) as err:
        app.get_years_passed_category('2021-02-16')
    assert err.type == ValueError
    assert str(err.value) == "Could not find a years_passed_category for '2021-02-16'"

Pytest: One-dimensional spacefight

import random

def play():
    debug = False
    move = False
    while True:
        print("\nWelcome to another Number Guessing game")
        hidden = random.randrange(1, 201)
        while True:
            if debug:
                print("Debug: ", hidden)
    
            if move:
                mv = random.randrange(-2, 3)
                hidden = hidden + mv
    
            user_input = input("Please enter your guess [x|s|d|m|n]: ")
            print(user_input)
    
            if user_input == 'x':
                print("Sad to see you leave early")
                return
    
            if user_input == 's':
                print("The hidden value is ", hidden)
                continue
    
            if user_input == 'd':
                debug = not debug
                continue
    
            if user_input == 'm':
                move = not move
                continue
    
            if user_input == 'n':
                print("Giving up, eh?")
                break
    
            guess = int(user_input)
            if guess == hidden:
                print("Hit!")
                break
    
            if guess < hidden:
                print("Your guess is too low")
            else:
                print("Your guess is too high")
    

if __name__ == '__main__':
    play()

Pytest: Mocking input and output in the game

import game
import sys
import io

def test_immediate_exit(monkeypatch, capsys):
    monkeypatch.setattr(sys, 'stdin', io.StringIO('x'))

    game.play()
    out, err = capsys.readouterr()
    assert err == ''

    expected = '''
Welcome to another Number Guessing game
Please enter your guess [x|s|d|m|n]: x
Sad to see you leave early
'''
    assert out == expected

Pytest: Mocking input and output in the game - no tools

import game

def test_immediate_exit():
    input_values = ['x']
    output = []

    def mock_input(s):
       output.append(s)
       return input_values.pop(0)
    game.input = mock_input
    game.print = lambda s : output.append(s)

    game.play()

    assert output == [
        '\nWelcome to another Number Guessing game',
        'Please enter your guess [x|s|d|m|n]: ',
        'x',
        'Sad to see you leave early',
    ]

Pytest: Mocking random in the game

import game
import random
import sys
import io

def test_immediate_exit(monkeypatch, capsys):
    input_values = '\n'.join(['30', '50', '42', 'x'])
    monkeypatch.setattr(sys, 'stdin', io.StringIO(input_values))

    monkeypatch.setattr(random, 'randrange', lambda a, b : 42)

    game.play()
    out, err = capsys.readouterr()

    assert out == '''
Welcome to another Number Guessing game
Please enter your guess [x|s|d|m|n]: 30
Your guess is too low
Please enter your guess [x|s|d|m|n]: 50
Your guess is too high
Please enter your guess [x|s|d|m|n]: 42
Hit!

Welcome to another Number Guessing game
Please enter your guess [x|s|d|m|n]: x
Sad to see you leave early
'''

Pytest: Mocking random in the game - no tools

import game
import random

def test_immediate_exit():
    input_values = ['30', '50', '42', 'x']
    output = []

    def mock_input(s):
       output.append(s)
       return input_values.pop(0)
    game.input = mock_input
    game.print = lambda s : output.append(s)
    random.randrange = lambda a, b : 42

    game.play()

    assert output == [
        '\nWelcome to another Number Guessing game',
        'Please enter your guess [x|s|d|m|n]: ',
        '30',
        'Your guess is too low',
        'Please enter your guess [x|s|d|m|n]: ',
        '50',
        'Your guess is too high',
        'Please enter your guess [x|s|d|m|n]: ',
        '42',
        'Hit!',
        '\nWelcome to another Number Guessing game',
        'Please enter your guess [x|s|d|m|n]: ',
        'x',
        'Sad to see you leave early',
    ]

Pytest: Flask app sending mail

from flask import Flask, request
import random

app = Flask(__name__)
db = {}

@app.route('/', methods=['GET'])
def home():
    return '''
          <form method="POST" action="/register">
          <input name="email">
          <input type="submit">
          </form>
          '''

@app.route('/register', methods=['POST'])
def register():
    email = request.form.get('email')
    code = str(random.random())
    if db_save(email, code):
        html = '<a href="/verify/{email}/{code}">here</a>'.format(email=email, code = code)
        sendmail({'to': email, 'subject': 'Registration', 'html': html })
        return 'OK'
    else:
        return 'FAILED'

@app.route('/verify/<email>/<code>', methods=['GET'])
def verify(email, code):
    if db_verify(email, code):
        sendmail({'to': email, 'subject': 'Welcome!', 'html': '' })
        return 'OK'
    else:
        return 'FAILED'

def sendmail(data):
    pass

def db_save(email, code):
   if email in db:
       return False
   db[email] = code
   return True

def db_verify(email, code):
    return email in db and db[email] == code

Pytest: Mocking Flask app sending mail

import app
import re

def test_main_page():
    aut = app.app.test_client()

    rv = aut.get('/')
    assert rv.status == '200 OK'
    assert '<form' in str(rv.data)
    assert not 'Welcome back!' in str(rv.data)


def test_verification(monkeypatch):
    aut = app.app.test_client()

    email = 'foo@example.com'

    messages = []
    monkeypatch.setattr('app.sendmail', lambda params: messages.append(params) )

    rv = aut.post('/register', data=dict(email = email ))
    assert rv.status == '200 OK'
    assert 'OK' in str(rv.data)
    print(messages)
    # [{'to': 'foo@example.com', 'subject': 'Registration', 'html': '<a href="/verify/python@example.com/0.81280014">here</a>'}]

    # Remove the html part that we will verify and use later
    html = messages[0].pop('html')

    # Check that the rest of the email is correct
    assert messages == [{'to': 'foo@example.com', 'subject': 'Registration'}]

    # This is the code that we would have received in the email:
    match = re.search(r'/(\d\.\d+)"', html)
    if match:
        code = match.group(1)
    print(code)

    # After the successful verification another email is sent.
    messages = []
    rv = aut.get('/verify/{email}/{code}'.format(email = email, code = code ))
    assert rv.status == '200 OK'
    assert 'OK' in str(rv.data)

    assert messages == [{'to': email, 'subject': 'Welcome!', 'html': ''}]

def test_invalid_verification(monkeypatch):
    aut = app.app.test_client()

    email = 'bar@example.com'

    messages = []
    monkeypatch.setattr('app.sendmail', lambda params: messages.append(params) )

    rv = aut.post('/register', data=dict(email = email ))
    assert rv.status == '200 OK'
    assert 'OK' in str(rv.data)

    messages = []
    # Test what happens if we use an incorrect code to verify the email address:
    rv = aut.get('/verify/{email}/{code}'.format(email = email, code = 'other' ))
    assert rv.status == '200 OK'
    assert 'FAILED' in str(rv.data)

    # No email was sent
    assert messages == []

Pytest: Mocking - collecting stats example

import requests

# An application that allows us to monitor keyword frequency on some popular websites.
# The process:
#    - get the URLs from the database
#    - fetch the content of each page
#    - get the frequency of keywords for each page
#    - get the previous values from the database
#    - update the database with the new values
#    - send an e-mail reporting the changes.

def get_urls():
    #raise Exception('accessing the database')
    return ['https://code-maven.com/']

def get_content(url, depth):
    #raise Exception(f'download content from {url}')
    return "Python Python Pytest Monkey patch Python"

def get_stats(text, limit=None):
    #raise Exception('getting stats from some text')
    return {}

def get_stats_from_db(url):
    #raise Exception('getting stats from database')
    return {}

def create_report(old, new):
    #raise Exception('create report')
    return ''

def send_report(report, subject, to):
    #raise Exception(f'send report to {to}')
    return ''

def main():
    depth = 3
    limit = 17
    boss = 'boss@code-maven.com'
    subject = 'Updated stats'
    urls = get_urls()
    for url in urls:
        content = get_content(url, depth)
        new_stats = get_stats(content, limit)
        old_stats = get_stats_from_db(url)
        report = create_report(old_stats, new_stats)
        send_report(report, subject, boss)

if __name__ == '__main__':
    main()
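The example above ends without a test. One way to test main is to replace every collaborator with a recording fake, in the same "no tools" style shown earlier. The following is a self-contained sketch, not the module above: the helper bodies are simplified stand-ins, and the ones that would touch the database or the network raise, proving the test never reaches them.

```python
def get_urls():
    raise RuntimeError('accessing the database')

def get_content(url, depth):
    raise RuntimeError('downloading content')

def get_stats(text, limit=None):
    # simplified stand-in: count occurrences of one keyword
    return {'Python': text.split().count('Python')}

def get_stats_from_db(url):
    raise RuntimeError('reading stats from the database')

def create_report(old, new):
    return 'Python: {} -> {}'.format(old['Python'], new['Python'])

def send_report(report, subject, to):
    raise RuntimeError('sending mail')

def main():
    depth = 3
    limit = 17
    boss = 'boss@code-maven.com'
    subject = 'Updated stats'
    for url in get_urls():
        content = get_content(url, depth)
        new_stats = get_stats(content, limit)
        old_stats = get_stats_from_db(url)
        report = create_report(old_stats, new_stats)
        send_report(report, subject, boss)

def test_main():
    # replace the module-level names with recording fakes
    global get_urls, get_content, get_stats_from_db, send_report
    sent = []
    get_urls = lambda: ['https://code-maven.com/']
    get_content = lambda url, depth: 'Python Python Pytest'
    get_stats_from_db = lambda url: {'Python': 1}
    send_report = lambda report, subject, to: sent.append((report, subject, to))

    main()

    assert sent == [('Python: 1 -> 2', 'Updated stats', 'boss@code-maven.com')]
```

In a real project the fakes would be installed with monkeypatch.setattr so they are undone after the test, instead of the plain assignments used here.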

Pytest command line options

PyTest: Run tests in parallel with xdist

  • xdist
$ pip install pytest-xdist
$ pytest -n NUM
import time

def test_dog():
    time.sleep(2)

def test_cat():
    time.sleep(2)

import time

def test_blue():
    time.sleep(2)

def test_green():
    time.sleep(2)

pytest          8.04 sec
pytest -n 2     4.64 sec
pytest -n 4     3.07 sec

PyTest: Order of tests

Pytest runs the tests in the same order as they appear in the test module:

def test_one():
    assert True

def test_two():
    assert True

def test_three():
    assert True

pytest -v

test_order.py::test_one PASSED
test_order.py::test_two PASSED
test_order.py::test_three PASSED

PyTest: Randomize Order of tests

Install pytest-random-order

pip install pytest-random-order

From now on we can use the --random-order flag to run the tests in a random order.

pytest -v --random-order

test_order.py::test_two PASSED
test_order.py::test_three PASSED
test_order.py::test_one PASSED

PyTest: Force default order

If for some reason we would like to make sure the tests in a given module keep running in their original order, we can add the following two lines of code:

import pytest
pytestmark = pytest.mark.random_order(disabled=True)
import pytest
pytestmark = pytest.mark.random_order(disabled=True)

def test_one():
    assert True

def test_two():
    assert True

def test_three():
    assert True


PyTest test discovery

Running pytest will find the test files and, within those files, the test functions.

  • test_*.py files
  • *_test.py files
  • test_* functions
  • TestSomething class
  • test_* methods
examples/pytest/discovery
.
├── db
│   └── test_db.py
├── other_file.py
├── test_one.py
└── two_test.py

============================= test session starts ==============================
platform linux -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1 -- /home/gabor/venv3/bin/python
cachedir: .pytest_cache
plugins: json-report-1.2.4, random-order-1.0.4, flake8-1.0.6, forked-1.3.0, dash-1.17.0, metadata-1.11.0, xdist-2.2.1
collecting ... collected 3 items

test_one.py::test_1 PASSED                                               [ 33%]
two_test.py::test_2 PASSED                                               [ 66%]
db/test_db.py::test_db PASSED                                            [100%]

============================== 3 passed in 0.02s ===============================

PyTest test discovery - ignore some tests

$ pytest


$ pytest --ignore venv3/
test_mymod_1.py .
test_mymod_2.py .F

Pytest dry-run - collect-only

  • Find all the test files, test classes, test functions that will be executed.
  • But don't run them...
  • ... but they are still loaded into memory so any code in the "body" of the files is executed.
pytest --collect-only
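To see that "body" code runs during collection, imagine a hypothetical test file with a print at module level: even with --collect-only, the line is printed, because pytest imports the file in order to collect it.

```python
# test_body.py (hypothetical file name)
# This line is module-level "body" code: it runs when pytest imports
# the file during collection, even under --collect-only.
print("collecting...")

def test_something():
    assert True
```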

PyTest select tests by name

  • --collect-only - only list the tests, don't run them yet.

  • -k - select tests by name


def test_database_read():
    assert True

def test_database_write():
    assert True

def test_database_forget():
    assert True

def test_ui_access():
    assert True

def test_ui_forget():
    assert True
pytest --collect-only test_by_name.py
    test_database_read
    test_database_write
    test_database_forget
    test_ui_access
    test_ui_forget
pytest --collect-only -k database test_by_name.py
    test_database_forget
    test_database_read
    test_database_write
pytest --collect-only -k ui test_by_name.py
    test_ui_access
    test_ui_forget
pytest --collect-only -k forget test_by_name.py
    test_database_forget
    test_ui_forget
pytest --collect-only -k "forget or read" test_by_name.py
    test_database_read
    test_database_forget
    test_ui_forget

Pytest use markers to select tests

  • -m - select tests by marker
import pytest

@pytest.mark.long
def test_database_read():
    assert True

============================= test session starts ==============================
platform linux -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /home/gabor/work/slides/python/examples/pytest
plugins: flake8-1.0.6, dash-1.17.0
collected 1 item

test_marker.py .                                                         [100%]

=============================== warnings summary ===============================
test_marker.py:3
  test_marker.py:3: PytestUnknownMarkWarning: Unknown pytest.mark.long - is this a typo?
    You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
    @pytest.mark.long

-- Docs: https://docs.pytest.org/en/stable/warnings.html
========================= 1 passed, 1 warning in 0.00s =========================
  • We need to declare the markers in pytest.ini to avoid this warning:

{% embed include file="src/examples/pytest/markers/pytest.ini" %}

PyTest select tests by marker

  • --collect-only

  • -m

  • @pytest.mark

  • Use the @pytest.mark.name decorator to tag the tests.

import pytest

@pytest.mark.smoke
def test_database_read():
    assert True

@pytest.mark.security
@pytest.mark.smoke
def test_database_write():
    assert True

@pytest.mark.security
def test_database_forget():
    assert True

@pytest.mark.smoke
def test_ui_access():
    assert True

@pytest.mark.security
def test_ui_forget():
    assert True

pytest --collect-only -m security test_by_marker.py
    test_ui_forget
    test_database_write
    test_database_forget
pytest --collect-only -m smoke test_by_marker.py
    test_database_read
    test_ui_access
    test_database_write

No test selected

If you run pytest and it cannot find any tests, for example because you used some selector and no test matched it, then pytest will exit with exit code 5.

This is considered a failure by every tool, including Jenkins and other CI systems.

On the other hand, you won't see any failed tests reported. After all, if no tests were run, none of them could fail. This can be confusing.

$ pytest -k long test_by_marker.py

============================= test session starts ==============================
platform linux -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /home/gabor/work/slides/python/examples/pytest, configfile: pytest.ini
plugins: flake8-1.0.6, dash-1.17.0
collected 5 items / 5 deselected

============================ 5 deselected in 0.00s =============================
$ echo $?
5
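In a CI pipeline where "no tests matched" should not count as a failure, one option is to wrap pytest and translate exit code 5 to 0. A hypothetical wrapper function (run_tests is an assumed name, not a pytest feature):

```shell
# Hypothetical CI wrapper: treat pytest's exit code 5 ("no tests collected")
# as success, while still propagating real test failures.
run_tests() {
    # the "if" keeps a non-zero status from aborting scripts run with "set -e"
    if pytest "$@"; then
        status=0
    else
        status=$?
        if [ "$status" -eq 5 ]; then
            status=0
        fi
    fi
    return "$status"
}
```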

Pytest reporting in JUnit XML or JSON format

import pytest

def test_blue():
    pass

def test_red():
    assert 1 == 2

@pytest.mark.skip(reason="So we can show skip reporting")
def test_purple():
    assert 1 == 3

@pytest.mark.xfail(reason = "To show xfail that really fails")
def test_orange():
    1 == 4

@pytest.mark.xfail(reason = "To show xfail that passes")
def test_green():
    pass

Pytest reporting in JUnit XML format

  • e.g. for Jenkins integration
  • See usage
pytest --junitxml report.xml

{% embed include file="src/examples/pytest/reporting/report.xml" %}

To make the XML more human-readable:

cat report.xml | python -c 'import sys;import xml.dom.minidom;s=sys.stdin.read();print(xml.dom.minidom.parseString(s).toprettyxml())'

Pytest reporting in JSON format

pip install pytest-json-report

pytest --json-report --json-report-file=report.json --json-report-indent=4

It is recommended to also add:

--json-report-omit=log
pytest -s --json-report --json-report-file=report.json --log-cli-level=INFO

Pytest JSON report

{% embed include file="src/examples/pytest/reporting/report.json" %}

Add extra command line parameters to Pytest - conftest - getoption

  • conftest

  • In this case the option expects a value

  • And we need to use getoption to get the value

  • See Parser

  • See argparse

def pytest_addoption(parser):
    parser.addoption("--demo")
    parser.addoption("--noisy", action='store_true')
def test_me(request):
    print(request.config.getoption("--demo"))
    print(request.config.getoption("--noisy"))

pytest -s
None
False
pytest -s --demo Hello --noisy
Hello
True
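parser.addoption passes its keyword arguments through argparse-style, so the usual keywords such as default and help also work. A sketch of a conftest.py for the same hypothetical --demo option:

```python
# conftest.py (sketch)
def pytest_addoption(parser):
    parser.addoption(
        '--demo',
        action='store',
        default='Hello',  # returned by getoption when --demo is not given
        help='demo: value made available to the tests',
    )
```

With a default in place, request.config.getoption("--demo") returns 'Hello' instead of None when the flag is omitted.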

Add extra command line parameters to Pytest - as a fixture

  • conftest

  • We can also create a fixture that will read the parameter

import pytest

def pytest_addoption(parser):
    parser.addoption("--demo")

@pytest.fixture
def mydemo(request):
    return request.config.getoption("--demo")
def test_me(mydemo):
    print(mydemo)
pytest -s
test_one.py None
pytest -s --demo Hello
test_one.py Hello

Add extra command line parameters to Pytest - used in the autouse fixtures

  • conftest
#import pytest
#
def pytest_addoption(parser):
    parser.addoption("--demo")
#
#@pytest.fixture
#def demo(request):
#    return request.config.getoption("--demo")
import pytest

@pytest.fixture(autouse = True, scope="module")
def module_demo(request):
    demo = request.config.getoption("--demo")
    print(f"Module {demo}")
    return demo


@pytest.fixture(autouse = True, scope="function")
def func_demo(request):
    demo = request.config.getoption("--demo")
    print(f"Func {demo}")
    return demo


def test_me():
    pass

def test_two():
    pass
pytest -s --demo Hello

Module Hello
Func Hello
Func Hello

PyTest: Test Coverage

def fib(n):
    if n < 1:
        raise ValueError(f'Invalid parameter was given {n}')
    a, b = 1, 1
    for _ in range(1, n):
        a, b = b, a+b
    return a

def add(x, y):
    return x + y

def area(x, y):
    if x > 0 and y > 0:
        return x * y
    else:
        return None
import mymod

def test_add():
    assert mymod.add(2, 3) == 5

def test_area():
    assert mymod.area(2, 3) == 6
    assert mymod.area(-2, 3) == None

def test_fib():
    assert mymod.fib(1) == 1
pip install pytest-cov

pytest --cov=mymod --cov-report html --cov-branch

Open htmlcov/index.html
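With --cov-branch, the report would flag that the error branch of fib is never taken by the tests above. One more test covers it; fib is inlined here so the sketch is self-contained, while in the real layout the test would import mymod:

```python
import pytest

def fib(n):
    # copied from mymod above
    if n < 1:
        raise ValueError(f'Invalid parameter was given {n}')
    a, b = 1, 1
    for _ in range(1, n):
        a, b = b, a + b
    return a

def test_fib_invalid():
    # exercises the branch that raises, closing the coverage gap
    with pytest.raises(ValueError):
        fib(0)
```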

Pytest and flake8

import sys

def add(a):
    return a

def add(x, y):
   z = 42
   sum = x+y
   return sum

print = 42
import mymod

def test_add():
    assert mymod.add(2, 3) == 5

{% embed include file="src/examples/pytest/flake/.flake8" %}

pip install flake8
pip install pytest-flake8
pip install flake8-builtins

flake8
rm -rf .pytest_cache/
pytest --flake8

Pytest and mypy

pip install mypy
pip install pytest-mypy

mypy mymod.py

pytest --mypy
import sys

z = "23"
z = int(z)

def add(x, y):
   return x + y
import mymod

x = "23"
x = int(x)

def test_add():
    assert mymod.add(2, 3) == 5

Excluding files works when running mypy directly, but it does not exclude them when running pytest --mypy:

{% embed include file="src/examples/pytest/mypy/mypy.ini" %}

Not even this:

{% embed include file="src/examples/pytest/mypy/pytest.ini" %}

Pytest - other

Testing with Pytest

A module called mymath with two functions: add and div.


def add(x, y):
    """Adding two numbers

    >>> add(2, 3)
    5

    """
    return x + y

def div(x, y):
    """Dividing two numbers

    >>> div(8, 2)
    4.0
    >>> div(8, 0)
    Traceback (most recent call last):
    ...
    ZeroDivisionError: division by zero

    """
    return x / y

Testing functions

import mymath

def test_math():
    assert mymath.add(2, 3)  == 5
    assert mymath.div(6, 3)  == 2
    assert mymath.div(42, 1) == 42
    assert mymath.add(-1, 1) == 0

Testing class and methods

import mymath

class TestMath():
    def test_math(self):
        assert mymath.add(2, 3)  == 5
        assert mymath.div(6, 3)  == 2
        assert mymath.div(42, 1) == 42
        assert mymath.add(-1, 1) == 0

Pytest - execute

pytest test_mymath.py
============================= test session starts ==============================
platform darwin -- Python 3.6.3, pytest-3.3.0, py-1.5.2, pluggy-0.6.0
rootdir: /Users/gabor/work/training/python, inifile:
collected 1 item

examples/pytest/test_mymath.py .                                         [100%]

=========================== 1 passed in 0.01 seconds ===========================

Pytest - execute

pytest
python -m pytest

Pytest simple module to be tested

An anagram is a pair of words containing the exact same letters in different order. For example:

  • listen silent
  • elvis lives
def is_anagram(a_word, b_word):
    return sorted(a_word) == sorted(b_word)

Pytest simple tests - success

  • assert
  • pytest
from mymod_1 import is_anagram

def test_anagram():
    assert is_anagram("elvis", "lives")
    assert is_anagram("silent", "listen")
    assert not is_anagram("one", "two")

Pytest simple tests - success output

$ pytest test_mymod_1.py

===================== test session starts ======================
platform darwin -- Python 3.5.2, pytest-3.0.7, py-1.4.33, pluggy-0.4.0
rootdir: /examples/python/pt, inifile:
collected 1 items

test_mymod_1.py .

=================== 1 passed in 0.03 seconds ===================

Pytest simple tests - failure

  • Failure reported by a user: is_anagram("anagram", "nag a ram") is expected to return True.
  • We write a test case to reproduce the problem. It should fail now.
from mymod_1 import is_anagram

def test_anagram():
    assert is_anagram("elvis", "lives")
    assert is_anagram("silent", "listen")
    assert not is_anagram("one", "two")

def test_multiword_anagram():
    assert is_anagram("ana gram", "naga ram")
    assert is_anagram("anagram", "nag a ram")

Pytest simple tests - failure output

$ pytest test_mymod_2.py

===================== test session starts ======================
platform darwin -- Python 3.5.2, pytest-3.0.7, py-1.4.33, pluggy-0.4.0
rootdir: /examples/python/pt, inifile:
collected 2 items

test_mymod_2.py .F

=========================== FAILURES ===========================
____________________ test_multiword_anagram ____________________

    def test_multiword_anagram():
       assert is_anagram("ana gram", "naga ram")
>      assert is_anagram("anagram", "nag a ram")
E      AssertionError: assert False
E       +  where False = is_anagram('anagram', 'nag a ram')

test_mymod_2.py:10: AssertionError
============== 1 failed, 1 passed in 0.09 seconds ==============
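With the failing test in place, one possible fix is to ignore spaces before comparing the letters (a sketch):

```python
def is_anagram(a_word, b_word):
    # Ignore spaces so multi-word anagrams such as "anagram" / "nag a ram" match
    return sorted(a_word.replace(' ', '')) == sorted(b_word.replace(' ', ''))
```

Re-running pytest after this change should make both test functions pass.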

PyTest bank deposit

class NegativeDeposite(Exception):
    pass

class Bank:
    def __init__(self, start):
        self.balance = start

    def deposit(self, money):
        if money < 0:
            raise NegativeDeposite('Cannot deposit negative sum')
        self.balance += money
        return

PyTest expected exceptions (bank deposit)

import pytest
from banks import Bank, NegativeDeposite


def test_negative_deposit():
    b = Bank(10)
    with pytest.raises(Exception) as exinfo:
        b.deposit(-1)
    assert exinfo.type == NegativeDeposite
    assert str(exinfo.value) == 'Cannot deposit negative sum'
pytest test_bank.py

test_bank.py .
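pytest.raises can also check the exception text directly through its match parameter, which takes a regular expression searched in str(exception). A sketch using the same Bank class, inlined here so the example is self-contained:

```python
import pytest

class NegativeDeposite(Exception):
    pass

class Bank:
    def __init__(self, start):
        self.balance = start

    def deposit(self, money):
        if money < 0:
            raise NegativeDeposite('Cannot deposit negative sum')
        self.balance += money

def test_negative_deposit():
    b = Bank(10)
    # match= verifies the message without a separate assert on exinfo.value
    with pytest.raises(NegativeDeposite, match='negative sum'):
        b.deposit(-1)
```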

PyTest expected exceptions (bank deposit) - no exception happens

Pytest properly reports that there was no exception where an exception was expected.

class NegativeDeposite(Exception):
    pass

class Bank:
    def __init__(self, start):
        self.balance = start

    def deposit(self, money):
        #if money < 0:
        #    raise NegativeDeposite('Cannot deposit negative sum')
        self.balance += money
        return

    def test_negative_deposit():
        b = Bank(10)
        with pytest.raises(Exception) as exinfo:
>           b.deposit(-1)
E           Failed: DID NOT RAISE <class 'Exception'>

PyTest expected exceptions (bank deposit) - different exception is raised

class NegativeDeposite(Exception):
    pass

class Bank:
    def __init__(self, start):
        self.balance = start

    def deposit(self, money):
        if money < 0:
            raise ValueError('Cannot deposit negative sum')
        self.balance += money
        return


    def test_negative_deposit():
        b = Bank(10)
        with pytest.raises(Exception) as exinfo:
            b.deposit(-1)
>       assert exinfo.type == NegativeDeposite
E       AssertionError: assert <class 'ValueError'> == NegativeDeposite
E        +  where <class 'ValueError'> = <ExceptionInfo ValueError tblen=2>.type

PyTest expected exceptions - divide

  • Some older slides that I kept around for reference.
import pytest

def divide(a, b):
    if b == 0:
        raise ValueError('Cannot divide by Zero')
    return a / b

def test_zero_division():
    with pytest.raises(ValueError) as err:
        divide(1, 0)
    assert str(err.value) == 'Cannot divide by Zero'

#divide(3, 0)

PyTest expected exceptions output

$ pytest test_exceptions.py

test_exceptions.py .

PyTest expected exceptions (text changed)

import pytest

def divide(a, b):
    if b == 0:
        raise ValueError('Cannot divide by Null')
    return a / b

def test_zero_division():
    with pytest.raises(ValueError) as e:
        divide(1, 0)
    assert str(e.value) == 'Cannot divide by Zero' 

PyTest expected exceptions (text changed) output

$ pytest test_exceptions_text_changed.py


    def test_zero_division():
        with pytest.raises(ValueError) as e:
            divide(1, 0)
>       assert str(e.value) == 'Cannot divide by Zero'
E       AssertionError: assert 'Cannot divide by Null' == 'Cannot divide by Zero'
E         - Cannot divide by Null
E         ?                  ^^^^
E         + Cannot divide by Zero
E         ?                  ^^^^

PyTest expected exceptions (other exception)

import pytest

def divide(a, b):
#    if b == 0:
#        raise ValueError('Cannot divide by Zero')
    return a / b

def test_zero_division():
    with pytest.raises(ValueError) as e:
        divide(1, 0)
    assert str(e.value) == 'Cannot divide by Zero' 

PyTest expected exceptions (other exception) output

    $ pytest test_exceptions_failing.py

    def test_zero_division():
        with pytest.raises(ValueError) as e:
>           divide(1, 0)

test_exceptions_failing.py:10:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

a = 1, b = 0

    def divide(a, b):
    #    if b == 0:
    #        raise ValueError('Cannot divide by Zero')
>       return a / b
E       ZeroDivisionError: division by zero

PyTest expected exceptions (no exception)

import pytest

def divide(a, b):
    if b == 0:
        return None
    return a / b

def test_zero_division():
    with pytest.raises(ValueError) as e:
        divide(1, 0)
    assert str(e.value) == 'Cannot divide by Zero' 

PyTest expected exceptions (no exception) output

    def test_zero_division():
        with pytest.raises(ValueError) as e:
>           divide(1, 0)
E           Failed: DID NOT RAISE <class 'ValueError'>

PyTest compare short lists - output

import configparser
import os


def test_read_ini(tmpdir):
    print(tmpdir)      # /private/var/folders/ry/z60xxmw0000gn/T/pytest-of-gabor/pytest-14/test_read0
    d = tmpdir.mkdir("subdir")
    fh = d.join("config.ini")
    fh.write("""
[application]
user  =  foo
password = secret
""")

    print(fh.basename) # config.ini
    print(fh.dirname)  # /private/var/folders/ry/z60xxmw0000gn/T/pytest-of-gabor/pytest-14/test_read0/subdir
    filename = os.path.join( fh.dirname, fh.basename )

    config = configparser.ConfigParser()
    config.read(filename)

    assert config.sections() == ['application']
    assert dict(config['application']) == {
       "user" : "foo",
       "password" : "secret"
    }
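Newer pytest releases also provide the tmp_path fixture, which hands the test a pathlib.Path instead of the legacy py.path object used by tmpdir. The same test could be sketched as:

```python
import configparser

def test_read_ini(tmp_path):
    # tmp_path is a pathlib.Path to a fresh per-test temporary directory
    ini_file = tmp_path / 'subdir' / 'config.ini'
    ini_file.parent.mkdir()
    ini_file.write_text("""
[application]
user = foo
password = secret
""")

    config = configparser.ConfigParser()
    config.read(ini_file)

    assert config.sections() == ['application']
    assert dict(config['application']) == {'user': 'foo', 'password': 'secret'}
```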

Testing Master Mind

import random


def main():
    hidden = [str(x) for x in random.sample(range(1, 7), 4)]
    #print(hidden)

    while True:
        print("Please enter 4 digits")
        guess = list(input())
        if len(guess) != 4:
            continue
        res = ""
        for h, g in zip(hidden, guess):
            #print(h, g)
            if h == g:
                res += "b"
            elif g in hidden:
                res += "w"
            #print(res)
        if res == 'bbbb':
            print("Congrats!")
            break
        print(''.join(sorted(res)))


if __name__ == "__main__":
    main()
import master_mind as mm
import random

def test_mm():
    random.sample = lambda a, b: [1,2,3,4]
    input_values = ['1234']
    output = []

    def mock_input():
       #output.append(s)
       return input_values.pop(0)
    mm.input = mock_input
    mm.print = lambda *s : output.append(s)

    mm.main()

    assert output == [
        ("Please enter 4 digits",),
        ('Congrats!',),
    ] 


def test_wrong():
    random.sample = lambda a, b: [1,2,3,4]
    input_values = ['1235', '1234']
    output = []

    def mock_input():
       #output.append(s)
       return input_values.pop(0)
    mm.input = mock_input
    mm.print = lambda *s : output.append(s)

    mm.main()

    assert output == [
        ("Please enter 4 digits",),
        ("bbb",),
        ("Please enter 4 digits",),
        ('Congrats!',),
    ] 


Module Fibonacci

def fibonacci_number(n):
    if n==1:
        return 1
    if n==2:
        return 1
    if n==3:
        return 5

    return 'unimplemented'

def fibonacci_list(n):
    if n == 1:
        return [1]
    if n == 2:
        return [1, 1]
    if n == 3:
        return [1, 1, 5]
    raise Exception('unimplemented')

PyTest - assertion

import mymath

def test_fibonacci():
    assert mymath.fibonacci(1) == 1

$ py.test test_fibonacci_ok.py
============================= test session starts ==============================
platform darwin -- Python 2.7.5 -- py-1.4.20 -- pytest-2.5.2
collected 1 items

test_fibonacci_ok.py .

=========================== 1 passed in 0.01 seconds ===========================

PyTest - failure

import mymath

def test_fibonacci():
    assert mymath.fibonacci(1) == 1
    assert mymath.fibonacci(2) == 1
    assert mymath.fibonacci(3) == 2

$ py.test test_fibonacci.py
============================== test session starts ==============================
platform darwin -- Python 2.7.5 -- py-1.4.20 -- pytest-2.5.2
collected 1 items 

test_fibonacci.py F

=================================== FAILURES ====================================
________________________________ test_fibonacci _________________________________

    def test_fibonacci():
        assert mymath.fibonacci(1) == 1
        assert mymath.fibonacci(2) == 1
>       assert mymath.fibonacci(3) == 2
E       assert 5 == 2
E        +  where 5 = <function fibonacci at 0x10a024500>(3)
E        +    where <function fibonacci at 0x10a024500> = mymath.fibonacci

test_fibonacci.py:6: AssertionError
=========================== 1 failed in 0.02 seconds ============================

PyTest - list

import fibo

def test_fibonacci_number():
    assert fibo.fibonacci_number(1) == 1
    assert fibo.fibonacci_number(2) == 1
    assert fibo.fibonacci_number(3) == 2
    assert fibo.fibonacci_number(4) == 2

def test_fibo():
    assert fibo.fibonacci_list(1) == [1]
    assert fibo.fibonacci_list(2) == [1, 1]
    assert fibo.fibonacci_list(3) == [1, 1, 2]
$ py.test test_fibo.py 
========================== test session starts ===========================
platform darwin -- Python 2.7.5 -- py-1.4.20 -- pytest-2.5.2
collected 1 items 

test_fibo.py F

================================ FAILURES ================================
_______________________________ test_fibo ________________________________

    def test_fibo():
        assert mymath.fibo(1) == [1]
        assert mymath.fibo(2) == [1, 1]
>       assert mymath.fibo(3) == [1, 1, 2]
E       assert [1, 1, 5] == [1, 1, 2]
E         At index 2 diff: 5 != 2

test_fibo.py:6: AssertionError
======================== 1 failed in 0.01 seconds ========================

Pytest: monkeypatching time

import time

def now():
    return time.time()

import app
import time

def test_one():
    our_real_1 = time.time()
    their_real_1 = app.now()
    assert abs(our_real_1 - their_real_1) < 0.00001

    app.time.time = lambda : our_real_1 + 100

    our_real_2 = time.time()
    print (our_real_2 - our_real_1)
    #their_real_2 = app.now()
    #assert abs(our_real_2 - their_real_2) >= 100


from time import time

def now():
    return time()

import app
import time

def test_one():
    our_real_1 = time.time()
    their_real_1 = app.now()
    assert abs(our_real_1 - their_real_1) < 0.0001

    app.time = lambda : our_real_1 + 100

    our_real_2 = time.time()
    assert abs(our_real_2 - our_real_1) < 0.0001

    their_real_2 = app.now()
    assert abs(our_real_2 - their_real_2) >= 99

def test_two():
    our_real_1 = time.time()
    their_real_1 = app.now()
    assert abs(our_real_1 - their_real_1) < 0.0001




import app
import time

def test_one(monkeypatch):
    our_real_1 = time.time()
    their_real_1 = app.now()
    assert abs(our_real_1 - their_real_1) < 0.0001

    monkeypatch.setattr(app, 'time', lambda : our_real_1 + 100)

    our_real_2 = time.time()
    assert abs(our_real_2 - our_real_1) < 0.0001

    their_real_2 = app.now()
    assert abs(our_real_2 - their_real_2) >= 99

def test_two():
    our_real_1 = time.time()
    their_real_1 = app.now()
    assert abs(our_real_1 - their_real_1) < 0.00001


PyTest: no random order

pytest -p no:random-order -v
#import pytest

def pytest_addoption(parser):
    parser.addoption("--demo")

#@pytest.fixture
#def demo(request):
#    return request.config.getoption("--demo")
import pytest

@pytest.fixture
def mydemo(request):
    demo = request.config.getoption("--demo")
    print(f"In fixture {demo}")
    return demo


def test_me(mydemo):
    print(f"In test {mydemo}")