py.test is a command line tool to collect, run and report on automated tests. It runs well on Linux, Windows and OSX and on Python versions 2.4 through 3.1. It is used in many projects, ranging from suites with tens of thousands of tests to a few inlined tests in a command line script. As of version 1.2 you can also generate a no-dependency, py.test-equivalent standalone script that you can distribute along with your application.
py.test delegates almost all aspects of its operation to plugins. It is surprisingly easy to add command line options or other kinds of add-ons and customizations. This can be done per-project or by distributing a global plugin. One can thus modify or extend aspects of test collection, running and reporting.
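For example, a per-project conftest.py file can hook into option parsing through the pytest_addoption hook. The following is a minimal sketch; the option name --runslow and its intended use are made up for illustration:

    # conftest.py -- picked up automatically by py.test for this project
    def pytest_addoption(parser):
        # add a project-specific command line option (name is illustrative)
        parser.addoption("--runslow", action="store_true",
                         help="also run tests marked as slow")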
Through the use of the separately released pytest-xdist plugin you can seamlessly distribute test runs to multiple CPUs or to remote computers over SSH and sockets. This plugin also offers a --looponfailing mode which will continuously re-run only the failing tests in a subprocess.
py.test supports many testing methods conventionally used in the Python community. It runs traditional unittest.py and doctest.py tests, and supports xUnit-style setup as well as nose-specific setups and test suites. It offers a minimal, no-boilerplate model for configuring and deploying tests written as simple Python functions or methods. It also integrates coverage testing with figleaf and JavaScript unit- and functional testing.
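As a sketch of the xUnit-style setup support, module-level setup can be written as a plain function following the setup_* naming convention; the "state" object below is purely illustrative:

    # content of test_setup_sketch.py -- the "state" dict is illustrative
    def setup_module(module):
        # called once before any test function in this module runs
        module.state = {"initialized": True}

    def test_state_initialized():
        assert state["initialized"]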
py.test can produce JUnitXML-style output as well as formatted "resultlog" files that can easily be postprocessed by Continuous Integration systems such as Hudson or Buildbot. It also provides command line options to control test configuration lookup behaviour or to ignore certain tests or directories.
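Such output is typically requested on the command line, for example via the --junitxml and --resultlog options (the file names below are placeholders):

    py.test --junitxml=results.xml
    py.test --resultlog=resultlog.txt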
By default, all Python modules with a test_*.py filename are inspected for tests:
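For instance, a file like the following (the file and function names are just illustrative) will be collected and run automatically:

    # content of test_sample.py
    def func(x):
        return x + 1

    def test_answer():
        assert func(3) == 4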
py.test offers the unique funcargs mechanism for setting up and passing project-specific objects to Python test functions. Test parametrization happens by triggering calls to the same test function with different argument values. Doing fixtures through the funcarg mechanism makes your test and setup code more efficient and more readable. This is especially true for functional tests which might depend on command line options and on setup that needs to be shared across a whole test run.
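A minimal sketch of a funcarg provider, using the pytest_funcarg__NAME factory convention; the "myresource" name and the returned object are made up for illustration:

    # in conftest.py or in the test module itself
    def pytest_funcarg__myresource(request):
        # return any project-specific object; here just a small dict
        return {"setup_for": request.function.__name__}

    def test_uses_resource(myresource):
        assert myresource["setup_for"] == "test_uses_resource"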
By default, py.test captures all writes to stdout/stderr. Output from print statements as well as from subprocesses is captured. When a test fails, the associated captured outputs are shown. This allows you to put debugging print statements in your code without being overwhelmed by all the output that might be generated by tests that do not fail.
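As an illustration (the function name and values are made up), print output like the following is captured and only shown in the report if the test fails:

    def test_compute():
        result = 6 * 7
        print("computed result: %d" % result)  # captured; shown only on failure
        assert result == 42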
py.test allows you to use the standard Python assert statement for verifying expectations and values in your tests. For example, you can write the following in your tests:
assert hasattr(x, 'attribute')
to state that your object has a certain attribute. In case this assertion fails you will see the value of x. Intermediate values are computed by executing the assert expression a second time. If you execute code with side effects, e.g. read from a file like this:
assert f.read() != '...'
then you may get a warning from py.test if the assertion first failed and then succeeded on re-evaluation.
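One way to avoid this re-evaluation issue is to bind the result of the side-effecting call to a name first and assert on that; a sketch, where the file name is illustrative:

    def test_file_content():
        f = open("data.txt")   # illustrative file
        data = f.read()        # read once ...
        assert data != '...'   # ... and assert on the stored value, not on f.read()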
In order to write assertions about exceptions, you use one of two forms:
py.test.raises(Exception, func, *args, **kwargs)
py.test.raises(Exception, "func(*args, **kwargs)")
both of which execute the specified function with the given args and kwargs and assert that the given Exception is raised. The reporter will provide helpful output in case of failures such as no exception or a wrong exception.
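For example, the following test (the name is illustrative) passes because the evaluated expression raises the expected exception:

    import py

    def test_zero_division():
        py.test.raises(ZeroDivisionError, "1 / 0")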
A lot of care is taken to present useful failure information and in particular nice and concise Python tracebacks. This is especially useful if you regularly need to look at failures from nightly runs, i.e. are detached from the actual test running session. Here are example tracebacks for a number of failing test functions. You can modify traceback printing styles through the command line. Using the --pdb option you can automatically drop into the PDB Python debugger when a test fails.
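For example, assuming the --tb option for choosing among traceback styles:

    py.test --tb=short   # shorter traceback format
    py.test --pdb        # drop into the Python debugger on failures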
py.test has advanced support for skipping tests or expecting test failures on certain platforms. Apart from the minimal py.test style, unittest- and nose-style tests can also make use of this feature.
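A sketch of platform-dependent skipping, assuming the skipif marker with a string condition as provided by the skipping plugin:

    import os
    import py

    @py.test.mark.skipif("sys.platform == 'win32'")
    def test_fork_available():
        # skipped on Windows, runs on POSIX platforms
        assert hasattr(os, "fork")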
py.test --looponfailing allows you to run a test suite, memorize all failures and then loop over the failing set of tests until they all pass. It will re-run the tests when it detects file changes in your project.
You can selectively run tests by specifying a keyword on the command line. Examples:
py.test -k test_simple
py.test -k "-test_simple"
will run all tests matching (or, in the second form, not matching) the "test_simple" keyword. Note that you need to quote the keyword if "-" would otherwise be recognized as indicating a command line option. Lastly, you may use:
py.test -k "test_simple:"
which will run all tests after the expression has matched once, i.e. all tests that are seen after a test that matches the "test_simple" keyword.
By default, all filename parts and class/function names of a test function are put into the set of keywords for a given test. You can specify additional keywords like this:
@py.test.mark.webtest
def test_send_http():
    ...
and then use those keywords to select tests. See the pytest_keyword plugin for more information.
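With the marker above in place, selecting the marked tests works just like any other keyword selection, e.g.:

    py.test -k webtest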