Commit 9cbccbb

📝 Update pytest section

1 parent 22c6c96 commit 9cbccbb
File tree: 7 files changed, +508 -1 lines changed
Lines changed: 212 additions & 0 deletions
@@ -0,0 +1,212 @@
Command line options
====================

In :ref:`dynamic-fixture-scope`, we have already seen how the fixture scope can
be changed using a command line option. Now let’s take a closer look at the
command line options.

Passing different values to a test function
-------------------------------------------

Suppose you want to write a test that depends on a command line option. You can
achieve this using the following pattern:
.. code-block:: python
   :caption: test_example.py

   def test_db(items_db, db_path, cmdopt):
       if cmdopt == "json":
           print("Save as JSON file")
       elif cmdopt == "sqlite":
           print("Save in a SQLite database")
       assert items_db.path() == db_path
For this to work, the command line option must be added and ``cmdopt`` must be
provided via a fixture function:

.. code-block:: python
   :caption: conftest.py

   import pytest


   def pytest_addoption(parser):
       parser.addoption(
           "--cmdopt",
           action="store",
           default="json",
           help="Store data as JSON file or in a SQLite database",
       )


   @pytest.fixture
   def cmdopt(request):
       return request.config.getoption("--cmdopt")
You can then invoke your tests, for example, with:

.. code-block:: console

   $ pytest --cmdopt=sqlite
In addition, you can add a simple validation of the input by listing the
allowed options:

.. code-block:: python
   :caption: conftest.py
   :emphasize-lines: 7

   def pytest_addoption(parser):
       parser.addoption(
           "--cmdopt",
           action="store",
           default="json",
           help="Store data as JSON file or in a SQLite database",
           choices=("json", "sqlite"),
       )
This is how we receive feedback on an incorrect argument:

.. code-block:: console

   $ pytest --cmdopt=postgresql
   ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
   pytest: error: argument --cmdopt: invalid choice: 'postgresql' (choose from json, sqlite)
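pytest’s command line parsing is built on :mod:`argparse`, so the same
``choices`` validation can be tried out with a plain argparse parser (a
stdlib-only sketch; this parser stands in for pytest’s own and is not part of
the example project):

.. code-block:: python

   import argparse

   # Mirror the addoption() call above with a plain argparse parser.
   parser = argparse.ArgumentParser(prog="pytest")
   parser.add_argument(
       "--cmdopt",
       action="store",
       default="json",
       choices=("json", "sqlite"),
   )

   args = parser.parse_args(["--cmdopt", "sqlite"])

An invalid value makes ``parse_args`` print the same kind of error message and
exit.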
If you want to provide more detailed error messages, you can use the ``type``
parameter and raise ``pytest.UsageError``:

.. code-block:: python
   :caption: conftest.py
   :emphasize-lines: 4-8, 17

   import pytest


   def type_checker(value):
       msg = "cmdopt must specify json or sqlite"
       if value not in ("json", "sqlite"):
           raise pytest.UsageError(msg)
       return value


   def pytest_addoption(parser):
       parser.addoption(
           "--cmdopt",
           action="store",
           default="json",
           help="Store data as JSON file or in a SQLite database",
           type=type_checker,
       )
However, command line options often need to be processed outside of the test,
and more complex objects need to be passed.
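For example, a fixture can turn the raw option string into a richer
configuration object once, so tests never branch on the string themselves. A
minimal sketch (``DbConfig`` and the suffix mapping are invented for
illustration):

.. code-block:: python

   import dataclasses


   @dataclasses.dataclass
   class DbConfig:
       """Hypothetical object built from the --cmdopt value."""

       backend: str
       suffix: str


   def make_db_config(cmdopt: str) -> DbConfig:
       # Translate the option string into a structured object.
       suffixes = {"json": ".json", "sqlite": ".db"}
       return DbConfig(backend=cmdopt, suffix=suffixes[cmdopt])


   # In conftest.py this would be wrapped in a fixture, for example:
   #
   # @pytest.fixture
   # def db_config(request):
   #     return make_db_config(request.config.getoption("--cmdopt"))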
Adding command line options dynamically
---------------------------------------

With :ref:`addopts`, you can add static command line options to your project.
However, you can also change the command line arguments dynamically before they
are processed:

.. code-block:: python
   :caption: conftest.py

   import sys


   def pytest_load_initial_conftests(args):
       if "xdist" in sys.modules:
           import multiprocessing

           num = max(multiprocessing.cpu_count() // 2, 1)
           args[:] = ["-n", str(num)] + args

If you have installed the :ref:`xdist-plugin` plugin, test runs will then
always use about half of your CPU cores as worker subprocesses.
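Note that xdist’s ``-n`` option expects an integer, so the worker count should
be computed with integer division (``//``); ``cpu_count() / 2`` would yield a
float such as ``4.0``. The worker-count logic can be checked in isolation (a
sketch; the helper name is made up):

.. code-block:: python

   import multiprocessing


   def default_worker_count(cpus=None):
       # Use about half the available cores, but always at least one worker.
       if cpus is None:
           cpus = multiprocessing.cpu_count()
       return max(cpus // 2, 1)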
Command line option for skipping tests
--------------------------------------

Below, we add a :file:`conftest.py` file with a command line option
``--runslow`` to control the skipping of tests marked with ``pytest.mark.slow``:

.. code-block:: python
   :caption: conftest.py

   import pytest


   def pytest_addoption(parser):
       parser.addoption(
           "--runslow", action="store_true", default=False, help="run slow tests"
       )


   def pytest_collection_modifyitems(config, items):
       if config.getoption("--runslow"):
           # If --runslow is specified on the CLI, slow tests are not skipped.
           return
       skip_slow = pytest.mark.skip(reason="need --runslow option to run")
       for item in items:
           if "slow" in item.keywords:
               item.add_marker(skip_slow)
If we now write a test with the ``@pytest.mark.slow`` decorator, a skipped
‘slow’ test will be displayed when pytest is called:

.. code-block:: pytest

   $ uv run pytest
   ============================= test session starts ==============================
   ...
   test_example.py s.                                                       [100%]

   =========================== short test summary info ============================
   SKIPPED [1] test_example.py:8: need --runslow option to run
   ========================= 1 passed, 1 skipped in 0.05s =========================
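The collection hook above boils down to a small piece of selection logic:
without ``--runslow``, every collected item whose keywords contain ``slow``
receives a skip marker. Extracted as a plain function (a sketch, independent of
pytest; the function name is invented):

.. code-block:: python

   def slow_items_to_skip(keywords_per_item, runslow):
       """Return the indices of collected items that should be skipped."""
       if runslow:
           return []
       return [i for i, kw in enumerate(keywords_per_item) if "slow" in kw]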
Extend test report header
-------------------------

Additional information can easily be provided in a ``pytest -v`` run:

.. code-block:: python
   :caption: conftest.py

   import sys


   def pytest_report_header(config):
       # sys._is_gil_enabled() is available from Python 3.13.
       gil = sys._is_gil_enabled()
       return f"Is GIL enabled? {gil}"

.. code-block:: pytest
   :emphasize-lines: 5

   $ uv run pytest -v
   ============================= test session starts ==============================
   platform darwin -- Python 3.14.0b4, pytest-8.4.1, pluggy-1.6.0
   cachedir: .pytest_cache
   Is GIL enabled? False
   rootdir: /Users/veit/sandbox/items
   configfile: pyproject.toml
   plugins: anyio-4.9.0, Faker-37.4.0, cov-6.2.1
   ...
   ============================== 2 passed in 0.04s ===============================
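``pytest_report_header`` may also return a list of strings, one per header
line. A variant of the hook above (a sketch; the ``getattr`` fallback guards
against interpreters older than Python 3.13, where ``sys._is_gil_enabled()``
does not exist):

.. code-block:: python

   import platform
   import sys


   def pytest_report_header(config):
       # Fall back gracefully on interpreters without _is_gil_enabled().
       gil = getattr(sys, "_is_gil_enabled", lambda: True)()
       return [
           f"python: {platform.python_version()}",
           f"Is GIL enabled? {gil}",
       ]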
Determine test duration
-----------------------

If you have a large test suite that runs slowly, you will probably want to use
``-vv --durations`` to find out which tests are the slowest:

.. code-block:: pytest

   $ uv run pytest -vv --durations=3
   ============================= test session starts ==============================
   ...
   ============================= slowest 3 durations ==============================
   0.02s setup    tests/api/test_add.py::test_add_from_empty
   0.00s call     tests/cli/test_help.py::test_help[add]
   0.00s call     tests/cli/test_help.py::test_help[update]
   ============================== 83 passed in 0.17s ===============================

docs/test/pytest/config.rst

Lines changed: 3 additions & 1 deletion

@@ -76,7 +76,9 @@ marker per line is permitted.
 This example is a simple :file:`pytest.ini` file that I use in almost all my
 projects. Let’s briefly go through the individual lines:

-``addopts =``
+.. _addopts:
+
+``addopts``
     allows you to specify the pytest options that we always want to execute in
     this project.
 ``--strict-markers``

docs/test/pytest/fixtures.rst

Lines changed: 5 additions & 0 deletions

@@ -557,6 +557,8 @@ We have seen how different fixture scopes work and how different scopes can be
 used in different fixtures. However, you may need to define a scope at runtime.
 This is possible with dynamic scoping.

+.. _dynamic-fixture-scope:
+
 Set fixture scope dynamically
 -----------------------------

@@ -660,6 +662,9 @@ scope:
 The database is now set up before each test function and then dismantled again.

+.. seealso::
+   * :doc:`command-line-options`
+
 ``autouse`` for fixtures that are always used
 ---------------------------------------------

docs/test/pytest/index.rst

Lines changed: 1 addition & 0 deletions

@@ -60,5 +60,6 @@ You can install pytest in `virtual environments <venv>` with:
    markers
    plugins
    config
+   command-line-options
    debug
    coverage

docs/test/pytest/markers.rst

Lines changed: 141 additions & 0 deletions

@@ -873,6 +873,147 @@ Let’s run the tests now to make sure everything is working properly:

   ============================== 3 passed in 0.09s ===============================
Generating markers
------------------

Suppose you have a test suite that marks tests for specific platforms, namely
``pytest.mark.darwin``, ``pytest.mark.win32``, and so on, and you also have
tests that run on all platforms and do not have a specific marker. If you are
looking for a way to run only the tests for your specific platform, you can use
the following:

.. code-block:: python
   :caption: conftest.py

   import sys

   import pytest

   ALL = {"win32", "darwin", "linux"}


   def pytest_runtest_setup(item):
       supported_platforms = ALL.intersection(
           mark.name for mark in item.iter_markers()
       )
       pf = sys.platform
       if supported_platforms and pf not in supported_platforms:
           pytest.skip(f"cannot run on platform {pf}")
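The heart of this hook is a set intersection: the test’s markers are
intersected with the set of known platform names, and an empty result means
the test runs everywhere. The decision can be sketched with the stdlib alone
(the helper name is invented):

.. code-block:: python

   ALL = {"win32", "darwin", "linux"}


   def should_skip(marker_names, current_platform):
       # Platform markers attached to the test, if any.
       supported = ALL.intersection(marker_names)
       # Skip only when the test is platform-specific and we are elsewhere.
       return bool(supported) and current_platform not in supported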
This means that tests are skipped if they have been specified for another
platform. Now let’s create a small test file to show what this looks like:

.. code-block:: python
   :caption: test_platform.py

   import pytest


   def test_foo_everywhere():
       pass


   @pytest.mark.win32
   def test_foo_on_win32():
       pass


   @pytest.mark.darwin
   def test_foo_on_darwin():
       pass


   @pytest.mark.linux
   def test_foo_on_linux():
       pass
Now we can run pytest and see the reasons for the skipped tests:

.. code-block:: pytest

   $ uv run pytest -rs tests/test_platform.py
   ============================= test session starts ==============================
   platform darwin -- Python 3.14.0b4, pytest-8.4.1, pluggy-1.6.0
   ...
   collected 4 items

   tests/test_platform.py ..ss                                              [100%]

   =========================== short test summary info ============================
   SKIPPED [2] tests/conftest.py:20: cannot run on platform darwin
   ========================= 2 passed, 2 skipped in 0.03s =========================

or more specifically:

.. code-block:: pytest

   $ uv run pytest -m darwin -rs tests/test_platform.py
   ============================= test session starts ==============================
   platform darwin -- Python 3.14.0b4, pytest-8.4.1, pluggy-1.6.0
   ...
   collected 4 items / 3 deselected / 1 selected

   tests/test_platform.py .                                                 [100%]

   ======================= 1 passed, 3 deselected in 0.02s ========================
Markers based on test names
961+
---------------------------
962+
963+
Alternatively, markers can also be specified using the names of the test
964+
functions by implementing a hook that automatically defines markers:
965+
966+
.. code-block:: python
967+
:caption: test_platform.py
968+
969+
def test_foo_everywhere():
970+
pass
971+
972+
973+
def test_foo_on_win32():
974+
pass
975+
976+
977+
def test_foo_on_darwin():
978+
pass
979+
980+
981+
def test_foo_on_linux():
982+
pass
983+
Now we dynamically define three markers in :file:`conftest.py` in
`pytest_collection_modifyitems
<https://docs.pytest.org/en/latest/reference/reference.html#pytest.hookspec.pytest_collection_modifyitems>`_:

.. code-block:: python
   :caption: conftest.py

   import pytest


   def pytest_collection_modifyitems(items):
       for item in items:
           if "win32" in item.nodeid:
               item.add_marker(pytest.mark.win32)
           elif "darwin" in item.nodeid:
               item.add_marker(pytest.mark.darwin)
           elif "linux" in item.nodeid:
               item.add_marker(pytest.mark.linux)
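The mapping from test name to marker can be factored out as a plain function
and checked without pytest (a sketch; the helper name is invented):

.. code-block:: python

   def platform_for_nodeid(nodeid):
       # The first platform name found in the node ID wins, as in the hook
       # above; tests matching no platform stay unmarked.
       for name in ("win32", "darwin", "linux"):
           if name in nodeid:
               return name
       return None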
Now we can use the ``-m`` option to select a set:

.. code-block:: pytest

   $ uv run pytest -m darwin
   ============================= test session starts ==============================
   platform darwin -- Python 3.14.0, pytest-9.0.1, pluggy-1.6.0
   ...
   collected 4 items / 3 deselected / 1 selected

   tests/test_platform.py .                                                 [100%]

   ======================= 1 passed, 3 deselected in 0.00s ========================
List markers
------------