   cmd_object.run() method directly. This is a lot faster than using
   a subprocess, and it generates the same logging output as running the
   command in a subprocess (which invoking the method directly does not).

3. Only test the one command in a single test script. Use the bzrlib
   library when setting up tests and when evaluating the side-effects of
   the command. We do this so that the library api has continual pressure
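
A minimal sketch of a blackbox test in this style; the exact ``adding``
output line is an assumption about ``bzr add`` and should be checked
against the real command::

    from bzrlib import tests

    class TestAdd(tests.TestCaseWithTransport):

        def test_add_reports_added_file(self):
            tree = self.make_branch_and_tree('.')
            self.build_tree(['hello.txt'])
            # Drive the command line in-process; run_bzr returns the
            # captured stdout and stderr.
            out, err = self.run_bzr('add hello.txt')
            self.assertContainsRe(out, 'adding hello')
            # Evaluate the side-effect through the library api rather
            # than through further command invocations.
            self.assertNotEqual(None, tree.path2id('hello.txt'))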

Per-implementation tests are tests that are defined once and then run
against multiple implementations of an interface. For example,
``per_transport.py`` defines tests that all Transport implementations
(local filesystem, HTTP, and so on) must pass. They are found in
``bzrlib/tests/per_*/*.py``, and ``bzrlib/tests/per_*.py``.

These are really a sub-category of unit tests, but an important one.
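
The multiplication is driven by each module's ``load_tests`` hook. One
way to express the pattern is sketched below, with made-up scenario
names and a made-up ``value`` attribute; check the availability and
signature of ``bzrlib.tests.multiply_tests`` in your tree::

    from bzrlib import tests

    class TestInterfaceContract(tests.TestCase):

        # Filled in per scenario by load_tests below.
        value = None

        def test_contract_holds(self):
            self.assertNotEqual(None, self.value)

    def load_tests(standard_tests, module, loader):
        # bzrlib's loader calls load_tests(standard_tests, module,
        # loader); each scenario is a (name, attributes) pair applied
        # to a copy of every test in the module.
        scenarios = [
            ('impl_a', {'value': 'a'}),
            ('impl_b', {'value': 'b'}),
        ]
        suite = loader.suiteClass()
        tests.multiply_tests(standard_tests, scenarios, suite)
        return suite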

Along the same lines are tests for extension modules. We generally have
both a pure-python and a compiled implementation for each module. As such,
we want to run the same tests against both implementations. These can
generally be found in ``bzrlib/tests/*__*.py`` since extension modules are
usually prefixed with an underscore. Since there are only two
implementations, we have a helper function
``bzrlib.tests.permute_for_extension``, which can simplify the
``load_tests`` implementation.
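
A sketch of such a hook, with made-up module names; the helper's exact
spelling varies between tree revisions (some spell it
``permute_tests_for_extension``), so check it against ``bzrlib.tests``::

    from bzrlib import tests

    def load_tests(standard_tests, module, loader):
        # Run every test in this module against both the pure-python
        # and the compiled implementation; the returned feature can be
        # used to guard extension-only tests.
        suite, feature = tests.permute_tests_for_extension(
            standard_tests, loader,
            'bzrlib._example_py', 'bzrlib._example_pyx')
        return suite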

The execution stops as soon as an expected output or an expected error is not
matched.

When no output is specified, any output from the command is accepted
and execution continues.

If an error occurs and no expected error is specified, the execution stops.
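
A sketch of a test using this script language, assuming the
``bzrlib.tests.script`` helpers::

    from bzrlib.tests import script

    class TestExampleScript(script.TestCaseWithTransportAndScript):

        def test_output_is_checked(self):
            # Each ``$`` line runs a command; the lines that follow are
            # the expected output. A mismatch stops the script.
            self.run_script("""
                $ echo hello
                hello
                """)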

The test exists but is known to fail, for example, this might be
appropriate to raise if you've committed a test for a bug but not
the fix for it, or if something works on Unix but not on Windows.

Raising this allows you to distinguish these failures from the
ones that are not expected to fail. If the test would fail
because of something we don't expect or intend to fix,

KnownFailure should be used with care as we don't want a
proliferation of quietly broken tests.
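
A sketch of raising it, with a made-up guard flag and bug number::

    from bzrlib import tests

    FIX_LANDED = False  # flip once the fix for the (made-up) bug lands

    class TestPendingFix(tests.TestCase):

        def test_new_behaviour(self):
            if not FIX_LANDED:
                # The test was committed ahead of the fix; flag it
                # rather than letting it count as an unexpected failure.
                raise tests.KnownFailure('waiting on fix for bug NNNN')
            self.assertTrue(True)  # the real assertions would go here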

ModuleAvailableFeature
    A helper for handling running tests based on whether a python
    module is available. This can handle 3rd-party dependencies (is
    ``paramiko`` available?) as well as stdlib (``termios``) or
    extension modules (``bzrlib._groupcompress_pyx``). You create a
    new feature instance with::

        MyModuleFeature = ModuleAvailableFeature('bzrlib.something')

        def test_something(self):
            self.requireFeature(MyModuleFeature)
            something = MyModuleFeature.module

We plan to support three modes for running the test suite to control the
interpretation of these results. Strict mode is for use in situations
like merges to the mainline and releases where we want to make sure that

UnavailableFeature      fail    pass    pass
KnownFailure            fail    pass    pass
======================= ======= ======= ========

Test feature dependencies
-------------------------