The Importance of Testing
=========================

Reliability is a critical success factor for any version control system.
We want Breezy to be highly reliable across multiple platforms while
evolving over time to meet the needs of its community.

In a nutshell, this is what we expect and encourage:

* New functionality should have test cases. Preferably write the
  test before writing the code.

  In general, you can test at either the command-line level or the
  internal API level. See `Writing tests`_ below for more detail.
* Try to practice Test-Driven Development: before fixing a bug, write a
  test case so that it does not regress. Similarly for adding a new
  feature: write a test case for a small version of the new feature before
  starting on the code itself. Check the test fails on the old code, then
  add the feature or fix and check it passes.

By doing these things, the Breezy team gets increased confidence that
changes do what they claim to do, whether provided by the core team or
by community members. Equally importantly, we can be surer that changes
down the track do not break new features or bug fixes that you are
responsible for.
As of September 2009, Breezy ships with a test suite containing over
23,000 tests and growing. We are proud of it and want to remain so. As
community members, we all benefit from it. Would you trust version control
on your project to a product *without* a test suite like Breezy has?

Running the Test Suite
======================

As of Breezy 2.1, you must have the testtools_ library installed to run
the brz test suite.

.. _testtools: https://launchpad.net/testtools/

To test all of Breezy, just run::

  ./brz selftest -v

With ``--verbose`` brz will print the name of every test as it is run.

This should always pass, whether run from a source tree or an installed
copy of Breezy. Please investigate and/or report any failures.

Running particular tests
------------------------

Currently, brz selftest is used to invoke tests.
You can provide a pattern argument to run a subset. For example,
to run just the blackbox tests, run::

  ./brz selftest -v blackbox

To skip a particular test (or set of tests), use the --exclude option
(shorthand -x) like so::

  ./brz selftest -v -x blackbox

To ensure that all tests are being run and succeeding, you can use the
--strict option, which will fail if there are any missing features or known
failures, like so::

  ./brz selftest --strict

To list tests without running them, use the --list-only option like so::

  ./brz selftest --list-only

This option can be combined with other selftest options (like -x) and
filter patterns to understand their effect.
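
For instance, a run like the following lists the blackbox tests that would
run while excluding those for one command (the pattern and module names
here are just examples)::

  ./brz selftest --list-only blackbox -x blackbox.test_add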

Once you understand how to create a list of tests, you can use the --load-list
option to run only a restricted set of tests that you kept in a file, one test
id per line. Keep in mind that this will never be sufficient to validate your
modifications; you still need to run the full test suite for that, but using it
can help in some cases (like running only the failed tests for some time)::

  ./brz selftest --load-list my_failing_tests

This option can also be combined with other selftest options, including
patterns. It has some drawbacks though: the list can become out of date pretty
quickly when doing Test-Driven Development.
To address this concern, there is another way to run a restricted set of tests:
the --starting-with option will run only the tests whose name starts with the
specified string. It will also avoid loading the other tests and as a
consequence starts running your tests quicker::

  ./brz selftest --starting-with breezy.blackbox

This option can be combined with all the other selftest options including
--load-list. The latter is rarely used but allows you to run a subset of a
list of failing tests, for example.

To test only the brz core, ignoring any plugins you may have installed,
simply run::

  ./brz --no-plugins selftest

Disabling crash reporting
-------------------------

By default Breezy uses apport_ to report program crashes. In developing
Breezy it's normal and expected to have it crash from time to time, at
least because a test failed if for no other reason.

Therefore you should probably add ``debug_flags = no_apport`` to your
``breezy.conf`` file (in ``~/.config/breezy/`` on Unix), so that failures just
print a traceback rather than writing a crash file.
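
For example, the relevant part of ``breezy.conf`` might look like this (a
minimal sketch; ``[DEFAULT]`` is the usual section for global options in
this file)::

  [DEFAULT]
  debug_flags = no_apport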

.. _apport: https://launchpad.net/apport/


Test suite debug flags
----------------------

Similar to the global ``-Dfoo`` debug options, brz selftest accepts
``-E=foo`` debug flags. These flags are:

:allow_debug: do *not* clear the global debug flags when running a test.
  This can provide useful logging to help debug test failures when used
  with e.g. ``brz -Dhpss selftest -E=allow_debug``

  Note that this will probably cause some tests to fail, because they
  don't expect to run with any debug flags on.

Using subunit
-------------

Breezy can optionally produce output in the machine-readable subunit_
format, so that test output can be post-processed by various tools. To
generate a subunit test stream::

  $ ./brz selftest --subunit

Processing such a stream can be done using a variety of tools including:

* The builtin ``subunit2pyunit``, ``subunit-filter``, ``subunit-ls``,
  ``subunit2junitxml`` from the subunit project.

* tribunal_, a GUI for showing test results.

* testrepository_, a tool for gathering and managing test runs.

.. _subunit: https://launchpad.net/subunit/
.. _tribunal: https://launchpad.net/tribunal/
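
For example, one way to feed a stream into a pyunit-style runner (assuming
the subunit tools are installed and on your PATH; the test pattern is just
an example)::

  $ ./brz selftest --subunit breezy.tests.test_merge3 | subunit2pyunit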

Using testrepository
--------------------

Breezy ships with a config file for testrepository_. This can be very
useful for keeping track of failing tests and doing general workflow
support. To run tests using testrepository::

  $ testr run

To run only failing tests::

  $ testr run --failing

To run only some tests, without plugins::

  $ testr run test_selftest -- --no-plugins

See the testrepository documentation for more details.

.. _testrepository: https://launchpad.net/testrepository

Running tests in parallel
-------------------------

Breezy can use subunit to spawn multiple test processes. There is
slightly more chance you will hit ordering or timing-dependent bugs but
it's much faster::

  $ ./brz selftest --parallel=fork

Note that you will need the Subunit library
<https://launchpad.net/subunit/> to use this, which is in
``python-subunit`` on Ubuntu.

Running tests from a ramdisk
----------------------------

The tests create and delete a lot of temporary files. In some cases you
can make the test suite run much faster by running it on a ramdisk. For
example::

  $ sudo mkdir /ram
  $ sudo mount -t tmpfs none /ram
  $ TMPDIR=/ram ./brz selftest ...

You could also change ``/tmp`` in ``/etc/fstab`` to have type ``tmpfs``,
if you don't mind possibly losing other files in there when the machine
restarts. Add this line (if there is none for ``/tmp`` already)::

  none           /tmp            tmpfs   defaults        0       0

With a 6-core machine and ``--parallel=fork``, using a tmpfs doubles the
test execution speed.


Writing tests
=============

Normally you should add or update a test for all bug fixes or new features
in Breezy.


Where should I put a new test?
------------------------------

Breezy's tests are organised by the type of test. Most of the tests in
brz's test suite belong to one of these categories:

 - Unit tests
 - Blackbox (UI) tests
 - Per-implementation tests
 - Doctests

A quick description of these test types and where they belong in Breezy's
source follows. Not all tests fall neatly into one of these categories;
in those cases use your judgement.

Unit tests
~~~~~~~~~~

Unit tests make up the bulk of our test suite. These are tests that are
focused on exercising a single, specific unit of the code as directly
as possible. Each unit test is generally fairly short and runs very
quickly.

They are found in ``breezy/tests/test_*.py``. So in general tests should
be placed in a file named test_FOO.py where FOO is the logical thing under
test.

For example, tests for merge3 in breezy belong in breezy/tests/test_merge3.py.
See breezy/tests/test_sampler.py for a template test script.
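
As a rough illustration, a unit test file follows this shape (a minimal
sketch only; it assumes ``osutils.split_lines`` keeps its bytes-in,
bytes-out behaviour)::

    from breezy import osutils
    from breezy.tests import TestCase


    class TestSplitLines(TestCase):

        def test_split_lines_keeps_newlines(self):
            # Exercise one small unit of code as directly as possible.
            self.assertEqual([b'hello\n', b'world'],
                             osutils.split_lines(b'hello\nworld'))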

Blackbox (UI) tests
~~~~~~~~~~~~~~~~~~~

Tests can be written for the UI or for individual areas of the library.
Choose whichever is appropriate: if adding a new command, or a new command
option, then you should be writing a UI test. If you are both adding UI
functionality and library functionality, you will want to write tests for
both the UI and the core behaviours. We call UI tests 'blackbox' tests
and they belong in ``breezy/tests/blackbox/*.py``.

When writing blackbox tests please honour the following conventions (a
small example follows the list):

1. Place the tests for the command 'name' in
   breezy/tests/blackbox/test_name.py. This makes it easy for developers
   to locate the test script for a faulty command.

2. Use the 'self.run_brz("name")' utility function to invoke the command
   rather than running brz in a subprocess or invoking the
   cmd_object.run() method directly. This is a lot faster than
   subprocesses and generates the same logging output as running it in a
   subprocess (which invoking the method directly does not).

3. Only test the one command in a single test script. Use the breezy
   library when setting up tests and when evaluating the side-effects of
   the command. We do this so that the library API has continual pressure
   on it to be as functional as the command line in a simple manner, and
   to isolate knock-on effects throughout the blackbox test suite when a
   command changes its name or signature. Ideally only the tests for a
   given command are affected when a given command is changed.

4. If you have a test which does actually require running brz in a
   subprocess you can use ``run_brz_subprocess``. By default the spawned
   process will not load plugins unless ``--allow-plugins`` is supplied.
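
For instance, a minimal blackbox test honouring these conventions might
look like this (a sketch only; it assumes 'brz status' prints nothing for
a clean tree)::

    from breezy.tests import TestCaseWithTransport


    class TestStatus(TestCaseWithTransport):

        def test_status_clean_tree(self):
            # Set up state with the library, then exercise the command line.
            self.make_branch_and_tree('.')
            out, err = self.run_brz('status')
            self.assertEqual('', out)
            self.assertEqual('', err)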

Per-implementation tests
~~~~~~~~~~~~~~~~~~~~~~~~

Per-implementation tests are tests that are defined once and then run
against multiple implementations of an interface. For example,
``per_transport.py`` defines tests that all Transport implementations
(local filesystem, HTTP, and so on) must pass. They are found in
``breezy/tests/per_*/*.py``, and ``breezy/tests/per_*.py``.

These are really a sub-category of unit tests, but an important one.

Along the same lines are tests for extension modules. We generally have
both a pure-python and a compiled implementation for each module. As such,
we want to run the same tests against both implementations. These can
generally be found in ``breezy/tests/*__*.py`` since extension modules are
usually prefixed with an underscore. Since there are only two
implementations, we have a helper function
``breezy.tests.permute_for_extension``, which can simplify the
``load_tests`` implementation.

Doctests
~~~~~~~~

We make selective use of doctests__. In general they should provide
*examples* within the API documentation which can incidentally be tested. We
don't try to test every important case using doctests |--| regular Python
tests are generally a better solution. That is, we just use doctests to make
our documentation testable, rather than as a way to make tests. Be aware that
doctests are not as well isolated as the unit tests; if you need more
isolation, you'll likely want to write unit tests anyway, if only to get
better control of the test environment.

Most of these are in ``breezy/doc/api``. More additions are welcome.

__ http://docs.python.org/lib/module-doctest.html

There is an `assertDoctestExampleMatches` method in
`breezy.tests.TestCase` that allows you to match against doctest-style
string templates (including ``...`` to skip sections) from regular Python
tests.
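
A rough usage sketch (an assumption here is that the template argument
comes first)::

    self.assertDoctestExampleMatches(
        'adding ...', 'adding file')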

Shell-like tests
----------------

``breezy/tests/script.py`` allows users to write tests in a syntax very
close to a shell session, using a restricted set of commands that should
be enough to mimic most behaviours.

A script is a set of commands; each command is composed of:

* one mandatory command line,
* one optional set of input lines to feed the command,
* one optional set of expected output lines,
* one optional set of expected error lines.

Input, output and error lines can be specified in any order.

Except for the expected output, all lines start with a special
string (based on their origin when used under a Unix shell):

* '$ ' for the command,
* '<' for input,
* nothing for output,
* '2>' for errors,

Comments can be added anywhere; they start with '#' and run to the end of
the line.

The execution stops as soon as an expected output or an expected error is not
produced.

If output occurs and no output is expected, the execution stops and the
test fails. If unexpected output occurs on the standard error, then
execution stops and the test fails.

If an error occurs and no expected error is specified, the execution stops.
An error is defined by a returned status different from zero, not by the
presence of text on the error stream.

The matching is done on a full string comparison basis unless '...' is used, in
which case expected output/errors can be less precise.

The following will succeed only if 'brz add' outputs 'adding file'::

  $ brz add file
  adding file

If you want the command to succeed for any output, just use::

  $ brz add file
  ...
  2>...

or use the ``--quiet`` option::

  $ brz add -q file

The following will stop with an error::

  $ brz not-a-command

If you want it to succeed, use::

  $ brz not-a-command
  2> brz: ERROR: unknown command "not-a-command"

You can use ellipsis (...) to replace any piece of text you don't want to be
matched exactly::

  $ brz branch not-a-branch
  2>brz: ERROR: Not a branch...not-a-branch/".

This can be used to ignore entire lines too::

  $ cat
  <first line
  <second line
  <third line
  # And here we explain that surprising fourth line
  <fourth line
  <last line
  first line
  ...
  last line

You can check the content of a file with cat::

  $ cat file
  expected content

You can also check the existence of a file with cat; the following will fail if
the file doesn't exist::

  $ cat file

You can run files containing shell-like scripts with::

  $ brz test-script <script>

where ``<script>`` is the path to the file containing the shell-like script.

The actual use of ScriptRunner within a TestCase looks something like
this::

    from breezy.tests import script

    def test_unshelve_keep(self):
        # some setup here
        script.run_script(self, '''
            $ brz add -q file
            $ brz shelve -q --all -m Foo
            $ brz shelve --list
            1: Foo
            $ brz unshelve -q --keep
            $ brz shelve --list
            1: Foo
            $ cat file
            contents of file
            ''')

You can also test commands that read user interaction::

    def test_confirm_action(self):
        """You can write tests that demonstrate user confirmation"""
        commands.builtin_command_registry.register(cmd_test_confirm)
        self.addCleanup(commands.builtin_command_registry.remove, 'test-confirm')
        self.run_script("""
            $ brz test-confirm
            2>Really do it? [y/n]:
            <yes
            yes
            """)

To avoid having to specify "-q" for all commands whose output is
irrelevant, the run_script() method may be passed the keyword argument
``null_output_matches_anything=True``. For example::

    def test_ignoring_null_output(self):
        self.run_script("""
            $ brz init
            $ brz ci -m 'first revision' --unchanged
            $ brz log --line
            1: ...
            """, null_output_matches_anything=True)

Import tariff tests
-------------------

`breezy.tests.test_import_tariff` has some tests that measure how many
Python modules are loaded to run some representative commands.

We want to avoid loading code unnecessarily, for reasons including:

* Python modules are interpreted when they're loaded, either to define
  classes or modules or perhaps to initialize some structures.

* With a cold cache we may incur blocking real disk IO for each module.

* Some modules depend on many others.

* Some optional modules such as `testtools` are meant to be soft
  dependencies and only needed for particular cases. If they're loaded in
  other cases then brz may break for people who don't have those modules.

`test_import_tariff` allows us to check that removal of imports doesn't
regress.

This is done by running the command in a subprocess with
``PYTHONVERBOSE=1``. Starting a whole Python interpreter is pretty slow,
so we don't want exhaustive testing here, but just enough to guard against
distinct fixed problems.

Assertions about precisely what is loaded tend to be brittle so we instead
make assertions that particular things aren't loaded.

Unless selftest is run with ``--no-plugins``, modules will be loaded in
the usual way and checks made on what they cause to be loaded. This is
probably worth checking into, because many brz users have at least some
plugins installed (and they're included in binary installers).

In theory, plugins might have a good reason to load almost anything:
someone might write a plugin that opens a network connection or pops up a
gui window every time you run 'brz status'. However, it's more likely
that the code to do these things is just being loaded accidentally. We
might eventually need to have a way to make exceptions for particular
plugins.

Some things to check:

* non-GUI commands shouldn't load GUI libraries

* operations on brz native formats shouldn't load foreign branch libraries

* network code shouldn't be loaded for purely local operations

* particularly expensive Python built-in modules shouldn't be loaded
  unless there is a good reason

Testing locking behaviour
-------------------------

In order to test the locking behaviour of commands, it is possible to install
a hook that is called when a write lock is acquired, released or broken.
(Read locks also exist, but they cannot be discovered in this way.)

A hook can be installed by calling breezy.lock.Lock.hooks.install_named_hook.
The three valid hooks are: `lock_acquired`, `lock_released` and `lock_broken`.

Example::

    locks_acquired = []
    locks_released = []

    lock.Lock.hooks.install_named_hook('lock_acquired',
        locks_acquired.append, None)
    lock.Lock.hooks.install_named_hook('lock_released',
        locks_released.append, None)

`locks_acquired` will now receive a LockResult instance for all locks acquired
since the time the hook is installed.

The last part of the `lock_url` allows you to identify the type of object that
is locked:

- BzrDir: `/branch-lock`
- Working tree: `/checkout/lock`
- Branch: `/branch/lock`
- Repository: `/repository/lock`

To test if a lock is a write lock on a working tree, one can do the following::

    self.assertEndsWith(locks_acquired[0].lock_url, "/checkout/lock")

See breezy/tests/commands/test_revert.py for an example of how to use this for
testing locks.


Skipping tests
--------------

In our enhancements to unittest we allow for some additional results beyond
just success or failure.

If a test can't be run, it can say that it's skipped by raising a special
exception. This is typically used in parameterized tests |--| for example
if a transport doesn't support setting permissions, we'll skip the tests
that relate to that. ::

    try:
        return self.branch_format.initialize(repo.controldir)
    except errors.UninitializableFormat:
        raise tests.TestSkipped('Uninitializable branch format')

Raising TestSkipped is a good idea when you want to make it clear that the
test was not run, rather than just returning, which makes it look as if it
was run and passed.

Several different cases are distinguished:

TestSkipped
    Generic skip; the only type that was present up to brz 0.18.

TestNotApplicable
    The test doesn't apply to the parameters with which it was run.
    This is typically used when the test is being applied to all
    implementations of an interface, but some aspects of the interface
    are optional and not present in particular concrete
    implementations. (Some tests that should raise this currently
    either silently return or raise TestSkipped.) Another option is
    to use more precise parameterization to avoid generating the test
    at all.

UnavailableFeature
    The test can't be run because a dependency (typically a Python
    library) is not available in the test environment. These
    are in general things that the person running the test could fix
    by installing the library. It's OK if some of these occur when
    an end user runs the tests or if we're specifically testing in a
    limited environment, but a full test should never see them.

    See `Test feature dependencies`_ below.

KnownFailure
    The test exists but is known to fail, for example this might be
    appropriate to raise if you've committed a test for a bug but not
    the fix for it, or if something works on Unix but not on Windows.

    Raising this allows you to distinguish these failures from the
    ones that are not expected to fail. If the test would fail
    because of something we don't expect or intend to fix,
    KnownFailure is not appropriate, and TestNotApplicable might be
    better.

    KnownFailure should be used with care as we don't want a
    proliferation of quietly broken tests.

We plan to support three modes for running the test suite to control the
interpretation of these results. Strict mode is for use in situations
like merges to the mainline and releases where we want to make sure that
everything that can be tested has been tested. Lax mode is for use by
developers who want to temporarily tolerate some known failures. The
default behaviour is obtained by ``brz selftest`` with no options, and
also (if possible) by running under another unittest harness.

======================= ======= ======= ========
result                  strict  default lax
======================= ======= ======= ========
TestSkipped             pass    pass    pass
TestNotApplicable       pass    pass    pass
UnavailableFeature      fail    pass    pass
KnownFailure            fail    pass    pass
======================= ======= ======= ========

Test feature dependencies
-------------------------

Writing tests that require a feature
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Rather than manually checking the environment in each test, a test class
can declare its dependence on some test features. The feature objects are
checked only once for each run of the whole test suite.

(For historical reasons, as of May 2007 many cases that should depend on
features currently raise TestSkipped.)

For example::

    class TestStrace(TestCaseWithTransport):

        _test_needs_features = [StraceFeature]

This means all tests in this class need the feature. If the feature is
not available the test will be skipped using UnavailableFeature.

Individual tests can also require a feature using the ``requireFeature``
method::

    self.requireFeature(StraceFeature)

The old naming style for features is CamelCase, but because they're
actually instances not classes they're now given instance-style names
like ``apport``.

Features already defined in ``breezy.tests`` and ``breezy.tests.features``
include:

 - SymlinkFeature
 - UnicodeFilenameFeature
 - CaseInsensitiveFilesystemFeature
 - chown_feature: The test can rely on the OS being POSIX and Python
   supporting os.chown.
 - posix_permissions_feature: The test can use POSIX-style
   user/group/other permission bits.

Defining a new feature that tests can require
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

New features for use with ``_test_needs_features`` or ``requireFeature``
are defined by subclassing ``breezy.tests.Feature`` and overriding the
``_probe`` and ``feature_name`` methods. For example::

    class _SymlinkFeature(Feature):

        def _probe(self):
            return osutils.has_symlinks()

        def feature_name(self):
            return 'symlinks'

    SymlinkFeature = _SymlinkFeature()

ModuleAvailableFeature
    A helper for handling running tests based on whether a Python
    module is available. This can handle 3rd-party dependencies (is
    ``paramiko`` available?) as well as stdlib (``termios``) or
    extension modules (``breezy._groupcompress_pyx``). You create a
    new feature instance with::

        # in breezy/tests/features.py
        apport = tests.ModuleAvailableFeature('apport')

        # then in breezy/tests/test_apport.py
        class TestApportReporting(TestCaseInTempDir):

            _test_needs_features = [features.apport]

Testing translations
--------------------

Translations are disabled by default in tests. If you want to test
that code is translated you can use the ``ZzzTranslations`` class from
``breezy.tests``::

    self.overrideAttr(i18n, '_translations', ZzzTranslations())

And check the output strings look like ``u"zz\xe5{{output}}"``.
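
A sketch of a whole test along these lines, assuming ``ZzzTranslations``
is importable from ``breezy.tests`` and wraps translated strings as shown
above::

    from breezy import i18n
    from breezy.tests import TestCase, ZzzTranslations


    class TestTranslation(TestCase):

        def test_gettext_is_marked(self):
            # Install the marker translations for the duration of the test.
            self.overrideAttr(i18n, '_translations', ZzzTranslations())
            self.assertEqual(u'zz\xe5{{hello}}', i18n.gettext('hello'))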

To test the gettext setup and usage you override i18n.installed back
to self.i18nInstalled and _translations to None; see
test_i18n.TestInstall.


Testing deprecated code
-----------------------

When code is deprecated, it is still supported for some length of time,
usually until the next major version. The ``applyDeprecated`` helper
wraps calls to deprecated code to verify that it is correctly issuing the
deprecation warning, and also prevents the warnings from being printed.

Typically patches that apply the ``@deprecated_function`` decorator should
update the accompanying tests to use the ``applyDeprecated`` wrapper.

``applyDeprecated`` is defined in ``breezy.tests.TestCase``. See the API
docs for more details.
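
A rough sketch of its use; ``old_function`` and its module are
hypothetical, and ``deprecated_in`` names the release where the
deprecation started::

    from breezy.symbol_versioning import deprecated_in

    result = self.applyDeprecated(
        deprecated_in((2, 1, 0)),    # deprecation announced in 2.1.0
        some_module.old_function,    # hypothetical deprecated callable
        'an-argument')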

Testing exceptions and errors
-----------------------------

It's important to test handling of errors and exceptions. Because this
code is often not hit in ad-hoc testing it can often have hidden bugs --
it's particularly common to get NameError because the exception code
references a variable that has since been renamed.

.. TODO: Something about how to provoke errors in the right way?

In general we want to test errors at two levels (a sketch follows the
list):

1. A test in ``test_errors.py`` checking that when the exception object is
   constructed with known parameters it produces an expected string form.
   This guards against mistakes in writing the format string, or in the
   ``str`` representations of its parameters. There should be one for
   each exception class.

2. Tests that when an API is called in a particular situation, it raises
   an error of the expected class. You should typically use
   ``assertRaises``, which in the Breezy test suite returns the exception
   object to allow you to examine its parameters.
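
For instance, the two levels might look like this (a sketch; the exact
string form of ``NotBranchError`` is an assumption here)::

    from breezy import branch, errors
    from breezy.tests import TestCase, TestCaseWithTransport


    class TestNotBranchErrorFormat(TestCase):

        def test_str(self):
            # Level 1: known parameters produce the expected string form.
            e = errors.NotBranchError(path='/some/path')
            self.assertEqual('Not a branch: "/some/path".', str(e))


    class TestOpenNotBranch(TestCaseWithTransport):

        def test_open_raises_not_branch(self):
            # Level 2: assertRaises returns the exception object so its
            # parameters can be examined.
            err = self.assertRaises(errors.NotBranchError,
                                    branch.Branch.open,
                                    self.get_url('no-branch'))
            self.assertIsNot(None, err.path)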

In some cases blackbox tests will also want to check error reporting. But
it can be difficult to provoke every error through the command-line
interface, so those tests are only done as needed |--| e.g. in response to a
particular bug or if the error is reported in an unusual way. Blackbox
tests should mostly be testing how the command-line interface works, so
should only test errors if there is something particular to the cli in how
they're displayed or handled.

Testing warnings
----------------

The Python ``warnings`` module is used to indicate a non-fatal code
problem. Code that's expected to raise a warning can be tested through
callCatchWarnings.

The test suite can be run with ``-Werror`` to check no unexpected errors
occur.

However, warnings should be used with discretion. It's not an appropriate
way to give messages to the user, because the warning is normally shown
only once per source line that causes the problem. You should also think
about whether the warning is serious enough that it should be visible to
users who may not be able to fix it.

Interface implementation testing and test scenarios
---------------------------------------------------

There are several cases in Breezy of multiple implementations of a common
conceptual interface. ("Conceptual" because it's not necessary for all
the implementations to share a base class, though they often do.)
Examples include transports and the working tree, branch and repository
classes.

In these cases we want to make sure that every implementation correctly
fulfils the interface requirements. For example, every Transport should
support the ``has()`` and ``get()`` and ``clone()`` methods. We have a
sub-suite of tests in ``test_transport_implementations``. (Most
per-implementation tests are in submodules of ``breezy.tests``, but not
the transport tests at the moment.)

These tests are repeated for each registered Transport, by generating a
new TestCase instance for the cross product of test methods and transport
implementations. As each test runs, it has ``transport_class`` and
``transport_server`` set to the class it should test. Most tests don't
access these directly, but rather use ``self.get_transport`` which returns
a transport of the appropriate type.

The goal is to run per-implementation only the tests that relate to that
particular interface. Sometimes we discover a bug elsewhere that happens
with only one particular transport. Once it's isolated, we can consider
whether a test should be added for that particular implementation,
or for all implementations of the interface.

See also `Per-implementation tests`_ (above).

Test scenarios and variations
-----------------------------

Some utilities are provided for generating variations of tests. This can
be used for per-implementation tests, or other cases where the same test
code needs to run several times on different scenarios.

The general approach is to define a class that provides test methods,
which depend on attributes of the test object being pre-set with the
values to which the test should be applied. The test suite should then
also provide a list of scenarios in which to run the tests.

A single *scenario* is defined by a `(name, parameter_dict)` tuple. The
short string name is combined with the name of the test method to form the
test instance name. The parameter dict is merged into the instance's
attributes.

For example::

    load_tests = load_tests_apply_scenarios

    class TestCheckout(TestCase):

        scenarios = multiply_scenarios(
            VaryByRepositoryFormat(),
            VaryByTreeFormat(),
            )

The `load_tests` declaration or definition should be near the top of the
file so its effect can be seen.

Test support
------------

We have a rich collection of tools to support writing tests. Please use
them in preference to ad-hoc solutions as they provide portability and
performance benefits.


TestCase and its subclasses
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``breezy.tests`` module defines many TestCase classes to help you
write your tests.

TestCase
    A base TestCase that extends the Python standard library's
    TestCase in several ways. TestCase is built on
    ``testtools.TestCase``, which gives it support for more assertion
    methods (e.g. ``assertContainsRe``), ``addCleanup``, and other
    features (see its API docs for details). It also has a ``setUp`` that
    makes sure that global state like registered hooks and loggers won't
    interfere with your test. All tests should use this base class
    (whether directly or via a subclass). Note that we are trying not to
    add more assertions at this point, and instead to build up a library
    of ``breezy.tests.matchers``.

TestCaseWithMemoryTransport
    Extends TestCase and adds methods like ``get_transport``,
    ``make_branch`` and ``make_branch_builder``. The files created are
    stored in a MemoryTransport that is discarded at the end of the test.
    This class is good for tests that need to make branches or use
    transports, but that don't require storing things on disk. All tests
    that create control directories should use this base class (either
    directly or via a subclass) as it ensures that the test won't
    accidentally operate on real branches in your filesystem.

TestCaseInTempDir
    Extends TestCaseWithMemoryTransport. For tests that really do need
    files to be stored on disk, e.g. because a subprocess uses a file, or
    for testing functionality that accesses the filesystem directly rather
    than via the Transport layer (such as dirstate).

TestCaseWithTransport
    Extends TestCaseInTempDir. Provides ``get_url`` and
    ``get_readonly_url`` facilities. Subclasses can control the
    transports used by setting ``vfs_transport_factory``,
    ``transport_server`` and/or ``transport_readonly_server``.

See the API docs for more details.

BranchBuilder
~~~~~~~~~~~~~

When writing a test for a feature, it is often necessary to set up a
branch with a certain history. The ``BranchBuilder`` interface allows the
creation of test branches in a quick and easy manner. Here's a sample
session::

    builder = self.make_branch_builder('relpath')
    builder.build_commit()
    builder.build_commit()
    builder.build_commit()
    branch = builder.get_branch()

``make_branch_builder`` is a method of ``TestCaseWithMemoryTransport``.

Note that many current tests create test branches by inheriting from
``TestCaseWithTransport`` and using the ``make_branch_and_tree`` helper to
give them a ``WorkingTree`` that they can commit to. However, using the
newer ``make_branch_builder`` helper is preferred, because it can build
the changes in memory, rather than on disk. Tests that are explicitly
testing how we work with disk objects should, of course, use a real
``WorkingTree``.

Please see breezy.branchbuilder for more details.

If you're going to examine the commit timestamps e.g. in a test for log
output, you should set the timestamp on the tree, rather than using fuzzy
matching in the test.
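
For example, a fixed timestamp can be passed straight to ``commit`` (a
sketch; the values are arbitrary)::

    tree = self.make_branch_and_tree('tree')
    tree.commit('fixed-time commit',
                timestamp=1200000000,  # seconds since the epoch
                timezone=0)            # UTC offset in seconds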

TreeBuilder
~~~~~~~~~~~

The ``TreeBuilder`` interface allows the construction of arbitrary trees
with a declarative interface. A sample session might look like::

    tree = self.make_branch_and_tree('path')
    builder = TreeBuilder()
    builder.start_tree(tree)
    builder.build(['foo', "bar/", "bar/file"])
    tree.commit('commit the tree')
    builder.finish_tree()

Usually a test will create a tree using ``make_branch_and_memory_tree`` (a
method of ``TestCaseWithMemoryTransport``) or ``make_branch_and_tree`` (a
method of ``TestCaseWithTransport``).

Please see breezy.treebuilder for more details.

PreviewTrees
~~~~~~~~~~~~

PreviewTrees are based on TreeTransforms. This means they can represent
virtually any state that a WorkingTree can have, including unversioned files.
They can be used to test the output of anything that produces TreeTransforms,
such as merge algorithms and revert. They can also be used to test anything
that takes arbitrary Trees as its input.

For example::

    # Get an empty tree to base the transform on.
    b = self.make_branch('.')
    empty_tree = b.repository.revision_tree(_mod_revision.NULL_REVISION)
    tt = TransformPreview(empty_tree)
    self.addCleanup(tt.finalize)
    # Empty trees don't have a root, so add it first.
    root = tt.new_directory('', ROOT_PARENT, 'tree-root')
    # Set the contents of a file.
    tt.new_file('new-file', root, 'contents', 'file-id')
    preview = tt.get_preview_tree()
    # Test the contents.
    self.assertEqual('contents', preview.get_file_text('file-id'))

PreviewTrees can stack, with each tree falling back to the previous::

    tt2 = TransformPreview(preview)
    self.addCleanup(tt2.finalize)
    tt2.new_file('new-file2', tt2.root, 'contents2', 'file-id2')
    preview2 = tt2.get_preview_tree()
    self.assertEqual('contents', preview2.get_file_text('file-id'))
    self.assertEqual('contents2', preview2.get_file_text('file-id2'))

Temporarily changing state
~~~~~~~~~~~~~~~~~~~~~~~~~~

If your test needs to temporarily mutate some global state, and you need
it restored at the end, you can say for example::

    self.overrideAttr(osutils, '_cached_user_encoding', 'latin-1')

This should be used with discretion; sometimes it's better to make the
underlying code more testable so that you don't need to rely on monkey
patching.

Observing calls to a function
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Sometimes it's useful to observe how a function is called, typically when
calling it has side effects but the side effects are not easy to observe
from a test case. For instance the function may be expensive and we want
to assert it is not called too many times, or it has effects on the
machine that are safe to run during a test but not easy to measure. In
these cases, you can use `recordCalls` which will monkey-patch in a
wrapper that records when the function is called.
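
A rough sketch, assuming ``recordCalls`` patches the named attribute and
returns a list that grows by one record per call (the patched function is
just an illustrative target)::

    from breezy import osutils

    calls = self.recordCalls(osutils, 'sha_string')
    osutils.sha_string(b'some content')
    self.assertEqual(1, len(calls))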

Temporarily changing environment variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If your test needs to temporarily change some environment variable value
(which generally means you want it restored at the end), you can use::

    self.overrideEnv('BRZ_ENV_VAR', 'new_value')

If you want to remove a variable from the environment, you should use the
special ``None`` value::

    self.overrideEnv('PATH', None)

If you add a new feature which depends on a new environment variable, make
sure it behaves properly when this variable is not defined (if applicable) and
if you need to enforce a specific default value, check the
``TestCase._cleanEnvironment`` in ``breezy.tests.__init__.py`` which defines a
proper set of values for all tests.

Cleaning up
-----------

Our base ``TestCase`` class provides an ``addCleanup`` method, which
should be used instead of ``tearDown``. All the cleanups are run when the
test finishes, regardless of whether it passes or fails. If one cleanup
fails, later cleanups are still run.

(The same facility is available outside of tests through
``breezy.cleanup``.)
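
For instance, a cleanup can be registered right where the state is set up
(``acquire_resource`` stands in for any hypothetical setup helper)::

    def test_with_cleanup(self):
        resource = acquire_resource()      # hypothetical setup helper
        self.addCleanup(resource.release)  # runs even if the test fails
        self.assertTrue(resource.is_active())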


Manual testing
==============

Generally we prefer automated testing but sometimes a manual test is the
right thing, especially for performance tests that want to measure elapsed
time rather than effort.

Simulating slow networks
------------------------

To get realistically slow network performance for manually measuring
performance, we can simulate 500ms latency (thus 1000ms round trips)::

  $ sudo tc qdisc add dev lo root netem delay 500ms

Normal system behaviour is restored with ::

  $ sudo tc qdisc del dev lo root

A more precise version that only filters traffic to port 4155 is::

  tc qdisc add dev lo root handle 1: prio
  tc qdisc add dev lo parent 1:3 handle 30: netem delay 500ms
  tc filter add dev lo protocol ip parent 1:0 prio 3 u32 match ip dport 4155 0xffff flowid 1:3
  tc filter add dev lo protocol ip parent 1:0 prio 3 u32 match ip sport 4155 0xffff flowid 1:3

and to remove this::

  tc filter del dev lo protocol ip parent 1: pref 3 u32
  tc qdisc del dev lo root handle 1:

You can use similar code to add additional delay to a real network
interface, perhaps only when talking to a particular server or pointing at
a VM. For more information see <http://lartc.org/>.


.. |--| unicode:: U+2014

..
   vim: ft=rst tw=74 ai et sw=4