This directory contains the test suite for notmuch.
When fixing bugs or enhancing notmuch, you are strongly encouraged to
add tests in this directory to cover what you are trying to fix or
implement.

Some tests require external dependencies to run. Without them, they
will be skipped or (rarely) marked failed. Please install these, so
that you know if you break anything.
The easiest way to run tests is to say "make test" (or simply run the
notmuch-test script). Either command will run all available tests.

Alternately, you can run a specific subset of tests by simply invoking
one of the executable scripts in this directory (such as ./search,
./reply, etc.). Note that you will probably want to run "make
test-binaries" before running individual tests.
The following command-line options are available when running tests:
--debug::

	This may help the person who is developing a new test.
	It causes the command defined with test_debug to run.
--immediate::

	This causes the test to immediately exit upon the first
	failed test.
--valgrind::

	Execute notmuch with valgrind and exit with status
	126 on errors (just like regular tests, this will only stop
	the test script when running under -i). Valgrind errors
	go to stderr, so you might want to pass the -v option, too.

	Since it makes no sense to run the tests with --valgrind and
	not see any output, this option implies --verbose. For
	convenience, it also implies --tee.
--tee::

	In addition to printing the test output to the terminal,
	write it to files named 't/test-results/$TEST_NAME.out'.
	As the names depend on the tests' file names, it is safe to
	run the tests with this option in parallel.
--root=<directory>::

	This runs the test suites specified under a separate directory.
	However, caution is advised, as not all tests are maintained
	with this relocation in mind, so some tests may behave
	differently.
	Pointing this argument at a tmpfs filesystem can improve the
	speed of the test suite for some users.
Certain tests require precomputed databases to complete. You can fetch
these with

	make download-test-databases

If you do not download the test databases, the relevant tests will be
skipped.
When invoking the test suite via "make test", any of the above options
can be specified as follows:

	make test OPTIONS="--verbose"
You can choose an emacs binary (and corresponding emacsclient) to run
the tests in one of the following ways:

	TEST_EMACS=my-special-emacs TEST_EMACSCLIENT=my-emacsclient make test
	TEST_EMACS=my-special-emacs TEST_EMACSCLIENT=my-emacsclient ./emacs
	make test TEST_EMACS=my-special-emacs TEST_EMACSCLIENT=my-emacsclient
Some tests may require a C compiler. You can choose its name and flags
similarly:

	make test TEST_CC=gcc TEST_CFLAGS="-g -O2"
Normally, when a new script starts and when a test PASSes, you get a
message printed on screen. This printing can be disabled by setting the
NOTMUCH_TEST_QUIET variable to a non-null value. Messages on test
failures and skips are still printed.
If, for any reason, you need to skip one or more tests, you can do so
by setting the NOTMUCH_SKIP_TESTS variable to the name of one or more
sections of tests. For example:

	$ NOTMUCH_SKIP_TESTS="search reply" make test
Even more fine-grained skipping is possible by appending a test number
(or glob pattern) after the section name. For example, the first
search test and the second reply test could be skipped with:

	$ NOTMUCH_SKIP_TESTS="search.1 reply.2" make test
Note that some tests in the existing test suite rely on previous test
items, so you cannot arbitrarily skip any test and expect the
remaining tests to be unaffected.
Currently we do not consider skipped tests as build failures. For
maximum robustness, when setting up automated build processes, you
should explicitly skip tests, rather than relying on notmuch's
detection of missing prerequisites. In the future we may treat tests
unable to run because of missing prerequisites, but not explicitly
skipped by the user, as failures.
The test script is written as a shell script. It should start with
the standard "#!/usr/bin/env bash" line, followed by copyright notices
and an assignment to the variable 'test_description', like this:

	#!/usr/bin/env bash
	#
	# Copyright (c) 2005 Junio C Hamano
	#

	test_description='xxx test (option --frotz)

	This test exercises the "notmuch xxx" command when
	given the option --frotz.'
After assigning test_description, the test script should source
test-lib.sh like this:

	. ./test-lib.sh || exit 1
This test harness library does the following things:

 - If the script is invoked with command line argument --help
   (or -h), it shows the test_description and exits.
 - Creates a temporary directory with a default notmuch-config and a
   mail store with a corpus of mail (initially, 50 early messages
   sent to the notmuch list). This directory is
   test/tmp.<test-basename>. The path to notmuch-config is exported
   in the NOTMUCH_CONFIG environment variable, and the mail store
   path is stored in the MAIL_DIR variable.
 - Defines standard test helper functions for your scripts to
   use. These functions are designed to make all scripts behave
   consistently when the command line arguments --verbose (or -v),
   --debug (or -d), and --immediate (or -i) are given.
Your script will be a sequence of tests, using helper functions
from the test harness library. At the end of the script, call
test_done.
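Putting these pieces together, a complete test script has the following shape. This is a sketch: the test_* helpers are stubbed inline here (in greatly simplified form) so the example is self-contained, whereas a real script obtains the full versions by sourcing test-lib.sh.

```shell
#!/usr/bin/env bash
# Minimal shape of a notmuch test script (illustrative sketch).
test_description='example test

This sketch shows the overall structure of a test script.'

# A real script would instead do:  . ./test-lib.sh || exit 1
# Simplified stand-ins for the harness helpers:
test_begin_subtest () { test_subtest_name="$1"; }
test_expect_equal () {
    if [ "$1" = "$2" ]; then
        echo "PASS   $test_subtest_name"
    else
        echo "FAIL   $test_subtest_name"
    fi
}
test_done () { echo "done"; }

# The body of the script: a sequence of tests.
test_begin_subtest "echo produces expected output"
output=$(echo hello)
test_expect_equal "$output" "hello"

# Every script ends by calling test_done.
test_done
```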
There are a handful of helper functions defined in the test harness
library for your script to use.
test_expect_success <message> <script>

   This takes two strings as parameters and evaluates
   <script>. If it yields success, the test is considered
   successful. <message> should state what it is testing.
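Its semantics can be sketched roughly as follows (simplified; the real implementation in test-lib.sh also handles counting, --immediate handling, and output control):

```shell
# Simplified stand-in for test_expect_success: evaluate <script>
# and report PASS or FAIL together with <message>.
test_expect_success () {
    message="$1"
    script="$2"
    if eval "$script"; then
        echo "PASS   $message"
    else
        echo "FAIL   $message"
    fi
}

test_expect_success 'true is successful' 'true'
```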
test_begin_subtest <message>

   Set the test description message for a subsequent test_expect_equal
   invocation (see below).
test_subtest_known_broken

   Mark the current test as broken. Such tests are expected to fail.
   Unlike the normal tests, which say "PASS" on success and "FAIL" on
   failure, these will say "FIXED" on success and "BROKEN" on failure.
   Failures from these tests won't cause -i (immediate) to stop. A
   test must call this before any test_expect_* function.
test_expect_equal <output> <expected>

   This is an often-used convenience function built on top of
   test_expect_success. It uses the message from the last
   test_begin_subtest call, so call test_begin_subtest before calling
   test_expect_equal. This function generates a successful test if
   the <output> and <expected> strings are identical. If not, it
   will generate a failure and print the difference of the two
   strings.
test_expect_equal_file <file1> <file2>

   Identical to test_expect_equal, except that <file1> and <file2>
   are files instead of strings. This is a much more robust method to
   compare formatted textual information, since it also notices
   whitespace and closing newline differences.
test_expect_equal_json <output> <expected>

   Identical to test_expect_equal, except that the two strings are
   treated as JSON and canonicalized before equality testing. This is
   useful to abstract away whitespace differences between the expected
   output and the output generated by running a notmuch command.
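The canonicalization idea can be sketched like this (illustrative only; the exact normalization the harness applies may differ):

```shell
# Pretty-print JSON in a canonical form so that two strings which
# differ only in whitespace compare equal (sort_keys additionally
# normalizes key order; it is used here purely for illustration).
canonicalize_json () {
    python3 -c 'import json, sys; print(json.dumps(json.load(sys.stdin), indent=4, sort_keys=True))'
}

a=$(echo '{"b": 1, "a": 2}' | canonicalize_json)
b=$(echo '{ "a" : 2, "b" : 1 }' | canonicalize_json)
[ "$a" = "$b" ] && echo "equal"
```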
test_debug <script>

   This takes a single argument, <script>, and evaluates it only
   when the test script is started with the --debug command line
   argument. This is primarily meant for use during the
   development of a new test script.
test_emacs <emacs-lisp-expressions>

   This function executes the provided emacs lisp script within
   emacs. The script can be a sequence of emacs lisp expressions
   (that is, they will be evaluated within a progn form). Emacs
   stdout and stderr are not available; the common way to get output
   is to save it to a file. There are some auxiliary functions
   useful in emacs tests provided in test-lib.el. Do not use `setq'
   for setting variables in Emacs tests, because it affects other
   tests that may run in the same Emacs instance. Use `let' instead,
   so the scope of the changed variables is limited to a single test.
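   For example, prefer a scoped binding over a global assignment
   (sketch; the variable and body here are only illustrative):

	;; BAD:  (setq message-signature "test signature")  — leaks
	;; into later tests run in the same Emacs instance.
	;; GOOD: the binding is undone when the let form exits.
	test_emacs '(let ((message-signature "test signature"))
	              ...)'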
test_emacs_expect_t <emacs-lisp-expressions>

   This function executes the provided emacs lisp script within
   emacs in a manner similar to 'test_emacs'. The expressions should
   return the value `t' to indicate that the test has passed. If the
   test does not return `t' then it is considered failed and all data
   returned by the test is reported to the tester.
test_done

   Your test script must have test_done at the end. Its purpose
   is to summarize successes and failures in the test script and
   exit with an appropriate error code.
There are also a number of notmuch-specific auxiliary functions and
variables which are useful in writing tests:
generate_message

   Generates a message with an optional template. Most tests will
   actually prefer to call add_message. See below.
add_message

   Generate a message and add it to the database (by calling "notmuch
   new"). It is sufficient to simply call add_message with no
   arguments if you don't care about the content of the message. If
   more control is needed, arguments can be provided to specify many
   different header values for the new message. See the documentation
   within test-lib.sh or refer to the many example calls within
   existing tests.
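   The calling convention takes '[header]=value' arguments. The stub
   below only echoes its arguments; it stands in for the real helper
   in test-lib.sh, which generates a full message from these
   overrides and indexes it:

```shell
# Hypothetical stand-in for add_message, illustrating only the
# '[header]=value' calling convention; the real helper lives in
# test-lib.sh and actually creates and indexes a message.
add_message () {
    printf 'would add message with overrides: %s\n' "$*"
}

add_message '[subject]="test message"' '[from]="Sender <sender@example.org>"'
```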
add_email_corpus

   This function should be called at the beginning of a test file
   when a test needs to operate on a non-empty body of messages. It
   will initialize the mail database to a known state of 50 sample
   messages (culled from the early history of the notmuch mailing
   list).
notmuch_counter_reset
$notmuch_counter_command
notmuch_counter_value

   These allow counting how many times the notmuch binary is called.
   The notmuch_counter_reset() function generates a script that counts
   how many times it is called, and resets the counter to zero. The
   function sets the $notmuch_counter_command variable to the path to
   the generated script, which should be called instead of notmuch to
   do the counting. The notmuch_counter_value() function prints the
   current counter value.
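   The counting mechanism can be sketched as follows. This is a
   self-contained illustration of the idea, not the test-lib.sh
   implementation: a generated wrapper script bumps a counter file
   each time it is invoked.

```shell
# Sketch of the counter idea: create a wrapper script that
# increments a counter file on every invocation.
counter_file=$(mktemp)
wrapper=$(mktemp)
echo 0 > "$counter_file"

# The heredoc embeds the counter file's path; \$ defers the
# arithmetic to the wrapper's own runtime.
cat > "$wrapper" <<EOF
#!/bin/sh
read count < "$counter_file"
echo \$((count + 1)) > "$counter_file"
EOF
chmod +x "$wrapper"

# Call the wrapper three times, then read the count.
"$wrapper"; "$wrapper"; "$wrapper"
counter_value=$(cat "$counter_file")
echo "$counter_value"
```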
There are also functions which remove various environment-dependent
values from notmuch output; these are useful to ensure that test
results remain consistent across different machines.
notmuch_search_sanitize
notmuch_show_sanitize
notmuch_show_sanitize_all
notmuch_json_show_sanitize

   All these functions should receive the text to be sanitized as the
   input of a pipe, e.g.

   output=`notmuch search "..." | notmuch_search_sanitize`
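   A homegrown sanitizer follows the same pipe-filter pattern. The
   substitution below is illustrative (see test-lib.sh for what each
   real function replaces): thread IDs vary from run to run, so they
   are mapped to a fixed token before comparison.

```shell
# Replace variable thread IDs with a fixed token so that search
# output can be compared verbatim (pattern is illustrative).
my_sanitize () {
    sed 's/thread:[0-9a-f]*/thread:XXX/g'
}

output=$(echo "thread:000000000000002a   2010-12-29 [1/1] subject" | my_sanitize)
echo "$output"
```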