Next: srfi testing spec advanced, Up: srfi testing spec [Index]
Let’s start with a simple example. This is a complete self-contained test suite.
;; Initialize and give a name to a simple testsuite.
(test-begin "vec-test")
(define v (make-vector 5 99))
;; Require that an expression evaluate to true.
(test-assert (vector? v))
;; Test that an expression is eqv? to some other expression.
(test-eqv 99 (vector-ref v 2))
(vector-set! v 2 7)
(test-eqv 7 (vector-ref v 2))
;; Finish the testsuite, and report results.
(test-end "vec-test")
This test suite could be saved in its own source file. Nothing else is needed: we do not require any top-level forms, so it is easy to wrap an existing program or test suite in this form, without adding indentation. It is also easy to add new tests, without having to name individual tests (though that is optional).
Test cases are executed in the context of a test runner, which is an object that accumulates and reports test results. This specification defines how to create and use custom test runners, but implementations should also provide a default test runner. It is suggested (but not required) that loading the above file in a top-level environment will cause the tests to be executed using an implementation-specified default test runner, and that test-end will cause a summary to be displayed in an implementation-specified manner.
Primitive test cases test that a given condition is true. They may have a name. The core test case form is test-assert.
Evaluate the ?expression. The test passes if the result is true; if the result is false, a test failure is reported. The test also fails if an exception is raised, assuming the implementation has a way to catch exceptions.
How the failure is reported depends on the test runner environment. The ?test-name is a string that names the test case. (Though the ?test-name is a string literal in the examples, it is an expression. It is evaluated only once.) It is used when reporting errors, and also when skipping tests, as described below.
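As a small sketch, a named test case (the name "v-has-length-5" is illustrative); the name appears in failure and skip reports produced by the test runner:

```scheme
;; The string names the test case for the test runner's reports.
(test-assert "v-has-length-5"
  (= 5 (vector-length (make-vector 5 99))))
```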
It is an error to invoke test-assert if there is no current test runner.
The following forms may be more convenient than using test-assert directly:
For example, test-eqv is equivalent to:
(test-assert ?test-name (eqv? ?expected ?test-expr))
Similarly, test-equal and test-eq are shorthand for test-assert combined with equal? or eq?, respectively:
(test-equal [test-name] expected test-expr)
(test-eq [test-name] expected test-expr)
Here is a simple example:
(define (mean x y) (/ (+ x y) 2.0))
(test-eqv 4.0 (mean 3 5))
For testing approximate equality of inexact reals we can use test-approximate:
(test-approximate [test-name] expected test-expr error)
This is equivalent to (except that each argument is only evaluated once):
(test-assert [test-name]
  (and (>= test-expr (- expected error))
       (<= test-expr (+ expected error))))
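For instance, a sketch checking a computed approximation of π against a rounded expected value, with a tolerance wide enough to absorb the rounding:

```scheme
;; Passes when the result lies within 0.001 of 3.14159.
(test-approximate 3.14159 (* 4 (atan 1)) 0.001)
```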
We need a way to specify that evaluation should fail. This verifies that errors are detected when required.
Evaluating ?test-expr is expected to signal an error. The kind of error is indicated by ?error-type.
If the ?error-type is left out, or it is #t, it means “some kind of unspecified error should be signaled”. For example:
(test-error #t (vector-ref '#(1 2) 9))
This specification leaves it implementation-defined (or for a future specification) what form ?error-type may take, though all implementations must allow #t. Some implementations may support SRFI-35’s conditions, but these are only standardized for SRFI-36’s I/O conditions, which are seldom useful in test suites.
An implementation may also allow implementation-specific “exception types”. For example, Java-based implementations may allow the names of Java exception classes:

;; Kawa-specific example
(test-error <java.lang.IndexOutOfBoundsException>
  (vector-ref '#(1 2) 9))
An implementation that cannot catch exceptions should skip test-error forms.
Testing syntax is tricky, especially if we want to check that invalid syntax causes an error. The following utility function can help.
Parse string (using read) and evaluate the result. The result of evaluation is returned by test-read-eval-string. An error is signalled if there are unread characters after the read is done. For example:
(test-read-eval-string "(+ 3 4)")
Evaluates to 7.
(test-read-eval-string "(+ 3 4")
Signals an error.
(test-read-eval-string "(+ 3 4) ")
Signals an error, because there is extra “junk” (i.e. a space) after the list is read.
Here is test-read-eval-string used in tests:
(test-equal 7 (test-read-eval-string "(+ 3 4)"))
(test-error (test-read-eval-string "(+ 3"))
(test-equal #\newline (test-read-eval-string "#\\newline"))
(test-error (test-read-eval-string "#\\newlin"))
;; Skip the next 2 tests unless srfi-62 is available.
(test-skip (cond-expand (srfi-62 0) (else 2)))
(test-equal 5 (test-read-eval-string "(+ 1 #;(* 2 3) 4)"))
(test-equal '(x z) (test-read-eval-string "(list 'x #;'y 'z)"))
A test group is a named sequence of forms containing test cases, expressions, and definitions. Entering a group sets the test group name; leaving a group restores the previous group name. These are dynamic (run time) operations, and a group has no other effect or identity. Test groups are informal groupings: they are neither Scheme values, nor are they syntactic forms.
A test group may contain nested inner test groups. The test group path is a list of the currently-active (entered) test group names, oldest (outermost) first.
Enter a new test group. The ?suite-name becomes the current test group name, and is added to the end of the test group path. Portable test suites should use a string literal for ?suite-name; the effect of expressions or other kinds of literals is unspecified.
RATIONALE In some ways using symbols would be preferable. However, we want human-readable names, and standard Scheme does not provide a way to include spaces or mixed-case text in literal symbols.
The optional ?count must match the number of test cases executed by this group. (Nested test groups count as a single test case for this count.) This extra test may be useful to catch cases where a test doesn’t get executed because of some unexpected error.
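As a sketch, a group that declares its expected count (the group name "arith" is illustrative); if an error prevented one of the test cases from running, the count mismatch would itself be reported as a failure:

```scheme
(test-begin "arith" 2)  ; expect exactly 2 test cases
(test-eqv 4 (+ 2 2))
(test-eqv 6 (* 2 3))
(test-end "arith")
```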
Additionally, if there is no currently executing test runner, one is installed in an implementation–defined manner.
Leave the current test group. An error is reported if the ?suite-name does not match the current test group name.
Additionally, if the matching test-begin installed a new test-runner, then the test-end will deinstall it, after reporting the accumulated test results in an implementation-defined manner.
Equivalent to:
(if (not (test-to-skip% ?suite-name))
    (dynamic-wind
      (lambda () (test-begin ?suite-name))
      (lambda () ?decl-or-expr ...)
      (lambda () (test-end ?suite-name))))
This is usually equivalent to executing the ?decl-or-exprs within the named test group. However, the entire group is skipped if it matched an active test-skip (see later). Also, the test-end is executed in case of an exception.
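For example, a sketch of nested groups written with test-group (the group names are illustrative); the inner group counts as a single test case of the outer one:

```scheme
(test-group "containers"
  (test-assert (vector? (make-vector 3 0)))
  ;; Nested group: counts as one test case in "containers".
  (test-group "lists"
    (test-eqv 2 (length '(a b)))))
```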
Execute each of the ?decl-or-expr forms in order (as in a <body>), and then execute the ?cleanup-form. The latter should be executed even if one of the ?decl-or-expr forms raises an exception (assuming the implementation has a way to catch exceptions).
For example:
(test-group-with-cleanup "test-file"
  (define f (open-output-file "log"))
  (do-a-bunch-of-tests f)
  (close-output-port f))