There is really no excuse not to write tests!
Automatic testing of software during development is probably the most important Quality Assurance measure. Tests can validate the final system, which is nice for your users. However, most (Prolog) developers forget that tests are not merely a burden during development.
Tests are written in pure Prolog and enclosed within the directives
begin_tests/1,2
and end_tests/1.
They can be embedded inside a normal source module, or be placed in a
separate test-file that loads the files to be tested. Code inside a test
box is normal Prolog code. The entry points are defined by rules using
the head test(Name)
or
test(Name, Options)
, where Name is a ground term
and
Options is a list describing additional properties of the
test. Here is a very simple example:
:- begin_tests(lists).
:- use_module(library(lists)).

test(reverse) :-
    reverse([a,b], [b,a]).

:- end_tests(lists).
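After loading a file that contains this unit, the tests can be run from the Prolog toplevel using run_tests/1, which is described in more detail below:

?- run_tests(lists).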
The optional second argument of the test-head defines additional processing options. Defined options are:

blocked(Reason)
The test is currently not executed because of Reason.

fixme(Fixme)
Similar to blocked(Reason), but the test is executed anyway. If it fails, a - is printed instead of the . character; if it passes, a +; and if it passes with a choicepoint, a !. A summary is printed at the end of the test run and the goal test_report(fixme) can be used to get details.

condition(Goal)
Pre-condition for running the test, comparable to the setup option. The only difference is that failure of a condition skips the test, while failure is considered an error when using the setup option.

cleanup(Goal)
Goal is called after completion of the test body, regardless of whether it succeeds, fails or throws an exception. It is typically used to undo side-effects, such as removing the temporary file in the example below:

create_file(Tmp) :-
    tmp_file(plunit, Tmp),
    open(Tmp, write, Out),
    write(Out, 'hello(World).\n'),
    close(Out).

test(read, [ setup(create_file(Tmp)),
             cleanup(delete_file(Tmp))
           ]) :-
    read_file_to_terms(Tmp, Terms, []),
    Terms = [hello(_)].
setup(Goal)
Goal is run before the test body, typically together with the cleanup option to create and destroy the required execution environment.

forall(Generator)
Run the same test for each solution of Generator. If a run fails, the test is reported as name (forall bindings = <vars>), where <vars> indicates the bindings of variables in Generator. See the example sketch following this list.

nondet
States that the test may succeed while leaving a choicepoint; without this keyword, a test that succeeds with a remaining choicepoint is reported (see the examples in the next section).

true(AnswerTerm Cmp Value)
AnswerTerm is compared to Value using the comparison operator Cmp. Cmp is typically one of =/2, ==/2, =:=/2 or =@=/2 (the =@= predicate, denoting structural equivalence, is the same as variant/2 in SICStus), but any test can be used. This is the same as inserting the test at the end of the conjunction, but it allows the test engine to distinguish between failure of copy_term/2 and producing the wrong value. Multiple variables must be combined in an arbitrary compound term, e.g. A1-A2 == v1-v2.

test(copy, [ true(Copy =@= hello(X,X)) ]) :-
    copy_term(hello(Y,Y), Copy).
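To illustrate combining several output variables into one compound term for the true option, here is a minimal sketch; the test name and the arithmetic are made up for the example:

test(pair, true(Min-Max == 1-3)) :-
    Min is min(1, 3),
    Max is max(1, 3).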
Cmp(AnswerTerm, Value)
Same as true(AnswerTerm Cmp Value) if Cmp is one of the comparison operators given above.

throws(Error)
The test passes if the body throws an exception that matches Error. The thrown error is compared to Error using subsumes_term(Error, ThrownError), i.e., the thrown error must be more specific than the specified Error. See subsumes_term/2.

error(Error)
Same as throws(error(Error, _Context)). See keyword throws (as well as predicate throw/1 and library(error)) for details.

all(AnswerTerm Cmp Instances)
Similar to true(AnswerTerm Cmp Value), but used for non-deterministic predicates. Each element is compared using Cmp. Order matters. For example:

test(or, all(X == [1,2])) :-
    ( X = 1
    ; X = 2
    ).
set(AnswerTerm Cmp Instances)
Similar to all(AnswerTerm Cmp Instances), but before testing, both the bindings of AnswerTerm and Instances are sorted using sort/2. This removes duplicates and places both sets in the same order. (The result is only well defined if Cmp is ==.)

occurs_check(Mode)
Run the test body in the given occurs-check mode, where Mode is one of false (default), true or error. See the Prolog flag occurs_check for details.

A test unit is opened with the directive begin_tests(Name) or begin_tests(Name, Options); the first form is equivalent to begin_tests(Name, []). Options passed to begin_tests/2 apply to the unit as a whole; for example, the occurs_check option may be given here to set the default occurs-check mode for all tests in the unit.
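As a sketch of the forall option mentioned above (the test name and the use of succ/2 are chosen purely for illustration), the following runs the same check for every element produced by the generator:

test(succ, forall(member(X, [1,2,3]))) :-
    succ(X, Y),
    Y =:= X + 1.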
The test-body is ordinary Prolog code. Without any options, the body
must be designed to succeed deterministically. Any other result
is considered a failure. One of the options fail
, true
,
throws
, all
or set
can be used to
specify a different expected result. See section
2 for details. In this section we illustrate typical test-scenarios
by testing SWI-Prolog built-in and library predicates.
Deterministic predicates are predicates that must succeed exactly once and, for well behaved predicates, leave no choicepoints. Typically they have zero or more input arguments and zero or more output arguments. The test goal supplies proper values for the input arguments and verifies the output arguments. Verification can use test-options or be explicit in the body. The tests in the example below are equivalent.
test(add) :-
    A is 1 + 2,
    A =:= 3.
test(add, [true(A =:= 3)]) :-
    A is 1 + 2.
The test engine verifies that the test-body does not leave a choicepoint. We illustrate that using the test below:
test(member) :- member(b, [a,b,c]).
Although this test succeeds, member/2 leaves a choicepoint which is reported by the test subsystem. To make the test silent, use one of the alternatives below.
test(member) :-
    member(b, [a,b,c]), !.
test(member, [nondet]) :-
    member(b, [a,b,c]).
Semi-deterministic predicates are predicates that either fail or
succeed exactly once and, for well behaved predicates, leave no
choicepoints. Testing such predicates is the same as testing
deterministic predicates. Negative tests must be specified using the
option
fail
or by negating the body using \+/1
.
test(is_set) :-
    \+ is_set([a,a]).
test(is_set, [fail]) :-
    is_set([a,a]).
Non-deterministic predicates succeed zero or more times. Their
results are tested either using findall/3
or setof/3
followed by a value-check or using the all
or set
options. The following are equivalent tests:
test(member) :-
    findall(X, member(X, [a,b,c]), Xs),
    Xs == [a,b,c].
test(member, all(X == [a,b,c])) :-
    member(X, [a,b,c]).
Error-conditions are tested using the option throws(Error)
or by wrapping the test in a catch/3.
The following tests are equivalent:
test(div0) :-
    catch(A is 1/0, error(E, _), true),
    E =@= evaluation_error(zero_divisor).
test(div0, [error(evaluation_error(zero_divisor))]) :-
    A is 1/0.
PlUnit is designed to cooperate with the assertion/1 test provided by library(debug). (This integration was suggested by Günter Kniesel.) If an assertion fails in the context of a test, the test framework reports this and considers the test failed, but does not trap the debugger. Using assertion/1 in the test-body is attractive notably when a single test checks several claims, because the report states exactly which assertion failed.
Below is a simple example, showing two failing assertions. The first line of the failure message gives the test. The second reports the location of the assertion, if known; the location is determined by analysing the stack. The second failure shows a case where this does not work because last-call optimization has already removed the context of the test-body. If the assertion call originates from a different file, this is reported appropriately. The last line gives the goal that actually failed.
:- begin_tests(test).

test(a) :-
    A is 2^3,
    assertion(float(A)),
    assertion(A == 9).

:- end_tests(test).
?- run_tests.
% PL-Unit: test
ERROR: /home/jan/src/pl-devel/linux/t.pl:5:
    test a: assertion at line 7 failed
    Assertion: float(8)
ERROR: /home/jan/src/pl-devel/linux/t.pl:5:
    test a: assertion failed
    Assertion: 8==9
. done
% 2 assertions failed
Test-units can be embedded in normal Prolog source-files.
Alternatively, tests for a source-file can be placed in another file
alongside the file to be tested. Test files use the extension .plt
.
The predicate
load_test_files/1
can load all files that are related to source-files loaded into the
current project.
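For instance, after loading a program at the toplevel, the related .plt files can be loaded by passing an empty option list:

?- load_test_files([]).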
To run tests from the Prolog prompt, first load the program and then run run_tests/0 or run_tests(+Unit). An individual test can be addressed as Unit:Test and, if desired, run under the graphical debugger:

?- gtrace, run_tests(lists:member).

To identify non-terminating tests, interrupt the looping process with Control-C. The test name and location will then be displayed.
To run a file's tests from the command line, run the following
command, replacing your/file.pl
with the path to your file.
swipl -g run_tests -t halt your/file.pl
Prolog will (1) load the file you specify, as well as any modules it depends on; (2) run all tests in those files, and (3) exit with status 0 or 1 depending on whether the test suite succeeds or fails.
If you want to test multiple files, you can pass multiple .pl files, as shown below.
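For example, with two (hypothetical) source files src/foo.pl and src/bar.pl that contain test units:

swipl -g run_tests -t halt src/foo.pl src/bar.pl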
Most applications do not want the test-suite to end up in the final application. There are several ways to achieve this. One is to place all tests in separate files and not to load the tests when creating the production environment. Alternatively, use the directive below before loading the application.
:- set_test_options([load(never)]).
Loading and running of tests is controlled through set_test_options/1; an example directive combining several of these options follows the list. Defined options include:

load(Load)
Determines whether or not test units are loaded. When never, everything between begin_tests/1 and end_tests/1 is simply ignored. When always, tests are always loaded. Finally, when using the default value normal, tests are loaded if the code is not compiled with optimisation turned on.

run(When)
Determines when the tests are run. When manual, tests can only be run using run_tests/0 or run_tests/1. Using make, tests will be run for reloaded files, but not for files loaded the first time. Using make(all), make/0 will run all test-suites, not only those that belong to files that are reloaded.
format(Format)
One of tty (default if there is a console) or log. tty uses terminal control to overwrite successful tests, allowing the user to see the currently running tests and the output from failed tests; this is the default if the output is a tty. log prints a full log of the executed tests and their results and is intended for non-interactive usage.

output(When)
If always, emit all output as it is produced; if never, suppress all output; and if on_failure, emit the output only if the test fails.

occurs_check(Mode)
Defines the default value for the occurs_check flag during testing.

cleanup(Bool)
If true (default false), clean up the report at the end of run_tests/1. Used to improve cooperation with memory debuggers such as dmalloc.

In addition, the predicate load_test_files/1 loads the .plt test-files that belong to the currently loaded sources, and test_report(fixme) prints a report providing details on how the fixme-flagged tests proceeded.
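As a sketch of combining these options, a directive such as the following could be used for non-interactive runs; the particular values shown are only one plausible choice:

:- set_test_options([format(log), output(on_failure)]).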
Prolog is an interactive environment. Where users of non-interactive systems tend to write tests as code, Prolog developers tend to run queries interactively during development. This interactive testing is generally faster, but the disadvantage is that the tests are lost at the end of the session. The test-wizard tries to combine the advantages. It collects toplevel queries and saves them to a specified file. Later, it extracts these queries from the file and locates the predicates that they test. It then re-runs each query and creates a test clause from it.
Auto-generating test cases is experimentally supported through the
library library(test_wizard)
. We briefly introduce the
functionality using examples. The first step is to log the queries into a file. This is accomplished with the commands below. Queries.pl is the file in which to store all queries; the user can choose any filename for this purpose. Multiple Prolog instances can share the same file, as data is appended to it and writes are properly locked to avoid file corruption.
:- use_module(library(test_wizard)).
:- set_prolog_flag(log_query_file, 'Queries.pl').
Next, we will illustrate using the library by testing the predicates
from library library(lists)
. To generate test cases we just
make calls on the terminal. Note that all queries are recorded and the
system will select the appropriate ones when generating the test unit
for a particular module.
?- member(b, [a,b]).
Yes
?- reverse([a,b], [b|A]).
A = [a] ;
No
Now we can generate the test-cases for the module list using make_tests/3:
?- make_tests(lists, 'Queries.pl', current_output).
:- begin_tests(lists).

test(member, [nondet]) :-
    member(b, [a, b]).
test(reverse, [true(A==[a])]) :-
    reverse([a, b], [b|A]).

:- end_tests(lists).
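Assuming make_tests/3 accepts any output stream (the example above passes current_output), the generated unit can also be written directly to a .plt file; a minimal sketch, with lists_ext.plt as a made-up file name:

?- setup_call_cleanup(
       open('lists_ext.plt', write, Out),
       make_tests(lists, 'Queries.pl', Out),
       close(Out)).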
The purpose of this module is to find which part of the program has been used by a certain goal. Usage is defined in terms of clauses for which the head unification succeeded. For each clause we count how often it succeeded and how often it failed. In addition we track all call sites, creating goal-by-goal annotated clauses.
The result is represented as a list of clause-references. As the references to clauses of dynamic predicates cannot be guaranteed, these are omitted from the result.
Using show_coverage/2 with the option annotate(true), implied by ext(Ext) or dir(Dir), the analysis creates a line-by-line copy of the source files, annotated with how many times each line was executed and with what logical results. These annotations rely on relating executable code to source locations, a facility shared with the source-level debugger. Source-level rewrites due to term or goal expansion may harm the results.
Where the annotated copy of a file is written, and which extension it gets, depends on whether the ext or dir option is specified. Each clause is annotated as follows:

###    Clause was never executed.
++N    Clause was entered N times and always succeeded.
--N    Clause was entered N times and never succeeded.
+N-M   Clause has succeeded N times and failed M times.
+N*M   Clause was entered N times and succeeded M times.
All call sites are annotated using the same conventions,
except that ---
is used to annotate subgoals that were
never called.
The option line_numbers(Bool), if true (default), adds line numbers to the annotated file, and color(Bool) controls ANSI colouring of the annotations, which also defaults to true.
For example, run a goal and create annotated files in a directory
cov
using:
?- show_coverage(mygoal, [dir(cov)]).
Because the annotations are coloured using ANSI escape sequences, less may be used with the -r flag to view them. Alternatively, programs such as ansi2html (Linux) may be used to convert the files to HTML. It would probably be better to integrate the output generation with library(pldoc/doc_htmlsrc).
One of the reasons to have tests is to simplify migrating code between Prolog implementations. Unfortunately, creating a portable test-suite implies a poor integration into the development environment. Luckily, the specification of the test-system proposed here can be ported quite easily to most Prolog systems that are sufficiently compatible with SWI-Prolog to consider porting your application. Most important is support for term_expansion/2.
In the current system, test units are compiled into sub-modules of the module in which they appear. Few Prolog systems allow for sub-modules, and therefore ports may have to fall back to injecting the code into the surrounding module. This implies that support predicates used inside the test unit should not conflict with predicates of the module being tested.
The directory of plunit.pl
and swi.pl
must
be in the
library
search-path. With PLUNITDIR replaced accordingly,
add the following into your .sicstusrc
or sicstus.ini
.
:- set_prolog_flag(language, iso).   % for maximal compatibility

library_directory('PLUNITDIR').
The current version runs under SICStus 3. Open issues:

- Only plunit.pl has been ported. Both coverage analysis and the test generation wizard currently require SWI-Prolog.
- The load option normal is the same as always. Use set_test_options([load(never)]) to avoid loading the test suites.
- The run option is not supported.
There are two approaches to testing. At one extreme, tests are written using declarations dealing with setup, cleanup, running and testing the result. At the other extreme, a test is simply a Prolog goal that is supposed to succeed. We have chosen to allow for any mixture of these approaches. A test written down as a plain test/1 clause follows the simple succeeding-goal approach; using options in the test head, the user can opt for a more declarative specification. The user can mix both approaches.
The body of the test appears at the position of a clause-body. This simplifies identification of the test body and ensures proper layout and colouring support from the editor without the need for explicit support of the unit test module. The only drawback is that clauses of test/1 and test/2 may be marked as non-called in environments that perform cross-referencing.