-
Notifications
You must be signed in to change notification settings - Fork 37
/
test-utils.sh
804 lines (664 loc) · 22.9 KB
/
test-utils.sh
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
414
415
416
417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
493
494
495
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
559
560
561
562
563
564
565
566
567
568
569
570
571
572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
587
588
589
590
591
592
593
594
595
596
597
598
599
600
601
602
603
604
605
606
607
608
609
610
611
612
613
614
615
616
617
618
619
620
621
622
623
624
625
626
627
628
629
630
631
632
633
634
635
636
637
638
639
640
641
642
643
644
645
646
647
648
649
650
651
652
653
654
655
656
657
658
659
660
661
662
663
664
665
666
667
668
669
670
671
672
673
674
675
676
677
678
679
680
681
682
683
684
685
686
687
688
689
690
691
692
693
694
695
696
697
698
699
700
701
702
703
704
705
706
707
708
709
710
711
712
713
714
715
716
717
718
719
720
721
722
723
724
725
726
727
728
729
730
731
732
733
734
735
736
737
738
739
740
741
742
743
744
745
746
747
748
749
750
751
752
753
754
755
756
757
758
759
760
761
762
763
764
765
766
767
768
769
770
771
772
773
774
775
776
777
778
779
780
781
782
783
784
785
786
787
788
789
790
791
792
793
794
795
796
797
798
799
800
801
802
803
# test-utils.sh
# Copyright 2006-2022 Alan K. Stebbens <[email protected]>
TEST_UTILS_VERSION="test-utils.sh v1.9"
[[ "$TEST_UTILS_SH" = "$TEST_UTILS_VERSION" ]] && return
TEST_UTILS_SH="$TEST_UTILS_VERSION"
export PATH=.:$HOME/lib:$PATH
source list-utils.sh
source help-util.sh
test_help() {
help_pager <<'EOF'
The `test-utils.sh` library provides an infrasructure for test-driven
development (TDD) of `bash` scripts.
Usage:
source test-utils.sh
test_NAME1() {
start_test
... # perform operations and test the results
end_test
}
test_NAME2() {
start_test
... # perform operations and test the results
end_test
}
init_tests [ARGUMENTS]
run_tests
summarize_tests
Description:
A *run* is a collection of *tests* (within a single file); each test has a name.
A *test* is a set of related operations with *checks* on the results.
A *check*` tests or compares values, which quietly succeeds, or results in an
error. The error message can be provided, or a default error message is used.
At the end of each test, the number of checks and errors is recorded for
later summarization.
At the end of the run, all checks and error counts are summarized.
While the tests and checks are being performed, output is occuring to show the
progress. There are three modes of output: _terse_, _errors-only_, and
_detailed_.
Terse mode shows each test name followed by the number of checks, and how many
of those checks had errors. Terse mode is the default.
In errors-only mode, successful tests still show the same as terse mode, but
tests with error checks show the error message followed by a stack dump
indicating the location of the error. Errors-mode is indicated by the `-e`
option when invoking the test script.
In details mode, the tests and checks are run in verbose mode, showing both
successful checks and errors. Details mode is indicated by the `-d` option.
When invoking the test script, the command line argument can be used to pass a
`PATTERN` that is used to match a subset of the test names. By default, all
tests with the pattern `test_*` are run. For example, if the pattern `basic`
is used, all tests with the string `basic` will be run, and no others.
In order to be discovered for automatic test runs, the tests functions must
have the function name begin with "test_". A corrolary to that is do not
name any functions with the 'test_' prefix unless it is intended for them to
be discovered and run as part of a test run.
A common technique for test naming is: `test_NN_some_descriptive_name`, where
`NN` is a number. This allows easy reference by the `NN` to selectively run a
test or tests.
Below are the tests that are currently supported:
check_value VAL ERROR
check_empty VAL ERROR
Expression tests
check_true "EXPR" ERROR
check_false "EXPR" ERROR
Array item tests
check_size LIST SIZE ERROR # same as check_size_eq
check_size_eq LIST SIZE ERROR
check_size_ne LIST SIZE ERROR
check_size_lt LIST SIZE ERROR
check_size_le LIST SIZE ERROR
check_size_gt LIST SIZE ERROR
check_size_ge LIST SIZE ERROR
check_item LIST INDEX VAL ERROR
check_item_equal LIST INDEX VAL ERROR
check_item_unequal LIST INDEX NONVAL ERROR
Hash tests
check_key HASH KEY ERROR
check_no_key HASH KEY ERROR
check_key_value HASH KEY VALUE ERROR
String tests
check_equal VAL1 VAL2 ERROR
check_unequal VAL1 VAL2 ERROR
check_match VAL1 REGEXP ERROR
check_nomatch VAL1 REGEXP ERROR
Numeric tests
check_eq N1 N2 ERROR
check_ne N1 N2 ERROR
check_lt N1 N2 ERROR
check_le N1 N2 ERROR
check_gt N1 N2 ERROR
check_ge N1 N2 ERROR
Output tests
check_output [NAME] EXPRESSION [ERROR]
Evaluate `EXPRESSION` and compare its output against a previously collected
reference output. If the output matches, the test succeeds. If the output
does not match, print `ERROR` or a default error message.
Use `NAME` as the unique identifier for files in which the `stdout`, `stderr`,
and reference output is identified. If NAME is not supplied, a name is
generated from the alphanumeric characters of the EXPRESSION.
Reference output can be created and retained by the `-k` (`$keep`) option
when the test is run.
The first time a new check_output test is evaluated, there will not be a
collected reference output to compare against, and the test will fail.
All of these output checkers temporarily redirect STDOUT and STDERR to
temporary files under the 'test' directory, with suffixes of `.tmp.out` and
`.tmp.err`, for STDOUT and STDERR, respectively. Normally these temporary
files are removed automatically after each test run. However, if a test run is
interrupted, these temporary files may be left behind.
check_out [NAME] EXPRESSION [ERROR]
check_out_none [NAME] EXPRESSION [ERROR]
check_err [NAME] EXPRESSION [ERROR]
check_err_none [NAME] EXPRESSION [ERROR]
The above functions check that `STDOUT` or `STDERR` is or is not empty when
evaluating `EXPRESSION`, or show the `ERROR` (or default) message.
check_match_out [NAME] EXPRESSION PATTERN [ERROR]
check_match_err [NAME] EXPRESSION PATTERN [ERROR]
The above matching functions check that the `STDOUT`, or `STDERR` of the evaluated
`EXPRESSION` matches `PATTERN` (which is a string or a regular expression), or
show the `ERROR` (or a default error message).
check_nomatch_out [NAME] EXPRESSION PATTERN [ERROR]
check_nomatch_err [NAME] EXPRESSION PATTERN [ERROR]
Check that the `STDOUT` or `STDERR` of the evaluated `EXPRESSION` does not
match `PATTERN` (a string or a regular expression), or show the `ERROR`.
In all cases, both the `NAME` and the `ERROR` message are optional.
EOF
}
help_test() { test_help ; }
TEST_usage() {
help_pager 1>&2 <<EOF
usage: ${0##*/} [opts] [TEST-PATTERN ...]
Run tests with options controlling behavior.
If one or more TEST-PATTERNs are given, those tests not matching the given
patterns are excluded from being run.
All functions beginning with "test_" are included in the list of tests to run.
The tests are run in alphabetic order, unless the -r option is given to cause
them to be run in random order.
The "check_output" function compares stdout/stderr against the reference copies
captured with -k (keep) option.
Options
-h show help
-d show test status details
-e show verbose messages only on errors
-k keep test stdout/stderr for future test reference
-n don't make any changes (norun mode)
-r randomize the order of the tests
-v be verbose everywhere
EOF
exit
}
init_tests() {
TEST_errors=0
TEST_checks=0
TEST_tests=0
TESTS=()
TEST_check_status=()
test_details= verbose_errors= test_randomize= test_verbose= test_keep_ref_output=
if [[ $# -gt 0 ]]; then
set -- "$@"
while getopts 'deknvrh' opt ; do
case "$opt" in
d) test_details=1 ;;
e) verbose_errors=1 ;;
k) test_keep_ref_output=1 ;;
h) TEST_usage ;;
n) norun=1 ;;
r) test_randomize=1 ;;
v) test_verbose=1 ;;
esac
done
shift $(( OPTIND - 1 ))
TEST_patterns=( "$@" )
if (( test_keep_ref_output )); then
printf "Saving stdout/stderr for future reference.\n"
fi
fi
gather_tests
}
start_test() {
TEST_errors_start=$TEST_errors
TEST_checks_start=$TEST_checks
if [[ "$TEST_name" != "${FUNCNAME[1]}" ]]; then
(( TEST_tests++ ))
TEST_name="${FUNCNAME[1]}"
fi
}
TEST_check_start() {
local check_name x
# find the first function name up the call stack that does NOT begin with "TEST_"
for ((x=1; x<${#FUNCNAME}; x++)) ; do
check_name="${FUNCNAME[$x]}"
if [[ "$check_name" != TEST_* ]]; then
break
fi
done
(( TEST_checks++ ))
TEST_check_status[$TEST_checks]='?'
TEST_update_status "$check_name" $TEST_checks
}
# checkend OK "ERROR" ["ERROR_ACTION"]
# returns true (0) no error; false (1) for errors
TEST_check_end() {
if [[ -n "$1" ]]; then
TEST_check_status[$TEST_checks]='.'
if (( test_verbose )); then
echo 1>&2 -n " ok"
else
TEST_update_status
fi
return 0
else
TEST_check_status[$TEST_checks]='!'
(( TEST_errors++ ))
if (( test_verbose || verbose_errors )) ; then
echo 1>&2 " error"
[[ -n "$3" ]] && eval "$3" # maybe take action on error
TEST_error_dump "$2"
else
echo -n 1>&2 $'\b'"!"
fi
return 1
fi
}
end_test() {
(( test_verbose )) || TEST_update_status
echo 1>&2 ''
}
TEST_print_name() {
printf 1>&2 "%*s: " $TEST_max_width "${1:-$TEST_name}"
}
TEST_print_status() {
local checks errors
(( checks = TEST_checks - TEST_checks_start ))
(( errors = TEST_errors - TEST_errors_start ))
printf 1>&2 "%4d checks, %4d errors: " $checks $errors
if (( ! test_details && ! test_verbose )) ; then
local x st last_st=' '
for((x=TEST_checks_start; x<${#TEST_check_status[@]}; x++)) ; do
st="${TEST_check_status[$x]}"
if [[ "$st" != "$last_st" ]]; then
echo 1>&2 -n "$st"
fi
last_st="$st"
done
elif (( ! test_verbose )) ; then
local x
for((x=TEST_checks_start; x<${#TEST_check_status[@]}; x++)) ; do
echo 1>&2 -n "${TEST_check_status[$x]}"
done
fi
}
# TEST_update_status [CHECKNAME CHECKNO]
TEST_update_status() {
if (( test_verbose )); then
echo 1>&2 ''
else
echo -n 1>&2 $'\r'
fi
TEST_print_name
TEST_print_status
if [[ $# -gt 0 && -n "$test_verbose" ]]; then
printf 1>&2 "check %d: %s" $2 "$1"
fi
}
##############################
#
# These are internal test checking functions. The prefix "TEST_" keeps them
# from showing up in the error dumps
# TEST_check EXPR [TRUE-VALUE] [FALSE-VALUE] [ERROR]
TEST_check() {
TEST_check_start
local test_ok=$3
eval "if $1 ; then test_ok=${2:-1} ; fi"
TEST_check_end "$test_ok" "$4"
}
# TEST_check_expr "EXPR" "ERROR"
TEST_check_expr() { TEST_check "$1" 1 '' "$2" ; }
# TEST_check_size_func VAR FUNC VALUE [ERROR]
TEST_check_size_func() {
local insize=`__list_size $1`
TEST_check_test $insize $2 $3 "${4:-"Size check failed; got: $insize; should be: $3"}"
}
# TEST_check_item_func VAR INDEX OPERATOR VALUE [error]
# Check a specific item of VAR at INDEX for OPERATOR VALUE
TEST_check_item_func() {
local val
eval "val=\"\${$1[$2]}\""
TEST_check_test "$val" $3 "$4" "${5:-"Item check failed; got '$val', should be '$4'"}"
}
# TEST_check_key VAR KEY [ERROR]
# TEST_check_no_key VAR KEY [ERROR]
# Check that a key exists (with a non-empty value), or does not exist, in a hash VAR
TEST_check_key() { TEST_check_test2 -n "\${$1[$2]}" "$3" ; }
TEST_check_no_key() { TEST_check_test2 -z "\${$1[$2]}" "$3" ; }
# TEST_check_key_value VAR KEY VALUE [ERROR]
TEST_check_key_value() { TEST_check_test "\${$1['$2']}\"" '==' "$3" "$4" ; }
# TEST_check_test LVAL OP RVAL [ERROR]
# TEST_check_test2 OP VAL [ERROR]
TEST_check_test() { TEST_check_expr "test \"$1\" $2 \"$3\"" "$4" ; }
TEST_check_test2() { TEST_check_expr "test $1 \"$2\"" "$3" ; }
TEST_check_test3() { TEST_check_test "$@" ; }
########
# These are the "customer" check funcs
# check_true EXPR [ERROR]
check_true() { TEST_check "$1" 1 '' "$2" ; }
# check_false EXPR [ERROR]
check_false() { TEST_check "$1" '' 1 "$2" ; }
# check_size_eq VAR VAL [ERROR]
# check_size_ne VAR VAL [ERROR]
# check_size_ge VAR VAL [ERROR]
# check_size_gt VAR VAL [ERROR]
# check_size_le VAR VAL [ERROR]
# check_size_lt VAR VAL [ERROR]
check_size_eq() { TEST_check_size_func "$1" -eq $2 "$3" ; }
check_size_ne() { TEST_check_size_func "$1" -ne $2 "$3" ; }
check_size_ge() { TEST_check_size_func "$1" -ge $2 "$3" ; }
check_size_gt() { TEST_check_size_func "$1" -gt $2 "$3" ; }
check_size_le() { TEST_check_size_func "$1" -le $2 "$3" ; }
check_size_lt() { TEST_check_size_func "$1" -lt $2 "$3" ; }
# check_size VAR VAL ERROR
#
# Check that the array VAR has size VAL
check_size() { check_size_eq "$@" ; }
# check_item_equal VAR INDEX VAL ERROR
# check_item_unequal VAR INDEX VAL ERROR
check_item_equal() { TEST_check_item_func $1 "$2" '=' "$3" "$4" ; }
check_item_unequal() { TEST_check_item_func $1 "$2" '!=' "$3" "$4" ; }
check_item() { check_item_equal "$@" ; }
# check_key VAR KEY [ERROR]
check_key() { TEST_check_key "$@" ; }
check_no_key() { TEST_check_no_key "$@" ; }
# check_key_value VAR KEY VALUE [ERROR]
check_key_value() { TEST_check_key_value "$@" ; }
# check_value VALUE [ERROR]
# check_empty VALUE [ERROR]
#
# Check that VALUE is empty or not empty.
check_value() { TEST_check_test2 -n "$1" "$2" ; }
check_empty() { TEST_check_test2 -z "$1" "$2" ; }
# TEST_check_func VALUE FUNC VALUE2 [ERROR]
# TEST_check_func() {
# TEST_check_start
# local test_ok=0
# eval "if [[ \"$1\" $2 \"$3\" ]]; then test_ok=1 ; fi"
# if (( ! test_ok )) && [[ -z "$4" ]]; then
# echo 1>&2 "Check failed for \"$2\": '$1' vs '$3'"
# fi
# TEST_check_end "$ok" "$4"
# }
# These are the string tests
# check_equal VAL1 VAL2 [ERROR]
# check_unequal VAL1 VAL2 [ERROR]
# check_match VAL REGEXP [ERROR]
# check_nomatch VAL REGEXP [ERROR]
check_equal() { TEST_check_test "$1" = "$2" "$3" ; }
check_unequal() { TEST_check_test "$1" != "$2" "$3" ; }
check_match() { TEST_check_test "$1" =~ "$2" "$3" ; }
check_nomatch() { ! TEST_check_test "$1" =~ "$2" "$3" ; }
# check_OP VAL0 VAL2 [ERROR]
# These are the numeric tests
check_lt() { TEST_check_test "$1" -lt "$2" "$3" ; }
check_le() { TEST_check_test "$1" -le "$2" "$3" ; }
check_eq() { TEST_check_test "$1" -eq "$2" "$3" ; }
check_ne() { TEST_check_test "$1" -ne "$2" "$3" ; }
check_ge() { TEST_check_test "$1" -ge "$2" "$3" ; }
check_gt() { TEST_check_test "$1" -gt "$2" "$3" ; }
# check_output [NAME] EXPRESSION [ERROR]
#
# Run EXPRESSION and capture the both stdout & stderr, under NAME. Compare
# them against previously stored output under the same NAME, if any. Report
# differences.
#
# If there is no previously stored output, save it if -k (keep) is set.
#
# Be wary of comparying time-varying output, such as dates & times: they will
# always cause differences.
check_output() {
local name expr errm
if (( $# == 1 )); then
expr="$1" name="${1//[^a-zA-Z0-9_-]/}"
elif (( $# > 1 )) ; then
name="$1" expr="$2"
fi
TEST_check_start
local test_out_ok= test_err_ok= test_ok=1
local out="test/$name.out"
local err="test/$name.err"
local outref="$out.ref"
local errref="$err.ref"
local diffout="$out.diff"
local differr="$err.diff"
if (( test_keep_ref_output )); then
out="$outref" err="$errref"
fi
if (( $# > 2 )); then
errm="$3"
else
errm="$name test failed; diffs in $diffout and $differr"
fi
eval "$expr 1>$out 2>$err"
[[ -f "$outref" ]] || touch "$outref"
[[ -f "$errref" ]] || touch "$errref"
TEST_compare_output $outref $out $diffout "test_out_ok=1" "test_ok="
TEST_compare_output $errref $err $differr "test_err_ok=1" "test_ok="
TEST_check_end "$test_ok" "$errm"
}
# TEST_compare_output ref out diff GOODEXPR ERROREXPR
#
# Used by "check_output" to compare current and reference output. If the
# comparison is successful (no changes), evaluate GOODEXPR, otherwise, evaluate
# ERROREXPR.
TEST_compare_output() {
local ref="$1"
local out="$2"
local diff="$3"
if \diff -w -U 0 $ref $out >$diff ; then
eval "$4"
if (( ! test_keep_ref_output )); then
\rm -f "$out" # remove temp files
fi
\rm "$diff"
else
eval "$5"
# show diffs on errors with -d
if (( $test_details )); then
echo 1>&2 "\n$diff"
\cat 1>&2 $diff
echo 1>&2 ""
fi
fi
}
# these check functions check STDOUT and STDERR for output and/or no-output.
# check_out [NAME] EXPR [ERROR]
check_out() {
local name expr errm test_ok out err
TEST_check_io_setup "$@"
TEST_check_start
TEST_check_io_test '-s $out' 'output to STDOUT did not occur.'
TEST_check_end "$test_ok" "$errm"
}
# check_out_none NAME EXPR ERROR
check_out_none() {
local name expr errm test_ok out err
TEST_check_io_setup "$@"
TEST_check_start
TEST_check_io_test '! -s $out' 'output to STDOUT occurred.'
TEST_check_end "$test_ok" "$errm"
}
# check_err NAME EXPR ERROR
check_err() {
local name expr errm test_ok out err
TEST_check_io_setup "$@"
TEST_check_start
TEST_check_io_test '-s $err' 'output to STDERR did not occur.'
TEST_check_end "$test_ok" "$errm"
}
# check_err_none NAME EXPR ERROR
check_err_none() {
local name expr errm test_ok out err
TEST_check_io_setup "$@"
TEST_check_start
TEST_check_io_test '! -s $err' 'output to STDERR occurred.'
TEST_check_end "$test_ok" "$errm"
}
# The TEST_* functions below support the STDOUT, STDERR check functions above.
# TEST_check_io_setup [NAME] EXPR [ERROR]
# sets name, expr, errm, out, err, and test_ok=1
TEST_check_io_setup() {
case $# in
1) name="${1//[^a-zA-Z0-9_-]/}" expr="$1" errm= ;;
2) name="$1" expr="$2" errm= ;;
3) name="$1" expr="$2" errm="$3" ;;
esac
out="test/$name.tmp.out"
err="test/$name.tmp.err"
}
# TEST_check_outerr_test CONDITION DEFAULT_ERROR
# expr, out, err, errm must have valid values
# returns with test_ok and errm set
TEST_check_io_test() {
local cond="$1" goodval="$2" badval="$3"
if [[ -n "$errm" ]] ; then
errm="$name test failed; $errm"
else
errm="$name test failed; $4"
fi
test_ok=1
eval "$expr 1>$out 2>$err"
[[ $cond ]] || test_ok=
\rm -f $err
\rm -f $out
}
# these check methods match a pattern against STDOUT or STDERR
# check_match_out [NAME] EXPR PATTERN [ERROR]
check_match_out() {
local name expr pat errm test_ok out err
TEST_check_io_match_setup "$@"
TEST_check_start
TEST_check_io_matcher "$out" "STDOUT did not match '$pat'" 0
TEST_check_end "$test_ok" "$errm"
}
# check_nomatch_out NAME EXPR PATTERN ERROR
check_nomatch_out() {
local name expr pat errm test_ok out err
TEST_check_io_match_setup "$@"
TEST_check_start
TEST_check_io_matcher "$out" "STDOUT matched '$pat'" 1
TEST_check_end "$test_ok" "$errm"
}
# check_match_err NAME EXPR PATTERN ERROR
check_match_err() {
local name expr pat errm test_ok out err
TEST_check_io_match_setup "$@"
TEST_check_start
TEST_check_io_matcher "$err" "STDERR did not match '$pat'" 0
TEST_check_end "$test_ok" "$errm"
}
# check_nomatch_err NAME EXPR PATTERN ERROR
check_nomatch_err() {
local name expr pat errm test_ok out err
TEST_check_io_match_setup "$@"
TEST_check_start
TEST_check_io_matcher "$err" "STDERR matched '$pat'" 1
TEST_check_end "$test_ok" "$errm"
}
# the TEST_check_io_match* functions below support the check_(out/err) matching functions above.
# TEST_check_io_match_setup [NAME] EXPR PATTERN [ERROR]
TEST_check_io_match_setup() {
case $# in
1) echo 1>&2 "\nMissing argument(s) on check method!" ; exit 1;;
2) name="${1//[^a-zA-Z0-9_-]/}" expr="$1" pat="$2" errm= ;;
3) name="$1" expr="$2" pat="$3" errm= ;;
4) name="$1" expr="$2" pat="$3" errm="$4" ;;
esac
out="test/$name.tmp.out"
err="test/$name.tmp.err"
}
# TEST_check_io_matcher FILENAME ERROR NEGATE
# expr, out, err, pat must be set. errm is optionally set.
# NEGATE is 1 or 0.
# returns with test_ok and errm set
TEST_check_io_matcher() {
local file="$1" def_errm="$2" negate="$3"
if [[ -n "$errm" ]] ; then
errm="$name test failed; $errm"
else
errm="$name test failed; $def_errm"
fi
test_ok=1
eval "$expr 1>$out 2>$err"
if (( negate )) ; then
! \grep --silent "$pat" $file >2/dev/null || test_ok=
else
\grep --silent "$pat" $file >2/dev/null || test_ok=
fi
\rm -f $err
\rm -f $out
}
# called on any check error
# TEST_error_dump ERROR
#
# Dump the function stack (but not those beginning with "check_")
TEST_error_dump() {
local func source lineno stacksize
if [[ -n "$1" ]]; then
echo 1>&2 "Error: $1:"
else
echo 1>&2 "Error at:"
fi
stacksize=${#FUNCNAME[*]}
for (( i=1; i < stacksize; i++ )); do
func="${FUNCNAME[$i]}"
source="${BASH_SOURCE[$i]}"
lineno="${BASH_LINENO[$i]}"
case "$func" in
TEST_*) continue ;; # don't process TEST_ funcs
esac
printf 1>&2 " %s:%s:%s()\n" "$source" "$lineno" "$func"
done
}
##########################################################################
TESTS=()
# Filter the TESTS array with the TEST_patterns array. If names from the
# former aren't matched by any patterns from the latter, remove it from the
# TESTS array.
filter_tests() {
local deletes=()
local name nx
for ((nx=0; nx<${#TESTS[@]}; nx++)) ; do
name="${TESTS[nx]}"
local delete= # assume the name will NOT be deleted
if (( ${#TEST_patterns[@]} > 0 )); then
# we have patterns to match against. Now assume we didn't match it
delete=1
local pat
for pat in "${TEST_patterns[@]}" ; do
if [[ "$name" =~ $pat ]]; then
delete= ; break
fi
done
fi
if (( delete )) ; then
deletes+=( $nx )
fi
done
if (( ${#deletes[@]} > 0 )) ; then
# deletions must be done in descending index order
local x
for ((x=${#deletes[@]} - 1; x >= 0; x--)) ; do
nx=${deletes[x]}
unset TESTS[$nx]
done
fi
}
# gather_tests -- find all tests with the prefix "test_"
# filter out those not matching TEST_patterns (if any)
# in alphabetic order, or random order (if -r).
gather_tests() {
if [[ "${#TESTS[@]}" -eq 0 ]]; then
# match all functions begining with 'test_' except 'test_help'
TESTS=( `compgen -A function -X test_help test_` )
filter_tests
local clause
case ${#TEST_patterns[@]} in
0) clause= ;;
1) clause=" matching pattern '${TEST_patterns[0]}'" ;;
*) clause=" matching given patterns: ${TEST_patterns[@]}" ;;
esac
printf 1>&2 "%d tests discovered%s\n" ${#TESTS[@]} "$clause"
TEST_max_width=0
local tname
for tname in "${TESTS[@]}" ; do
if (( ${#tname} > TEST_max_width )); then
TEST_max_width=${#tname}
fi
done
if (( $test_randomize )) ; then
randomize_tests
printf 1>&2 "The tests will be run in random order.\n"
fi
fi
}
# randomize_tests -- place the items in random order
randomize_tests() {
local newtests=()
local x
while (( ${#TESTS[*]} > 0 )) ; do
x=`jot -r 1 0 $(( ${#TESTS[*]} - 1 ))`
newtests=( "${newtests[@]}" "${TESTS[$x]}" )
unset TESTS[$x]
TESTS=( "${TESTS[@]}" )
done
TESTS=( "${newtests[@]}" )
}
run_tests() {
gather_tests
local a_test
for a_test in "${TESTS[@]}" ; do
eval "$a_test"
done
}
summarize_tests() {
echo 1>&2 ''
printf 1>&2 "%d tests, %d checks, %d errors\n" $TEST_tests $TEST_checks $TEST_errors
if [[ $TEST_errors -gt 0 ]] ; then exit $TEST_errors ; fi
}