Running tests against merge branch
So everything compiles and you are able to bring up a cluster. Now we need to resolve test failures.
We have unit tests for our src/backend/ C code and also for our Python utility libraries.
We use the CMockery unit test framework to test some of our src/backend/ C code. The CMockery framework is located in src/test/unit (it has a README.txt), and the actual tests live in their respective src/backend/&lt;module&gt;/test/ directories. You can run all the unit tests with `make unittest-check` from the top-level Greenplum directory, or run `make check` in an individual test directory (you can also run the individual targets from the test's Makefile).
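As a sketch of the per-module workflow above, a tiny helper that composes the `make check` invocation for one module's test directory. The `cdb` module name below is a hypothetical example of the src/backend/&lt;module&gt;/test/ layout, not a guaranteed directory on your branch:

```shell
# Hedged sketch: run one module's CMockery suite from the top of the
# Greenplum source tree. The helper only echoes the command so it is safe
# to run anywhere; drop the echo (or eval the output) to execute for real.
unit_check() {
    echo make -C "src/backend/$1/test" check
}

unit_check cdb   # prints the command for the (hypothetical) cdb module
```

Running the suite from its own directory like this is handy when you are iterating on a single failing *_test.c file.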
When a test fails, you'll need to look at the corresponding *_test.c file and update it accordingly. Sometimes you will need to use the debugger to see what is happening. For example:

```
lldb cdbappendonlyxlog.t
(lldb) b file_test.c:<line number>   # put a breakpoint on a single unit test
(lldb) run
```
We use the Python unittest2 framework along with PyMock. The tests are found in gpMgmt/bin/*/test_*.py. The ones named test_unit*.py are pure unit tests, while the test_cluster*.py ones require a running Greenplum cluster. To run the tests, do one of the following in gpMgmt/bin/:

1. Run the Makefile target:

   ```
   make check
   ```

   This will try to install all the Python unit test dependencies, even if you already have them installed.

2. Run unittest discovery directly:

   ```
   python -m unittest discover --verbose -s `pwd`/gppylib -p "test_unit_*.py"
   ```

   This is the preferred way to run them, since you can give individual test names and/or a regex without the unnecessary bloat of always installing Python modules and using the weird gpunit runner. NOTE: This is Greenplum developer Jimmy Yih's personal opinion.
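To narrow the discovery run above to a single file, you only need to tighten the `-p` pattern. A hedged sketch; the helper just composes the command (drop the echo or eval its output to run it from gpMgmt/bin/), and the file name in the second call is a hypothetical example, not a guaranteed test module:

```shell
# Hedged sketch: compose the unittest discovery command for a given pattern.
run_unit() {
    echo python -m unittest discover --verbose -s "$(pwd)/gppylib" -p "$1"
}

run_unit "test_unit_*.py"          # all pure unit tests
run_unit "test_unit_gparray.py"    # a single file (hypothetical name)
```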
This is the main test aggregate Makefile target that Concourse runs for each ICW job after compilation. It is highly suggested that you look at the Makefile target and run the individual components to focus your test fixing. Here are the main ones:
This is the absolute main test directory and should be your first destination after getting the cluster up and running. To run the tests, you just need to run `make installcheck-good` (it will compile some test objects and run two schedules: parallel_schedule and greenplum_schedule). It is suggested to run and fix parallel_schedule first before running the entire thing. Here are some useful commands for this:
Run parallel_schedule:

```
./pg_regress --init-file=./init_file --dlpath=. --load-extension=gp_inject_fault --schedule=parallel_schedule
```

Run an individual test:

```
./pg_regress --init-file=./init_file --dlpath=. --load-extension=gp_inject_fault <test1 test2 test3...>
```

Run the entire thing:

```
make installcheck-good
```
For initially fixing parallel_schedule failures, I would suggest running without mirrors to avoid any possible replication issues and to focus only on the actual test failures. Also, some tests depend on earlier tests, so be careful when running tests individually. One more thing to note: some tests have multiple answer files (e.g. ORCA answer files).
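The mirrorless setup suggested above can be done with the gpdemo cluster. A hedged sketch, assuming your branch's gpAux/gpdemo honors the WITH_MIRRORS variable (check gpAux/gpdemo/README if it differs); the function only prints the steps, so it is safe to run anywhere:

```shell
# Hedged sketch: steps to recreate the demo cluster without mirrors before
# chasing parallel_schedule failures. WITH_MIRRORS is an assumption about
# gpAux/gpdemo; consult gpAux/gpdemo/README on your branch if it is absent.
mirrorless_demo_steps() {
    printf '%s\n' \
        "make destroy-demo-cluster" \
        "WITH_MIRRORS=false make create-demo-cluster" \
        ". gpAux/gpdemo/gpdemo-env.sh"
}

mirrorless_demo_steps   # prints the steps; run them from the top of the tree
```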
This test directory handles concurrency testing. The tests are written in spec files rather than SQL files. Run the tests with `make installcheck`. Most of these tests are from upstream Postgres, with a few from Greenplum. Some new Postgres specs may not work if they involve some kind of UPDATE concurrency (Greenplum takes an ExclusiveLock instead of a RowExclusiveLock for UPDATE commands when the Global Deadlock Detector is off). For these cases, we have mainly disabled the tests in isolation_schedule and added a FIXME.
This test directory handles Greenplum concurrency testing. These tests should mostly work, since they mostly exercise Greenplum code that is generally assumed to be untouched during a merge iteration. Run the tests with `make installcheck`.
There are some tests that rely on gpcontrib modules. If you did not compile with the --enable-debug-extensions flag, some tests may fail because of missing extensions. You can just go to gpcontrib and manually install them (notably gpcontrib/gp_inject_fault/ and gpcontrib/gp_debug_numsegments/).
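The manual install mentioned above can be scripted. A hedged sketch for a configured source tree; the directory guard makes it a harmless no-op when run anywhere else:

```shell
# Hedged sketch: install the gpcontrib extensions the tests expect, from the
# top of a configured Greenplum source tree with your environment sourced.
EXTS="gp_inject_fault gp_debug_numsegments"
for ext in $EXTS; do
    if [ -d "gpcontrib/$ext" ]; then
        make -C "gpcontrib/$ext" install
    else
        echo "gpcontrib/$ext not found; run from the Greenplum source root" >&2
    fi
done
```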
These tests are located in gpMgmt/test/. Luckily, the documentation for this testing has been greatly improved; just read gpMgmt/test/README and gpMgmt/Makefile.behave to figure things out. As a warning, some tests require a multi-node cluster. If you want a simpler way of running and have Behave 1.2.4+ installed, you can simply run `behave test/behave/mgmt_utils/<utility name>.feature`, which is more straightforward in my (Jimmy Yih) opinion.
For any questions, please ask at [email protected].