
dump2polarion


Capabilities of the dump2polarion library

  • generating XML files for XUnit, Testcase and Requirements Importers
  • submitting XML files to Importers
  • verifying that the import was successful
  • saving the import log files
  • reading Test Cases data from SVN repository with checked out Polarion project

The library supports all features of the Importers, including iterations. The export to XML files can be customized per project: the lookup method, which results are included in the XUnit output (e.g. only passed tests), and so on.

The library doesn't use the legacy web services API; all operations are performed through the Polarion Importers.

All Polarion projects are internally stored in an SVN repository, and since the Polarion Importers don't provide any API for querying Polarion, reading Test Cases data from an SVN checkout of the project is one of the two methods available for getting data out of Polarion. The second method, also supported by this library, is parsing the log file produced by the specific importer (e.g. the Test Case Importer).

polarion_dumper.py script

Script for importing test results recorded in a CSV, SQLite, junit-report.xml (generated by pytest) or Ostriz JSON input file to Polarion using the XUnit Importer.

It can also be used to submit pre-generated XUnit, Test Case or Requirement XML files to the corresponding Polarion Importer.

By default the script waits until the Importer finishes the import job and then checks whether the operation succeeded.

Usage

polarion_dumper.py -i {input_file}

By default the input data are submitted to Polarion. You can disable this behavior with the -n option; in that case the XML file used for submission is saved to disk. The default location is the current directory (it can be overridden with the -o option).

When an output file is specified with -o PATH, the XML file used for results submission is saved to disk. If PATH is a directory, the resulting file name is generated as PATH/FILE_TYPE-TESTRUN_ID-TIMESTAMP.xml.

When the input file is an XML file in a format supported by one of the Polarion Importers (e.g. a file saved earlier with -o FILE -n), it is submitted to Polarion as-is.
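
For example, you could first generate the XML without submitting it and then submit the saved file later (the file names below are purely illustrative):

polarion_dumper.py -i junit-report.xml -n -o /tmp/reports/
polarion_dumper.py -i /tmp/reports/xunit-mytestrun-20190101120000.xml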

Configuration

You can specify credentials on the command line with --user kerberos_username --password kerberos_password, or set them in a config file.

The config file is specified on the command line with -c config_file.yaml.

Another possibility for specifying credentials is environment variables (the same ones are used by pylarion):

export POLARION_USERNAME=kerberos_username
export POLARION_PASSWORD=kerberos_password

You can mix all these approaches, e.g. the user name on the command line and the password in an environment variable.
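
For example (values are illustrative):

export POLARION_PASSWORD=kerberos_password
polarion_dumper.py -i {input_file} -c config_file.yaml --user kerberos_username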

Important

You need to specify URLs of the importer services and queues in the config file. See <https://mojo.redhat.com/docs/DOC-1098563#config>
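
As a rough sketch, such a config file might look like the following; the key names are only an assumption for illustration, so use the exact keys described in the documentation linked above:

# illustrative key names and URLs only -- not the authoritative config schema
xunit_target: https://polarion.example.com/import/xunit
xunit_queue: https://polarion.example.com/import/xunit-queue
testcase_target: https://polarion.example.com/import/testcase
testcase_queue: https://polarion.example.com/import/testcase-queue
username: kerberos_username
password: kerberos_password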

Install

To install the package to your virtualenv, run

pip install dump2polarion

or install it from cloned directory

pip install -e .

Package on PyPI <https://pypi.python.org/pypi/dump2polarion>

Requirements

Requirements are listed in requirements.txt.

Customization

The library can be customized per Polarion project. Every project can have its own behavior: one project is parametrized while another is not, one wants all test results imported while another wants only passed results, etc.

Each exporter object (TestcaseExport, XunitExport, RequirementExport) accepts a transform_func callable that is executed for every record. The callable can transform the record data, e.g.:

import copy

def results_transform(result):
    """Export results only when a comment is present; modify the comment."""
    comment = result.get("comment")
    if not comment:
        # returning None excludes the record from the export
        return None

    # work on a copy so the original record stays untouched
    result = copy.deepcopy(result)
    result["comment"] = "Changed comment: {}".format(comment)
    return result

xunit_transform = XunitExport(testrun_id, tests_records, config, transform_func=results_transform)

CSV format for XUnit

There needs to be a row with field names; it is present by default in files exported from Polarion.

The fields are ID; Title; Test Case ID (optional but recommended); Verdict; Comment (optional); Time (optional); stdout (optional); stderr (optional), plus any other fields you want. Neither the order of the fields nor their case matters.

The "Verdict" field and any optional fields must be added manually. Valid values for "verdict" are "passed", "failed", "skipped", "waiting" or empty. It's case insensitive.

There can be any content before the row with field names and the test results.
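
A minimal example of such a file might look like this (the IDs, titles and delimiter are purely illustrative; keep whatever format Polarion exports):

ID,Title,Test Case ID,Verdict,Comment
MYPROJ-1234,test_login,MYPROJ-1234,passed,
MYPROJ-1235,test_logout,MYPROJ-1235,failed,see the traceback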

SQLite format for XUnit

You can convert a CSV file exported from Polarion using the csv2sqlite.py script:

csv2sqlite.py -i {input_file.csv} -o {output_file.sqlite3}

How to submit the XML file manually

polarion_dumper.py -i input.xml --user {user} --password {password}

or

curl -k -u {user}:{password} -X POST -F file=@./output.xml {importer_url}

More info

For CFME QE-specific instructions see <https://mojo.redhat.com/docs/DOC-1098563>

For info about the XUnit Importer see <https://mojo.redhat.com/docs/DOC-1073077>

For info about the Test Case Importer see <https://mojo.redhat.com/docs/DOC-1075945>

For info about the Requirements Importer see <https://mojo.redhat.com/docs/DOC-1163149>
