@reportportal/agent-js-playwright

Agent to integrate Playwright with ReportPortal.

Installation

Install the agent in your project:

npm install --save-dev @reportportal/agent-js-playwright

Configuration

1. Create a playwright.config.ts or *.config.js file with the ReportPortal configuration:

  import { PlaywrightTestConfig } from '@playwright/test';

  const RPconfig = {
    apiKey: '<API_KEY>',
    endpoint: 'https://your.reportportal.server/api/v1',
    project: 'Your reportportal project name',
    launch: 'Your launch name',
    attributes: [
      {
        key: 'key',
        value: 'value',
      },
      {
        value: 'value',
      },
    ],
    description: 'Your launch description',
  };

  const config: PlaywrightTestConfig = {
    reporter: [['@reportportal/agent-js-playwright', RPconfig]],
    testDir: './tests',
  };
  export default config;

The full list of available options is presented below.

| Option | Necessity | Default | Description |
|--------|-----------|---------|-------------|
| apiKey | Required | | User's ReportPortal token from which you want to send requests. It can be found on the profile page of this user. |
| endpoint | Required | | URL of your server. For example 'https://server:8080/api/v1'. |
| launch | Required | | Name of launch at creation. |
| project | Required | | The name of the project in which the launches will be created. |
| attributes | Optional | [] | Launch attributes. |
| description | Optional | '' | Launch description. |
| rerun | Optional | false | Enable rerun. |
| rerunOf | Optional | Not set | UUID of the launch you want to rerun. If not specified, ReportPortal will update the latest launch with the same name. |
| mode | Optional | 'DEFAULT' | 'DEFAULT' - results will be submitted to the Launches page, 'DEBUG' - results will be submitted to the Debug page. |
| skippedIssue | Optional | true | ReportPortal provides a feature to mark skipped tests as not 'To Investigate'. The option accepts boolean values: true - skipped tests are considered as issues and will be marked as 'To Investigate' in ReportPortal; false - skipped tests will not be marked as 'To Investigate'. |
| debug | Optional | false | This flag allows seeing the logs of the client-javascript. Useful for debugging. |
| launchId | Optional | Not set | The ID of an already existing launch. The launch must be in 'IN_PROGRESS' status while the tests are running. Please note that if this ID is provided, the launch will not be finished at the end of the run and must be finished separately. |
| restClientConfig | Optional | Not set | axios-like HTTP client config. May contain an agent property for configuring the http(s) client, and other client options e.g. proxy, timeout. For debugging and displaying logs the debug: true option can be used. Visit client-javascript for more details. |
| headers | Optional | {} | The object with custom headers for the internal HTTP client. |
| launchUuidPrint | Optional | false | Whether to print the current launch UUID. |
| launchUuidPrintOutput | Optional | 'STDOUT' | Launch UUID printing output. Possible values: 'STDOUT', 'STDERR', 'FILE', 'ENVIRONMENT'. Works only if launchUuidPrint is set to true. File format: rp-launch-uuid-${launch_uuid}.tmp. Env variable: RP_LAUNCH_UUID. Note that the env variable is only available in the reporter process (it cannot be obtained from tests). |
| includeTestSteps | Optional | false | Allows you to see the test steps at the log level. |
| includePlaywrightProjectNameToCodeReference | Optional | false | Includes the Playwright project name in the code reference. See testCaseId and codeRef calculation. It may be useful when you want to see different history for the same test cases within different Playwright projects. |
| extendTestDescriptionWithLastError | Optional | true | If set to true, the latest error log will be attached to the test case description. |
| uploadVideo | Optional | true | Whether to attach Playwright's video to the test case. |
| uploadTrace | Optional | true | Whether to attach Playwright's trace to the test case. |
| token | Deprecated | Not set | Use apiKey instead. |

The following options can be overridden using ENVIRONMENT variables:

| Option | ENV variable |
|----------|---------------|
| launchId | RP_LAUNCH_ID |

2. Add script to package.json file:

{
  "scripts": {
    "test": "npx playwright test --config=playwright.config.ts"
  }
}

Asynchronous API

The client supports asynchronous reporting (via the ReportPortal asynchronous API). If you want the client to report through the asynchronous API, change v1 to v2 in the endpoint address.

Note: It is highly recommended to use the v2 endpoint for reporting, especially for extensive test suites.
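
For example, the configuration from above pointed at the asynchronous API (only the endpoint suffix changes; the other options stay the same):

const RPconfig = {
  apiKey: '<API_KEY>',
  // /api/v2 instead of /api/v1 enables asynchronous reporting
  endpoint: 'https://your.reportportal.server/api/v2',
  project: 'Your reportportal project name',
  launch: 'Your launch name',
};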

Reporting

When organizing tests, specify titles for test.describe blocks, as this is necessary to build the correct structure of reports.

It is also required to specify playwright project names in playwright.config.ts when running the same tests in different playwright projects.
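
A minimal sketch combining both recommendations; the suite title and project names below are illustrative, and RPconfig is the object from the Configuration section above:

// playwright.config.ts
import { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  reporter: [['@reportportal/agent-js-playwright', RPconfig]],
  // named projects let the reporter distinguish the same tests run in different projects
  projects: [
    { name: 'chromium', use: { browserName: 'chromium' } },
    { name: 'firefox', use: { browserName: 'firefox' } },
  ],
};
export default config;

// tests/auth.spec.ts
import { test } from '@playwright/test';

// the describe title becomes the suite name in the report structure
test.describe('Authentication', () => {
  test('logs in with valid credentials', async ({ page }) => {
    // ...
  });
});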

Attachments

Attachments can be easily added during test run via testInfo.attach according to the Playwright docs.

import { test, expect } from '@playwright/test';

test('basic test', async ({ page }, testInfo) => {
  await page.goto('https://playwright.dev');

  // Capture a screenshot and attach it
  const screenshot = await page.screenshot();
  await testInfo.attach('screenshot', { body: screenshot, contentType: 'image/png' });
});

Note: attachment path can be provided instead of body.
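
For example, a sketch of attaching an already saved file by path (the file location here is illustrative):

import { test } from '@playwright/test';

test('attach by path', async ({ page }, testInfo) => {
  await page.goto('https://playwright.dev');
  // Save the screenshot to disk first
  await page.screenshot({ path: 'screenshots/home.png' });

  // Pass the path instead of the body; Playwright reads the file itself
  await testInfo.attach('screenshot', { path: 'screenshots/home.png', contentType: 'image/png' });
});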

As an alternative to this approach the ReportingAPI methods can be used.

Note: ReportingAPI methods will send attachments to ReportPortal right after their call, unlike attachments provided via testInfo.attach that will be reported only on the test item finish.

Logging

You can use the following native console methods to report logs to tests:

console.log();
console.info();
console.debug();
console.warn();
console.error();

console's log, info, and debug calls are reported as info logs.

console's error and warn calls are reported as error logs if the message contains an "error" mention, otherwise as warn logs.
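
For example (the messages below are illustrative):

import { test, expect } from '@playwright/test';

test('reports console output', async ({ page }) => {
  console.log('reported as an info log');
  console.info('also reported as an info log');
  console.warn('reported as a warn log');
  console.error('Network error occurred'); // contains "error", so reported as an error log

  expect(true).toBe(true);
});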

As an alternative to this approach the ReportingAPI methods can be used.

Nested steps

ReportPortal supports reporting of native Playwright steps as nested steps.

import { test, expect } from '@playwright/test';

test('test', async ({ page }) => {
  await test.step('Log in', async () => {
    // ...
  });

  await test.step('Outer step', async () => {
    // ...
    // You can nest steps inside each other.
    await test.step('Inner step', async () => {
      // ...
    });
  });
});

To turn on this feature, just set the includeTestSteps config option to true.
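
For example (only the relevant option is shown; the rest of the config stays as in the Configuration section):

const RPconfig = {
  // ... other options (apiKey, endpoint, project, launch)
  includeTestSteps: true,
};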

Reporting API

This reporter provides a Reporting API that can be used directly in tests to send additional data to the report.

To start using the ReportingApi in tests, just import it from '@reportportal/agent-js-playwright':

import { ReportingApi } from '@reportportal/agent-js-playwright';

Reporting API methods

The API provides methods for attaching data (logs, attributes, testCaseId, status).
All ReportingApi methods have an optional suite parameter.
If you want to add data to a suite, you must pass the suite name as the last parameter.
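
A sketch of targeting a suite; the suite title below is illustrative, and the same mechanics apply to the other methods:

import { test, expect } from '@playwright/test';
import { ReportingApi } from '@reportportal/agent-js-playwright';

test.describe('Login suite', () => {
  test('adds attributes to the enclosing suite', () => {
    // The last argument is the suite name, so the attributes are attached to the suite
    ReportingApi.addAttributes([{ key: 'feature', value: 'login' }], 'Login suite');
    expect(true).toBe(true);
  });
});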

addAttributes

Add attributes (tags) to the current test. Should be called inside of the corresponding test.
ReportingApi.addAttributes(attributes: Array<Attribute>, suite?: string);
required: attributes
optional: suite
Example:

test('should have the correct attributes', () => {
  ReportingApi.addAttributes([
    {
      key: 'testKey',
      value: 'testValue',
    },
    {
      value: 'testValueTwo',
    },
  ]);
  expect(true).toBe(true);
});
setTestCaseId

Set the test case ID for the current test (About test case id). Should be called inside of the corresponding test.
ReportingApi.setTestCaseId(id: string, suite?: string);
required: id
optional: suite
If testCaseId is not specified, it will be generated automatically based on codeRef.
Example:

test('should have the correct testCaseId', () => {
  ReportingApi.setTestCaseId('itemTestCaseId');
  expect(true).toBe(true);
});
log

Send logs to ReportPortal for the current test. Should be called inside of the corresponding test.
ReportingApi.log(level: LOG_LEVELS, message: string, file?: Attachment, suite?: string);
required: level, message
optional: file, suite
where level can be one of the following: TRACE, DEBUG, WARN, INFO, ERROR, FATAL
Example:

test('should contain logs with attachments',() => {
  const fileName = 'test.jpg';
  const fileContent = fs.readFileSync(path.resolve(__dirname, './attachments', fileName));
  const attachment = {
    name: fileName,
    type: 'image/jpg',
    content: fileContent.toString('base64'),
  };
  ReportingApi.log('INFO', 'info log with attachment', attachment);

  expect(true).toBe(true);
});
info, debug, warn, error, trace, fatal

Send logs with the corresponding level to ReportPortal for the current test. Should be called inside of the corresponding test.
ReportingApi.info(message: string, file?: Attachment, suite?: string);
ReportingApi.debug(message: string, file?: Attachment, suite?: string);
ReportingApi.warn(message: string, file?: Attachment, suite?: string);
ReportingApi.error(message: string, file?: Attachment, suite?: string);
ReportingApi.trace(message: string, file?: Attachment, suite?: string);
ReportingApi.fatal(message: string, file?: Attachment, suite?: string);
required: message
optional: file, suite
Example:

test('should contain logs with attachments', () => {
    ReportingApi.info('Log message');
    ReportingApi.debug('Log message');
    ReportingApi.warn('Log message');
    ReportingApi.error('Log message');
    ReportingApi.trace('Log message');
    ReportingApi.fatal('Log message');
    
    expect(true).toBe(true);
});
launchLog

Send logs to ReportPortal for the current launch. Should be called inside of any test or suite.
ReportingApi.launchLog(level: LOG_LEVELS, message: string, file?: Attachment);
required: level, message
optional: file
where level can be one of the following: TRACE, DEBUG, WARN, INFO, ERROR, FATAL
Example:

test('should contain logs with attachments', async () => {
  const fileName = 'test.jpg';
  const fileContent = fs.readFileSync(path.resolve(__dirname, './attachments', fileName));
  const attachment = {
    name: fileName,
    type: 'image/jpg',
    content: fileContent.toString('base64'),
  };
  ReportingApi.launchLog('INFO', 'info log with attachment', attachment);

  expect(true).toBe(true);
});
launchInfo, launchDebug, launchWarn, launchError, launchTrace, launchFatal

Send logs with the corresponding level to ReportPortal for the current launch. Should be called inside of any test or suite.
ReportingApi.launchInfo(message: string, file?: Attachment);
ReportingApi.launchDebug(message: string, file?: Attachment);
ReportingApi.launchWarn(message: string, file?: Attachment);
ReportingApi.launchError(message: string, file?: Attachment);
ReportingApi.launchTrace(message: string, file?: Attachment);
ReportingApi.launchFatal(message: string, file?: Attachment);
required: message
optional: file
Example:

test('should contain logs with attachments', () => {
    ReportingApi.launchInfo('Log message');
    ReportingApi.launchDebug('Log message');
    ReportingApi.launchWarn('Log message');
    ReportingApi.launchError('Log message');
    ReportingApi.launchTrace('Log message');
    ReportingApi.launchFatal('Log message');
    
    expect(true).toBe(true);
});
setStatus

Assign the corresponding status to the current test item. Should be called inside of the corresponding test.
ReportingApi.setStatus(status: string, suite?: string);
required: status
optional: suite
where status must be one of the following: passed, failed, stopped, skipped, interrupted, cancelled
Example:

test('should have status FAILED', () => {
    ReportingApi.setStatus('failed');
    
    expect(true).toBe(true);
});
setStatusFailed, setStatusPassed, setStatusSkipped, setStatusStopped, setStatusInterrupted, setStatusCancelled

Assign the corresponding status to the current test item. Should be called inside of the corresponding test.
ReportingApi.setStatusFailed(suite?: string);
ReportingApi.setStatusPassed(suite?: string);
ReportingApi.setStatusSkipped(suite?: string);
ReportingApi.setStatusStopped(suite?: string);
ReportingApi.setStatusInterrupted(suite?: string);
ReportingApi.setStatusCancelled(suite?: string);
optional: suite
Example:

test('should call ReportingApi to set statuses', () => {
    ReportingApi.setStatusFailed();
    ReportingApi.setStatusPassed();
    ReportingApi.setStatusSkipped();
    ReportingApi.setStatusStopped();
    ReportingApi.setStatusInterrupted();
    ReportingApi.setStatusCancelled();
});
setLaunchStatus

Assign the corresponding status to the current launch. Should be called inside of any test or suite.
ReportingApi.setLaunchStatus(status: string);
required: status
where status must be one of the following: passed, failed, stopped, skipped, interrupted, cancelled
Example:

test('launch should have status FAILED',  () => {
    ReportingApi.setLaunchStatus('failed');
    expect(true).toBe(true);
});
setLaunchStatusFailed, setLaunchStatusPassed, setLaunchStatusSkipped, setLaunchStatusStopped, setLaunchStatusInterrupted, setLaunchStatusCancelled

Assign the corresponding status to the current launch. Should be called inside of any test or suite.
ReportingApi.setLaunchStatusFailed();
ReportingApi.setLaunchStatusPassed();
ReportingApi.setLaunchStatusSkipped();
ReportingApi.setLaunchStatusStopped();
ReportingApi.setLaunchStatusInterrupted();
ReportingApi.setLaunchStatusCancelled();
Example:

test('should call ReportingApi to set launch statuses', () => {
    ReportingApi.setLaunchStatusFailed();
    ReportingApi.setLaunchStatusPassed();
    ReportingApi.setLaunchStatusSkipped();
    ReportingApi.setLaunchStatusStopped();
    ReportingApi.setLaunchStatusInterrupted();
    ReportingApi.setLaunchStatusCancelled();
});

Integration with Sauce Labs

To integrate with Sauce Labs just add attributes for the test case:

[{
 "key": "SLID",
 "value": "# of the job in Sauce Labs"
}, {
 "key": "SLDC",
 "value": "EU (your job region in Sauce Labs)"
}]

Example available in examples repo.
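
These attributes can be set from within a test via the ReportingApi.addAttributes method described above; a sketch where the job ID and region values are placeholders:

import { test } from '@playwright/test';
import { ReportingApi } from '@reportportal/agent-js-playwright';

test('should be linked with the Sauce Labs job', async ({ page }) => {
  ReportingApi.addAttributes([
    { key: 'SLID', value: '1234567890' }, // # of the job in Sauce Labs
    { key: 'SLDC', value: 'EU' }, // your job region in Sauce Labs
  ]);
  // ...
});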

Usage with sharded tests

Playwright supports test sharding on multiple machines and provides its own CLI for merging reports from multiple shards. However, the merge-reports CLI tool is designed to merge local reports represented by files in the file system, so it is not suitable for external reporting systems like ReportPortal, which require network communication with the appropriate endpoints.

Thus, in order to have a single launch in ReportPortal for sharded tests, additional customization is required. There are several options to achieve this:

Note: The @reportportal/client-javascript SDK is used here as a reference, but the same actions can be performed by sending requests to the ReportPortal API directly.

Using the rerunOf config option

The agent supports the rerun and rerunOf options. If only rerun is set, ReportPortal will attach the launch results to the latest existing launch with the same name. If the rerunOf option is also set to the ID of an existing launch, the results will be aggregated within that launch.

Note: New executions of test cases that were previously executed within an existing launch (whose ID is used as rerunOf) will be considered retries.

  1. Trigger a launch before all tests.

The @reportportal/client-javascript startLaunch method can be used.

/*
* startLaunch.js
* */
const rpClient = require('@reportportal/client-javascript');

const rpConfig = {
    // ...
};

async function startLaunch() {
  const client = new rpClient(rpConfig);
  const response = await client.startLaunch({
    name: rpConfig.launch,
    attributes: rpConfig.attributes,
    // etc. see https://github.com/reportportal/client-javascript?tab=readme-ov-file#startlaunch for the details
  }).promise;

  return response.id;
}

// print the ID so it can be captured and exported by the CI job
startLaunch().then((launchId) => console.log(launchId));

The received launch ID can then be exported, e.g. as an environment variable, to your CI job.

  2. Specify the launch ID as a rerunOf for each job. This step depends on your CI provider and the available ways to pass values to the Node.js process. The rerunOf should be set directly in the reporter config.
/*
* playwright.config.js
* */
const rpConfig = {
  // ...
  rerun: true,
  rerunOf: process.env.RP_RERUN_OF,
};

That's it. With such a configuration, a single launch will be used for all the tests from all shards.

Using the launchId config option

The agent supports the launchId parameter to specify the ID of the already started launch. This way, you can start the launch using @reportportal/client-javascript before the test run and then specify its ID in the config or via environment variable.

The first step when using the launchId option is the same as when using rerunOf.

  1. Trigger a launch before all tests.

The @reportportal/client-javascript startLaunch method can be used.

/*
* startLaunch.js
* */
const rpClient = require('@reportportal/client-javascript');

const rpConfig = {
    // ...
};

async function startLaunch() {
  const client = new rpClient(rpConfig);
   // see https://github.com/reportportal/client-javascript?tab=readme-ov-file#startlaunch for the details
  const response = await client.startLaunch({
    name: rpConfig.launch,
    attributes: rpConfig.attributes,
    // etc.
  }).promise;

  return response.id;
}

// print the ID so it can be captured and exported by the CI job
startLaunch().then((launchId) => console.log(launchId));

The received launchId can then be exported, e.g. as an environment variable, to your CI job.

  2. Specify the launch ID for each job. This step depends on your CI provider and the available ways to pass values to the Node.js process. The launch ID can be set directly in the reporter config.
/*
* playwright.config.js
* */
const rpConfig = {
  // ...
  launchId: 'receivedLaunchId'
};

or just set via the RP_LAUNCH_ID environment variable.

With the launch ID provided, the agent will attach all test results to that launch. The launch will not be finished by the agent and must be finished separately.

  3. As a run post-step (when all tests have finished), the launch also needs to be finished separately.

The @reportportal/client-javascript finishLaunch method can be used.

/*
* finishLaunch.js
* */
const RPClient = require('@reportportal/client-javascript');

const rpConfig = {
    // ...
};

const finishLaunch = async () => {
  const client = new RPClient(rpConfig);
  const launchTempId = client.startLaunch({ id: process.env.RP_LAUNCH_ID }).tempId;
  // see https://github.com/reportportal/client-javascript?tab=readme-ov-file#finishlaunch for the details
  await client.finishLaunch(launchTempId, {}).promise;
};

finishLaunch();

Merging launches based on the build ID

This approach offers a way to merge several launches reported from different shards into one launch after the entire test execution has completed and the launches are finished.

  • With this option the Auto-analysis, Pattern-analysis and Quality Gates will be triggered for each sharded launch individually.
  • The launch numbering will be changed as each sharded launch will have its own number.
  • The merged launch will be treated as a new launch with its own number.

This approach is equivalent to merging launches via the ReportPortal UI.

  1. Specify a unique CI build ID as a launch attribute, which will be the same for different jobs in the same run (this could be a commit hash or something else). This step depends on your CI provider and the available ways to pass values to the Node.js process.
/*
* playwright.config.js
* */
const rpConfig = {
  // ...
  attributes: [
    {
      key: 'CI_BUILD_ID',
      // e.g.
      value: process.env.GITHUB_COMMIT_SHA,
    }
  ],
};
  2. Collect the launch IDs and call the merge operation.

The ReportPortal API can be used to filter the required launches by the provided attribute to collect their IDs.

/*
* mergeRpLaunches.js
* */
const rpClient = require('@reportportal/client-javascript');

const rpConfig = {
  // ...
};

const client = new rpClient(rpConfig);

async function mergeLaunches() {
  const ciBuildId = process.env.CI_BUILD_ID;
  if (!ciBuildId) {
    console.error('To merge multiple launches, CI_BUILD_ID must not be empty');
    return;
  }
  try {
    // 1. Send request to get all launches with the same CI_BUILD_ID attribute value
    const params = new URLSearchParams({
      'filter.has.attributeValue': ciBuildId,
    });
    const launchSearchUrl = `launch?${params.toString()}`;
    const response = await client.restClient.retrieveSyncAPI(launchSearchUrl);
    // 2. Filter them to find launches that are in progress
    const launchesInProgress = response.content.filter((launch) => launch.status === 'IN_PROGRESS');
    // 3. If exists, just return. The steps can be repeated in some interval if needed
    if (launchesInProgress.length) {
      return;
    }
    // 4. If not, merge all found launches with the same CI_BUILD_ID attribute value
    const launchIds = response.content.map((launch) => launch.id);
    const request = client.getMergeLaunchesRequest(launchIds);
    request.description = rpConfig.description;
    request.extendSuitesDescription = false;
    const mergeURL = 'launch/merge';
    await client.restClient.create(mergeURL, request);
  } catch (err) {
    console.error('Fail to merge launches', err);
  }
}

mergeLaunches();

Using a merge operation for huge launches can increase the load on ReportPortal's API. See the details and other parameters available for merge operation in ReportPortal API docs.

Note: Since the options described require additional effort, the ReportPortal team intends to create a CLI for them to make them easier to use, but with no ETA. Progress can be tracked in this issue.

Issues troubleshooting

Launches stuck in progress on RP side

There is a known issue where in some cases launches are not finished as expected in ReportPortal when using static annotations (.skip(), .fixme()) that expect the test to be 'SKIPPED'.
This may happen when an error is thrown from before/beforeAll hooks, retries are enabled, and fullyParallel: false. Associated with #85.
In this case, as a workaround, we suggest using the .skip() and .fixme() annotations inside the test body:

use

  test('example fail', async ({}) => {
    test.fixme();
    expect(1).toBeGreaterThan(2);
  });

instead of

  test.fixme('example fail', async ({}) => {
    expect(1).toBeGreaterThan(2);
  });