diff --git a/CODEOWNERS b/CODEOWNERS index 0cb83edd3..2fc9da383 100644 --- a/CODEOWNERS +++ b/CODEOWNERS @@ -1,2 +1,2 @@ # Default code owners - Atlassian Data Center App Performance Toolkit -* @ometelytsia @SergeyMoroz0703 @astashys @mmizin \ No newline at end of file +* @ometelytsia @SergeyMoroz0703 @astashys @opopovss @OlehStefanyshyn \ No newline at end of file diff --git a/Dockerfile b/Dockerfile index a8518e833..979a32881 100644 --- a/Dockerfile +++ b/Dockerfile @@ -8,9 +8,6 @@ FROM blazemeter/taurus ENV APT_INSTALL="apt-get -y install --no-install-recommends" -# Remove bintray manually until PR https://github.com/Blazemeter/taurus/pull/1484/files is promoted to prod -RUN sed -i '/dl.bintray.com/d' /etc/apt/sources.list - RUN apt-get -y update \ && $APT_INSTALL vim git openssh-server python3.8-dev python3-pip wget \ && update-alternatives --install /usr/bin/python python /usr/bin/python3.8 1 \ diff --git a/README.md b/README.md index f5fe77c47..a64cf06e7 100644 --- a/README.md +++ b/README.md @@ -1,25 +1,24 @@ # Data Center App Performance Toolkit The Data Center App Performance Toolkit extends [Taurus](https://gettaurus.org/) which is an open source performance framework that executes JMeter and Selenium. -This repository contains Taurus scripts for performance testing of Atlassian Data Center products: Jira, Jira Service Management, Confluence, and Bitbucket. +This repository contains Taurus scripts for performance testing of Atlassian Data Center products: Jira, Jira Service Management, Confluence, Bitbucket and Crowd. 
## Supported versions * Supported Jira versions: - * Jira [Long Term Support release](https://confluence.atlassian.com/enterprise/atlassian-enterprise-releases-948227420.html): `8.13.7`, `8.5.15` + * Jira [Long Term Support release](https://confluence.atlassian.com/enterprise/atlassian-enterprise-releases-948227420.html): `8.13.10`, `8.5.18` * Supported Jira Service Management versions: - * Jira Service Management [Long Term Support release](https://confluence.atlassian.com/enterprise/atlassian-enterprise-releases-948227420.html): `4.13.7`, `4.5.15` + * Jira Service Management [Long Term Support release](https://confluence.atlassian.com/enterprise/atlassian-enterprise-releases-948227420.html): `4.13.10`, `4.5.18` * Supported Confluence versions: - * Confluence [Long Term Support release](https://confluence.atlassian.com/enterprise/atlassian-enterprise-releases-948227420.html): `7.4.9` - * Confluence Platform release: `7.0.5` + * Confluence [Long Term Support release](https://confluence.atlassian.com/enterprise/atlassian-enterprise-releases-948227420.html): `7.13.0`, `7.4.11` * Supported Bitbucket Server versions: - * Bitbucket Server [Long Term Support release](https://confluence.atlassian.com/enterprise/atlassian-enterprise-releases-948227420.html): `7.6.7`, `6.10.11` + * Bitbucket Server [Long Term Support release](https://confluence.atlassian.com/enterprise/atlassian-enterprise-releases-948227420.html): `7.6.9`, `6.10.13` * Bitbucket Server Platform release: `7.0.5` * Supported Crowd versions: - * Crowd [Long Term Support release](https://confluence.atlassian.com/crowd/crowd-release-notes-199094.html): `4.3.0` + * Crowd [Long Term Support release](https://confluence.atlassian.com/crowd/crowd-release-notes-199094.html): `4.3.5` ## Support In case of technical questions, issues or problems with DC Apps Performance Toolkit, contact us for support in the [community Slack](http://bit.ly/dcapt_slack) **#data-center-app-performance-toolkit** channel. 
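This change set wires a new pre-run step, `util/pre_run/check_for_updates.py`, into every product YAML, backed by `get_latest_version()`/`get_current_version()` in the new `app/util/common_util.py`. A minimal sketch of the comparison the summary report performs (`classify_version` and `parse_version` are illustrative names, not toolkit functions; the toolkit itself uses `packaging.version.parse`):

```python
def parse_version(v):
    # Simplified stand-in for packaging.version.parse, which the toolkit
    # uses in util/common_util.py; handles plain "X.Y.Z" strings only.
    return tuple(int(part) for part in v.split("."))

def classify_version(current_str, latest_str):
    """Classify the local toolkit version against the latest published one,
    mirroring the three report-summary branches added in analytics_utils.py."""
    current = parse_version(current_str)
    latest = parse_version(latest_str)
    if latest > current:
        return "update-required"   # FAIL: please update the toolkit
    if latest == current:
        return "up-to-date"        # OK: toolkit is up to date
    return "ahead-of-latest"       # OK: ahead of the latest production version

print(classify_version("5.0.0", "5.1.0"))  # update-required
```

The real implementation also covers the case where fetching the latest version fails (it returns `None` and the report prints a warning instead).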
diff --git a/app/bitbucket.yml b/app/bitbucket.yml index ce70f7a5c..87b31930c 100644 --- a/app/bitbucket.yml +++ b/app/bitbucket.yml @@ -24,6 +24,7 @@ services: - module: shellexec prepare: - python util/pre_run/environment_checker.py + - python util/pre_run/check_for_updates.py - python util/pre_run/git_client_check.py - python util/data_preparation/bitbucket_prepare_data.py shutdown: @@ -85,7 +86,7 @@ modules: httpsampler.ignore_failed_embedded_resources: "true" selenium: chromedriver: - version: "91.0.4472.101" # Supports Chrome version 91. You can refer to http://chromedriver.chromium.org/downloads + version: "93.0.4577.63" # Supports Chrome version 93. You can refer to http://chromedriver.chromium.org/downloads reporting: - data-source: sample-labels module: junit-xml diff --git a/app/confluence.yml b/app/confluence.yml index 1815f40db..372f55bfe 100644 --- a/app/confluence.yml +++ b/app/confluence.yml @@ -38,6 +38,7 @@ services: - module: shellexec prepare: - python util/pre_run/environment_checker.py + - python util/pre_run/check_for_updates.py - python util/data_preparation/confluence_prepare_data.py shutdown: - python util/post_run/jmeter_post_check.py @@ -109,7 +110,7 @@ modules: httpsampler.ignore_failed_embedded_resources: "true" selenium: chromedriver: - version: "91.0.4472.101" # Supports Chrome version 91. You can refer to http://chromedriver.chromium.org/downloads + version: "93.0.4577.63" # Supports Chrome version 93. You can refer to http://chromedriver.chromium.org/downloads reporting: - data-source: sample-labels module: junit-xml diff --git a/app/crowd.yml b/app/crowd.yml index 33df9c247..f1b100d44 100644 --- a/app/crowd.yml +++ b/app/crowd.yml @@ -31,11 +31,12 @@ settings: JMETER_VERSION: 5.2.1 LANGUAGE: en_US.utf8 - allow_analytics: Yes # Allow sending basic run analytics to Atlassian. These analytics help us to understand how the tool is being used and help us to continue to invest in this tooling. For more details please see our README. 
+ allow_analytics: Yes # Allow sending basic run analytics to Atlassian. These analytics help us to understand how the tool is being used and help us to continue to invest in this tooling. For more details please see our README. services: - module: shellexec prepare: - python util/pre_run/environment_checker.py + - python util/pre_run/check_for_updates.py - python util/data_preparation/crowd_prepare_data.py - python util/data_preparation/crowd_sync_check.py shutdown: diff --git a/app/jira.yml b/app/jira.yml index bb6510ecb..871cb4025 100644 --- a/app/jira.yml +++ b/app/jira.yml @@ -40,6 +40,7 @@ services: - module: shellexec prepare: - python util/pre_run/environment_checker.py + - python util/pre_run/check_for_updates.py - python util/data_preparation/jira_prepare_data.py shutdown: - python util/post_run/jmeter_post_check.py @@ -113,7 +114,7 @@ modules: httpsampler.ignore_failed_embedded_resources: "true" selenium: chromedriver: - version: "91.0.4472.101" # Supports Chrome version 91. You can refer to http://chromedriver.chromium.org/downloads + version: "93.0.4577.63" # Supports Chrome version 93. You can refer to http://chromedriver.chromium.org/downloads reporting: - data-source: sample-labels module: junit-xml diff --git a/app/jsm.yml b/app/jsm.yml index e276b7a15..067e2605c 100644 --- a/app/jsm.yml +++ b/app/jsm.yml @@ -21,7 +21,7 @@ settings: WEBDRIVER_VISIBLE: False JMETER_VERSION: 5.2.1 LANGUAGE: en_US.utf8 - allow_analytics: Yes # Allow sending basic run analytics to Atlassian. These analytics help us to understand how the tool is being used and help us to continue to invest in this tooling. For more details please see our README. + allow_analytics: Yes # Allow sending basic run analytics to Atlassian. These analytics help us to understand how the tool is being used and help us to continue to invest in this tooling. For more details please see our README. 
# Action percentage for Jmeter and Locust load executors agent_browse_projects: 10 agent_view_request: 24 @@ -150,7 +150,7 @@ modules: httpsampler.ignore_failed_embedded_resources: "true" selenium: chromedriver: - version: "90.0.4430.24" # Supports Chrome version 90. You can refer to http://chromedriver.chromium.org/downloads + version: "91.0.4472.101" # Supports Chrome version 91. You can refer to http://chromedriver.chromium.org/downloads reporting: - data-source: sample-labels module: junit-xml \ No newline at end of file diff --git a/app/selenium_ui/bitbucket/pages/selectors.py b/app/selenium_ui/bitbucket/pages/selectors.py index a279875fe..54ca302e9 100644 --- a/app/selenium_ui/bitbucket/pages/selectors.py +++ b/app/selenium_ui/bitbucket/pages/selectors.py @@ -224,4 +224,4 @@ class UserSettingsLocator: class RepoCommitsLocator: - repo_commits_graph = (By.CSS_SELECTOR, 'svg.commit-graph') + repo_commits_graph = (By.ID, 'commits-table') diff --git a/app/selenium_ui/jsm/pages/agent_pages.py b/app/selenium_ui/jsm/pages/agent_pages.py index 412f6f3d0..91884e947 100644 --- a/app/selenium_ui/jsm/pages/agent_pages.py +++ b/app/selenium_ui/jsm/pages/agent_pages.py @@ -90,27 +90,38 @@ def __init__(self, driver, request_key=None): def wait_for_page_loaded(self): self.wait_until_visible(ViewCustomerRequestLocators.bread_crumbs) + def check_comment_text_is_displayed(self, text, rte_status=None): + if self.get_elements(ViewCustomerRequestLocators.comment_text_field_RTE) or \ + self.get_elements(ViewCustomerRequestLocators.comment_text_field): + if rte_status: + self.wait_until_available_to_switch(ViewCustomerRequestLocators.comment_text_field_RTE) + if self.wait_until_present(ViewCustomerRequestLocators.comment_tinymce_field).text != text: + self.wait_until_present(ViewCustomerRequestLocators.comment_tinymce_field).send_keys(text) + self.return_to_parent_frame() + self.wait_until_present(ViewCustomerRequestLocators.comment_internally_btn).click() + elif 
self.wait_until_present(ViewCustomerRequestLocators.comment_text_field).text != text: + self.wait_until_present(ViewCustomerRequestLocators.comment_text_field).send_keys(text) + self.wait_until_present(ViewCustomerRequestLocators.comment_internally_btn).click() + def add_request_comment(self, rte_status): comment_text = f"Add comment from selenium - {self.generate_random_string(30)}" textarea = self.get_element(ViewCustomerRequestLocators.comment_collapsed_textarea) self.driver.execute_script("arguments[0].scrollIntoView(true);", textarea) textarea.click() + comment_button = self.get_element(ViewCustomerRequestLocators.comment_internally_btn) + self.driver.execute_script("arguments[0].scrollIntoView(true);", comment_button) if rte_status: self.wait_until_available_to_switch(ViewCustomerRequestLocators.comment_text_field_RTE) - tinymce_field = self.get_element(ViewCustomerRequestLocators.comment_tinymce_field) - self.driver.execute_script("arguments[0].scrollIntoView(true);", tinymce_field) - self.action_chains().send_keys_to_element(tinymce_field, comment_text).perform() + self.wait_until_present(ViewCustomerRequestLocators.comment_tinymce_field).send_keys(comment_text) self.return_to_parent_frame() + comment_button.click() + self.check_comment_text_is_displayed(comment_text, True) else: - comment_text_field = self.get_element(ViewCustomerRequestLocators.comment_text_field) - self.driver.execute_script("arguments[0].scrollIntoView(true);", comment_text_field) - self.action_chains().move_to_element(comment_text_field).click()\ - .send_keys_to_element(comment_text_field, comment_text).perform() + self.wait_until_present(ViewCustomerRequestLocators.comment_text_field).send_keys(comment_text) + comment_button.click() + self.check_comment_text_is_displayed(comment_text) - comment_button = self.get_element(ViewCustomerRequestLocators.comment_internally_btn) - self.driver.execute_script("arguments[0].scrollIntoView(true);", comment_button) - comment_button.click() 
self.wait_until_visible(ViewCustomerRequestLocators.comment_collapsed_textarea) @@ -168,13 +179,15 @@ def __init__(self, driver, project_key=None, queue_id=None): self.page_url = url_manager.view_queue_all_open() def wait_for_page_loaded(self): - self.wait_until_visible(ViewQueueLocators.queues_status) + self.wait_until_any_ec_presented( + selector_names=[ViewQueueLocators.queues_status, ViewQueueLocators.queue_is_empty], timeout=self.timeout) def get_random_queue(self): - queues = self.get_elements(ViewQueueLocators.queues) - random_queue = random.choice([queue for queue in queues - if queue.text.partition('\n')[0] not in - ['All open', 'Recently resolved', 'Resolved past 7 days'] - and queue.text.partition('\n')[2] != '0']) - random_queue.click() - self.wait_until_visible(ViewQueueLocators.queues_status, timeout=self.timeout) + if not self.get_elements(ViewQueueLocators.queue_is_empty): + queues = self.get_elements(ViewQueueLocators.queues) + random_queue = random.choice([queue for queue in queues + if queue.text.partition('\n')[0] not in + ['All open', 'Recently resolved', 'Resolved past 7 days'] + and queue.text.partition('\n')[2] != '0']) + random_queue.click() + self.wait_until_present(ViewQueueLocators.queues_status, timeout=self.timeout) diff --git a/app/selenium_ui/jsm/pages/agent_selectors.py b/app/selenium_ui/jsm/pages/agent_selectors.py index 3dc0be557..dc928a538 100644 --- a/app/selenium_ui/jsm/pages/agent_selectors.py +++ b/app/selenium_ui/jsm/pages/agent_selectors.py @@ -119,3 +119,4 @@ class ViewQueueLocators: queues = (By.CSS_SELECTOR, "#pinnednav-opts-sd-queues-nav li") queues_status = (By.XPATH, "//span[contains(text(),'Status')]") + queue_is_empty = (By.CSS_SELECTOR, '.sd-queue-empty') diff --git a/app/util/analytics/analytics_utils.py b/app/util/analytics/analytics_utils.py index a4606cb68..c19048b17 100644 --- a/app/util/analytics/analytics_utils.py +++ b/app/util/analytics/analytics_utils.py @@ -4,8 +4,12 @@ import getpass import re import 
socket + from datetime import datetime, timezone +from util.common_util import get_current_version, get_latest_version +latest_version = get_latest_version() +current_version = get_current_version() SUCCESS_TEST_RATE = 95.00 SUCCESS_RT_THRESHOLD = 20 OS = {'macOS': ['Darwin'], 'Windows': ['Windows'], 'Linux': ['Linux']} @@ -62,7 +66,18 @@ def generate_report_summary(collector): summary_report.append(f'Summary run status|{overall_status}\n') summary_report.append(f'Artifacts dir|{os.path.basename(collector.log_dir)}') summary_report.append(f'OS|{collector.os}') - summary_report.append(f'DC Apps Performance Toolkit version|{collector.tool_version}') + if latest_version is None: + summary_report.append((f"DC Apps Performance Toolkit version|{collector.tool_version} " + f"(WARNING: Could not get the latest version.)")) + elif latest_version > current_version: + summary_report.append(f"DC Apps Performance Toolkit version|{collector.tool_version} " + f"(FAIL: Please update toolkit to the latest version - {latest_version})") + elif latest_version == current_version: + summary_report.append(f"DC Apps Performance Toolkit version|{collector.tool_version} " + f"(OK: Toolkit is up to date)") + else: + summary_report.append(f"DC Apps Performance Toolkit version|{collector.tool_version} " + f"(OK: Toolkit is ahead of the latest production version: {latest_version})") summary_report.append(f'Application|{collector.app_type} {collector.application_version}') summary_report.append(f'Dataset info|{collector.dataset_information}') summary_report.append(f'Application nodes count|{collector.nodes_count}') diff --git a/app/util/bitbucket/populate_db.sh b/app/util/bitbucket/populate_db.sh index 442cb4a41..fb098c65a 100644 --- a/app/util/bitbucket/populate_db.sh +++ b/app/util/bitbucket/populate_db.sh @@ -3,6 +3,21 @@ ################### Check if NFS exists ################### pgrep nfsd > /dev/null && echo "NFS found" || { echo NFS process was not found. 
This script is intended to run only on the Bitbucket NFS Server machine. && exit 1; } +# Read command line arguments +while [[ "$#" -gt 0 ]]; do case $1 in + --small) small=1 ;; + --force) + if [ -n "$2" ] && [ "${2:0:1}" != "-" ]; then + force=1 + version=${2} + shift + else + force=1 + fi + ;; + *) echo "Unknown parameter passed: $1"; exit 1;; +esac; shift; done + ################### Variables section ################### # Command to install psql client for Amazon Linux 2. # In case of different distributive, please adjust accordingly or install manually. @@ -23,7 +38,7 @@ BITBUCKET_DB_PASS="Password1!" BITBUCKET_AUTO_DECLINE_VERSION="7.7.0" # BITBUCKET version variables -SUPPORTED_BITBUCKET_VERSIONS=(6.10.11 7.0.5 7.6.7) +SUPPORTED_BITBUCKET_VERSIONS=(6.10.13 7.0.5 7.6.9) BITBUCKET_VERSION=$(sudo su bitbucket -c "cat ${BITBUCKET_VERSION_FILE}") if [[ -z "$BITBUCKET_VERSION" ]]; then echo The $BITBUCKET_VERSION_FILE file does not exists or emtpy. Please check if BITBUCKET_VERSION_FILE variable \ @@ -33,8 +48,12 @@ fi echo "Bitbucket version: ${BITBUCKET_VERSION}" # Datasets AWS bucket and db dump name -DATASETS_AWS_BUCKET="https://centaurus-datasets.s3.amazonaws.com/bitbucket" + DATASETS_SIZE="large" +if [[ ${small} == 1 ]]; then + DATASETS_SIZE="small" +fi +DATASETS_AWS_BUCKET="https://centaurus-datasets.s3.amazonaws.com/bitbucket" DB_DUMP_NAME="db.dump" DB_DUMP_URL="${DATASETS_AWS_BUCKET}/${BITBUCKET_VERSION}/${DATASETS_SIZE}/${DB_DUMP_NAME}" @@ -48,10 +67,16 @@ if [[ ! "${SUPPORTED_BITBUCKET_VERSIONS[*]}" =~ ${BITBUCKET_VERSION} ]]; then echo "e.g. ./populate_db.sh --force 6.10.0" echo "!!! Warning !!! This may break your Bitbucket instance. Also, note that downgrade is not supported by Bitbucket." # Check if --force flag is passed into command - if [[ "$1" == "--force" ]]; then + if [[ ${force} == 1 ]]; then + # Check if version was specified after --force flag + if [[ -z ${version} ]]; then + echo "Error: --force flag requires a version after it."
+ echo "Specify one of these versions: ${SUPPORTED_BITBUCKET_VERSIONS[*]}" + exit 1 + fi # Check if passed Bitbucket version is in list of supported - if [[ "${SUPPORTED_BITBUCKET_VERSIONS[*]}" =~ ${2} ]]; then - DB_DUMP_URL="${DATASETS_AWS_BUCKET}/$2/${DATASETS_SIZE}/${DB_DUMP_NAME}" + if [[ " ${SUPPORTED_BITBUCKET_VERSIONS[@]} " =~ " ${version} " ]]; then + DB_DUMP_URL="${DATASETS_AWS_BUCKET}/${version}/${DATASETS_SIZE}/${DB_DUMP_NAME}" echo "Force mode. Dataset URL: ${DB_DUMP_URL}" else LAST_DATASET_VERSION=${SUPPORTED_BITBUCKET_VERSIONS[${#SUPPORTED_BITBUCKET_VERSIONS[@]}-1]} diff --git a/app/util/bitbucket/upload_attachments.sh b/app/util/bitbucket/upload_attachments.sh index e69bebd4b..1d066bf8f 100644 --- a/app/util/bitbucket/upload_attachments.sh +++ b/app/util/bitbucket/upload_attachments.sh @@ -3,10 +3,25 @@ ################### Check if NFS exists ################### pgrep nfsd > /dev/null && echo "NFS found" || { echo NFS process was not found. This script is intended to run only on the Bitbucket NFS Server machine. && exit 1; } +# Read command line arguments +while [[ "$#" -gt 0 ]]; do case $1 in + --small) small=1 ;; + --force) + if [ -n "$2" ] && [ "${2:0:1}" != "-" ]; then + force=1 + version=${2} + shift + else + force=1 + fi + ;; + *) echo "Unknown parameter passed: $1"; exit 1;; +esac; shift; done + ################### Variables section ################### # Bitbucket version variables BITBUCKET_VERSION_FILE="/media/atl/bitbucket/shared/bitbucket.version" -SUPPORTED_BITBUCKET_VERSIONS=(6.10.11 7.0.5 7.6.7) +SUPPORTED_BITBUCKET_VERSIONS=(6.10.13 7.0.5 7.6.9) BITBUCKET_VERSION=$(sudo su bitbucket -c "cat ${BITBUCKET_VERSION_FILE}") if [[ -z "$BITBUCKET_VERSION" ]]; then echo The $BITBUCKET_VERSION_FILE file does not exists or emtpy. 
Please check if BITBUCKET_VERSION_FILE variable \ @@ -15,9 +30,13 @@ if [[ -z "$BITBUCKET_VERSION" ]]; then fi echo "Bitbucket Version: ${BITBUCKET_VERSION}" +DATASETS_SIZE="large" +if [[ ${small} == 1 ]]; then + DATASETS_SIZE="small" +fi + DATASETS_AWS_BUCKET="https://centaurus-datasets.s3.amazonaws.com/bitbucket" ATTACHMENTS_TAR="attachments.tar.gz" -DATASETS_SIZE="large" ATTACHMENTS_TAR_URL="${DATASETS_AWS_BUCKET}/${BITBUCKET_VERSION}/${DATASETS_SIZE}/${ATTACHMENTS_TAR}" NFS_DIR="/media/atl/bitbucket/shared" ATTACHMENT_DIR_DATA="data" @@ -31,10 +50,10 @@ if [[ ! "${SUPPORTED_BITBUCKET_VERSIONS[*]}" =~ ${BITBUCKET_VERSION} ]]; then echo "e.g. ./upload_attachments --force 6.10.0" echo "!!! Warning !!! This may broke your Bitbucket instance." # Check if --force flag is passed into command - if [[ "$1" == "--force" ]]; then + if [[ ${force} == 1 ]]; then # Check if passed Bitbucket version is in list of supported - if [[ "${SUPPORTED_BITBUCKET_VERSIONS[*]}" =~ ${2} ]]; then - ATTACHMENTS_TAR_URL="${DATASETS_AWS_BUCKET}/$2/${DATASETS_SIZE}/${ATTACHMENTS_TAR}" + if [[ "${SUPPORTED_BITBUCKET_VERSIONS[*]}" =~ ${version} ]]; then + ATTACHMENTS_TAR_URL="${DATASETS_AWS_BUCKET}/${version}/${DATASETS_SIZE}/${ATTACHMENTS_TAR}" echo "Force mode. 
Dataset URL: ${ATTACHMENTS_TAR_URL}" else LAST_DATASET_VERSION=${SUPPORTED_BITBUCKET_VERSIONS[${#SUPPORTED_BITBUCKET_VERSIONS[@]}-1]} diff --git a/app/util/common_util.py b/app/util/common_util.py new file mode 100644 index 000000000..c6707c91b --- /dev/null +++ b/app/util/common_util.py @@ -0,0 +1,56 @@ +import datetime +import functools +import requests +from datetime import timedelta +from timeit import default_timer as timer +from packaging import version +from util.conf import TOOLKIT_VERSION + +CONF_URL = "https://raw.githubusercontent.com/atlassian/dc-app-performance-toolkit/master/app/util/conf.py" + + +def get_latest_version(supported=True): + VERSION_STR = "TOOLKIT_VERSION" if supported else "UNSUPPORTED_VERSION" + try: + r = requests.get(CONF_URL) + r.raise_for_status() + conf = r.text.splitlines() + version_line = next((line for line in conf if VERSION_STR in line)) + latest_version_str = version_line.split('=')[1].replace("'", "").replace('"', "").strip() + latest_version = version.parse(latest_version_str) + return latest_version + except requests.exceptions.RequestException as e: + print(f"Warning: DCAPT check for update failed - {e}") + except StopIteration: + print(f"Warning: failed to find {VERSION_STR} in the remote conf file") + + +def get_unsupported_version(): + + unsupported_version = get_latest_version(supported=False) + + return unsupported_version + + +def get_current_version(): + return version.parse(TOOLKIT_VERSION) + + +def print_timing(message, sep='-'): + assert message is not None, "Message is not passed to print_timing decorator" + + def deco_wrapper(func): + @functools.wraps(func) + def wrapper(*args, **kwargs): + start = timer() + print(sep * 20) + print(f'{message} started {datetime.datetime.now().strftime("%H:%M:%S")}') + result = func(*args, **kwargs) + end = timer() + print(f"{message} finished in {timedelta(seconds=end - start)}") + print(sep * 20) + return result + + return wrapper + + return deco_wrapper diff --git
a/app/util/conf.py b/app/util/conf.py index 3382ebc84..e4193d68b 100644 --- a/app/util/conf.py +++ b/app/util/conf.py @@ -2,7 +2,8 @@ from util.project_paths import JIRA_YML, CONFLUENCE_YML, BITBUCKET_YML, JSM_YML, CROWD_YML -TOOLKIT_VERSION = '5.0.0' +TOOLKIT_VERSION = '5.1.0' +UNSUPPORTED_VERSION = '3.0.0' def read_yml_file(file): diff --git a/app/util/confluence/index-wait-till-finished.sh b/app/util/confluence/index-wait-till-finished.sh new file mode 100644 index 000000000..36c281e8e --- /dev/null +++ b/app/util/confluence/index-wait-till-finished.sh @@ -0,0 +1,62 @@ +#!/bin/bash + +# Wait until the full re-index is finished + +SEARCH_LOG="/var/atlassian/application-data/confluence/logs/atlassian-confluence-index*" +CONFLUENCE_VERSION_FILE="/media/atl/confluence/shared-home/confluence.version" +PROGRESS="Re-index progress:.*" +FINISHED="Re-index progress: 100% complete" +
+CONFLUENCE_VERSION=$(sudo su confluence -c "cat ${CONFLUENCE_VERSION_FILE}") +if [[ -z "$CONFLUENCE_VERSION" ]]; then + echo The $CONFLUENCE_VERSION_FILE file does not exist or is empty. Please check if CONFLUENCE_VERSION_FILE variable \ + has a valid file path of the Confluence version file or set your Cluster CONFLUENCE_VERSION explicitly. + exit 1 +fi +echo "Confluence Version: ${CONFLUENCE_VERSION}" + +if [ "$(sudo su confluence -c "ls -l ""$SEARCH_LOG"" 2>/dev/null | wc -l")" -gt 0 ] +then + echo "Log files were found:" + sudo su confluence -c "ls $SEARCH_LOG" +else + echo "ERROR: There are no log files found like $SEARCH_LOG" + echo "Make sure your Confluence version is 7.7.x or higher." + exit 1 +fi + +TIMEOUT=21600 # 6 hours +COUNTER=0 +SLEEP_TIME=60 +ATTEMPTS=$((TIMEOUT / SLEEP_TIME)) + +while [ ${COUNTER} -lt ${ATTEMPTS} ];do + grep_result=$(sudo su -c "grep -h -o \"$PROGRESS\" $SEARCH_LOG" 2>/dev/null | tail -1) + echo "Status:" + echo "$grep_result" + if [ -z "$grep_result" ];then + echo "ERROR: $PROGRESS was not found in $SEARCH_LOG" + echo "Check if the index process was started." + exit 1 + fi + finished=$(echo "$grep_result" | grep "$FINISHED") + if [ -z "$finished" ];then + echo "Waiting for the index to finish, attempt ${COUNTER}/${ATTEMPTS}; sleeping for ${SLEEP_TIME} seconds." + echo # New line + sleep ${SLEEP_TIME} + (( COUNTER++ )) || true + else + echo "Index finished successfully." + break + fi +done + +if [ "${COUNTER}" -eq ${ATTEMPTS} ]; then + echo # move to a new line + echo "ERROR: Waiting for the index to finish timed out" + echo "See logs for more details:" + sudo su -c "ls -a $SEARCH_LOG" + exit 1 +fi + +echo "DCAPT util script execution is finished successfully." \ No newline at end of file diff --git a/app/util/confluence/populate_db.sh b/app/util/confluence/populate_db.sh index ec189ff6f..4733d0638 100644 --- a/app/util/confluence/populate_db.sh +++ b/app/util/confluence/populate_db.sh @@ -20,7 +20,7 @@ CONFLUENCE_DB_PASS="Password1!" SELECT_CONFLUENCE_SETTING_SQL="select BANDANAVALUE from BANDANA where BANDANACONTEXT = '_GLOBAL' and BANDANAKEY = 'atlassian.confluence.settings';" # Confluence version variables -SUPPORTED_CONFLUENCE_VERSIONS=(7.0.5 7.4.9) +SUPPORTED_CONFLUENCE_VERSIONS=(7.4.11 7.13.0) if [[ ! $(systemctl status confluence) ]]; then echo "The Confluence service was not found on this host." \ @@ -53,8 +53,14 @@ if [[ ! "${SUPPORTED_CONFLUENCE_VERSIONS[*]}" =~ ${CONFLUENCE_VERSION} ]]; then echo "!!! Warning !!! This may break your Confluence instance. Also, note that downgrade is not supported by Confluence." # Check if --force flag is passed into command if [[ "$1" == "--force" ]]; then + # Check if version was specified after --force flag + if [[ -z "$2" ]]; then + echo "Error: --force flag requires a version after it."
+ echo "Specify one of these versions: ${SUPPORTED_CONFLUENCE_VERSIONS[*]}" + exit 1 + fi # Check if passed Confluence version is in list of supported - if [[ "${SUPPORTED_CONFLUENCE_VERSIONS[*]}" =~ ${2} ]]; then + if [[ " ${SUPPORTED_CONFLUENCE_VERSIONS[@]} " =~ " ${2} " ]]; then DB_DUMP_URL="${DATASETS_AWS_BUCKET}/$2/${DATASETS_SIZE}/${DB_DUMP_NAME}" echo "Force mode. Dataset URL: ${DB_DUMP_URL}" else diff --git a/app/util/confluence/upload_attachments.sh b/app/util/confluence/upload_attachments.sh index 50a62d58f..b9cd65c6a 100644 --- a/app/util/confluence/upload_attachments.sh +++ b/app/util/confluence/upload_attachments.sh @@ -4,7 +4,7 @@ ################### Variables section ################### # Confluence version variables CONFLUENCE_VERSION_FILE="/media/atl/confluence/shared-home/confluence.version" -SUPPORTED_CONFLUENCE_VERSIONS=(7.0.5 7.4.9) +SUPPORTED_CONFLUENCE_VERSIONS=(7.4.11 7.13.0) CONFLUENCE_VERSION=$(sudo su confluence -c "cat ${CONFLUENCE_VERSION_FILE}") if [[ -z "$CONFLUENCE_VERSION" ]]; then echo The $CONFLUENCE_VERSION_FILE file does not exists or emtpy. Please check if CONFLUENCE_VERSION_FILE variable \ @@ -73,7 +73,7 @@ fi echo "Step1: Download msrcync" # https://github.com/jbd/msrsync -cd ${TMP_DIR} || exit +cd ${TMP_DIR} || exit 1 if [[ -s msrsync ]]; then echo "msrsync already downloaded" else diff --git a/app/util/crowd/populate_db.sh b/app/util/crowd/populate_db.sh index bc5be2cac..185ac8bf0 100644 --- a/app/util/crowd/populate_db.sh +++ b/app/util/crowd/populate_db.sh @@ -17,7 +17,8 @@ CROWD_DB_USER="postgres" CROWD_DB_PASS="Password1!" # Crowd version variables -SUPPORTED_CROWD_VERSIONS=(4.3.0) +BASE_CROWD_VERSION=4.3.0 +SUPPORTED_CROWD_VERSIONS=(4.3.5) if [[ ! $(systemctl status crowd) ]]; then echo "The Crowd service was not found on this host." \ @@ -44,10 +45,10 @@ DB_DUMP_NAME="db.dump" if [[ ! 
"${SUPPORTED_CROWD_VERSIONS[*]}" =~ ${CROWD_VERSION} ]]; then echo "Crowd Version: ${CROWD_VERSION} is not officially supported by Data Center App Performance Toolkit." echo "Supported Crowd Versions: ${SUPPORTED_CROWD_VERSIONS[*]}" - echo "!!! Warning !!! Dump from version ${SUPPORTED_CROWD_VERSIONS[0]} would be used" + echo "!!! Warning !!! Dump from version $BASE_CROWD_VERSION would be used" fi -DB_DUMP_URL="${DATASETS_AWS_BUCKET}/${SUPPORTED_CROWD_VERSIONS[0]}/${DATASETS_SIZE}/${DB_DUMP_NAME}" +DB_DUMP_URL="${DATASETS_AWS_BUCKET}/$BASE_CROWD_VERSION/${DATASETS_SIZE}/${DB_DUMP_NAME}" echo "!!! Warning !!!" echo # move to a new line @@ -109,7 +110,21 @@ if [[ $? -ne 0 ]]; then exit 1 fi -echo "Step4: Download DB dump" +echo "Step4: Write 'base.url' property to file" +CROWD_BASE_URL_FILE="base_url" +if [[ -s ${CROWD_BASE_URL_FILE} ]]; then + echo "File ${CROWD_BASE_URL_FILE} was found. Base url: $(cat ${CROWD_BASE_URL_FILE})." +else + PGPASSWORD=${CROWD_DB_PASS} psql -h ${DB_HOST} -d ${CROWD_DB_NAME} -U ${CROWD_DB_USER} -Atc \ + "select property_value from cwd_property where property_name='base.url';" > ${CROWD_BASE_URL_FILE} + if [[ ! -s ${CROWD_BASE_URL_FILE} ]]; then + echo "Failed to get Base URL value from database. Check DB configuration variables." + exit 1 + fi + echo "$(cat ${CROWD_BASE_URL_FILE}) was written to the ${CROWD_BASE_URL_FILE} file." +fi + +echo "Step5: Download DB dump" rm -rf ${DB_DUMP_NAME} ARTIFACT_SIZE_BYTES=$(curl -sI ${DB_DUMP_URL} | grep "Content-Length" | awk {'print $2'} | tr -d '[:space:]') ARTIFACT_SIZE_GB=$((${ARTIFACT_SIZE_BYTES}/1024/1024/1024)) @@ -129,7 +144,7 @@ if [[ $? -ne 0 ]]; then exit 1 fi -echo "Step5: SQL Restore" +echo "Step6: SQL Restore" echo "Drop DB" PGPASSWORD=${CROWD_DB_PASS} dropdb -U ${CROWD_DB_USER} -h ${DB_HOST} ${CROWD_DB_NAME} if [[ $? -ne 0 ]]; then @@ -151,12 +166,30 @@ if [[ $? 
-ne 0 ]]; then exit 1 fi -echo "Step6: Start Crowd" +echo "Step7: Update 'base.url' property in database" +if [[ -s ${CROWD_BASE_URL_FILE} ]]; then + BASE_URL=$(cat ${CROWD_BASE_URL_FILE}) + if [[ $(PGPASSWORD=${CROWD_DB_PASS} psql -h ${DB_HOST} -d ${CROWD_DB_NAME} -U ${CROWD_DB_USER} -c \ + "UPDATE cwd_property SET property_value = '${BASE_URL}' WHERE property_name = 'base.url';") != "UPDATE 1" ]]; then + echo "Couldn't update database 'base.url' property. Please check your database connection." + exit 1 + else + echo "The database 'base.url' property was updated to ${BASE_URL}" + fi +else + echo "The ${CROWD_BASE_URL_FILE} file doesn't exist or is empty. Please check the file existence or the 'base.url' property in the database." + exit 1 +fi + +echo "Step8: Start Crowd" sudo systemctl start crowd rm -rf ${DB_DUMP_NAME} +echo "Step9: Remove ${CROWD_BASE_URL_FILE} file" +sudo rm ${CROWD_BASE_URL_FILE} + echo "DCAPT util script execution is finished successfully." echo # move to a new line echo "Important: new admin user credentials are admin/admin" -echo "Wait a couple of minutes until Crowd is started." +echo "Wait a couple of minutes until Crowd is started."
\ No newline at end of file diff --git a/app/util/data_preparation/confluence_prepare_data.py b/app/util/data_preparation/confluence_prepare_data.py index 1982c4c0c..9a505f5b4 100644 --- a/app/util/data_preparation/confluence_prepare_data.py +++ b/app/util/data_preparation/confluence_prepare_data.py @@ -1,8 +1,9 @@ import random import string - import urllib3 +from util.common_util import print_timing +from multiprocessing.pool import ThreadPool from util.conf import CONFLUENCE_SETTINGS from util.api.confluence_clients import ConfluenceRpcClient, ConfluenceRestClient from util.project_paths import CONFLUENCE_USERS, CONFLUENCE_PAGES, CONFLUENCE_BLOGS, CONFLUENCE_CUSTOM_PAGES @@ -25,13 +26,23 @@ def generate_random_string(length=20): return "".join([random.choice(string.ascii_lowercase) for _ in range(length)]) +@print_timing('Creating dataset') def __create_data_set(rest_client, rpc_client): dataset = dict() dataset[USERS] = __get_users(rest_client, rpc_client, CONFLUENCE_SETTINGS.concurrency) perf_user = random.choice(dataset[USERS])['user'] perf_user_api = ConfluenceRestClient(CONFLUENCE_SETTINGS.server_url, perf_user['username'], DEFAULT_USER_PASSWORD) - dataset[PAGES] = __get_pages(perf_user_api, 5000) - dataset[BLOGS] = __get_blogs(perf_user_api, 5000) + + pool = ThreadPool(processes=2) + async_pages = pool.apply_async(__get_pages, (perf_user_api, 5000)) + async_blogs = pool.apply_async(__get_blogs, (perf_user_api, 5000)) + + async_pages.wait() + async_blogs.wait() + + dataset[PAGES] = async_pages.get() + dataset[BLOGS] = async_blogs.get() + dataset[CUSTOM_PAGES] = __get_custom_pages(perf_user_api, 5000, CONFLUENCE_SETTINGS.custom_dataset_query) print(f'Users count: {len(dataset[USERS])}') print(f'Pages count: {len(dataset[PAGES])}') @@ -41,6 +52,7 @@ def __create_data_set(rest_client, rpc_client): return dataset +@print_timing('Getting users') def __get_users(confluence_api, rpc_api, count): errors_count = 0 cur_perf_users = 
confluence_api.get_users(DEFAULT_USER_PREFIX, count) @@ -65,6 +77,7 @@ def __get_users(confluence_api, rpc_api, count): return cur_perf_users +@print_timing('Getting pages') def __get_pages(confluence_api, count): pages = confluence_api.get_content_search( 0, count, cql='type=page' @@ -79,6 +92,7 @@ def __get_pages(confluence_api, count): return pages +@print_timing('Getting custom pages') def __get_custom_pages(confluence_api, count, cql): pages = [] if cql: @@ -89,6 +103,7 @@ def __get_custom_pages(confluence_api, count, cql): return pages +@print_timing('Getting blogs') def __get_blogs(confluence_api, count): blogs = confluence_api.get_content_search( 0, count, cql='type=blogpost' @@ -110,6 +125,7 @@ def __write_to_file(file_path, items): f.write(f"{item}\n") +@print_timing('Writing data to files') def write_test_data_to_files(dataset): pages = [f"{page['id']},{page['space']['key']}" for page in dataset[PAGES]] __write_to_file(CONFLUENCE_PAGES, pages) @@ -144,6 +160,7 @@ def __check_for_admin_permissions(confluence_api): raise SystemExit(f"The '{confluence_api.user}' user does not have admin permissions.") +@print_timing('Confluence data preparation') def main(): print("Started preparing data") diff --git a/app/util/data_preparation/jsm_prepare_data.py b/app/util/data_preparation/jsm_prepare_data.py index 6c8713b01..4000f027d 100644 --- a/app/util/data_preparation/jsm_prepare_data.py +++ b/app/util/data_preparation/jsm_prepare_data.py @@ -1,14 +1,12 @@ -import datetime -import functools + import random import string from concurrent.futures.thread import ThreadPoolExecutor -from datetime import timedelta from itertools import repeat -from timeit import default_timer as timer -import urllib3 +import urllib3 +from util.common_util import print_timing from util.api.abstract_clients import JSM_EXPERIMENTAL_HEADERS from util.api.jira_clients import JiraRestClient from util.api.jsm_clients import JsmRestClient @@ -54,26 +52,6 @@
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning) -def print_timing(message, sep='-'): - assert message is not None, "Message is not passed to print_timing decorator" - - def deco_wrapper(func): - @functools.wraps(func) - def wrapper(*args, **kwargs): - start = timer() - print(sep * 20) - print(f'{message} started {datetime.datetime.now().strftime("%H:%M:%S")}') - result = func(*args, **kwargs) - end = timer() - print(f"{message} finished in {timedelta(seconds=end - start)}") - print(sep * 20) - return result - - return wrapper - - return deco_wrapper - - def __calculate_issues_per_project(projects_count): calculated_issues_per_project_count = {} max_percentage_key = max(PROJECTS_ISSUES_PERC, key=int) diff --git a/app/util/jira/populate_db.sh b/app/util/jira/populate_db.sh index c659b1ac2..8c9b6f673 100644 --- a/app/util/jira/populate_db.sh +++ b/app/util/jira/populate_db.sh @@ -45,8 +45,8 @@ JIRA_DB_PASS="Password1!" # Jira/JSM supported versions -SUPPORTED_JIRA_VERSIONS=(8.5.15 8.13.7) -SUPPORTED_JSM_VERSIONS=(4.5.15 4.13.7) +SUPPORTED_JIRA_VERSIONS=(8.5.18 8.13.10) +SUPPORTED_JSM_VERSIONS=(4.5.18 4.13.10) SUPPORTED_VERSIONS=("${SUPPORTED_JIRA_VERSIONS[@]}") # JSM section @@ -90,8 +90,14 @@ if [[ ! "${SUPPORTED_VERSIONS[*]}" =~ ${JIRA_VERSION} ]]; then echo "!!! Warning !!! This may break your Jira instance." # Check if --force flag is passed into command if [[ ${force} == 1 ]]; then + # Check if version was specified after --force flag + if [[ -z ${version} ]]; then + echo "Error: --force flag requires version after it." + echo "Specify one of these versions: ${SUPPORTED_VERSIONS[*]}" + exit 1 + fi # Check if passed Jira version is in list of supported - if [[ "${SUPPORTED_VERSIONS[*]}" =~ ${version} ]]; then + if [[ " ${SUPPORTED_VERSIONS[@]} " =~ " ${version} " ]]; then DB_DUMP_URL="${DATASETS_AWS_BUCKET}/${version}/${DATASETS_SIZE}/${DB_DUMP_NAME}" echo "Force mode. 
Dataset URL: ${DB_DUMP_URL}" # If there is no DOWNGRADE_OPT - set it diff --git a/app/util/jira/upload_attachments.sh b/app/util/jira/upload_attachments.sh index 5b60e1607..6ef63364c 100644 --- a/app/util/jira/upload_attachments.sh +++ b/app/util/jira/upload_attachments.sh @@ -28,8 +28,8 @@ JIRA_VERSION_FILE="/media/atl/jira/shared/jira-software.version" # Jira/JSM supported versions -SUPPORTED_JIRA_VERSIONS=(8.5.15 8.13.7) -SUPPORTED_JSM_VERSIONS=(4.5.15 4.13.7) +SUPPORTED_JIRA_VERSIONS=(8.5.18 8.13.10) +SUPPORTED_JSM_VERSIONS=(4.5.18 4.13.10) SUPPORTED_VERSIONS=("${SUPPORTED_JIRA_VERSIONS[@]}") if [[ ${jsm} == 1 ]]; then @@ -107,7 +107,7 @@ fi echo "Step1: Download msrsync" # https://github.com/jbd/msrsync -cd ${TMP_DIR} +cd ${TMP_DIR} || exit 1 if [[ -s msrsync ]]; then echo "msrsync already downloaded" else diff --git a/app/util/pre_run/check_for_updates.py b/app/util/pre_run/check_for_updates.py new file mode 100644 index 000000000..1c214292f --- /dev/null +++ b/app/util/pre_run/check_for_updates.py @@ -0,0 +1,21 @@ +from util.common_util import get_latest_version, get_current_version, get_unsupported_version + +latest_version = get_latest_version() +current_version = get_current_version() +unsupported_version = get_unsupported_version() + +if latest_version is None: + print('Warning: failed to get the latest version') +elif unsupported_version is None: + print('Warning: failed to get the unsupported version') +elif current_version <= unsupported_version: + raise SystemExit(f"DCAPT version {current_version} is no longer supported. " + f"Consider an upgrade to the latest version: {latest_version}") +elif current_version < latest_version: + print(f"Warning: DCAPT version {current_version} is outdated. " + f"Consider upgrading to the latest version: {latest_version}.") +elif current_version == latest_version: + print(f"Info: DCAPT version {current_version} is the latest.") +else: + print(f"Info: DCAPT version {current_version} " + f"is ahead of the latest production version: {latest_version}.") diff --git a/docs/dc-apps-performance-toolkit-user-guide-bitbucket.md b/docs/dc-apps-performance-toolkit-user-guide-bitbucket.md index 594098091..bb9f555e4 100644 --- a/docs/dc-apps-performance-toolkit-user-guide-bitbucket.md +++ b/docs/dc-apps-performance-toolkit-user-guide-bitbucket.md @@ -4,18 +4,27 @@ platform: platform product: marketplace category: devguide subcategory: build -date: "2021-06-16" +date: "2021-09-16" --- # Data Center App Performance Toolkit User Guide For Bitbucket This document walks you through the process of testing your app on Bitbucket using the Data Center App Performance Toolkit. These instructions focus on producing the required [performance and scale benchmarks for your Data Center app](https://developer.atlassian.com/platform/marketplace/dc-apps-performance-and-scale-testing/). -To use the Data Center App Performance Toolkit, you'll need to: +In this document, we cover the use of the Data Center App Performance Toolkit on two types of environments: -1. [Set up Bitbucket Data Center on AWS](#instancesetup). -1. [Load an enterprise-scale dataset on your Bitbucket Data Center deployment](#preloading). -1. [Set up an execution environment for the toolkit](#executionhost). -1. [Run all the testing scenarios in the toolkit](#testscenario). +**[Development environment](#mainenvironmentdev)**: a Bitbucket Data Center environment for a test run of the Data Center App Performance Toolkit and development of [app-specific actions](#appspecificactions). We recommend you use the [AWS Quick Start for Bitbucket Data Center](https://aws.amazon.com/quickstart/architecture/bitbucket/) with the parameters prescribed here. + +1. [Set up a development environment Bitbucket Data Center on AWS](#devinstancesetup). +2.
[Create a dataset for the development environment](#devdataset). +3. [Run toolkit on the development environment locally](#devtestscenario). +4. [Develop and test app-specific actions locally](#devappaction). + +**[Enterprise-scale environment](#mainenvironmententerprise)**: Bitbucket Data Center environment used to generate Data Center App Performance Toolkit test results for the Marketplace approval process. Preferably, use the [AWS Quick Start for Bitbucket Data Center](https://aws.amazon.com/quickstart/architecture/bitbucket/) with the parameters prescribed below. These parameters provision larger, more powerful infrastructure for your Bitbucket Data Center. + +5. [Set up an enterprise-scale environment Bitbucket Data Center on AWS](#instancesetup). +6. [Load an enterprise-scale dataset on your Bitbucket Data Center deployment](#preloading). +7. [Set up an execution environment for the toolkit](#executionhost). +8. [Run all the testing scenarios in the toolkit](#testscenario). {{% note %}} For simple spikes or tests, you can skip steps 1-2 and target any Bitbucket test instance. When you [set up your execution environment](#executionhost), you may need to edit the scripts according to your test instance's data set. @@ -23,7 +32,211 @@ For simple spikes or tests, you can skip steps 1-2 and target any Bitbucket test --- -## 1. Setting up Bitbucket Data Center +## Development environment + +Running the tests in a development environment helps familiarize you with the toolkit. It'll also provide you with a lightweight and less expensive environment for developing. Once you're ready to generate test results for the Marketplace Data Center Apps Approval process, run the toolkit in an **enterprise-scale environment**. + +### 1. 
Setting up Bitbucket Data Center development environment + +We recommend that you set up a development environment using the [AWS Quick Start for Bitbucket Data Center](https://aws.amazon.com/quickstart/architecture/bitbucket/) (**How to deploy** tab). All the instructions on this page are optimized for AWS. If you already have an existing Bitbucket Data Center environment, you can use that instead (if so, skip to [Create a dataset for the development environment](#devdataset)). + +#### Using the AWS Quick Start for Bitbucket + +If you are a new user, perform an end-to-end deployment. This involves deploying Bitbucket into a _new_ ASI: + +Navigate to **[AWS Quick Start for Bitbucket Data Center](https://aws.amazon.com/quickstart/architecture/bitbucket/) > How to deploy** tab **> Deploy into a new ASI** link. + +If you have already deployed the ASI separately by using the [ASI Quick Start](https://aws.amazon.com/quickstart/architecture/atlassian-standard-infrastructure/) or by deploying another Atlassian product (Jira, Bitbucket, or Confluence Data Center development environment) with ASI, deploy Bitbucket into your existing ASI: + +Navigate to **[AWS Quick Start for Bitbucket Data Center](https://aws.amazon.com/quickstart/architecture/bitbucket/) > How to deploy** tab **> Deploy into your existing ASI** link. + +{{% note %}} +You are responsible for the cost of AWS services used while running this Quick Start reference deployment. There is no additional cost for using the Quick Start itself. See [Amazon EC2 pricing](https://aws.amazon.com/ec2/pricing/) for more detail. +{{% /note %}} + +To reduce costs, we recommend keeping your deployment up and running only during the performance runs. + +#### AWS cost estimation for the development environment + +The AWS Bitbucket Data Center development environment infrastructure costs about 25-40 USD per working week, depending on factors such as region, instance type, database deployment type, and others.
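As a rough sanity check of that estimate, the weekly figure can be derived from hourly on-demand rates. A minimal sketch; the rates below are illustrative placeholders, not current AWS prices, and storage/IOPS costs are ignored:

``` python
# Illustrative on-demand hourly rates (placeholders, NOT current AWS pricing).
HOURLY_RATES_USD = {
    'bitbucket_node_t3.medium': 0.0416,
    'file_server_m4.xlarge': 0.20,
    'db_db.t3.medium': 0.068,
    'elasticsearch_m4.large': 0.15,
}


def weekly_cost(rates, hours_per_day=8, days=5):
    """Cost of keeping the stack up only during working hours,
    as the guide recommends (8h x 5 days by default)."""
    return sum(rates.values()) * hours_per_day * days


print(f"~${weekly_cost(HOURLY_RATES_USD):.2f} per working week")
```

Real numbers depend on the region and on whether the deployment is stopped outside working hours.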
+ +#### Quick Start parameters for development environment + +All important parameters are listed and described in this section. For all remaining parameters, we recommend using the Quick Start defaults. + +**Bitbucket setup** + +| Parameter | Recommended value | +| --------- | ----------------- | +| Bitbucket Product | Software | +| Version | The Data Center App Performance Toolkit officially supports `7.6.9`, `6.10.13` ([Long Term Support releases](https://confluence.atlassian.com/enterprise/atlassian-enterprise-releases-948227420.html)) and `7.0.5` Platform Release | + +**Cluster nodes** + +| Parameter | Recommended value | +| --------- | ----------------- | +| Cluster node instance type | [t3.medium](https://aws.amazon.com/ec2/instance-types/t3/) (we recommend this instance type for its good balance between price and performance in testing environments) | +| Maximum number of cluster nodes | 1 | +| Minimum number of cluster nodes | 1 | +| Cluster node instance volume size | 50 | + +**File server** + +| Parameter | Recommended Value | +| --------- | ----------------- | +| File server instance type | m4.xlarge | +| Home directory size | 100 | + + +**Database** + +| Parameter | Recommended Value | +| --------- | ----------------- | +| The database engine to deploy with | PostgreSQL | +| The database engine version to use | 11 | +| Database instance class | [db.t3.medium](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html#Concepts.DBInstanceClass.Summary) | +| RDS Provisioned IOPS | 1000 | +| Master password | Password1! | +| Enable RDS Multi-AZ deployment | false | +| Bitbucket database password | Password1!
| +| Database storage | 100 | + +**Elasticsearch** + +| Parameter | Recommended Value | +| --------- | ----------------- | +| Elasticsearch master user password | (leave blank) | +| Elasticsearch instance type | m4.large.elasticsearch | +| Elasticsearch disk-space per node (GB) | 100 | + +**Networking (for new ASI)** + +| Parameter | Recommended Value | +| --------- | ----------------- | +| Trusted IP range | 0.0.0.0/0 _(for public access) or your own trusted IP range_ | +| Availability Zones | _Select two availability zones in your region_ | +| Permitted IP range | 0.0.0.0/0 _(for public access) or your own trusted IP range_ | +| Make instance internet facing | true | +| Key Name | _The EC2 Key Pair to allow SSH access. See [Amazon EC2 Key Pairs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) for more info._ | + +**Networking (for existing ASI)** + +| Parameter | Recommended Value | +| --------- | ----------------- | +| Make instance internet facing | true | +| Permitted IP range | 0.0.0.0/0 _(for public access) or your own trusted IP range_ | +| Key Name | _The EC2 Key Pair to allow SSH access. See [Amazon EC2 Key Pairs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) for more info._ | + +### Running the setup wizard + +After successfully deploying Bitbucket Data Center in AWS, you'll need to configure it: + +1. In the AWS console, go to **Services** > **CloudFormation** > **Stack** > **Stack details** > **Select your stack**. +1. On the **Outputs** tab, copy the value of the **LoadBalancerURL** key. +1. Open **LoadBalancerURL** in your browser. This will take you to the Bitbucket setup wizard. +1. On the **Bitbucket setup** page, populate the following fields: + - **Application title**: any name for your Bitbucket Data Center deployment + - **Base URL**: your stack's Elastic LoadBalancer URL + - **License key**: select new evaluation license or existing license checkbox + Click **Next**. +1. 
On the **Administrator account setup** page, populate the following fields: + - **Username**: admin _(recommended)_ + - **Full name**: any full name of the admin user + - **Email address**: email address of the admin user + - **Password**: admin _(recommended)_ + - **Confirm Password**: admin _(recommended)_ + Click **Go to Bitbucket**. + +--- + +### 2. Generate dataset for development environment + +After creating the development environment Bitbucket Data Center, generate a test dataset to run the Data Center App Performance Toolkit: +- Create at least one project +- Create a repository with some files in a project +- Create a new branch in the repository, make and push changes to the branch, and create a pull request + +--- + +### 3. Run toolkit on the development environment locally + +{{% warning %}} +Make sure **English (United States)** language is selected as a default language on the **![cog icon](/platform/marketplace/images/cog.png) > Server settings > Language** page. Other languages are **not supported** by the toolkit. +{{% /warning %}} + +1. Clone [Data Center App Performance Toolkit](https://github.com/atlassian/dc-app-performance-toolkit) locally. +1. Follow the [README.md](https://github.com/atlassian/dc-app-performance-toolkit/blob/master/README.md) instructions to set up the toolkit locally. +1. Navigate to the `dc-app-performance-toolkit/app` folder. +1. Open the `bitbucket.yml` file and fill in the following variables: + - `application_hostname`: your_dc_bitbucket_instance_hostname without protocol. + - `application_protocol`: http or https. + - `application_port`: for HTTP - 80, for HTTPS - 443, 8080, 7990 or your instance-specific port. + - `secure`: True or False. Default value is True. Set False to allow insecure connections, e.g. when using a self-signed SSL certificate. + - `application_postfix`: empty by default; e.g., `/bitbucket` for a URL like http://localhost:7990/bitbucket. + - `admin_login`: admin user username.
+ - `admin_password`: admin user password. + - `load_executor`: executor for load tests - [jmeter](https://jmeter.apache.org/) + - `concurrency`: `1` - number of concurrent JMeter users. + - `test_duration`: `5m` - duration of the performance run. + - `ramp-up`: `1s` - amount of time it will take JMeter to add all test users to test execution. + - `total_actions_per_hour`: `3270` - number of total JMeter actions per hour. + - `WEBDRIVER_VISIBLE`: visibility of Chrome browser during selenium execution (False by default). + +1. Run bzt. + + ``` bash + bzt bitbucket.yml + ``` + +1. Review the resulting table in the console log. All JMeter and Selenium actions should have a 95+% success rate. +If some actions do not have a 95+% success rate, refer to the following logs in the `dc-app-performance-toolkit/app/results/bitbucket/YY-MM-DD-hh-mm-ss` folder: + + - `results_summary.log`: detailed run summary + - `results.csv`: aggregated .csv file with all actions and timings + - `bzt.log`: logs of the Taurus tool execution + - `jmeter.*`: logs of the JMeter tool execution + - `pytest.*`: logs of Pytest-Selenium execution + +{{% warning %}} +Do not proceed to the next step until all actions have a 95+% success rate. Ask [support](#support) if the logs analysis above did not help. +{{% /warning %}} + +--- + +### 4. Develop and test app-specific actions locally +Data Center App Performance Toolkit has its own set of default test actions for Bitbucket Data Center: JMeter and Selenium for load and UI tests respectively. + +**App-specific action** - an action (performance test) you have to develop to cover the main use cases of your application. The performance test should focus on the common usage of your application, not on covering all possible functionality of your app. For example, an application setup screen or other one-time use cases are out of scope of performance testing. + +1. Define the main use cases of your app. Usually, an app has one or two main use cases. +1.
If your app adds new UI elements in Bitbucket Data Center, a Selenium app-specific action has to be developed. +1. If your app introduces a new endpoint or extensively calls the existing Bitbucket Data Center API, JMeter app-specific actions have to be developed. + + +{{% note %}} +We strongly recommend developing your app-specific actions on the development environment to reduce AWS infrastructure costs. +{{% /note %}} + +#### Example of app-specific Selenium action development +You develop an app that adds some additional fields to specific types of Bitbucket issues. In this case, you should develop a Selenium app-specific action: + +1. Extend the example of an app-specific action in `dc-app-performance-toolkit/app/extension/bitbucket/extension_ui.py`. +[Code example.](https://github.com/atlassian/dc-app-performance-toolkit/blob/master/app/extension/bitbucket/extension_ui.py) +The test has to open the app-specific issue and measure the time to load it. +1. If you need to run `app_specific_action` as a specific user, uncomment the `app_specific_user_login` function in the [code example](https://github.com/atlassian/dc-app-performance-toolkit/blob/master/app/extension/bitbucket/extension_ui.py). Note that in this case `test_1_selenium_custom_action` should run just before the `test_2_selenium_z_log_out` action. +1. In `dc-app-performance-toolkit/app/selenium_ui/bitbucket-ui.py`, review and uncomment the following block of code to execute the newly created app-specific actions: +``` python +# def test_1_selenium_custom_action(webdriver, datasets, screen_shots): +# app_specific_action(webdriver, datasets) +``` + +4. Run the toolkit with the `bzt bitbucket.yml` command to ensure that all Selenium actions, including `app_specific_action`, are successful. + +## Enterprise-scale environment + +After adding your custom app-specific actions, you should now be ready to run the required tests for the Marketplace Data Center Apps Approval process.
To do this, you'll need an **enterprise-scale environment**. + +### 5. Set up an enterprise-scale environment Bitbucket Data Center on AWS We recommend that you use the [AWS Quick Start for Bitbucket Data Center](https://aws.amazon.com/quickstart/architecture/bitbucket/) (**How to deploy** tab) to deploy a Bitbucket Data Center testing environment. This Quick Start will allow you to deploy Bitbucket Data Center with a new [Atlassian Standard Infrastructure](https://aws.amazon.com/quickstart/architecture/atlassian-standard-infrastructure/) (ASI) or into an existing one. @@ -99,7 +312,7 @@ All important parameters are listed and described in this section. For all other | Parameter | Recommended Value | | --------- | ----------------- | -| Version | The Data Center App Performance Toolkit officially supports `7.6.7`, `6.10.11` ([Long Term Support releases](https://confluence.atlassian.com/enterprise/atlassian-enterprise-releases-948227420.html)) and `7.0.5` Platform Release | +| Version | The Data Center App Performance Toolkit officially supports `7.6.9`, `6.10.13` ([Long Term Support releases](https://confluence.atlassian.com/enterprise/atlassian-enterprise-releases-948227420.html)) and `7.0.5` Platform Release | **Cluster nodes** @@ -193,7 +406,7 @@ After [Preloading your Bitbucket deployment with an enterprise-scale dataset](#p --- -## 2. Preloading your Bitbucket deployment with an enterprise-scale dataset +### 6. Preloading your Bitbucket deployment with an enterprise-scale dataset Data dimensions and values for an enterprise-scale dataset are listed and described in the following table. @@ -210,9 +423,9 @@ Data dimensions and values for an enterprise-scale dataset are listed and descri All the datasets use the standard `admin`/`admin` credentials. {{% /note %}} -Pre-loading the dataset is a three-step process: +Pre-loading the dataset is a two-step process: -1. [Importing the main dataset](#importingdataset). 
To help you out, we provide an enterprise-scale dataset you can import either via the [populate_db.sh](https://github.com/atlassian/dc-app-performance-toolkit/blob/master/app/util/bitbucket/populate_db.sh) script or restore from xml backup file. +1. [Importing the main dataset](#importingdataset). To help you out, we provide an enterprise-scale dataset you can import via the [populate_db.sh](https://github.com/atlassian/dc-app-performance-toolkit/blob/master/app/util/bitbucket/populate_db.sh) script. 1. [Restoring attachments](#copyingattachments). We also provide attachments, which you can pre-load via an [upload_attachments.sh](https://github.com/atlassian/dc-app-performance-toolkit/blob/master/app/util/bitbucket/upload_attachments.sh) script. The following subsections explain each step in greater detail. @@ -391,7 +604,7 @@ In case of any difficulties with Index generation, contact us for support in the --- -## 3. Setting up an execution environment +### 7. Setting up an execution environment For generating performance results suitable for Marketplace approval process use dedicated execution environment. This is a separate AWS EC2 instance to run the toolkit from. Running the toolkit from a dedicated instance but not from a local machine eliminates network fluctuations and guarantees stable CPU and memory performance. @@ -435,7 +648,7 @@ You'll need to run the toolkit for each [test scenario](#testscenario) in the ne --- -## 4. Running the test scenarios on your execution environment +### 8.
Running the test scenarios on your execution environment Using the Data Center App Performance Toolkit for [Performance and scale testing your Data Center app](/platform/marketplace/developing-apps-for-atlassian-data-center-products/) involves two test scenarios: @@ -534,35 +747,6 @@ The purpose of scalability testing is to reflect the impact on the customer expe For many apps and extensions to Atlassian products, there should not be a significant performance difference between operating on a single node or across many nodes in Bitbucket DC deployment. To demonstrate performance impacts of operating your app at scale, we recommend testing your Bitbucket DC app in a cluster. -#### Extending the base action - -You can find more info about the local Bitbucket setup for developer's purpose in the [README.md](https://github.com/atlassian/dc-app-performance-toolkit/blob/master/README.md) file. -Extension scripts, which extend the base Selenium (`bitbucket-ui.py`) scripts, are located in a separate folder (`dc-app-performance-toolkit/extension/bitbucket`). You can modify these scripts to include their app-specific actions. - -##### Modifying Selenium - -You can extend Selenium scripts to measure the end-to-end browser timings. - -We use **Pytest** to drive Selenium tests. The `bitbucket-ui.py` executor script is located in the `app/selenium_ui/` folder. This file contains all browser actions, defined by the `test_ functions`. These actions are executed one by one during the testing. 
- -#### Example of app-specific Selenium action development -You develop an app that adds additional UI elements to a repository page, in this case you should edit `dc-app-performance-toolkit/extension/bitbucket/extension_ui.py`: -[Code example.](https://github.com/atlassian/dc-app-performance-toolkit/blob/master/app/extension/bitbucket/extension_ui.py) - -In the `bitbucket-ui.py` script, view the following block of code: - -``` python -# def test_1_selenium_custom_action(webdriver, datasets, screen_shots): -# app_specific_action(webdriver, datasets) -``` -If you need to run `app_speicifc_action` as specific user uncomment `app_specific_user_login` function in [code example](https://github.com/atlassian/dc-app-performance-toolkit/blob/master/app/extension/bitbucket/extension_ui.py). Note, that in this case `test_1_selenium_custom_action` should follow just before `test_2_selenium_logout` action. - -To view more examples, see the `modules.py` file in the `selenium_ui/bitbucket` directory. - -#### Running tests with your modification - -To ensure that the test runs without errors in parallel, run your extension scripts with the base scripts as a sanity check. - ##### Run 3 (~1 hour) To receive scalability benchmark results for one-node Bitbucket DC **with** app-specific actions: diff --git a/docs/dc-apps-performance-toolkit-user-guide-confluence.md b/docs/dc-apps-performance-toolkit-user-guide-confluence.md index d9777b177..1151488c1 100644 --- a/docs/dc-apps-performance-toolkit-user-guide-confluence.md +++ b/docs/dc-apps-performance-toolkit-user-guide-confluence.md @@ -4,7 +4,7 @@ platform: platform product: marketplace category: devguide subcategory: build -date: "2021-06-16" +date: "2021-09-16" --- # Data Center App Performance Toolkit User Guide For Confluence @@ -66,7 +66,7 @@ All important parameters are listed and described in this section. 
For all other | Parameter | Recommended value | | --------- | ----------------- | | Collaborative editing mode | synchrony-local | -| Confluence Version | The Data Center App Performance Toolkit officially supports `7.4.9` ([Long Term Support release](https://confluence.atlassian.com/enterprise/atlassian-enterprise-releases-948227420.html)) and `7.0.5` Platform Release | +| Confluence Version | The Data Center App Performance Toolkit officially supports `7.13.0` and `7.4.11` ([Long Term Support release](https://confluence.atlassian.com/enterprise/atlassian-enterprise-releases-948227420.html)) | **Cluster nodes** @@ -380,7 +380,7 @@ All important parameters are listed and described in this section. For all other | Parameter | Recommended value | | --------- | ----------------- | | Collaborative editing mode | synchrony-local | -| Confluence Version | The Data Center App Performance Toolkit officially supports `7.4.9` ([Long Term Support release](https://confluence.atlassian.com/enterprise/atlassian-enterprise-releases-948227420.html)) and `7.0.5` Platform Release | +| Confluence Version | The Data Center App Performance Toolkit officially supports `7.13.0` and `7.4.11` ([Long Term Support release](https://confluence.atlassian.com/enterprise/atlassian-enterprise-releases-948227420.html)) | **Cluster nodes** @@ -597,16 +597,39 @@ Do not close or interrupt the session. It will take some time to upload attachme For more information, go to [Re-indexing Confluence](https://confluence.atlassian.com/doc/content-index-administration-148844.html). -{{% note %}} -For Confluence 7, `populate_db.sh` script triggers index process automatically. So no need to start index manually once again, just wait until current index process is finished. -{{% /note %}} +The index process is triggered automatically after the `populate_db.sh` script execution. + +For Confluence **7.4.x**: -For Confluence 6: 1.
Log in as a user with the **Confluence System Administrators** [global permission](https://confluence.atlassian.com/doc/global-permissions-overview-138709.html). 1. Go to **![cog icon](/platform/marketplace/images/cog.png) > General Configuration > Content Indexing**. -1. Click **Rebuild** and wait until re-indexing is completed. +1. Wait until re-indexing is completed. + +For Confluence **7.13.x**: + +1. Using SSH, connect to the Confluence node via the Bastion instance: + + For Linux or MacOS, run the following commands in a terminal (for Windows, use the [Git Bash](https://git-scm.com/downloads) terminal): + + ```bash + ssh-add path_to_your_private_key_pem + export BASTION_IP=bastion_instance_public_ip + export NODE_IP=node_private_ip + export SSH_OPTS1='-o ServerAliveInterval=60' + export SSH_OPTS2='-o ServerAliveCountMax=30' + ssh ${SSH_OPTS1} ${SSH_OPTS2} -o "proxycommand ssh -W %h:%p ${SSH_OPTS1} ${SSH_OPTS2} ec2-user@${BASTION_IP}" ec2-user@${NODE_IP} + ``` + For more information, go to [Connecting your nodes over SSH](https://confluence.atlassian.com/adminjiraserver/administering-jira-data-center-on-aws-938846969.html#AdministeringJiraDataCenteronAWS-ConnectingtoyournodesoverSSH). +1. Download the [index-wait-till-finished.sh](https://github.com/atlassian/dc-app-performance-toolkit/blob/master/app/util/confluence/index-wait-till-finished.sh) script and make it executable: -Confluence will be unavailable for some time during the re-indexing process. + ``` bash + wget https://raw.githubusercontent.com/atlassian/dc-app-performance-toolkit/master/app/util/confluence/index-wait-till-finished.sh && chmod +x index-wait-till-finished.sh + ``` +1.
Run the script: + + ``` bash + ./index-wait-till-finished.sh 2>&1 | tee -a index-wait-till-finished.log + ``` ### Create Index Snapshot (~30 min) diff --git a/docs/dc-apps-performance-toolkit-user-guide-crowd.md b/docs/dc-apps-performance-toolkit-user-guide-crowd.md index 87e6f737d..81c0aab14 100644 --- a/docs/dc-apps-performance-toolkit-user-guide-crowd.md +++ b/docs/dc-apps-performance-toolkit-user-guide-crowd.md @@ -4,7 +4,7 @@ platform: platform product: marketplace category: devguide subcategory: build -date: "2021-06-16" +date: "2021-09-16" --- # Data Center App Performance Toolkit User Guide For Crowd @@ -99,7 +99,7 @@ All important parameters are listed and described in this section. For all other | Parameter | Recommended Value | | --------- | ----------------- | -| Version | The Data Center App Performance Toolkit officially supports `4.3.0` ([Long Term Support release](https://confluence.atlassian.com/crowd/crowd-release-notes-199094.html)) | +| Version | The Data Center App Performance Toolkit officially supports `4.3.5` | **Cluster nodes** diff --git a/docs/dc-apps-performance-toolkit-user-guide-jira.md b/docs/dc-apps-performance-toolkit-user-guide-jira.md index 7ca5e7e9a..8d4be5fc8 100644 --- a/docs/dc-apps-performance-toolkit-user-guide-jira.md +++ b/docs/dc-apps-performance-toolkit-user-guide-jira.md @@ -4,7 +4,7 @@ platform: platform product: marketplace category: devguide subcategory: build -date: "2021-06-16" +date: "2021-09-16" --- # Data Center App Performance Toolkit User Guide For Jira @@ -66,7 +66,7 @@ All important parameters are listed and described in this section. 
For all other | Parameter | Recommended value | | --------- | ----------------- | | Jira Product | Software | -| Version | The Data Center App Performance Toolkit officially supports `8.13.7`, `8.5.15` ([Long Term Support release](https://confluence.atlassian.com/enterprise/atlassian-enterprise-releases-948227420.html)) | +| Version | The Data Center App Performance Toolkit officially supports `8.13.10`, `8.5.18` ([Long Term Support release](https://confluence.atlassian.com/enterprise/atlassian-enterprise-releases-948227420.html)) | **Cluster nodes** @@ -387,7 +387,7 @@ All important parameters are listed and described in this section. For all other | Parameter | Recommended Value | | --------- | ----------------- | | Jira Product | Software | -| Version | The Data Center App Performance Toolkit officially supports `8.13.7`, `8.5.15` ([Long Term Support release](https://confluence.atlassian.com/enterprise/atlassian-enterprise-releases-948227420.html)) | +| Version | The Data Center App Performance Toolkit officially supports `8.13.10`, `8.5.18` ([Long Term Support release](https://confluence.atlassian.com/enterprise/atlassian-enterprise-releases-948227420.html)) | **Cluster nodes** diff --git a/docs/dc-apps-performance-toolkit-user-guide-jsm.md b/docs/dc-apps-performance-toolkit-user-guide-jsm.md index 3f0741aaa..e8a8658ba 100644 --- a/docs/dc-apps-performance-toolkit-user-guide-jsm.md +++ b/docs/dc-apps-performance-toolkit-user-guide-jsm.md @@ -4,7 +4,7 @@ platform: platform product: marketplace category: devguide subcategory: build -date: "2021-06-16" +date: "2021-09-16" --- # Data Center App Performance Toolkit User Guide For Jira Service Management @@ -66,7 +66,7 @@ All important parameters are listed and described in this section. 
For all other | Parameter | Recommended value | | --------- | ----------------- | | Jira Product | ServiceManagement | -| Version | The Data Center App Performance Toolkit officially supports `4.13.7`, `4.5.15` ([Long Term Support release](https://confluence.atlassian.com/enterprise/atlassian-enterprise-releases-948227420.html)) | +| Version | The Data Center App Performance Toolkit officially supports `4.13.10`, `4.5.18` ([Long Term Support release](https://confluence.atlassian.com/enterprise/atlassian-enterprise-releases-948227420.html)) | **Cluster nodes** @@ -543,7 +543,7 @@ All important parameters are listed and described in this section. For all other | Parameter | Recommended Value | | --------- | ----------------- | | Jira Product | ServiceManagement | -| Version | The Data Center App Performance Toolkit officially supports `4.13.7`, `4.5.15` ([Long Term Support release](https://confluence.atlassian.com/enterprise/atlassian-enterprise-releases-948227420.html)) | +| Version | The Data Center App Performance Toolkit officially supports `4.13.10`, `4.5.18` ([Long Term Support release](https://confluence.atlassian.com/enterprise/atlassian-enterprise-releases-948227420.html)) | **Cluster nodes** @@ -1013,6 +1013,7 @@ To receive scalability benchmark results for two-node Jira Service Management DC indexes - 100% Index restore complete ``` + {{% note %}} If index synchronization fails for some reason, you can manually copy the index from the first node. To do this, log in to the second node (use a private browser window and check the footer information to see which node is current), then go to **System** > **Indexing**. In the **Copy the Search Index from another node** section, choose the source node (first node) and the target node (current node). The index will be copied from one instance to another. 
{{% /note %}} diff --git a/requirements.txt b/requirements.txt index e191655b2..dbc977900 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,6 +1,5 @@ -matplotlib==3.4.2 -pandas==1.2.4 -importlib-metadata==4.5.0 -bzt==1.15.3 -locust==1.4.4 +matplotlib==3.4.3 +pandas==1.3.3 +bzt==1.15.4 +locust==1.6.0 filelock==3.0.12 \ No newline at end of file
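The bastion jump used in the Confluence SSH step above can also be kept in an `ssh_config` entry instead of a long one-off command. A sketch with placeholder values (the host alias and bracketed IPs are illustrative, not part of the toolkit):

```
# Hypothetical ~/.ssh/config entry; replace the bracketed placeholders
# with your bastion public IP and node private IP.
Host confluence-node
    HostName <node_private_ip>
    User ec2-user
    ProxyCommand ssh -W %h:%p -o ServerAliveInterval=60 -o ServerAliveCountMax=30 ec2-user@<bastion_public_ip>
    ServerAliveInterval 60
    ServerAliveCountMax 30
```

With this in place, `ssh confluence-node` behaves like the `proxycommand` invocation shown in the guide.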
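The `index-wait-till-finished.sh` step above boils down to polling until re-indexing reports done. As a rough illustration only (this is not the toolkit's actual script; the status URL and the `"reindexing"` JSON field are invented placeholders), such a wait loop can be sketched as:

```bash
# wait_for_reindex: poll a status URL until re-indexing is reported finished
# or a timeout is reached. The URL and "reindexing" field are assumptions.
wait_for_reindex() {
    local url="$1" timeout="${2:-3600}" interval="${3:-60}" elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        # Stop as soon as the (assumed) status payload says re-indexing is over
        if curl -s "$url" | grep -q '"reindexing":false'; then
            echo "re-indexing finished after ${elapsed}s"
            return 0
        fi
        sleep "$interval"
        elapsed=$((elapsed + interval))
    done
    echo "timed out" >&2
    return 1
}
```

Piping the real script's output through `tee -a`, as the guide does, additionally preserves a log of the wait.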