diff --git a/README.rst b/README.rst index cdb642dbf..6ada0fffc 100644 --- a/README.rst +++ b/README.rst @@ -87,7 +87,9 @@ Available Features `RAM usage <#memory-usage>`_, `CPU load <#cpu-load>`_, `flash/disk usage <#disk-usage>`_, mobile signal (LTE/UMTS/GSM `signal strength <#mobile-signal-strength>`_, `signal quality <#mobile-signal-quality>`_, - `access technology in use <#mobile-access-technology-in-use>`_) + `access technology in use <#mobile-access-technology-in-use>`_), `bandwidth <#iperf>`_, + `transferred data <#iperf>`_, `retransmits <#iperf>`_, `jitter <#iperf>`_, + `datagram <#iperf>`_, `datagram loss <#iperf>`_ * Maintains a record of `WiFi sessions <#monitoring-wifi-sessions>`_ with clients' MAC address and vendor, session start and stop time and connected device along with other information @@ -105,6 +107,8 @@ Available Features * Extensible metrics and charts: it's possible to define new metrics and new charts * API to retrieve the chart metrics and status information of each device based on `NetJSON DeviceMonitoring `_ +* `Iperf check <#iperf-1>`_ that provides network performance measurements such as maximum + achievable bandwidth, jitter, datagram loss, etc. of the OpenWrt device using the `iperf3 utility `_ ------------ @@ -376,7 +380,15 @@ Configure celery (you may use a different broker if you want): CELERY_BEAT_SCHEDULE = { 'run_checks': { 'task': 'openwisp_monitoring.check.tasks.run_checks', + # Executes only the ping & config applied checks every 5 min 'schedule': timedelta(minutes=5), + 'args': ( + [ # Check paths + 'openwisp_monitoring.check.classes.Ping', + 'openwisp_monitoring.check.classes.ConfigApplied', + ], + ), + 'relative': True, }, # Delete old WifiSession 'delete_wifi_clients_and_sessions': { @@ -803,6 +815,59 @@ Mobile Access Technology in use ..
figure:: https://github.com/openwisp/openwisp-monitoring/raw/docs/docs/access-technology.png :align: center +Iperf +~~~~~ + ++--------------------+---------------------------------------------------------------------------------------------------------------------------+ +| **measurement**: | ``iperf`` | ++--------------------+---------------------------------------------------------------------------------------------------------------------------+ +| **types**: | | ``int`` (iperf_result, sent_bytes_tcp, received_bytes_tcp, retransmits, sent_bytes_udp, total_packets, lost_packets), | +| | | ``float`` (sent_bps_tcp, received_bps_tcp, sent_bps_udp, jitter, lost_percent) | ++--------------------+---------------------------------------------------------------------------------------------------------------------------+ +| **fields**: | | ``iperf_result``, ``sent_bps_tcp``, ``received_bps_tcp``, ``sent_bytes_tcp``, ``received_bytes_tcp``, ``retransmits``, | +| | | ``sent_bps_udp``, ``sent_bytes_udp``, ``jitter``, ``total_packets``, ``lost_packets``, ``lost_percent`` | ++--------------------+---------------------------------------------------------------------------------------------------------------------------+ +| **configuration**: | ``iperf`` | ++--------------------+---------------------------------------------------------------------------------------------------------------------------+ +| **charts**: | ``bandwidth``, ``transfer``, ``retransmits``, ``jitter``, ``datagram``, ``datagram_loss`` | ++--------------------+---------------------------------------------------------------------------------------------------------------------------+ + +**Bandwidth**: + +.. figure:: https://github.com/openwisp/openwisp-monitoring/raw/docs/docs/1.1/bandwidth.png + :align: center + +**Transferred Data**: + +.. figure:: https://github.com/openwisp/openwisp-monitoring/raw/docs/docs/1.1/transferred-data.png + :align: center + +**Retransmits**: + +.. 
figure:: https://github.com/openwisp/openwisp-monitoring/raw/docs/docs/1.1/retransmits.png + :align: center + +**Jitter**: + +.. figure:: https://github.com/openwisp/openwisp-monitoring/raw/docs/docs/1.1/jitter.png + :align: center + +**Datagram**: + +.. figure:: https://github.com/openwisp/openwisp-monitoring/raw/docs/docs/1.1/datagram.png + :align: center + +**Datagram loss**: + +.. figure:: https://github.com/openwisp/openwisp-monitoring/raw/docs/docs/1.1/datagram-loss.png + :align: center + +For more info on how to configure and use Iperf, please refer to +`iperf check usage instructions <#iperf-check-usage-instructions>`_. + +**Note:** Iperf charts use ``connect_points=True`` in the +`default chart configuration <#openwisp_monitoring_charts>`_, which joins their individual chart data points. + Dashboard Monitoring Charts --------------------------- @@ -819,15 +884,15 @@ You can configure the interfaces included in the **General traffic chart** using the `"OPENWISP_MONITORING_DASHBOARD_TRAFFIC_CHART" <#openwisp_monitoring_dashboard_traffic_chart>`_ setting. -Adaptive byte charts +Adaptive size charts -------------------- .. figure:: https://github.com/openwisp/openwisp-monitoring/raw/docs/docs/1.1/adaptive-chart.png :align: center When configuring charts, it is possible to flag their unit -as ``adaptive_bytes``, this allows to make the charts more readable because -the units are shown in either `B`, `KB`, `MB`, `GB` and `TB` depending on +as ``adaptive_prefix``, this makes the charts more readable because +the units are shown as `K`, `M`, `G` or `T` depending on the size of each point, the summary values and Y axis are also resized.
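The idea behind the prefix scaling can be pictured with a few lines of Python. This is only an illustrative sketch, not the actual chart implementation: the function name is made up, and the fixed base-1000 divisor is an assumption (byte charts may well scale in multiples of 1024).

```python
def scale_adaptive(value, base_unit='B'):
    # illustrative sketch: pick a K/M/G/T prefix so the number stays readable
    for prefix in ('', 'K', 'M', 'G'):
        if abs(value) < 1000:
            return f'{value:.2f} {prefix}{base_unit}'
        value /= 1000
    return f'{value:.2f} T{base_unit}'

print(scale_adaptive(5_000))              # 5.00 KB
print(scale_adaptive(42_000_000, 'bps'))  # 42.00 Mbps
```

The same logic applies to the summary values and the Y axis: each number is divided down until it falls below the prefix threshold, and the resulting prefix is appended to the configured base unit.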
Example taken from the default configuration of the traffic chart: @@ -836,7 +901,17 @@ Example taken from the default configuration of the traffic chart: 'traffic': { # other configurations for this chart - 'unit': 'adaptive_bytes', + + # traffic measured in 'B' (bytes) + # unit B, KB, MB, GB, TB + 'unit': 'adaptive_prefix+B', + }, + + 'bandwidth': { + # adaptive unit for bandwidth related charts + # bandwidth measured in 'bps' (bits/sec) + # unit bps, Kbps, Mbps, Gbps, Tbps + 'unit': 'adaptive_prefix+bps', }, Monitoring WiFi Sessions @@ -939,6 +1014,231 @@ configuration status of a device changes, this ensures the check reacts quickly to events happening in the network and informs the user promptly if there's anything that is not working as intended. +Iperf +~~~~~ + +This check provides network performance measurements such as maximum achievable bandwidth, +jitter, datagram loss, etc. of the device using the `iperf3 utility `_. + +This check is **disabled by default**. You can enable automatic creation of this check by setting +`OPENWISP_MONITORING_AUTO_IPERF <#OPENWISP_MONITORING_AUTO_IPERF>`_ to ``True``. + +You can change the parameters used for iperf checks (e.g. timing, port, username, +password, rsa_public_key, etc.) using the `OPENWISP_MONITORING_IPERF_CHECK_CONFIG +<#OPENWISP_MONITORING_IPERF_CHECK_CONFIG>`_ setting. + +Iperf Check Usage Instructions +------------------------------ + +1. Make sure iperf is installed on the device +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Register your device with OpenWISP and make sure the `iperf3 openwrt package +`_ is installed on the device, +e.g.: + +.. code-block:: shell + + opkg install iperf3 # if using without authentication + opkg install iperf3-ssl # if using with authentication (read below for more info) + +2. 
Ensure SSH access from OpenWISP is enabled on your devices ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Follow the steps in the `"How to configure push updates" section of the +OpenWISP documentation +`_ +to allow SSH access to your device from OpenWISP. + +**Note:** Make sure the device connection is enabled +and working with the right update strategy, i.e. ``OpenWrt SSH``. + +.. image:: https://github.com/openwisp/openwisp-monitoring/raw/docs/docs/1.1/enable-openwrt-ssh.png + :alt: Enable ssh access from openwisp to device + :align: center + +3. Set up and configure Iperf server settings +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +After having deployed your Iperf servers, you need to +configure the iperf settings on the Django side of OpenWISP, +see the `test project settings for reference +`_. + +The host can be specified by hostname, IPv4 literal, or IPv6 literal. +Example: + +.. code-block:: python + + OPENWISP_MONITORING_IPERF_CHECK_CONFIG = { + # Public iperf servers are also available + # https://iperf.fr/iperf-servers.php#public-servers + # org pk : {'host', 'client_options'} + 'a9734710-db30-46b0-a2fc-01f01046fe4f': { + 'host': ['iperf.openwisp.io', '2001:db8::1', '192.168.5.2'], + 'client_options': { + 'port': 5209, + 'udp': {'bitrate': '20M'}, + 'tcp': {'bitrate': '0'}, + }, + } + } + +The celery-beat configuration for the iperf check needs to be added too: + +.. 
code-block:: python + + from celery.schedules import crontab + + # Celery TIME_ZONE should be equal to the Django TIME_ZONE + # in order to schedule run_iperf_checks at the correct time intervals + CELERY_TIMEZONE = TIME_ZONE + CELERY_BEAT_SCHEDULE = { + # Other celery beat configurations + # Celery beat configuration for iperf check + 'run_iperf_checks': { + 'task': 'openwisp_monitoring.check.tasks.run_checks', + # https://docs.celeryq.dev/en/latest/userguide/periodic-tasks.html#crontab-schedules + # Executes the check every 5 minutes from 00:00 to 06:00 (night) + 'schedule': crontab(minute='*/5', hour='0-6'), + # Iperf check path + 'args': (['openwisp_monitoring.check.classes.Iperf'],), + 'relative': True, + } + } + +Once the changes are saved, you will need to restart all the processes. + +**Note:** We recommend configuring this check to run during non-peak +traffic hours so it does not interfere with regular traffic. + +4. Run the check +~~~~~~~~~~~~~~~~ + +This should happen automatically if you have celery-beat correctly +configured and running in the background. +For testing purposes, you can run this check manually using the +`run_checks <#run_checks>`_ command. + +After that, you should see the iperf network measurement charts. + +.. image:: https://github.com/openwisp/openwisp-monitoring/raw/docs/docs/1.1/iperf-charts.png + :alt: Iperf network measurement charts + +Iperf authentication +~~~~~~~~~~~~~~~~~~~~ + +By default, the iperf check runs without any kind of **authentication**; +in this section we will explain how to configure **RSA authentication** +between the **client** and the **server** to restrict connections +to authenticated clients. + +Server side +########### + +1. Generate RSA keypair +^^^^^^^^^^^^^^^^^^^^^^^ + +.. 
code-block:: shell + + openssl genrsa -des3 -out private.pem 2048 + openssl rsa -in private.pem -outform PEM -pubout -out public_key.pem + openssl rsa -in private.pem -out private_key.pem -outform PEM + +After running the commands mentioned above, the public key will be stored in +``public_key.pem``, which will be used as the **rsa_public_key** parameter +in `OPENWISP_MONITORING_IPERF_CHECK_CONFIG +<#OPENWISP_MONITORING_IPERF_CHECK_CONFIG>`_, +and the private key will be contained in the file ``private_key.pem``, +which will be passed to the **--rsa-private-key-path** option when +starting the iperf server. + +2. Create user credentials +^^^^^^^^^^^^^^^^^^^^^^^^^^ + +.. code-block:: shell + + USER=iperfuser PASSWD=iperfpass + echo -n "{$USER}$PASSWD" | sha256sum | awk '{ print $1 }' + ---- + ee17a7f98cc87a6424fb52682396b2b6c058e9ab70e946188faa0714905771d7 # this is the hash of "{iperfuser}iperfpass" + +Add the above hash along with the username to ``credentials.csv``: + +.. code-block:: shell + + # file format: username,sha256 + iperfuser,ee17a7f98cc87a6424fb52682396b2b6c058e9ab70e946188faa0714905771d7 + +3. Now start the iperf server with auth options +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +.. code-block:: shell + + iperf3 -s --rsa-private-key-path ./private_key.pem --authorized-users-path ./credentials.csv + +Client side (OpenWrt device) +############################ + +1. Install iperf3-ssl +^^^^^^^^^^^^^^^^^^^^^ + +Install the `iperf3-ssl openwrt package +`_ +instead of the normal +`iperf3 openwrt package `_ +because the latter comes without support for authentication. + +You can also check the features of your installed **iperf3 openwrt package**: + +.. 
code-block:: shell + + root@vm-openwrt:~ iperf3 -v + iperf 3.7 (cJSON 1.5.2) + Linux vm-openwrt 4.14.171 #0 SMP Thu Feb 27 21:05:12 2020 x86_64 + Optional features available: CPU affinity setting, IPv6 flow label, TCP congestion algorithm setting, + sendfile / zerocopy, socket pacing, authentication # contains 'authentication' + +2. Configure iperf check auth parameters +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Now, add the following iperf authentication parameters +to `OPENWISP_MONITORING_IPERF_CHECK_CONFIG +<#OPENWISP_MONITORING_IPERF_CHECK_CONFIG>`_ +in the settings: + +.. code-block:: python + + OPENWISP_MONITORING_IPERF_CHECK_CONFIG = { + 'a9734710-db30-46b0-a2fc-01f01046fe4f': { + 'host': ['iperf1.openwisp.io', 'iperf2.openwisp.io', '192.168.5.2'], + # All three parameters (username, password, rsa_public_key) + # are required for iperf authentication + 'username': 'iperfuser', + 'password': 'iperfpass', + # Add the RSA public key without any headers, + # i.e. without -----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY----- + 'rsa_public_key': ( + """ + MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwuEm+iYrfSWJOupy6X3N + dxZvUCxvmoL3uoGAs0O0Y32unUQrwcTIxudy38JSuCccD+k2Rf8S4WuZSiTxaoea + 6Du99YQGVZeY67uJ21SWFqWU+w6ONUj3TrNNWoICN7BXGLE2BbSBz9YaXefE3aqw + GhEjQz364Itwm425vHn2MntSp0weWb4hUCjQUyyooRXPrFUGBOuY+VvAvMyAG4Uk + msapnWnBSxXt7Tbb++A5XbOMdM2mwNYDEtkD5ksC/x3EVBrI9FvENsH9+u/8J9Mf + 2oPl4MnlCMY86MQypkeUn7eVWfDnseNky7TyC0/IgCXve/iaydCCFdkjyo1MTAA4 + BQIDAQAB + """ + ), + 'client_options': { + 'port': 5209, + 'udp': {'bitrate': '20M'}, + 'tcp': {'bitrate': '0'}, + }, + } + } + Settings -------- @@ -1033,6 +1333,57 @@ validating custom parameters of a ``Check`` object. This setting allows you to choose whether `config_applied <#configuration-applied>`_ checks should be created automatically for newly registered devices. It's enabled by default. 
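For example, to opt out of the automatic creation of this check, the setting can be overridden in the project settings. A minimal sketch (the setting name comes from ``openwisp_monitoring/check/settings.py``; where exactly your ``settings.py`` lives depends on your deployment):

```python
# settings.py of the Django project:
# disable automatic creation of the Configuration Applied check
OPENWISP_MONITORING_AUTO_DEVICE_CONFIG_CHECK = False
```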
+``OPENWISP_MONITORING_AUTO_IPERF`` +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + ++--------------+-------------+ +| **type**: | ``bool`` | ++--------------+-------------+ +| **default**: | ``False`` | ++--------------+-------------+ + +This setting allows you to choose whether `iperf <#iperf-1>`_ checks should be +created automatically for newly registered devices. It's disabled by default. + +``OPENWISP_MONITORING_IPERF_CHECK_CONFIG`` +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + ++--------------+-------------+ +| **type**: | ``dict`` | ++--------------+-------------+ +| **default**: | ``{}`` | ++--------------+-------------+ + +This setting allows you to override the default iperf check configuration defined in +``openwisp_monitoring.check.classes.iperf.DEFAULT_IPERF_CHECK_CONFIG``. + +For example, if you want to change only the **port number** of +the ``iperf`` check, you can use: + +.. code-block:: python + + OPENWISP_MONITORING_IPERF_CHECK_CONFIG = { + 'a9734710-db30-46b0-a2fc-01f01046fe4f': { + 'host': ['iperf.openwisp.io'], + 'client_options': { + 'port': 6201, + }, + } + } + +``OPENWISP_MONITORING_IPERF_CHECK_DELETE_RSA_KEY`` +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + ++--------------+-------------------------------+ +| **type**: | ``bool`` | ++--------------+-------------------------------+ +| **default**: | ``True`` | ++--------------+-------------------------------+ + +This setting allows you to decide whether the +`iperf check RSA public key <#configure-iperf-check-for-authentication>`_ +will be deleted from the device after the check completes successfully. 
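For example, to keep the public key file on the device between check runs (a minimal settings sketch; the key is written to ``/tmp/iperf-public-key.pem`` on the device by the check itself):

```python
# settings.py of the Django project:
# keep /tmp/iperf-public-key.pem on the device after the check has run
OPENWISP_MONITORING_IPERF_CHECK_DELETE_RSA_KEY = False
```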
+ ``OPENWISP_MONITORING_AUTO_CHARTS`` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/openwisp_monitoring/check/apps.py b/openwisp_monitoring/check/apps.py index 15c9832f0..8f030cf4f 100644 --- a/openwisp_monitoring/check/apps.py +++ b/openwisp_monitoring/check/apps.py @@ -32,3 +32,11 @@ def _connect_signals(self): sender=load_model('config', 'Device'), dispatch_uid='auto_config_check', ) + if app_settings.AUTO_IPERF: + from .base.models import auto_iperf_check_receiver + + post_save.connect( + auto_iperf_check_receiver, + sender=load_model('config', 'Device'), + dispatch_uid='auto_iperf_check', + ) diff --git a/openwisp_monitoring/check/base/models.py b/openwisp_monitoring/check/base/models.py index a0bab9e66..e4abad47a 100644 --- a/openwisp_monitoring/check/base/models.py +++ b/openwisp_monitoring/check/base/models.py @@ -9,7 +9,11 @@ from jsonfield import JSONField from openwisp_monitoring.check import settings as app_settings -from openwisp_monitoring.check.tasks import auto_create_config_check, auto_create_ping +from openwisp_monitoring.check.tasks import ( + auto_create_config_check, + auto_create_iperf_check, + auto_create_ping, +) from openwisp_utils.base import TimeStampedEditableModel from ...utils import transaction_on_commit @@ -116,3 +120,21 @@ def auto_config_check_receiver(sender, instance, created, **kwargs): object_id=str(instance.pk), ) ) + + +def auto_iperf_check_receiver(sender, instance, created, **kwargs): + """ + Implements OPENWISP_MONITORING_AUTO_IPERF + The creation step is executed in the background + """ + # we need to skip this otherwise this task will be executed + # every time the configuration is requested via checksum + if not created: + return + transaction_on_commit( + lambda: auto_create_iperf_check.delay( + model=sender.__name__.lower(), + app_label=sender._meta.app_label, + object_id=str(instance.pk), + ) + ) diff --git a/openwisp_monitoring/check/classes/__init__.py b/openwisp_monitoring/check/classes/__init__.py index 
33bf8293c..4a85b5243 100644 --- a/openwisp_monitoring/check/classes/__init__.py +++ b/openwisp_monitoring/check/classes/__init__.py @@ -1,2 +1,3 @@ from .config_applied import ConfigApplied # noqa +from .iperf import Iperf # noqa from .ping import Ping # noqa diff --git a/openwisp_monitoring/check/classes/iperf.py b/openwisp_monitoring/check/classes/iperf.py new file mode 100644 index 000000000..7a93c056f --- /dev/null +++ b/openwisp_monitoring/check/classes/iperf.py @@ -0,0 +1,356 @@ +import logging +from functools import reduce +from json import loads +from json.decoder import JSONDecodeError + +from django.core.exceptions import ValidationError +from jsonschema import draft7_format_checker, validate +from jsonschema.exceptions import ValidationError as SchemaError +from swapper import load_model + +from openwisp_controller.connection.settings import UPDATE_STRATEGIES + +from .. import settings as app_settings +from .base import BaseCheck + +logger = logging.getLogger(__name__) + +Chart = load_model('monitoring', 'Chart') +Metric = load_model('monitoring', 'Metric') +AlertSettings = load_model('monitoring', 'AlertSettings') +DeviceConnection = load_model('connection', 'DeviceConnection') + +DEFAULT_IPERF_CHECK_CONFIG = { + 'host': { + 'type': 'array', + 'items': { + 'type': 'string', + }, + 'default': [], + }, + # username, password max_length chosen from iperf3 docs to avoid iperf param errors + 'username': {'type': 'string', 'default': '', 'minLength': 1, 'maxLength': 20}, + 'password': {'type': 'string', 'default': '', 'minLength': 1, 'maxLength': 20}, + 'rsa_public_key': { + 'type': 'string', + 'default': '', + }, + 'client_options': { + 'type': 'object', + 'properties': { + 'port': { + 'type': 'integer', + 'default': 5201, + # max, min port chosen from iperf3 docs + 'minimum': 1, + 'maximum': 65535, + }, + 'time': { + 'type': 'integer', + # Sets the total duration of the test in seconds + # (passed to the iperf3 '-t' option) 
+ 'default': 10, + 'minimum': 1, + # arbitrary chosen to avoid slowing down the queue (30min) + 'maximum': 1800, + }, + 'tcp': { + 'type': 'object', + 'properties': { + 'bitrate': { + 'type': 'string', + 'default': '0', + } + }, + }, + 'udp': { + 'type': 'object', + 'properties': { + 'bitrate': { + 'type': 'string', + 'default': '30M', + } + }, + }, + }, + }, +} + + +def get_iperf_schema(): + schema = { + '$schema': 'http://json-schema.org/draft-07/schema#', + 'type': 'object', + 'additionalProperties': True, + 'dependencies': { + 'username': ['password', 'rsa_public_key'], + 'password': ['username', 'rsa_public_key'], + 'rsa_public_key': ['username', 'password'], + }, + } + schema['properties'] = DEFAULT_IPERF_CHECK_CONFIG + return schema + + +class Iperf(BaseCheck): + + schema = get_iperf_schema() + + def validate_params(self, params=None): + try: + if not params: + params = self.params + validate(params, self.schema, format_checker=draft7_format_checker) + except SchemaError as e: + message = 'Invalid param' + path = '/'.join(e.path) + if path: + message = '{0} in "{1}"'.format(message, path) + message = '{0}: {1}'.format(message, e.message) + raise ValidationError({'params': message}) from e + + def check(self, store=True): + iperf_config = app_settings.IPERF_CHECK_CONFIG + if iperf_config: + org_id = str(self.related_object.organization.id) + self.validate_params(params=iperf_config[org_id]) + + port = self._get_param( + 'client_options.port', 'client_options.properties.port.default' + ) + time = self._get_param( + 'client_options.time', 'client_options.properties.time.default' + ) + tcp_bitrate = self._get_param( + 'client_options.tcp.bitrate', + 'client_options.properties.tcp.properties.bitrate.default', + ) + udp_bitrate = self._get_param( + 'client_options.udp.bitrate', + 'client_options.properties.udp.properties.bitrate.default', + ) + username = self._get_param('username', 'username.default') + device_connection = self._get_device_connection() + if not 
device_connection: + logger.warning( + f'Failed to get a working DeviceConnection for "{self.related_object}", iperf check skipped!' + ) + return + # The DeviceConnection could fail if the management tunnel is down. + if not self._connect(device_connection): + logger.warning( + f'DeviceConnection for "{self.related_object}" is not working, iperf check skipped!' + ) + return + server = self._get_iperf_servers()[0] + command_tcp = f'iperf3 -c {server} -p {port} -t {time} -b {tcp_bitrate} -J' + command_udp = f'iperf3 -c {server} -p {port} -t {time} -b {udp_bitrate} -u -J' + + # All three parameters, i.e. username, password and rsa_public_key, are required + # for authentication to work, checking only username here + if username: + password = self._get_param('password', 'password.default') + key = self._get_param('rsa_public_key', 'rsa_public_key.default') + rsa_public_key = self._get_complete_rsa_key(key) + rsa_public_key_path = '/tmp/iperf-public-key.pem' + + command_tcp = f'echo "{rsa_public_key}" > {rsa_public_key_path} && \ + IPERF3_PASSWORD="{password}" iperf3 -c {server} -p {port} -t {time} \ + --username "{username}" --rsa-public-key-path {rsa_public_key_path} -b {tcp_bitrate} -J' + + command_udp = f'IPERF3_PASSWORD="{password}" iperf3 -c {server} -p {port} -t {time} \ + --username "{username}" --rsa-public-key-path {rsa_public_key_path} -b {udp_bitrate} -u -J' + # If IPERF_CHECK_DELETE_RSA_KEY is enabled, remove the rsa_public_key from the device + if app_settings.IPERF_CHECK_DELETE_RSA_KEY: + command_udp = f'{command_udp} && rm {rsa_public_key_path}' + + # TCP mode + result, exit_code = self._exec_command(device_connection, command_tcp) + # Exit code 127 : command doesn't exist + if exit_code == 127: + logger.warning( + f'iperf3 is not installed on "{self.related_object}", error - {result.strip()}' + ) + return + + result_tcp = self._get_iperf_result(result, exit_code, mode='TCP') + # UDP mode + result, exit_code = self._exec_command(device_connection, command_udp) + result_udp = self._get_iperf_result(result, exit_code, mode='UDP') + result = {} + if store and result_tcp and result_udp: + # Store 1 in iperf_result if either mode passes, 0 when both fail + iperf_result = result_tcp['iperf_result'] | result_udp['iperf_result'] + result.update({**result_tcp, **result_udp, 'iperf_result': iperf_result}) + self.store_result(result) + device_connection.disconnect() + return result + + def _get_complete_rsa_key(self, key): + """ + Returns the RSA key in the proper PEM format + """ + pem_prefix = '-----BEGIN PUBLIC KEY-----\n' + pem_suffix = '\n-----END PUBLIC KEY-----' + key = key.strip() + return f'{pem_prefix}{key}{pem_suffix}' + + def _get_device_connection(self): + """ + Returns an active SSH DeviceConnection for a device + """ + openwrt_ssh = UPDATE_STRATEGIES[0][0] + device_connection = DeviceConnection.objects.filter( + device_id=self.related_object.id, + update_strategy=openwrt_ssh, + enabled=True, + ).first() + return device_connection + + def _get_iperf_servers(self): + """ + Returns the iperf test servers + """ + org_servers = self._get_param('host', 'host.default') + return org_servers + + def _exec_command(self, dc, command): + """ + Executes a command on the device (easier to mock) + """ + return dc.connector_instance.exec_command(command, raise_unexpected_exit=False) + + def _connect(self, dc): + """ + Connects to the device and returns its working status (easier to mock) + """ + return dc.connect() + + def _deep_get(self, dictionary, keys, default=None): + """ + Returns the dict value for a dotted key string, i.e. key1.key2_nested.key3_nested, + if found, otherwise returns default + """ + return reduce( + lambda d, key: d.get(key, default) if isinstance(d, dict) else default, + keys.split("."), + dictionary, + ) + + def _get_param(self, conf_key, default_conf_key): + """ + Returns the specified param or its default value according to the schema + """ + org_id = str(self.related_object.organization.id) + iperf_config = app_settings.IPERF_CHECK_CONFIG + + if 
self.params: + check_params = self._deep_get(self.params, conf_key) + if check_params: + return check_params + + if iperf_config: + iperf_config = iperf_config[org_id] + iperf_config_param = self._deep_get(iperf_config, conf_key) + if iperf_config_param: + return iperf_config_param + + return self._deep_get(DEFAULT_IPERF_CHECK_CONFIG, default_conf_key) + + def _get_iperf_result(self, result, exit_code, mode): + """ + Returns iperf test result + """ + try: + result = loads(result) + except JSONDecodeError: + # Errors other than iperf3 test errors + logger.warning( + f'Iperf check failed for "{self.related_object}", error - {result.strip()}' + ) + return + + if mode == 'TCP': + if exit_code != 0: + logger.warning( + f'Iperf check failed for "{self.related_object}", {result["error"]}' + ) + return { + 'iperf_result': 0, + 'sent_bps_tcp': 0.0, + 'received_bps_tcp': 0.0, + 'sent_bytes_tcp': 0, + 'received_bytes_tcp': 0, + 'retransmits': 0, + } + else: + sent = result['end']['sum_sent'] + received = result['end']['sum_received'] + return { + 'iperf_result': 1, + 'sent_bps_tcp': float(sent['bits_per_second']), + 'received_bps_tcp': float(received['bits_per_second']), + 'sent_bytes_tcp': sent['bytes'], + 'received_bytes_tcp': received['bytes'], + 'retransmits': sent['retransmits'], + } + + elif mode == 'UDP': + if exit_code != 0: + logger.warning( + f'Iperf check failed for "{self.related_object}", {result["error"]}' + ) + return { + 'iperf_result': 0, + 'sent_bps_udp': 0.0, + 'sent_bytes_udp': 0, + 'jitter': 0.0, + 'total_packets': 0, + 'lost_packets': 0, + 'lost_percent': 0.0, + } + else: + return { + 'iperf_result': 1, + 'sent_bps_udp': float(result['end']['sum']['bits_per_second']), + 'sent_bytes_udp': result['end']['sum']['bytes'], + 'jitter': float(result['end']['sum']['jitter_ms']), + 'total_packets': result['end']['sum']['packets'], + 'lost_packets': result['end']['sum']['lost_packets'], + 'lost_percent': float(result['end']['sum']['lost_percent']), + } + + def 
store_result(self, result): + """ + Store result in the DB + """ + metric = self._get_metric() + copied = result.copy() + iperf_result = copied.pop('iperf_result') + metric.write(iperf_result, extra_values=copied) + + def _get_metric(self): + """ + Gets or creates metric + """ + metric, created = self._get_or_create_metric() + if created: + self._create_charts(metric) + return metric + + def _create_charts(self, metric): + """ + Creates iperf related charts + """ + charts = [ + 'bandwidth', + 'transfer', + 'retransmits', + 'jitter', + 'datagram', + 'datagram_loss', + ] + for chart in charts: + chart = Chart(metric=metric, configuration=chart) + chart.full_clean() + chart.save() diff --git a/openwisp_monitoring/check/settings.py b/openwisp_monitoring/check/settings.py index 4575c8eca..c6b8a0368 100644 --- a/openwisp_monitoring/check/settings.py +++ b/openwisp_monitoring/check/settings.py @@ -5,9 +5,14 @@ ( ('openwisp_monitoring.check.classes.Ping', 'Ping'), ('openwisp_monitoring.check.classes.ConfigApplied', 'Configuration Applied'), + ('openwisp_monitoring.check.classes.Iperf', 'Iperf'), ), ) AUTO_PING = get_settings_value('AUTO_PING', True) AUTO_CONFIG_CHECK = get_settings_value('AUTO_DEVICE_CONFIG_CHECK', True) MANAGEMENT_IP_ONLY = get_settings_value('MANAGEMENT_IP_ONLY', True) PING_CHECK_CONFIG = get_settings_value('PING_CHECK_CONFIG', {}) +AUTO_IPERF = get_settings_value('AUTO_IPERF', False) +IPERF_CHECK_CONFIG = get_settings_value('IPERF_CHECK_CONFIG', {}) +IPERF_CHECK_DELETE_RSA_KEY = get_settings_value('IPERF_CHECK_DELETE_RSA_KEY', True) +CHECKS_LIST = get_settings_value('CHECK_LIST', list(dict(CHECK_CLASSES).keys())) diff --git a/openwisp_monitoring/check/tasks.py b/openwisp_monitoring/check/tasks.py index 2ae62bc0d..76ce82bb1 100644 --- a/openwisp_monitoring/check/tasks.py +++ b/openwisp_monitoring/check/tasks.py @@ -4,9 +4,11 @@ from celery import shared_task from django.conf import settings from django.contrib.contenttypes.models import ContentType -from 
django.core.exceptions import ObjectDoesNotExist +from django.core.exceptions import ImproperlyConfigured, ObjectDoesNotExist from swapper import load_model +from .settings import CHECKS_LIST + logger = logging.getLogger(__name__) @@ -15,7 +17,7 @@ def get_check_model(): @shared_task -def run_checks(): +def run_checks(checks=None): """ Retrieves the id of all active checks in chunks of 2000 items and calls the ``perform_check`` task (defined below) for each of them. @@ -23,9 +25,22 @@ def run_checks(): This allows to enqueue all the checks that need to be performed and execute them in parallel with multiple workers if needed. """ + # If checks is None, We should execute all the checks + if checks is None: + checks = CHECKS_LIST + + if not isinstance(checks, list): + raise ImproperlyConfigured( + f'Check path {checks} should be of type "list"' + ) # pragma: no cover + if not all(check_path in CHECKS_LIST for check_path in checks): + raise ImproperlyConfigured( + f'Check path {checks} should be in {CHECKS_LIST}' + ) # pragma: no cover + iterator = ( get_check_model() - .objects.filter(is_active=True) + .objects.filter(is_active=True, check_type__in=checks) .only('id') .values('id') .iterator() @@ -100,3 +115,30 @@ def auto_create_config_check( ) check.full_clean() check.save() + + +@shared_task +def auto_create_iperf_check( + model, app_label, object_id, check_model=None, content_type_model=None +): + """ + Called by openwisp_monitoring.check.models.auto_iperf_check_receiver + """ + Check = check_model or get_check_model() + iperf_check_path = 'openwisp_monitoring.check.classes.Iperf' + has_check = Check.objects.filter( + object_id=object_id, content_type__model='device', check_type=iperf_check_path + ).exists() + # create new check only if necessary + if has_check: + return + content_type_model = content_type_model or ContentType + ct = content_type_model.objects.get(app_label=app_label, model=model) + check = Check( + name='Iperf', + check_type=iperf_check_path, + 
content_type=ct, + object_id=object_id, + ) + check.full_clean() + check.save() diff --git a/openwisp_monitoring/check/tests/iperf_test_utils.py b/openwisp_monitoring/check/tests/iperf_test_utils.py new file mode 100644 index 000000000..97a8a959a --- /dev/null +++ b/openwisp_monitoring/check/tests/iperf_test_utils.py @@ -0,0 +1,931 @@ +# flake8: noqa + +RESULT_TCP = """ +{ + "start": { + "connected": [ + { + "socket": 5, + "local_host": "127.0.0.1", + "local_port": 54966, + "remote_host": "127.0.0.1", + "remote_port": 5201 + } + ], + "version": "iperf 3.9", + "system_info": "Linux openwisp-desktop 5.11.2-51-generic #58~20.04.1-Ubuntu SMP Tue Jun 14 11:29:12 UTC 2022 x86_64", + "timestamp": { + "time": "Thu, 30 Jun 2022 21:39:55 GMT", + "timesecs": 1656625195 + }, + "connecting_to": { + "host": "localhost", + "port": 5201 + }, + "cookie": "npx4ad65t3j4wginxr4a7mqedmkhhspx3sob", + "tcp_mss_default": 32768, + "sock_bufsize": 0, + "sndbuf_actual": 16384, + "rcvbuf_actual": 131072, + "test_start": { + "protocol": "TCP", + "num_streams": 1, + "blksize": 131072, + "omit": 0, + "duration": 10, + "bytes": 0, + "blocks": 0, + "reverse": 0, + "tos": 0 + } + }, + "intervals": [ + { + "streams": [ + { + "socket": 5, + "start": 0, + "end": 1.000048, + "seconds": 1.000048041343689, + "bytes": 5790760960, + "bits_per_second": 46323862219.414116, + "retransmits": 0, + "snd_cwnd": 1506109, + "rtt": 22, + "rttvar": 3, + "pmtu": 65535, + "omitted": false, + "sender": true + } + ], + "sum": { + "start": 0, + "end": 1.000048, + "seconds": 1.000048041343689, + "bytes": 5790760960, + "bits_per_second": 46323862219.414116, + "retransmits": 0, + "omitted": false, + "sender": true + } + }, + { + "streams": [ + { + "socket": 5, + "start": 1.000048, + "end": 2.000185, + "seconds": 1.0001369714736938, + "bytes": 5463080960, + "bits_per_second": 43698662209.83867, + "retransmits": 0, + "snd_cwnd": 2160939, + "rtt": 22, + "rttvar": 3, + "pmtu": 65535, + "omitted": false, + "sender": true + } + ], 
+ "sum": { + "start": 1.000048, + "end": 2.000185, + "seconds": 1.0001369714736938, + "bytes": 5463080960, + "bits_per_second": 43698662209.83867, + "retransmits": 0, + "omitted": false, + "sender": true + } + }, + { + "streams": [ + { + "socket": 5, + "start": 2.000185, + "end": 3.00019, + "seconds": 1.0000050067901611, + "bytes": 5679349760, + "bits_per_second": 45434570598.638954, + "retransmits": 0, + "snd_cwnd": 2553837, + "rtt": 21, + "rttvar": 1, + "pmtu": 65535, + "omitted": false, + "sender": true + } + ], + "sum": { + "start": 2.000185, + "end": 3.00019, + "seconds": 1.0000050067901611, + "bytes": 5679349760, + "bits_per_second": 45434570598.638954, + "retransmits": 0, + "omitted": false, + "sender": true + } + }, + { + "streams": [ + { + "socket": 5, + "start": 3.00019, + "end": 4.000232, + "seconds": 1.0000419616699219, + "bytes": 5710807040, + "bits_per_second": 45684539320.4405, + "retransmits": 0, + "snd_cwnd": 2553837, + "rtt": 24, + "rttvar": 5, + "pmtu": 65535, + "omitted": false, + "sender": true + } + ], + "sum": { + "start": 3.00019, + "end": 4.000232, + "seconds": 1.0000419616699219, + "bytes": 5710807040, + "bits_per_second": 45684539320.4405, + "retransmits": 0, + "omitted": false, + "sender": true + } + }, + { + "streams": [ + { + "socket": 5, + "start": 4.000232, + "end": 5.000158, + "seconds": 0.999925971031189, + "bytes": 5307105280, + "bits_per_second": 42459985508.942955, + "retransmits": 0, + "snd_cwnd": 3208667, + "rtt": 27, + "rttvar": 4, + "pmtu": 65535, + "omitted": false, + "sender": true + } + ], + "sum": { + "start": 4.000232, + "end": 5.000158, + "seconds": 0.999925971031189, + "bytes": 5307105280, + "bits_per_second": 42459985508.942955, + "retransmits": 0, + "omitted": false, + "sender": true + } + }, + { + "streams": [ + { + "socket": 5, + "start": 5.000158, + "end": 6.000229, + "seconds": 1.0000710487365723, + "bytes": 5308416000, + "bits_per_second": 42464310964.35657, + "retransmits": 0, + "snd_cwnd": 3208667, + "rtt": 
28, + "rttvar": 1, + "pmtu": 65535, + "omitted": false, + "sender": true + } + ], + "sum": { + "start": 5.000158, + "end": 6.000229, + "seconds": 1.0000710487365723, + "bytes": 5308416000, + "bits_per_second": 42464310964.35657, + "retransmits": 0, + "omitted": false, + "sender": true + } + }, + { + "streams": [ + { + "socket": 5, + "start": 6.000229, + "end": 7.000056, + "seconds": 0.9998270273208618, + "bytes": 5241569280, + "bits_per_second": 41939808681.0701, + "retransmits": 0, + "snd_cwnd": 3208667, + "rtt": 23, + "rttvar": 4, + "pmtu": 65535, + "omitted": false, + "sender": true + } + ], + "sum": { + "start": 6.000229, + "end": 7.000056, + "seconds": 0.9998270273208618, + "bytes": 5241569280, + "bits_per_second": 41939808681.0701, + "retransmits": 0, + "omitted": false, + "sender": true + } + }, + { + "streams": [ + { + "socket": 5, + "start": 7.000056, + "end": 8.000202, + "seconds": 1.0001460313797, + "bytes": 5734400000, + "bits_per_second": 45868501759.40331, + "retransmits": 0, + "snd_cwnd": 3208667, + "rtt": 22, + "rttvar": 1, + "pmtu": 65535, + "omitted": false, + "sender": true + } + ], + "sum": { + "start": 7.000056, + "end": 8.000202, + "seconds": 1.0001460313797, + "bytes": 5734400000, + "bits_per_second": 45868501759.40331, + "retransmits": 0, + "omitted": false, + "sender": true + } + }, + { + "streams": [ + { + "socket": 5, + "start": 8.000202, + "end": 9.0003, + "seconds": 1.0000979900360107, + "bytes": 5415895040, + "bits_per_second": 43322915105.98867, + "retransmits": 0, + "snd_cwnd": 3208667, + "rtt": 35, + "rttvar": 12, + "pmtu": 65535, + "omitted": false, + "sender": true + } + ], + "sum": { + "start": 8.000202, + "end": 9.0003, + "seconds": 1.0000979900360107, + "bytes": 5415895040, + "bits_per_second": 43322915105.98867, + "retransmits": 0, + "omitted": false, + "sender": true + } + }, + { + "streams": [ + { + "socket": 5, + "start": 9.0003, + "end": 10.000218, + "seconds": 0.999917984008789, + "bytes": 5402787840, + "bits_per_second": 
43225847930.76398, + "retransmits": 0, + "snd_cwnd": 3208667, + "rtt": 26, + "rttvar": 17, + "pmtu": 65535, + "omitted": false, + "sender": true + } + ], + "sum": { + "start": 9.0003, + "end": 10.000218, + "seconds": 0.999917984008789, + "bytes": 5402787840, + "bits_per_second": 43225847930.76398, + "retransmits": 0, + "omitted": false, + "sender": true + } + } + ], + "end": { + "streams": [ + { + "sender": { + "socket": 5, + "start": 0, + "end": 10.000218, + "seconds": 10.000218, + "bytes": 55054172160, + "bits_per_second": 44042377604.16823, + "retransmits": 0, + "max_snd_cwnd": 3208667, + "max_rtt": 35, + "min_rtt": 21, + "mean_rtt": 25, + "sender": true + }, + "receiver": { + "socket": 5, + "start": 0, + "end": 10.000272, + "seconds": 10.000218, + "bytes": 55054172160, + "bits_per_second": 44042139781.797935, + "sender": true + } + } + ], + "sum_sent": { + "start": 0, + "end": 10.000218, + "seconds": 10.000218, + "bytes": 55054172160, + "bits_per_second": 44042377604.16823, + "retransmits": 0, + "sender": true + }, + "sum_received": { + "start": 0, + "end": 10.000272, + "seconds": 10.000272, + "bytes": 55054172160, + "bits_per_second": 44042139781.797935, + "sender": true + }, + "cpu_utilization_percent": { + "host_total": 99.49882081069975, + "host_user": 0.6620490539150914, + "host_system": 98.83676176238454, + "remote_total": 0.377797593572381, + "remote_user": 0.02174276147834767, + "remote_system": 0.35605477540538377 + }, + "sender_tcp_congestion": "cubic", + "receiver_tcp_congestion": "cubic" + } +} +""" + +RESULT_UDP = """ +{ + "start": { + "connected": [ + { + "socket": 5, + "local_host": "127.0.0.1", + "local_port": 54477, + "remote_host": "127.0.0.1", + "remote_port": 5201 + } + ], + "version": "iperf 3.9", + "system_info": "openwisp-desktop 5.11.2-51-generic #58~20.04.1-Ubuntu SMP Tue Jun 14 11:29:12 UTC 2022 x86_64", + "timestamp": { + "time": "Thu, 30 Jun 2022 21:10:31 GMT", + "timesecs": 1656623431 + }, + "connecting_to": { + "host": "localhost", 
+ "port": 5201 + }, + "cookie": "kvuxkz3ncutquvpl2evufmdkn726molzocot", + "sock_bufsize": 0, + "sndbuf_actual": 212992, + "rcvbuf_actual": 212992, + "test_start": { + "protocol": "UDP", + "num_streams": 1, + "blksize": 32768, + "omit": 0, + "duration": 10, + "bytes": 0, + "blocks": 0, + "reverse": 0, + "tos": 0 + } + }, + "intervals": [ + { + "streams": [ + { + "socket": 5, + "start": 0, + "end": 1.000057, + "seconds": 1.0000569820404053, + "bytes": 131072, + "bits_per_second": 1048516.253404483, + "packets": 4, + "omitted": false, + "sender": true + } + ], + "sum": { + "start": 0, + "end": 1.000057, + "seconds": 1.0000569820404053, + "bytes": 131072, + "bits_per_second": 1048516.253404483, + "packets": 4, + "omitted": false, + "sender": true + } + }, + { + "streams": [ + { + "socket": 5, + "start": 1.000057, + "end": 2.000079, + "seconds": 1.000022053718567, + "bytes": 131072, + "bits_per_second": 1048552.875509981, + "packets": 4, + "omitted": false, + "sender": true + } + ], + "sum": { + "start": 1.000057, + "end": 2.000079, + "seconds": 1.000022053718567, + "bytes": 131072, + "bits_per_second": 1048552.875509981, + "packets": 4, + "omitted": false, + "sender": true + } + }, + { + "streams": [ + { + "socket": 5, + "start": 2.000079, + "end": 3.000079, + "seconds": 1, + "bytes": 131072, + "bits_per_second": 1048576, + "packets": 4, + "omitted": false, + "sender": true + } + ], + "sum": { + "start": 2.000079, + "end": 3.000079, + "seconds": 1, + "bytes": 131072, + "bits_per_second": 1048576, + "packets": 4, + "omitted": false, + "sender": true + } + }, + { + "streams": [ + { + "socket": 5, + "start": 3.000079, + "end": 4.000079, + "seconds": 1, + "bytes": 131072, + "bits_per_second": 1048576, + "packets": 4, + "omitted": false, + "sender": true + } + ], + "sum": { + "start": 3.000079, + "end": 4.000079, + "seconds": 1, + "bytes": 131072, + "bits_per_second": 1048576, + "packets": 4, + "omitted": false, + "sender": true + } + }, + { + "streams": [ + { + "socket": 
5, + "start": 4.000079, + "end": 5.000182, + "seconds": 1.0001029968261719, + "bytes": 131072, + "bits_per_second": 1048468.0111225117, + "packets": 4, + "omitted": false, + "sender": true + } + ], + "sum": { + "start": 4.000079, + "end": 5.000182, + "seconds": 1.0001029968261719, + "bytes": 131072, + "bits_per_second": 1048468.0111225117, + "packets": 4, + "omitted": false, + "sender": true + } + }, + { + "streams": [ + { + "socket": 5, + "start": 5.000182, + "end": 6.000056, + "seconds": 0.9998739957809448, + "bytes": 131072, + "bits_per_second": 1048708.1416504055, + "packets": 4, + "omitted": false, + "sender": true + } + ], + "sum": { + "start": 5.000182, + "end": 6.000056, + "seconds": 0.9998739957809448, + "bytes": 131072, + "bits_per_second": 1048708.1416504055, + "packets": 4, + "omitted": false, + "sender": true + } + }, + { + "streams": [ + { + "socket": 5, + "start": 6.000056, + "end": 7.000056, + "seconds": 1, + "bytes": 131072, + "bits_per_second": 1048576, + "packets": 4, + "omitted": false, + "sender": true + } + ], + "sum": { + "start": 6.000056, + "end": 7.000056, + "seconds": 1, + "bytes": 131072, + "bits_per_second": 1048576, + "packets": 4, + "omitted": false, + "sender": true + } + }, + { + "streams": [ + { + "socket": 5, + "start": 7.000056, + "end": 8.000056, + "seconds": 1, + "bytes": 131072, + "bits_per_second": 1048576, + "packets": 4, + "omitted": false, + "sender": true + } + ], + "sum": { + "start": 7.000056, + "end": 8.000056, + "seconds": 1, + "bytes": 131072, + "bits_per_second": 1048576, + "packets": 4, + "omitted": false, + "sender": true + } + }, + { + "streams": [ + { + "socket": 5, + "start": 8.000056, + "end": 9.000057, + "seconds": 1.0000009536743164, + "bytes": 131072, + "bits_per_second": 1048575.0000009537, + "packets": 4, + "omitted": false, + "sender": true + } + ], + "sum": { + "start": 8.000056, + "end": 9.000057, + "seconds": 1.0000009536743164, + "bytes": 131072, + "bits_per_second": 1048575.0000009537, + "packets": 
4, + "omitted": false, + "sender": true + } + }, + { + "streams": [ + { + "socket": 5, + "start": 9.000057, + "end": 10.00006, + "seconds": 1.0000029802322388, + "bytes": 131072, + "bits_per_second": 1048572.8750093132, + "packets": 4, + "omitted": false, + "sender": true + } + ], + "sum": { + "start": 9.000057, + "end": 10.00006, + "seconds": 1.0000029802322388, + "bytes": 131072, + "bits_per_second": 1048572.8750093132, + "packets": 4, + "omitted": false, + "sender": true + } + } + ], + "end": { + "streams": [ + { + "udp": { + "socket": 5, + "start": 0, + "end": 10.00006, + "seconds": 10.00006, + "bytes": 1310720, + "bits_per_second": 1048569.7085817485, + "jitter_ms": 0.011259258240784126, + "lost_packets": 0, + "packets": 40, + "lost_percent": 0, + "out_of_order": 0, + "sender": true + } + } + ], + "sum": { + "start": 0, + "end": 10.000115, + "seconds": 10.000115, + "bytes": 1310720, + "bits_per_second": 1048569.7085817485, + "jitter_ms": 0.011259258240784126, + "lost_packets": 0, + "packets": 40, + "lost_percent": 0, + "sender": true + }, + "cpu_utilization_percent": { + "host_total": 0.6057128493969417, + "host_user": 0, + "host_system": 0.6057128493969417, + "remote_total": 0.016163250220207454, + "remote_user": 0.01616789349806445, + "remote_system": 0 + } + } +} +""" + +RESULT_FAIL = """ +{ + "start": { + "connected": [], + "version": "iperf 3.7", + "system_info": "Linux vm-openwrt 4.14.171 #0 SMP Thu Feb 27 21:05:12 2020 x86_64" + }, + "intervals": [], + "end": {}, + "error": "error - unable to connect to server: Connection refused" +} +""" +RESULT_AUTH_FAIL = """ +{ + "start": { + "connected": [], + "version": "iperf 3.7", + "system_info": "Linux vm-openwrt 4.14.171 #0 SMP Thu Feb 27 21:05:12 2020 x86_64", + "timestamp": { + "time": "Tue, 19 Jul 2022 12:23:38 UTC", + "timesecs": 1658233418 + }, + "connecting_to": { + "host": "192.168.5.109", + "port": 5201 + }, + "cookie": "llz5f6akwyonbtcj3fx4phvfaflohdlvxr4z", + "tcp_mss_default": 1460 + }, + 
"intervals": [], + "end": {}, + "error": "error - test authorization failed" +} +""" +PARAM_ERROR = """Usage: iperf3 [-s|-c host] [options] + iperf3 [-h|--help] [-v|--version] + +Server or Client: + -p, --port # server port to listen on/connect to + -f, --format [kmgtKMGT] format to report: Kbits, Mbits, Gbits, Tbits + -i, --interval # seconds between periodic throughput reports + -F, --file name xmit/recv the specified file + -A, --affinity n/n,m set CPU affinity + -B, --bind bind to the interface associated with the address + -V, --verbose more detailed output + -J, --json output in JSON format + --logfile f send output to a log file + --forceflush force flushing output at every interval + -d, --debug emit debugging output + -v, --version show version information and quit + -h, --help show this message and quit +Server specific: + -s, --server run in server mode + -D, --daemon run the server as a daemon + -I, --pidfile file write PID file + -1, --one-off handle one client connection then exit + --rsa-private-key-path path to the RSA private key used to decrypt + authentication credentials + --authorized-users-path path to the configuration file containing user + credentials +Client specific: + -c, --client run in client mode, connecting to + -u, --udp use UDP rather than TCP + --connect-timeout # timeout for control connection setup (ms) + -b, --bitrate #[KMG][/#] target bitrate in bits/sec (0 for unlimited) + (default 1 Mbit/sec for UDP, unlimited for TCP) + (optional slash and packet count for burst mode) + --pacing-timer #[KMG] set the timing for pacing, in microseconds (default 1000) + --fq-rate #[KMG] enable fair-queuing based socket pacing in + bits/sec (Linux only) + -t, --time # time in seconds to transmit for (default 10 secs) + -n, --bytes #[KMG] number of bytes to transmit (instead of -t) + -k, --blockcount #[KMG] number of blocks (packets) to transmit (instead of -t or -n) + -l, --length #[KMG] length of buffer to read or write + (default 128 KB for 
TCP, dynamic or 1460 for UDP) + --cport bind to a specific client port (TCP and UDP, default: ephemeral port) + -P, --parallel # number of parallel client streams to run + -R, --reverse run in reverse mode (server sends, client receives) + --bidir run in bidirectional mode. + Client and server send and receive data. + -w, --window #[KMG] set window size / socket buffer size + -C, --congestion set TCP congestion control algorithm (Linux and FreeBSD only) + -M, --set-mss # set TCP/SCTP maximum segment size (MTU - 40 bytes) + -N, --no-delay set TCP/SCTP no delay, disabling Nagle's Algorithm + -4, --version4 only use IPv4 + -6, --version6 only use IPv6 + -S, --tos N set the IP type of service, 0-255. + The usual prefixes for octal and hex can be used, + i.e. 52, 064 and 0x34 all specify the same value. + --dscp N or --dscp val set the IP dscp value, either 0-63 or symbolic. + Numeric values can be specified in decimal, + octal and hex (see --tos above). + -L, --flowlabel N set the IPv6 flow label (only supported on Linux) + -Z, --zerocopy use a 'zero copy' method of sending data + -O, --omit N omit the first n seconds + -T, --title str prefix every output line with this string + --extra-data str data string to include in client and server JSON + --get-server-output get results from server + --udp-counters-64bit use 64-bit counters in UDP test packets + --repeating-payload use repeating pattern in payload, instead of + randomized payload (like in iperf2) + --username username for authentication + --rsa-public-key-path path to the RSA public key used to encrypt + authentication credentials + +[KMG] indicates options that support a K/M/G suffix for kilo-, mega-, or giga- + +iperf3 homepage at: https://software.es.net/iperf/ +Report bugs to: https://github.com/esnet/iperf +iperf3: parameter error - you must specify username (max 20 chars), password (max 20 chars) and a path to a valid public rsa client to be used""" + +TEST_RSA_KEY = 
"""MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwuEm+iYrfSWJOupy6X3N +dxZvUCxvmoL3uoGAs0O0Y32unUQrwcTIxudy38JSuCccD+k2Rf8S4WuZSiTxaoea +6Du99YQGVZeY67uJ21SWFqWU+w6ONUj3TrNNWoICN7BXGLE2BbSBz9YaXefE3aqw +GhEjQz364Itwm425vHn2MntSp0weWb4hUCjQUyyooRXPrFUGBOuY+VvAvMyAG4Uk +msapnWnBSxXt7Tbb++A5XbOMdM2mwNYDEtkD5ksC/x3EVBrI9FvENsH9+u/8J9Mf +2oPl4MnlCMY86MQypkeUn7eVWfDnseNky7TyC0/IgCXve/iaydCCFdkjyo1MTAA4 +BQIDAQAB""" + +INVALID_PARAMS = [ + {'host': ''}, + {'host': 12}, + {'host': 'test.openwisp.io'}, + {'username': 121}, + {'password': -323}, + {'rsa_public_key': 1334}, + {'username': ''}, + {'password': 0}, + {'rsa_public_key': 0}, + { + 'username': 'openwisp-test-user', + 'password': 'open-pass', + 'rsa_public_key': -1, + }, + { + 'username': 1123, + 'password': 'rossi', + 'rsa_public_key': '', + }, + { + 'username': 'openwisp-test-user', + 'password': -214, + }, + { + 'client_options': { + 'port': 'testport', + 'time': 120, + 'tcp': {'bitrate': '10M'}, + 'udp': {'bitrate': '50M'}, + } + }, + { + 'host': ['test.openwisp.io'], + 'client_options': { + 'port': 'testport', + 'time': 120, + 'tcp': {'bitrate': '10M'}, + 'udp': {'bitrate': '50M'}, + }, + }, + { + 'host': ['test.openwisp.io'], + 'client_options': { + 'port': 70000, + 'time': 120, + 'tcp': {'bitrate': '10M'}, + 'udp': {'bitrate': '50M'}, + }, + }, + { + 'host': ['test.openwisp.io'], + 'client_options': { + 'port': -21, + 'time': 120, + 'tcp': {'bitrate': '10M'}, + 'udp': {'bitrate': '50M'}, + }, + }, + { + 'host': ['test.openwisp.io'], + 'client_options': { + 'port': 5201, + 'time': 1200000, + 'tcp': {'bitrate': '10M'}, + 'udp': {'bitrate': '50M'}, + }, + }, + { + 'host': ['test.openwisp.io'], + 'client_options': { + 'port': 5201, + 'time': 20, + 'tcp': {'bitrate': 10}, + 'udp': {'bitrate': '50M'}, + }, + }, + { + 'host': ['test.openwisp.io'], + 'client_options': { + 'port': 5201, + 'time': 120, + 'tcp': {'bitrate': '10M'}, + 'udp': {'bitrate': 50}, + }, + }, +] diff --git 
a/openwisp_monitoring/check/tests/test_iperf.py b/openwisp_monitoring/check/tests/test_iperf.py new file mode 100644 index 000000000..a9c3a253d --- /dev/null +++ b/openwisp_monitoring/check/tests/test_iperf.py @@ -0,0 +1,628 @@ +from json import loads +from unittest.mock import call, patch + +from django.core.exceptions import ValidationError +from django.test import TransactionTestCase +from swapper import load_model + +from openwisp_controller.connection.settings import UPDATE_STRATEGIES +from openwisp_controller.connection.tests.utils import CreateConnectionsMixin, SshServer +from openwisp_monitoring.check.classes.iperf import get_iperf_schema +from openwisp_monitoring.check.classes.iperf import logger as iperf_logger + +from ...device.tests import TestDeviceMonitoringMixin +from .. import settings as app_settings +from ..classes import Iperf +from .iperf_test_utils import ( + INVALID_PARAMS, + PARAM_ERROR, + RESULT_AUTH_FAIL, + RESULT_FAIL, + RESULT_TCP, + RESULT_UDP, + TEST_RSA_KEY, +) + +Chart = load_model('monitoring', 'Chart') +AlertSettings = load_model('monitoring', 'AlertSettings') +Metric = load_model('monitoring', 'Metric') +Check = load_model('check', 'Check') +Notification = load_model('openwisp_notifications', 'Notification') + + +class TestIperf(CreateConnectionsMixin, TestDeviceMonitoringMixin, TransactionTestCase): + + _IPERF = app_settings.CHECK_CLASSES[2][0] + _RESULT_KEYS = [ + 'iperf_result', + 'sent_bps_tcp', + 'received_bps_tcp', + 'sent_bytes_tcp', + 'received_bytes_tcp', + 'retransmits', + 'sent_bps_udp', + 'sent_bytes_udp', + 'jitter', + 'total_packets', + 'lost_packets', + 'lost_percent', + ] + + @classmethod + def setUpClass(cls): + super().setUpClass() + cls.mock_ssh_server = SshServer( + {'root': cls._TEST_RSA_PRIVATE_KEY_PATH} + ).__enter__() + cls.ssh_server.port = cls.mock_ssh_server.port + + @classmethod + def tearDownClass(cls): + super().tearDownClass() + cls.mock_ssh_server.__exit__() + + def _create_iperf_test_env(self): + 
ckey = self._create_credentials_with_key(port=self.ssh_server.port) + dc = self._create_device_connection(credentials=ckey) + dc.connect() + self.device = dc.device + self._EXPECTED_COMMAND_CALLS = [ + call(dc, 'iperf3 -c iperf.openwisptestserver.com -p 5201 -t 10 -b 0 -J'), + call( + dc, 'iperf3 -c iperf.openwisptestserver.com -p 5201 -t 10 -b 30M -u -J' + ), + ] + self._EXPECTED_WARN_CALLS = [ + call( + f'Iperf check failed for "{self.device}", error - unable to connect to server: Connection refused' # noqa + ), + call( + f'Iperf check failed for "{self.device}", error - unable to connect to server: Connection refused' # noqa + ), + ] + check = Check.objects.get(check_type=self._IPERF) + return check, dc + + def _set_auth_expected_calls(self, dc, org_id, config): + password = config[org_id]['password'] + username = config[org_id]['username'] + server = 'iperf.openwisptestserver.com' + test_prefix = '-----BEGIN PUBLIC KEY-----\n' + test_suffix = '\n-----END PUBLIC KEY-----' + key = config[org_id]['rsa_public_key'] + rsa_key_path = '/tmp/iperf-public-key.pem' + + self._EXPECTED_COMMAND_CALLS = [ + call( + dc, + f'echo "{test_prefix}{key}{test_suffix}" > {rsa_key_path} && \ + IPERF3_PASSWORD="{password}" iperf3 -c {server} -p 5201 -t 10 \ + --username "{username}" --rsa-public-key-path {rsa_key_path} -b 0 -J', + ), + call( + dc, + f'IPERF3_PASSWORD="{password}" iperf3 -c {server} -p 5201 -t 10 \ + --username "{username}" --rsa-public-key-path {rsa_key_path} -b 30M -u -J && rm {rsa_key_path}', + ), + ] + + def _assert_iperf_fail_result(self, result): + for key in self._RESULT_KEYS: + self.assertIn(key, result) + self.assertEqual(result['iperf_result'], 0) + self.assertEqual(result['sent_bps_tcp'], 0.0) + self.assertEqual(result['received_bps_tcp'], 0.0) + self.assertEqual(result['sent_bytes_tcp'], 0) + self.assertEqual(result['received_bytes_tcp'], 0) + self.assertEqual(result['retransmits'], 0) + self.assertEqual(result['sent_bps_udp'], 0.0) + 
self.assertEqual(result['sent_bytes_udp'], 0) + self.assertEqual(result['jitter'], 0.0) + self.assertEqual(result['total_packets'], 0) + self.assertEqual(result['lost_percent'], 0.0) + + @patch.object(Iperf, '_exec_command') + @patch.object( + Iperf, '_get_iperf_servers', return_value=['iperf.openwisptestserver.com'] + ) + @patch.object(iperf_logger, 'warning') + def test_iperf_check_no_params( + self, mock_warn, mock_get_iperf_servers, mock_exec_command + ): + mock_exec_command.side_effect = [(RESULT_TCP, 0), (RESULT_UDP, 0)] + + # By default check params {} + check, _ = self._create_iperf_test_env() + tcp_result = loads(RESULT_TCP)['end'] + udp_result = loads(RESULT_UDP)['end']['sum'] + result = check.perform_check(store=False) + for key in self._RESULT_KEYS: + self.assertIn(key, result) + self.assertEqual(result['iperf_result'], 1) + self.assertEqual( + result['sent_bps_tcp'], tcp_result['sum_sent']['bits_per_second'] + ) + self.assertEqual( + result['received_bytes_tcp'], tcp_result['sum_received']['bytes'] + ) + self.assertEqual(result['jitter'], udp_result['jitter_ms']) + self.assertEqual(result['total_packets'], udp_result['packets']) + self.assertEqual(mock_warn.call_count, 0) + self.assertEqual(mock_exec_command.call_count, 2) + self.assertEqual(mock_get_iperf_servers.call_count, 1) + mock_exec_command.assert_has_calls(self._EXPECTED_COMMAND_CALLS) + + @patch.object(Iperf, '_exec_command') + @patch.object( + Iperf, '_get_iperf_servers', return_value=['iperf.openwisptestserver.com'] + ) + @patch.object(iperf_logger, 'warning') + def test_iperf_check_params( + self, mock_warn, mock_get_iperf_servers, mock_exec_command + ): + mock_exec_command.side_effect = [(RESULT_TCP, 0), (RESULT_UDP, 0)] + tcp_result = loads(RESULT_TCP)['end'] + udp_result = loads(RESULT_UDP)['end']['sum'] + check, dc = self._create_iperf_test_env() + server = 'iperf.openwisptestserver.com' + test_prefix = '-----BEGIN PUBLIC KEY-----\n' + test_suffix = '\n-----END PUBLIC KEY-----' + 
rsa_key_path = '/tmp/test-rsa.pem' + test_params = { + 'username': 'openwisp-test-user', + 'password': 'openwisp_pass', + 'rsa_public_key': TEST_RSA_KEY, + 'client_options': { + 'port': 6201, + 'time': 20, + 'tcp': {'bitrate': '10M'}, + 'udp': {'bitrate': '30M'}, + }, + } + time = test_params['client_options']['time'] + port = test_params['client_options']['port'] + tcp_bitrate = test_params['client_options']['tcp']['bitrate'] + udp_bitrate = test_params['client_options']['udp']['bitrate'] + username = test_params['username'] + password = test_params['password'] + key = test_params['rsa_public_key'] + rsa_key_path = '/tmp/iperf-public-key.pem' + check.params = test_params + check.save() + self._EXPECTED_COMMAND_CALLS = [ + call( + dc, + f'echo "{test_prefix}{key}{test_suffix}" > {rsa_key_path} && \ + IPERF3_PASSWORD="{password}" iperf3 -c {server} -p {port} -t {time} \ + --username "{username}" --rsa-public-key-path {rsa_key_path} -b {tcp_bitrate} -J', + ), + call( + dc, + f'IPERF3_PASSWORD="{password}" iperf3 -c {server} -p {port} -t {time} \ + --username "{username}" --rsa-public-key-path {rsa_key_path} -b {udp_bitrate} -u -J && rm {rsa_key_path}', # noqa + ), + ] + result = check.perform_check(store=False) + for key in self._RESULT_KEYS: + self.assertIn(key, result) + self.assertEqual(result['iperf_result'], 1) + self.assertEqual( + result['sent_bps_tcp'], tcp_result['sum_sent']['bits_per_second'] + ) + self.assertEqual( + result['received_bytes_tcp'], tcp_result['sum_received']['bytes'] + ) + self.assertEqual(result['jitter'], udp_result['jitter_ms']) + self.assertEqual(result['total_packets'], udp_result['packets']) + self.assertEqual(mock_warn.call_count, 0) + self.assertEqual(mock_exec_command.call_count, 2) + self.assertEqual(mock_get_iperf_servers.call_count, 1) + mock_exec_command.assert_has_calls(self._EXPECTED_COMMAND_CALLS) + + @patch.object(Iperf, '_exec_command') + @patch.object( + Iperf, '_get_iperf_servers', 
return_value=['iperf.openwisptestserver.com'] + ) + @patch.object(iperf_logger, 'warning') + def test_iperf_check_config( + self, mock_warn, mock_get_iperf_servers, mock_exec_command + ): + mock_exec_command.side_effect = [(RESULT_TCP, 0), (RESULT_UDP, 0)] + tcp_result = loads(RESULT_TCP)['end'] + udp_result = loads(RESULT_UDP)['end']['sum'] + check, dc = self._create_iperf_test_env() + self._EXPECTED_COMMAND_CALLS = [ + call(dc, 'iperf3 -c iperf.openwisptestserver.com -p 9201 -t 120 -b 10M -J'), + call( + dc, 'iperf3 -c iperf.openwisptestserver.com -p 9201 -t 120 -b 50M -u -J' + ), + ] + org_id = str(self.device.organization.id) + iperf_config = { + org_id: { + 'client_options': { + 'port': 9201, + 'time': 120, + 'tcp': {'bitrate': '10M'}, + 'udp': {'bitrate': '50M'}, + } + } + } + with patch.object(app_settings, 'IPERF_CHECK_CONFIG', iperf_config): + with patch.object(Iperf, 'schema', get_iperf_schema()): + result = check.perform_check(store=False) + for key in self._RESULT_KEYS: + self.assertIn(key, result) + self.assertEqual(result['iperf_result'], 1) + self.assertEqual( + result['sent_bps_tcp'], tcp_result['sum_sent']['bits_per_second'] + ) + self.assertEqual( + result['received_bytes_tcp'], tcp_result['sum_received']['bytes'] + ) + self.assertEqual(result['jitter'], udp_result['jitter_ms']) + self.assertEqual(result['total_packets'], udp_result['packets']) + self.assertEqual(mock_warn.call_count, 0) + self.assertEqual(mock_exec_command.call_count, 2) + self.assertEqual(mock_get_iperf_servers.call_count, 1) + mock_exec_command.assert_has_calls(self._EXPECTED_COMMAND_CALLS) + + @patch.object(iperf_logger, 'warning') + def test_iperf_device_connection(self, mock_warn): + check, dc = self._create_iperf_test_env() + + with self.subTest('Test active device connection when management tunnel down'): + with patch.object(Iperf, '_connect', return_value=False) as mocked_connect: + check.perform_check(store=False) + mock_warn.assert_called_with( + f'DeviceConnection for 
"{self.device}" is not working, iperf check skipped!' + ) + mocked_connect.assert_called_once_with(dc) + self.assertEqual(mocked_connect.call_count, 1) + + with self.subTest('Test device connection is not enabled'): + dc.enabled = False + dc.save() + check.perform_check(store=False) + mock_warn.assert_called_with( + f'Failed to get a working DeviceConnection for "{self.device}", iperf check skipped!' + ) + + with self.subTest('Test device connection is not with right update strategy'): + dc.update_strategy = UPDATE_STRATEGIES[1][0] + dc.is_working = True + dc.enabled = True + dc.save() + check.perform_check(store=False) + mock_warn.assert_called_with( + f'Failed to get a working DeviceConnection for "{self.device}", iperf check skipped!' + ) + + def test_iperf_check_content_object_none(self): + check = Check(name='Iperf check', check_type=self._IPERF, params={}) + try: + check.check_instance.validate() + except ValidationError as e: + self.assertIn('device', str(e)) + else: + self.fail('ValidationError not raised') + + def test_iperf_check_content_object_not_device(self): + check = Check( + name='Iperf check', + check_type=self._IPERF, + content_object=self._create_user(), + params={}, + ) + try: + check.check_instance.validate() + except ValidationError as e: + self.assertIn('device', str(e)) + else: + self.fail('ValidationError not raised') + + def test_iperf_check_schema_violation(self): + device = self._create_device(organization=self._create_org()) + for invalid_param in INVALID_PARAMS: + check = Check( + name='Iperf check', + check_type=self._IPERF, + content_object=device, + params=invalid_param, + ) + try: + check.check_instance.validate() + except ValidationError as e: + self.assertIn('Invalid param', str(e)) + else: + self.fail('ValidationError not raised') + + @patch.object(Iperf, '_exec_command') + @patch.object( + Iperf, '_get_iperf_servers', return_value=['iperf.openwisptestserver.com'] + ) + @patch.object(iperf_logger, 'warning') + def 
test_iperf_check(self, mock_warn, mock_get_iperf_servers, mock_exec_command): + check, _ = self._create_iperf_test_env() + error = "ash: iperf3: not found" + tcp_result = loads(RESULT_TCP)['end'] + udp_result = loads(RESULT_UDP)['end']['sum'] + + with self.subTest('Test iperf3 is not installed on the device'): + mock_exec_command.side_effect = [(error, 127)] + check.perform_check(store=False) + mock_warn.assert_called_with( + f'Iperf3 is not installed on the "{self.device}", error - {error}' + ) + self.assertEqual(mock_warn.call_count, 1) + self.assertEqual(mock_exec_command.call_count, 1) + self.assertEqual(mock_get_iperf_servers.call_count, 1) + mock_exec_command.reset_mock() + mock_get_iperf_servers.reset_mock() + mock_warn.reset_mock() + + with self.subTest('Test iperf3 errors not in json format'): + org_id = str(self.device.organization.id) + iperf_config = { + org_id: { + 'username': 'test', + 'password': 'testpass', + 'rsa_public_key': 'INVALID_RSA_KEY', + } + } + with patch.object(app_settings, 'IPERF_CHECK_CONFIG', iperf_config): + mock_exec_command.side_effect = [(PARAM_ERROR, 1), (PARAM_ERROR, 1)] + EXPECTED_WARN_CALLS = [ + call( + f'Iperf check failed for "{self.device}", error - {PARAM_ERROR}' + ), + call( + f'Iperf check failed for "{self.device}", error - {PARAM_ERROR}' + ), + ] + check.perform_check(store=False) + self.assertEqual(mock_warn.call_count, 2) + self.assertEqual(mock_exec_command.call_count, 2) + self.assertEqual(mock_get_iperf_servers.call_count, 1) + mock_warn.assert_has_calls(EXPECTED_WARN_CALLS) + mock_exec_command.reset_mock() + mock_get_iperf_servers.reset_mock() + mock_warn.reset_mock() + + with self.subTest('Test iperf check passes in both TCP & UDP'): + mock_exec_command.side_effect = [(RESULT_TCP, 0), (RESULT_UDP, 0)] + self.assertEqual(Chart.objects.count(), 2) + self.assertEqual(Metric.objects.count(), 2) + result = check.perform_check(store=False) + for key in self._RESULT_KEYS: + self.assertIn(key, result) + 
+            self.assertEqual(result['iperf_result'], 1)
+            self.assertEqual(
+                result['sent_bps_tcp'], tcp_result['sum_sent']['bits_per_second']
+            )
+            self.assertEqual(
+                result['received_bps_tcp'],
+                tcp_result['sum_received']['bits_per_second'],
+            )
+            self.assertEqual(result['sent_bytes_tcp'], tcp_result['sum_sent']['bytes'])
+            self.assertEqual(
+                result['received_bytes_tcp'], tcp_result['sum_received']['bytes']
+            )
+            self.assertEqual(
+                result['retransmits'], tcp_result['sum_sent']['retransmits']
+            )
+            self.assertEqual(result['sent_bps_udp'], udp_result['bits_per_second'])
+            self.assertEqual(result['sent_bytes_udp'], udp_result['bytes'])
+            self.assertEqual(result['jitter'], udp_result['jitter_ms'])
+            self.assertEqual(result['total_packets'], udp_result['packets'])
+            self.assertEqual(result['lost_percent'], udp_result['lost_percent'])
+            self.assertEqual(Chart.objects.count(), 8)
+            self.assertEqual(Check.objects.count(), 3)
+
+            iperf_metric = Metric.objects.get(key='iperf')
+            self.assertEqual(Metric.objects.count(), 3)
+            self.assertEqual(iperf_metric.content_object, self.device)
+            points = iperf_metric.read(limit=None, extra_fields=list(result.keys()))
+            self.assertEqual(len(points), 1)
+            self.assertEqual(points[0]['iperf_result'], result['iperf_result'])
+            self.assertEqual(points[0]['sent_bps_tcp'], result['sent_bps_tcp'])
+            self.assertEqual(
+                points[0]['received_bytes_tcp'], result['received_bytes_tcp']
+            )
+            self.assertEqual(points[0]['retransmits'], result['retransmits'])
+            self.assertEqual(points[0]['sent_bps_udp'], result['sent_bps_udp'])
+            self.assertEqual(points[0]['sent_bytes_udp'], result['sent_bytes_udp'])
+            self.assertEqual(points[0]['jitter'], result['jitter'])
+            self.assertEqual(points[0]['total_packets'], result['total_packets'])
+            self.assertEqual(points[0]['lost_packets'], result['lost_packets'])
+            self.assertEqual(points[0]['lost_percent'], result['lost_percent'])
+
+            self.assertEqual(mock_warn.call_count, 0)
+            self.assertEqual(mock_exec_command.call_count, 2)
+            self.assertEqual(mock_get_iperf_servers.call_count, 1)
+            mock_exec_command.assert_has_calls(self._EXPECTED_COMMAND_CALLS)
+            mock_exec_command.reset_mock()
+            mock_get_iperf_servers.reset_mock()
+            mock_warn.reset_mock()
+
+        with self.subTest('Test iperf check fails in both TCP & UDP'):
+            mock_exec_command.side_effect = [(RESULT_FAIL, 1), (RESULT_FAIL, 1)]
+
+            result = check.perform_check(store=False)
+            self._assert_iperf_fail_result(result)
+            self.assertEqual(Chart.objects.count(), 8)
+            self.assertEqual(Metric.objects.count(), 3)
+            self.assertEqual(mock_exec_command.call_count, 2)
+            self.assertEqual(mock_get_iperf_servers.call_count, 1)
+            mock_warn.assert_has_calls(self._EXPECTED_WARN_CALLS)
+            mock_exec_command.assert_has_calls(self._EXPECTED_COMMAND_CALLS)
+            mock_exec_command.reset_mock()
+            mock_get_iperf_servers.reset_mock()
+            mock_warn.reset_mock()
+
+        with self.subTest('Test iperf check TCP pass UDP fail'):
+            mock_exec_command.side_effect = [(RESULT_TCP, 0), (RESULT_FAIL, 1)]
+
+            result = check.perform_check(store=False)
+            for key in self._RESULT_KEYS:
+                self.assertIn(key, result)
+            self.assertEqual(result['iperf_result'], 1)
+            self.assertEqual(
+                result['sent_bps_tcp'], tcp_result['sum_sent']['bits_per_second']
+            )
+            self.assertEqual(
+                result['received_bps_tcp'],
+                tcp_result['sum_received']['bits_per_second'],
+            )
+            self.assertEqual(result['sent_bytes_tcp'], tcp_result['sum_sent']['bytes'])
+            self.assertEqual(
+                result['received_bytes_tcp'], tcp_result['sum_received']['bytes']
+            )
+            self.assertEqual(
+                result['retransmits'], tcp_result['sum_sent']['retransmits']
+            )
+            self.assertEqual(result['sent_bps_udp'], 0.0)
+            self.assertEqual(result['sent_bytes_udp'], 0)
+            self.assertEqual(result['jitter'], 0.0)
+            self.assertEqual(result['total_packets'], 0)
+            self.assertEqual(result['lost_percent'], 0.0)
+            self.assertEqual(Chart.objects.count(), 8)
+            self.assertEqual(Metric.objects.count(), 3)
+            self.assertEqual(mock_exec_command.call_count, 2)
+            self.assertEqual(mock_get_iperf_servers.call_count, 1)
+            mock_warn.assert_has_calls(self._EXPECTED_WARN_CALLS[1:])
+            mock_exec_command.assert_has_calls(self._EXPECTED_COMMAND_CALLS)
+            mock_exec_command.reset_mock()
+            mock_get_iperf_servers.reset_mock()
+            mock_warn.reset_mock()
+
+        with self.subTest('Test iperf check TCP fail UDP pass'):
+            mock_exec_command.side_effect = [(RESULT_FAIL, 1), (RESULT_UDP, 0)]
+
+            result = check.perform_check(store=False)
+            for key in self._RESULT_KEYS:
+                self.assertIn(key, result)
+            self.assertEqual(result['iperf_result'], 1)
+            self.assertEqual(result['sent_bps_tcp'], 0.0)
+            self.assertEqual(result['received_bps_tcp'], 0.0)
+            self.assertEqual(result['sent_bytes_tcp'], 0)
+            self.assertEqual(result['received_bytes_tcp'], 0)
+            self.assertEqual(result['retransmits'], 0)
+            self.assertEqual(result['sent_bps_udp'], udp_result['bits_per_second'])
+            self.assertEqual(result['sent_bytes_udp'], udp_result['bytes'])
+            self.assertEqual(result['jitter'], udp_result['jitter_ms'])
+            self.assertEqual(result['total_packets'], udp_result['packets'])
+            self.assertEqual(result['lost_percent'], udp_result['lost_percent'])
+            self.assertEqual(Chart.objects.count(), 8)
+            self.assertEqual(Metric.objects.count(), 3)
+            self.assertEqual(mock_exec_command.call_count, 2)
+            self.assertEqual(mock_get_iperf_servers.call_count, 1)
+            mock_warn.assert_has_calls(self._EXPECTED_WARN_CALLS[1:])
+            mock_exec_command.assert_has_calls(self._EXPECTED_COMMAND_CALLS)
+
+    @patch.object(Iperf, '_exec_command')
+    @patch.object(
+        Iperf, '_get_iperf_servers', return_value=['iperf.openwisptestserver.com']
+    )
+    @patch.object(iperf_logger, 'warning')
+    def test_iperf_check_auth_config(
+        self, mock_warn, mock_get_iperf_servers, mock_exec_command
+    ):
+        check, dc = self._create_iperf_test_env()
+        org_id = str(self.device.organization.id)
+        iperf_config = {
+            org_id: {
+                'username': 'test',
+                'password': 'testpass',
+                'rsa_public_key': TEST_RSA_KEY,
+            }
+        }
+        iperf_conf_wrong_pass = {
+            org_id: {
+                'username': 'test',
+                'password': 'wrongpass',
+                'rsa_public_key': TEST_RSA_KEY,
+            }
+        }
+        iperf_conf_wrong_user = {
+            org_id: {
+                'username': 'wronguser',
+                'password': 'testpass',
+                'rsa_public_key': TEST_RSA_KEY,
+            }
+        }
+        auth_error = "test authorization failed"
+        tcp_result = loads(RESULT_TCP)['end']
+        udp_result = loads(RESULT_UDP)['end']['sum']
+
+        self._EXPECTED_WARN_CALLS = [
+            call(f'Iperf check failed for "{self.device}", error - {auth_error}'),
+            call(f'Iperf check failed for "{self.device}", error - {auth_error}'),
+        ]
+        with self.subTest('Test iperf check with right config'):
+            with patch.object(
+                app_settings,
+                'IPERF_CHECK_CONFIG',
+                iperf_config
+                # It is required to mock "Iperf.schema" here so that it
+                # uses the updated configuration from "IPERF_CHECK_CONFIG" setting.
+            ), patch.object(Iperf, 'schema', get_iperf_schema()):
+                self._set_auth_expected_calls(dc, org_id, iperf_config)
+                mock_exec_command.side_effect = [(RESULT_TCP, 0), (RESULT_UDP, 0)]
+
+                result = check.perform_check(store=False)
+                for key in self._RESULT_KEYS:
+                    self.assertIn(key, result)
+                self.assertEqual(result['iperf_result'], 1)
+                self.assertEqual(
+                    result['sent_bps_tcp'], tcp_result['sum_sent']['bits_per_second']
+                )
+                self.assertEqual(
+                    result['received_bytes_tcp'], tcp_result['sum_received']['bytes']
+                )
+                self.assertEqual(result['jitter'], udp_result['jitter_ms'])
+                self.assertEqual(result['total_packets'], udp_result['packets'])
+                self.assertEqual(mock_exec_command.call_count, 2)
+                self.assertEqual(mock_get_iperf_servers.call_count, 1)
+                mock_exec_command.assert_has_calls(self._EXPECTED_COMMAND_CALLS)
+                mock_exec_command.reset_mock()
+                mock_get_iperf_servers.reset_mock()
+                mock_warn.reset_mock()
+
+        with self.subTest('Test iperf check with wrong password'):
+            with patch.object(
+                app_settings, 'IPERF_CHECK_CONFIG', iperf_conf_wrong_pass
+            ), patch.object(Iperf, 'schema', get_iperf_schema()):
+                self._set_auth_expected_calls(dc, org_id, iperf_conf_wrong_pass)
+                mock_exec_command.side_effect = [
+                    (RESULT_AUTH_FAIL, 1),
+                    (RESULT_AUTH_FAIL, 1),
+                ]
+
+                result = check.perform_check(store=False)
+                self._assert_iperf_fail_result(result)
+                self.assertEqual(mock_exec_command.call_count, 2)
+                mock_warn.assert_has_calls(self._EXPECTED_WARN_CALLS)
+                mock_exec_command.assert_has_calls(self._EXPECTED_COMMAND_CALLS)
+                self.assertEqual(mock_get_iperf_servers.call_count, 1)
+                mock_exec_command.reset_mock()
+                mock_get_iperf_servers.reset_mock()
+                mock_warn.reset_mock()
+
+        with self.subTest('Test iperf check with wrong username'):
+            with patch.object(
+                app_settings, 'IPERF_CHECK_CONFIG', iperf_conf_wrong_user
+            ), patch.object(Iperf, 'schema', get_iperf_schema()):
+                self._set_auth_expected_calls(dc, org_id, iperf_conf_wrong_user)
+                mock_exec_command.side_effect = [
+                    (RESULT_AUTH_FAIL, 1),
+                    (RESULT_AUTH_FAIL, 1),
+                ]
+
+                result = check.perform_check(store=False)
+                self._assert_iperf_fail_result(result)
+                self.assertEqual(mock_exec_command.call_count, 2)
+                mock_warn.assert_has_calls(self._EXPECTED_WARN_CALLS)
+                mock_exec_command.assert_has_calls(self._EXPECTED_COMMAND_CALLS)
+                self.assertEqual(mock_get_iperf_servers.call_count, 1)
diff --git a/openwisp_monitoring/check/tests/test_models.py b/openwisp_monitoring/check/tests/test_models.py
index abbf8ed13..3bb8e13f6 100644
--- a/openwisp_monitoring/check/tests/test_models.py
+++ b/openwisp_monitoring/check/tests/test_models.py
@@ -9,8 +9,8 @@
 from ...device.tests import TestDeviceMonitoringMixin
 from .. import settings as app_settings
-from ..classes import ConfigApplied, Ping
-from ..tasks import auto_create_config_check, auto_create_ping
+from ..classes import ConfigApplied, Iperf, Ping
+from ..tasks import auto_create_config_check, auto_create_iperf_check, auto_create_ping
 
 Check = load_model('check', 'Check')
 Metric = load_model('monitoring', 'Metric')
@@ -22,6 +22,7 @@ class TestModels(TestDeviceMonitoringMixin, TransactionTestCase):
     _PING = app_settings.CHECK_CLASSES[0][0]
     _CONFIG_APPLIED = app_settings.CHECK_CLASSES[1][0]
+    _IPERF = app_settings.CHECK_CLASSES[2][0]
 
     def test_check_str(self):
         c = Check(name='Test check')
@@ -48,6 +49,12 @@ def test_check_class(self):
             check_type=self._CONFIG_APPLIED,
         )
         self.assertEqual(c.check_class, ConfigApplied)
+        with self.subTest('Test Iperf check Class'):
+            c = Check(
+                name='Iperf class check',
+                check_type=self._IPERF,
+            )
+            self.assertEqual(c.check_class, Iperf)
 
     def test_base_check_class(self):
         path = 'openwisp_monitoring.check.classes.base.BaseCheck'
@@ -82,6 +89,18 @@ def test_check_instance(self):
         self.assertEqual(i.related_object, obj)
         self.assertEqual(i.params, c.params)
 
+        with self.subTest('Test Iperf check instance'):
+            c = Check(
+                name='Iperf class check',
+                check_type=self._IPERF,
+                content_object=obj,
+                params={},
+            )
+            i = c.check_instance
+            self.assertIsInstance(i, Iperf)
+            self.assertEqual(i.related_object, obj)
+            self.assertEqual(i.params, c.params)
+
     def test_validation(self):
         with self.subTest('Test Ping check validation'):
             check = Check(name='Ping check', check_type=self._PING, params={})
@@ -105,7 +124,7 @@ def test_validation(self):
     def test_auto_check_creation(self):
         self.assertEqual(Check.objects.count(), 0)
         d = self._create_device(organization=self._create_org())
-        self.assertEqual(Check.objects.count(), 2)
+        self.assertEqual(Check.objects.count(), 3)
         with self.subTest('Test AUTO_PING'):
             c1 = Check.objects.filter(check_type=self._PING).first()
             self.assertEqual(c1.content_object, d)
@@ -114,11 +133,15 @@ def test_auto_check_creation(self):
             c2 = Check.objects.filter(check_type=self._CONFIG_APPLIED).first()
             self.assertEqual(c2.content_object, d)
             self.assertEqual(self._CONFIG_APPLIED, c2.check_type)
+        with self.subTest('Test AUTO_IPERF'):
+            c3 = Check.objects.filter(check_type=self._IPERF).first()
+            self.assertEqual(c3.content_object, d)
+            self.assertEqual(self._IPERF, c3.check_type)
 
     def test_device_deleted(self):
         self.assertEqual(Check.objects.count(), 0)
         d = self._create_device(organization=self._create_org())
-        self.assertEqual(Check.objects.count(), 2)
+        self.assertEqual(Check.objects.count(), 3)
         d.delete()
         self.assertEqual(Check.objects.count(), 0)
@@ -129,7 +152,7 @@ def test_config_modified_device_problem(self):
         self._create_config(status='modified', organization=self._create_org())
         d = Device.objects.first()
         d.monitoring.update_status('ok')
-        self.assertEqual(Check.objects.count(), 2)
+        self.assertEqual(Check.objects.count(), 3)
         self.assertEqual(Metric.objects.count(), 0)
         self.assertEqual(AlertSettings.objects.count(), 0)
         check = Check.objects.filter(check_type=self._CONFIG_APPLIED).first()
@@ -159,7 +182,7 @@ def test_config_error(self):
         self._create_config(status='error', organization=self._create_org())
         dm = Device.objects.first().monitoring
         dm.update_status('ok')
-        self.assertEqual(Check.objects.count(), 2)
+        self.assertEqual(Check.objects.count(), 3)
         self.assertEqual(Metric.objects.count(), 0)
         self.assertEqual(AlertSettings.objects.count(), 0)
         check = Check.objects.filter(check_type=self._CONFIG_APPLIED).first()
@@ -192,7 +215,7 @@ def test_config_error(self):
     @patch('openwisp_monitoring.check.settings.AUTO_PING', False)
     def test_config_check_critical_metric(self):
         self._create_config(status='modified', organization=self._create_org())
-        self.assertEqual(Check.objects.count(), 2)
+        self.assertEqual(Check.objects.count(), 3)
         d = Device.objects.first()
         dm = d.monitoring
         dm.update_status('ok')
@@ -211,7 +234,7 @@ def test_config_check_critical_metric(self):
 
     def test_no_duplicate_check_created(self):
         self._create_config(organization=self._create_org())
-        self.assertEqual(Check.objects.count(), 2)
+        self.assertEqual(Check.objects.count(), 3)
         d = Device.objects.first()
         auto_create_config_check.delay(
             model=Device.__name__.lower(),
@@ -223,13 +246,18 @@ def test_no_duplicate_check_created(self):
             app_label=Device._meta.app_label,
             object_id=str(d.pk),
         )
-        self.assertEqual(Check.objects.count(), 2)
+        auto_create_iperf_check.delay(
+            model=Device.__name__.lower(),
+            app_label=Device._meta.app_label,
+            object_id=str(d.pk),
+        )
+        self.assertEqual(Check.objects.count(), 3)
 
     def test_device_unreachable_no_config_check(self):
         self._create_config(status='modified', organization=self._create_org())
         d = self.device_model.objects.first()
         d.monitoring.update_status('critical')
-        self.assertEqual(Check.objects.count(), 2)
+        self.assertEqual(Check.objects.count(), 3)
         c2 = Check.objects.filter(check_type=self._CONFIG_APPLIED).first()
         c2.perform_check()
         self.assertEqual(Metric.objects.count(), 0)
@@ -240,7 +268,7 @@ def test_device_unknown_no_config_check(self):
         self._create_config(status='modified', organization=self._create_org())
         d = self.device_model.objects.first()
         d.monitoring.update_status('unknown')
-        self.assertEqual(Check.objects.count(), 2)
+        self.assertEqual(Check.objects.count(), 3)
         c2 = Check.objects.filter(check_type=self._CONFIG_APPLIED).first()
         c2.perform_check()
         self.assertEqual(Metric.objects.count(), 0)
diff --git a/openwisp_monitoring/check/tests/test_ping.py b/openwisp_monitoring/check/tests/test_ping.py
index 11b9ee47b..23b94652d 100644
--- a/openwisp_monitoring/check/tests/test_ping.py
+++ b/openwisp_monitoring/check/tests/test_ping.py
@@ -239,7 +239,7 @@ def test_store_result(self, mocked_method):
         device.management_ip = '10.40.0.1'
         device.save()
         # check created automatically by autoping
-        self.assertEqual(Check.objects.count(), 2)
+        self.assertEqual(Check.objects.count(), 3)
         self.assertEqual(Metric.objects.count(), 0)
         self.assertEqual(Chart.objects.count(), 0)
         self.assertEqual(AlertSettings.objects.count(), 0)
diff --git a/openwisp_monitoring/db/backends/influxdb/queries.py b/openwisp_monitoring/db/backends/influxdb/queries.py
index 11f048096..5fbb0281c 100644
--- a/openwisp_monitoring/db/backends/influxdb/queries.py
+++ b/openwisp_monitoring/db/backends/influxdb/queries.py
@@ -100,6 +100,51 @@
             "object_id = '{object_id}' GROUP BY time(1d)"
         )
     },
+    'bandwidth': {
+        'influxdb': (
+            "SELECT MEAN(sent_bps_tcp) / 1000000000 AS TCP, "
+            "MEAN(sent_bps_udp) / 1000000000 AS UDP FROM {key} WHERE "
+            "time >= '{time}' AND content_type = '{content_type}' AND "
+            "object_id = '{object_id}' GROUP BY time(1d)"
+        )
+    },
+    'transfer': {
+        'influxdb': (
+            "SELECT SUM(sent_bytes_tcp) / 1000000000 AS TCP, "
+            "SUM(sent_bytes_udp) / 1000000000 AS UDP FROM {key} WHERE "
+            "time >= '{time}' AND content_type = '{content_type}' AND "
+            "object_id = '{object_id}' GROUP BY time(1d)"
+        )
+    },
+    'retransmits': {
+        'influxdb': (
+            "SELECT MEAN(retransmits) AS retransmits FROM {key} "
+            "WHERE time >= '{time}' AND content_type = '{content_type}' "
+            "AND object_id = '{object_id}' GROUP BY time(1d)"
+        )
+    },
+    'jitter': {
+        'influxdb': (
+            "SELECT MEAN(jitter) AS jitter FROM {key} "
+            "WHERE time >= '{time}' AND content_type = '{content_type}' "
+            "AND object_id = '{object_id}' GROUP BY time(1d)"
+        )
+    },
+    'datagram': {
+        'influxdb': (
+            "SELECT MEAN(lost_packets) AS lost_datagram, "
+            "MEAN(total_packets) AS total_datagram FROM {key} WHERE "
+            "time >= '{time}' AND content_type = '{content_type}' "
+            "AND object_id = '{object_id}' GROUP BY time(1d)"
+        )
+    },
+    'datagram_loss': {
+        'influxdb': (
+            "SELECT MEAN(lost_percent) AS datagram_loss FROM {key} "
+            "WHERE time >= '{time}' AND content_type = '{content_type}' "
+            "AND object_id = '{object_id}' GROUP BY time(1d)"
+        )
+    },
 }
 
 default_chart_query = [
diff --git a/openwisp_monitoring/monitoring/base/models.py b/openwisp_monitoring/monitoring/base/models.py
index 863e46a3f..2806cf4e8 100644
--- a/openwisp_monitoring/monitoring/base/models.py
+++ b/openwisp_monitoring/monitoring/base/models.py
@@ -436,6 +436,10 @@ def trace_type(self):
     def trace_order(self):
         return self.config_dict.get('trace_order', [])
 
+    @property
+    def connect_points(self):
+        return self.config_dict.get('connect_points', False)
+
     @property
     def description(self):
         return self.config_dict['description'].format(
@@ -621,6 +625,7 @@ def json(self, time=DEFAULT_TIME, **kwargs):
                 'unit': self.unit,
                 'trace_type': self.trace_type,
                 'trace_order': self.trace_order,
+                'connect_points': self.connect_points,
                 'colors': self.colors,
             }
         )
diff --git a/openwisp_monitoring/monitoring/configuration.py b/openwisp_monitoring/monitoring/configuration.py
index 9c097e01c..d081fcdc3 100644
--- a/openwisp_monitoring/monitoring/configuration.py
+++ b/openwisp_monitoring/monitoring/configuration.py
@@ -206,7 +206,7 @@ def _get_access_tech():
                 _('Total download traffic'),
                 _('Total upload traffic'),
             ],
-            'unit': 'adaptive_bytes',
+            'unit': 'adaptive_prefix+B',
             'order': 240,
             'query': chart_query['traffic'],
             'colors': [
@@ -226,6 +226,7 @@ def _get_access_tech():
     'charts': {
         'general_traffic': {
             'type': 'stackedbar+lines',
+            'fill': 'none',
             'trace_type': {
                 'download': 'stackedbar',
                 'upload': 'stackedbar',
@@ -242,7 +243,7 @@ def _get_access_tech():
                 _('Total download traffic'),
                 _('Total upload traffic'),
             ],
-            'unit': 'adaptive_bytes',
+            'unit': 'adaptive_prefix+B',
             'order': 240,
             'query': chart_query['general_traffic'],
             'query_default_param': {
@@ -542,6 +543,121 @@ def _get_access_tech():
             }
         },
     },
+    'iperf': {
+        'label': _('Iperf'),
+        'name': 'Iperf',
+        'key': 'iperf',
+        'field_name': 'iperf_result',
+        'related_fields': [
+            'sent_bps_tcp',
+            'received_bps_tcp',
+            'sent_bytes_tcp',
+            'received_bytes_tcp',
+            'retransmits',
+            'sent_bytes_udp',
+            'sent_bps_udp',
+            'jitter',
+            'total_packets',
+            'lost_packets',
+            'lost_percent',
+        ],
+        'charts': {
+            'bandwidth': {
+                'type': 'scatter',
+                'connect_points': True,
+                'title': _('Bandwidth'),
+                'fill': 'none',
+                'description': _('Bitrate during the Iperf3 test.'),
+                'summary_labels': [
+                    _('TCP bitrate'),
+                    _('UDP bitrate'),
+                ],
+                'unit': 'adaptive_prefix+bps',
+                'order': 280,
+                'query': chart_query['bandwidth'],
+                'colors': [
+                    DEFAULT_COLORS[0],
+                    DEFAULT_COLORS[3],
+                ],
+            },
+            'transfer': {
+                'type': 'scatter',
+                'connect_points': True,
+                'fill': 'none',
+                'title': _('Transferred Data'),
+                'description': _('Data transferred during the Iperf3 test.'),
+                'summary_labels': [
+                    _('TCP transferred data'),
+                    _('UDP transferred data'),
+                ],
+                'unit': 'adaptive_prefix+B',
+                'order': 290,
+                'query': chart_query['transfer'],
+                'colors': [
+                    DEFAULT_COLORS[0],
+                    DEFAULT_COLORS[3],
+                ],
+            },
+            'retransmits': {
+                'type': 'scatter',
+                'connect_points': True,
+                'title': _('Retransmits'),
+                'description': _('Retransmits during the Iperf3 test in TCP mode.'),
+                'summary_labels': [_('Retransmits')],
+                'unit': '',
+                'order': 300,
+                'query': chart_query['retransmits'],
+                'colors': [DEFAULT_COLORS[-3]],
+            },
+            'jitter': {
+                'type': 'scatter',
+                'connect_points': True,
+                'title': _('Jitter'),
+                'description': _(
+                    'Jitter is the variance in latency, measured with the Iperf3 utility in UDP mode.'
+                ),
+                'summary_labels': [
+                    _('Jitter'),
+                ],
+                'unit': _(' ms'),
+                'order': 330,
+                'query': chart_query['jitter'],
+                'colors': [DEFAULT_COLORS[4]],
+            },
+            'datagram': {
+                'type': 'scatter',
+                'fill': 'none',
+                'connect_points': True,
+                'title': _('Datagram'),
+                'description': _(
+                    '(Lost / Total) datagrams measured by the Iperf3 test in UDP mode.'
+                ),
+                'summary_labels': [
+                    _('Lost datagrams'),
+                    _('Total datagrams'),
+                ],
+                'unit': '',
+                'order': 340,
+                'query': chart_query['datagram'],
+                'colors': [DEFAULT_COLORS[3], DEFAULT_COLORS[2]],
+            },
+            'datagram_loss': {
+                'type': 'scatter',
+                'connect_points': True,
+                'title': _('Datagram Loss'),
+                'description': _(
+                    'Indicates the percentage of datagrams lost during the Iperf3 test in UDP mode.'
+                ),
+                'summary_labels': [
+                    _('Datagram loss'),
+                ],
+                'unit': '%',
+                'order': 350,
+                'query': chart_query['datagram_loss'],
+                'colors': [DEFAULT_COLORS[3]],
+            },
+        },
+    },
 }
 
 DEFAULT_CHARTS = {}
diff --git a/openwisp_monitoring/monitoring/static/monitoring/js/chart.js b/openwisp_monitoring/monitoring/static/monitoring/js/chart.js
index ee6933116..c1e564964 100644
--- a/openwisp_monitoring/monitoring/static/monitoring/js/chart.js
+++ b/openwisp_monitoring/monitoring/static/monitoring/js/chart.js
@@ -20,19 +20,19 @@
     function getAdaptiveScale(value, multiplier, unit) {
         if (value == 0) {
             multiplier = 1;
-            unit = 'B';
+            unit = unit;
         } else if (value < 0.001) {
             multiplier = 1000000;
-            unit = 'KB';
+            unit = 'K' + unit;
         } else if (value < 1) {
             multiplier = 1000;
-            unit = 'MB';
+            unit = 'M' + unit;
         } else if (value < 1000) {
             multiplier = 1;
-            unit = 'GB';
+            unit = 'G' + unit;
         } else if (value >= 1000) {
             multiplier = 0.001;
-            unit = 'TB';
+            unit = 'T' + unit;
         }
         return {
             multiplier: multiplier,
@@ -44,7 +44,7 @@
         return Math.round((value * multiplier) * 100) / 100;
     }
 
-    function adaptiveFilterPoints(charts, layout, yRawVal) {
+    function adaptiveFilterPoints(charts, layout, yRawVal, chartUnit = '') {
         var y = charts[0].y, sum = 0, count = 0, shownVal, average;
         for (var i=0; i < y.length; i++) {
             sum += y[i];
@@ -53,7 +53,7 @@
             }
         }
         average = sum / count;
-        var scales = getAdaptiveScale(average, 1, '');
+        var scales = getAdaptiveScale(average, 1, chartUnit);
         var multiplier = scales.multiplier, unit = scales.unit;
         for (i=0; i < y.length; i++) {
@@ -64,7 +64,7 @@
             }
             shownVal = charts[j].y[i];
             charts[j].y[i] = getAdaptiveBytes(charts[j].y[i], multiplier);
-            var hoverScales = getAdaptiveScale(shownVal, 1, '');
+            var hoverScales = getAdaptiveScale(shownVal, 1, chartUnit);
            var hoverMultiplier = hoverScales.multiplier, hoverUnit = hoverScales.unit;
             shownVal = getAdaptiveBytes(shownVal, hoverMultiplier);
@@ -74,8 +74,8 @@
         layout.yaxis.title = unit;
     }
 
-    function adaptiveFilterSummary(i, percircles, value) {
-        var scales = getAdaptiveScale(value, 1, ''),
+    function adaptiveFilterSummary(i, percircles, value, chartUnit = '') {
+        var scales = getAdaptiveScale(value, 1, chartUnit),
             multiplier = scales.multiplier,
             unit = scales.unit;
         value = getAdaptiveBytes(value, multiplier);
@@ -138,7 +138,7 @@
         if (type === 'histogram') {
             layout.hovermode = 'closest';
         }
-        var map, mapped, label, fixedValue, key;
+        var map, mapped, label, fixedValue, key, chartUnit, yValues;
         // given a value, returns its color and description
         // according to the color map configuration of this chart
         function findInColorMap(value) {
@@ -178,6 +178,7 @@
                 // We use the "_key" field to sort the charts
                 // according to the order defined in "data.trace_order"
                 _key: key,
+                _connectPoints: data.connect_points || false,
             },
             yValuesRaw = data.traces[i][1];
         if (type !== 'histogram') {
@@ -196,7 +197,10 @@
             options.type = 'scatter';
             options.mode = 'lines+markers';
             options.line = {shape: 'hvh'};
-            options.fill = "none";
+            options.fill = data.fill;
+        }
+        if (options._connectPoints) {
+            options.mode = 'lines';
+        }
         }
     }
@@ -216,6 +220,11 @@
             layout.margin.b = 45;
         }
     }
+
+    var xValuesRaw = options.x;
+    if (options._connectPoints) {
+        options.x = [];
+    }
     // adjust text to be displayed in Y values
     // differentiate between values with zero and no values at all (N/A)
     for (var c=0; c