
Commit 41b89d8

committed
updated description in records
1 parent 968c136 commit 41b89d8

File tree

4 files changed (+3, -3 lines)


algorithm_catalog/vito/mogpr_s1s2/benchmark_scenarios/mogp_s1s2.json renamed to algorithm_catalog/vito/mogpr_s1s2/benchmark_scenarios/mogpr_s1s2.json

File renamed without changes.

algorithm_catalog/vito/peakvalley/records/peakvalley.json

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@
 "updated": "2025-01-29T00:00:00Z",
 "type": "service",
 "title": "Detect peaks and valleys in a time series",
-"description": "# Peak Valley Detection\n\n## Overview\n\nThe `peakvalley` process provides automated detection of peaks and valleys in time-series data by analysing amplitude changes and slope patterns. It identifies significant drops, recoveries, and inflexion points to classify each time step as a peak, a valley, or a neutral state. \nThis process is particularly useful for applications such as vegetation phenology monitoring, hydrological studies, and climate data analysis.\n\n## Parameters\n\nThe `mogpr_s2` service requires the following parameters:\n\n\n| Name | Description | Type | Default |\n| --------------- | -------------------------------------------------------------- | ------- | ------- |\n| spatial_extent | Polygon representing the AOI on which to apply the data fusion | GeoJSON | |\n| temporal_extent | Date range for which to apply the data fusion | Array | |\n| drop_threshold | Threshold to drop low confidence predictions | Float | 0.15 |\n| recovery_ratio | Ratio to recover from drops in the data | Float | 1.0 |\n| slope_threshold | Threshold to identify steep slopes in the data | Float | 0.007 |\n",
+"description": "This process provides automated detection of peaks and valleys in time-series data by analysing amplitude changes and slope patterns. It identifies significant drops, recoveries, and inflexion points to classify each time step as a peak, a valley, or a neutral state.",
 "cost_estimate": 10,
 "cost_unit": "platform credits per km²",
 "keywords": [

algorithm_catalog/vito/phenology/records/phenology.json

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@
 "updated": "2025-01-29T00:00:00Z",
 "type": "service",
 "title": "Phenology metrics from Sentinel-2 NDVI",
-"description": "# Phenology\n\n## Description\n\nComputes phenology metrics based on the [Phenolopy](https://github.com/lewistrotter/PhenoloPy) implementation.\nPhenolopy (phenology + python) is a Python-based library for analysing satellite timeseries data.\nPhenolopy has been designed to investigate the seasonality of satellite timeseries data and their relationship with\ndynamic vegetation properties such as phenology and temporal growth patterns.\nThe temporal domain contains essential information about short- and long-term changes within vegetation life cycles.\nPhenolopy can be applied to derive numerous phenometrics from satellite imagery.\n\n\nTherefore, the `phenology` UDP computes phenology metrics from a time series of satellite images. \n\n![image.png](https://github.com/lewistrotter/Phenolopy/raw/main/documentation/images/pheno_explain.png?raw=true)",
+"description": "Computes phenology metrics designed to investigate the seasonality of satellite timeseries data and their relationship with dynamic vegetation properties such as phenology and temporal growth patterns.",
 "cost_estimate": 10,
 "cost_unit": "platform credits per km²",
 "keywords": [

algorithm_catalog/vito/whittaker/records/whittaker.json

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@
 "updated": "2025-01-29T00:00:00Z",
 "type": "service",
 "title": "Whittaker smoothing on Sentinel-2 NDVI time series",
-"description": "# Whittaker\n\n## Description\n\nWhittaker represents a computationally efficient reconstruction method for smoothing and gap-filling of time series.\nThe primary function takes as input two vectors of the same length: the y time series data (e.g. NDVI) and the\ncorresponding temporal vector (date format) x, comprised between the start and end dates of a satellite image\ncollection. Missing or null values, as well as the cloud-masked values (i.e. NaN), are handled by introducing a\nvector of 0-1 weights w, with wi = 0 for missing observations and wi=1 otherwise. Following, the Whittaker smoother\nis applied to the time series profiles, computing therefore a daily smoothing interpolation.\n\nWhittaker's fast processing speed was assessed through an initial performance test by comparing different\ntime series fitting methods. The average runtime is 0.0107 seconds to process a single NDVI temporal profile.\n\nThe smoother performance can be adjusted by tuning the lambda parameter, which penalises the time series roughness:\nThe larger the lambda, the smoother the time series, but at the cost of the fit to the data getting worse. We found a lambda of\n10000 is adequate for obtaining more convenient results. A more detailed description of the algorithm can be\nfound in the original work of Eilers 2003.\n\n\n\n",
+"description": "Applies Whittaker smoothing to Sentinel-2 NDVI time series to reduce noise and enhance the signal for better analysis of vegetation dynamics.",
 "cost_estimate": 10,
 "cost_unit": "platform credits per km²",
 "keywords": [

0 commit comments
