
Commit

fix: readme
Mohanad Albughdadi committed Jun 21, 2024
1 parent c70ee8f commit 86712a6
Showing 87 changed files with 2,707 additions and 3,014 deletions.
Binary file modified docs/_build/.doctrees/environment.pickle
Binary file modified docs/_build/.doctrees/eo4eu_intro.doctree
Binary file added docs/_build/.doctrees/instructions.doctree
Binary file modified docs/_build/.doctrees/object_detection.doctree
Binary file modified docs/_build/.doctrees/processing_apis.doctree
2 changes: 1 addition & 1 deletion docs/_build/html/.buildinfo
@@ -1,4 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 3021d781e28531f0b5d7e393f240b5e7
config: 011316f624dccc884bd628681b752bfa
tags: 645f666f9bcd5a90fca523b33c5a78b7
12 changes: 12 additions & 0 deletions docs/_build/html/_sources/eo4eu_intro.md
@@ -1,5 +1,17 @@
# EO4EU Tutorial IGARSS 2024

## Prerequisites

```{Note}
Before starting this tutorial, participants should have:
- Basic knowledge of Python.
- Basic knowledge of geospatial data formats (raster and vector files).
- Basic knowledge of Earth Observation concepts, such as the Copernicus programme offering and Very High Resolution (VHR) imagery.
- Prior exposure to AI concepts and tools is recommended.
- A laptop with Python 3 installed (or access to the Pangeo-eosc JupyterHub) and an internet connection.
```

## EO4EU Objectives

AI-augmented ecosystem for Earth Observation data accessibility with Extended reality User Interfaces for Service and data exploitation (EO4EU) is a European Commission-funded innovation project that aims to create an advanced platform for searching, discovering, processing, and analyzing EO data.
46 changes: 46 additions & 0 deletions docs/_build/html/_sources/instructions.md
@@ -0,0 +1,46 @@
# How To Set Up A Working Environment

```{important}
Before getting started with the practical examples, there are two options for running them:
1. Locally on your own machine. In this case, proceed with the instructions below to create the virtual environment and install the requirements.
2. On the Pangeo-eosc services, which provide access to a JupyterHub with the Pangeo Notebook environment:
<https://pangeo-data.github.io/pangeo-openeo-BiDS-2023/before/EOSC.html>.
```

To get started, follow the instructions below:

1. Clone the repository

```bash
git clone https://github.com/AlbughdadiM/igarss2024-eo4eu.git
```

2. Go to the repository directory

```bash
cd igarss2024-eo4eu
```

3. Create a Python virtual environment

```bash
python3 -m venv myvenv
```

4. Activate the virtual environment

```bash
source myvenv/bin/activate
```

5. Install requirements

```bash
python3 -m pip install -r requirements.txt
```

6. Go to the `docs` directory, where the notebooks are located.

```bash
cd docs
```
1,213 changes: 469 additions & 744 deletions docs/_build/html/_sources/object_detection.ipynb

Large diffs are not rendered by default.

62 changes: 18 additions & 44 deletions docs/_build/html/_sources/processing_apis.md
@@ -6,15 +6,10 @@ In this chapter, we will go through some of the APIs available via the EO4EU pla

Sentinel-2 is a part of the European Space Agency's (ESA) Copernicus Program, which aims to provide comprehensive Earth observation data. It specifically refers to two satellites, Sentinel-2A and Sentinel-2B, which work in tandem to provide high-resolution optical imagery for land monitoring.

### Key Features of Sentinel-2

- High-Resolution Imagery: Sentinel-2 satellites provide images at various resolutions ranging from 10 meters to 60 meters.
- Multispectral Imaging: They carry a multispectral imager with 13 spectral bands, covering visible, near-infrared, and shortwave infrared wavelengths.
- Wide Swath Width: Each satellite has a swath width of 290 kilometers, allowing for large areas of the Earth's surface to be imaged in a single pass.
- High Revisit Frequency: The two satellites together provide a revisit time of approximately five days at the equator, ensuring up-to-date imagery.

### Importance of Sentinel-2

- Environmental Monitoring: Sentinel-2 data is crucial for monitoring various environmental parameters, including land cover changes, forest health, and agricultural productivity.
- Agriculture: Farmers and agricultural planners use Sentinel-2 imagery to monitor crop health, plan irrigation, and manage agricultural practices more efficiently.
- Disaster Management: In the event of natural disasters such as floods, wildfires, and hurricanes, Sentinel-2 provides timely data that helps in assessing damage and planning response strategies.
@@ -27,7 +22,7 @@ This API allows processing and downloading Sentinel-2 data for an ROI or a full

The API is available on [Sentinel-2 API](http://sentinel-api-test.dev.apps.eo4eu.eu/)

### Endpoints
### Sentinel-2 API Endpoints

`POST api/v1/s2l2a/roi/process`
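
As a rough illustration, a call to the ROI processing endpoint could look like the sketch below. Only the base URL and the endpoint path come from this page; the payload fields and the absence of authentication are assumptions, so the actual request schema should be taken from the API documentation.

```python
import requests

BASE_URL = "http://sentinel-api-test.dev.apps.eo4eu.eu"

# Hypothetical payload: the field names below (bbox, start_date, end_date, bands)
# are illustrative and not taken from the API documentation.
payload = {
    "bbox": [1.35, 43.55, 1.50, 43.65],  # ROI as [min_lon, min_lat, max_lon, max_lat]
    "start_date": "2024-06-01",
    "end_date": "2024-06-15",
    "bands": ["B04", "B03", "B02"],      # requested band combination
}

# Submit the processing request for the region of interest.
response = requests.post(f"{BASE_URL}/api/v1/s2l2a/roi/process", json=payload, timeout=60)
response.raise_for_status()
task = response.json()
print(task)

# The returned task could then be polled via the status endpoint listed below, e.g.:
# requests.get(f"{BASE_URL}/api/v1/task/status", params={"task_id": task.get("task_id")})
```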

Expand All @@ -43,30 +38,21 @@ Band combination

`GET api/v1/task/status`

## Leaf Area Index API
## Leaf Area Index (LAI) API

Leaf Area Index (LAI) is a crucial biophysical parameter that measures the total leaf area per unit ground area. It is typically expressed as a dimensionless ratio, representing the one-sided green leaf area in square meters per square meter of ground area (m²/m²). LAI is used to quantify the amount of leaf material in plant canopies and is essential for understanding various ecological and agricultural processes.

### Key Features of LAI

- Dimensionless Ratio: LAI is a unitless measure, as it is a ratio of areas.
- Canopy Density Indicator: LAI provides an indication of the density and structure of plant canopies, which is vital for understanding plant health and productivity.
Leaf Area Index (LAI) is a dimensionless biophysical parameter representing the total leaf area per unit ground area, specifically defined as the one-sided green leaf area per unit ground surface area. This parameter is crucial for various environmental applications:

### Importance of LAI

- Photosynthesis and Growth: LAI is directly related to the photosynthetic capacity of plants. A higher LAI typically indicates a greater leaf area available for photosynthesis, leading to increased plant growth and productivity.
- Evapotranspiration and Water Use: LAI influences the rate of transpiration and evaporation from the plant canopy. It helps in modeling water use and understanding the water balance in ecosystems.
- Carbon Cycle: LAI is a critical parameter in carbon cycle models as it affects the amount of carbon dioxide that plants absorb from the atmosphere during photosynthesis.
- Climate Models: LAI data is used in climate models to predict how vegetation interacts with the atmosphere, including the exchange of gases and energy, which affects climate patterns.
- Agricultural Management: Farmers and agronomists use LAI to monitor crop health, optimize planting densities, and manage inputs like water and nutrients more efficiently.
- Forest and Vegetation Management: LAI is used in forest management to assess forest density, health, and growth rates. It helps in making decisions regarding thinning, harvesting, and conservation practices.
- Remote Sensing Applications: Satellite sensors, such as those on Sentinel-2, can estimate LAI over large areas, providing valuable data for monitoring vegetation changes at regional to global scales.
- Plant Growth and Health: LAI serves as an indicator of plant growth and health, with higher values indicating healthy, dense vegetation, and lower values suggesting sparse or stressed vegetation. This makes LAI essential for assessing crop health and yield.
- Photosynthetic Capacity: LAI is directly related to the photosynthetic capacity of plant canopies, affecting the amount of sunlight plants capture for photosynthesis and influencing the carbon uptake of ecosystems.
- Water Balance: LAI impacts transpiration rates and the overall water balance of plants, which in turn affects local and regional hydrology.
- Climate Modeling: LAI plays a role in simulating energy exchange between the land surface and the atmosphere in climate models. It influences albedo, evapotranspiration rates, and canopy conductance, critical for accurate weather and climate predictions.
- Large-Scale Monitoring: High-resolution optical satellite images, such as Sentinel-2 and LANDSAT, can estimate LAI, allowing for large-scale monitoring of vegetation across different landscapes and time periods.

This API estimates leaf area index for a whole Sentinel-2 scene using a deep neural network.

The API is available on [LAI API](http://lai-api-test.dev.apps.eo4eu.eu/)

### Endpoints
### LAI API Endpoints

`POST api/v1/lai/process`
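
A minimal sketch of a call to this endpoint is shown below, assuming the scene to process is identified by a Sentinel-2 product name; the payload field is hypothetical and the real schema should be taken from the API documentation.

```python
import requests

BASE_URL = "http://lai-api-test.dev.apps.eo4eu.eu"

# Hypothetical payload: the field name and value format are illustrative only.
payload = {
    "product_id": "S2B_MSIL2A_20240601T104619_N0510_R051_T31TCJ_20240601T130852"
}

# Request LAI estimation for a full Sentinel-2 scene.
response = requests.post(f"{BASE_URL}/api/v1/lai/process", json=payload, timeout=60)
response.raise_for_status()
print(response.json())
```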

@@ -78,16 +64,11 @@

The Segment Anything Model (SAM) is a cutting-edge deep learning model developed by Meta AI that is designed for image segmentation tasks. Image segmentation is the process of partitioning an image into multiple segments or regions, often to simplify or change the representation of an image into something more meaningful and easier to analyze.

### Key Features of the Segment Anything Model (SAM)

- Generalization: SAM is designed to generalize well across a wide variety of images and objects without the need for task-specific fine-tuning.
- Zero-Shot Learning: SAM can perform segmentation tasks on new, unseen images without requiring additional training, making it highly versatile.
- Interactive Segmentation: Users can provide prompts such as points, boxes, or masks to guide the segmentation process interactively.
- Foundation Model: SAM serves as a foundation model that can be adapted for various downstream segmentation tasks with minimal effort.
- Large-Scale Training: SAM is trained on a vast and diverse dataset of images, which enhances its ability to handle a wide range of segmentation challenges.

### Importance of SAM

- Efficiency and Versatility: SAM’s ability to perform zero-shot segmentation means it can be applied to a variety of tasks without the need for extensive task-specific training, saving time and computational resources.
- Broad Applicability: SAM can be used in numerous fields, including medical imaging, autonomous driving, robotics, augmented reality, and more. This makes it a highly valuable tool across industries.
- Improved Accessibility: By enabling interactive segmentation, SAM allows users, even those without extensive technical expertise, to segment images accurately and efficiently.
@@ -96,7 +77,7 @@ The Segment Anything Model (SAM) is a cutting-edge deep learning model developed

The API of SAM is available on [SAM API](http://sam-api-test.dev.apps.eo4eu.eu)

### Endpoints
### SAM API Endpoints

`POST /api/v1/prompt`
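
The sketch below illustrates how a prompt-based request to this endpoint might look; only the endpoint path comes from this page, while the payload (image reference, point prompt, label) is hypothetical.

```python
import requests

BASE_URL = "http://sam-api-test.dev.apps.eo4eu.eu"

# Hypothetical prompt payload: an image reference plus a positive point prompt.
# Field names are illustrative; the real schema may differ.
payload = {
    "image_url": "https://example.com/scene.png",
    "points": [[256, 256]],  # pixel coordinates used as a prompt
    "labels": [1],           # 1 marks the point as foreground
}

response = requests.post(f"{BASE_URL}/api/v1/prompt", json=payload, timeout=120)
response.raise_for_status()
print(response.json())  # expected: one or more segmentation masks
```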

@@ -112,24 +93,17 @@

Object detection in remote sensing involves identifying and locating specific objects within images captured by satellite or aerial platforms. These objects can range from buildings, vehicles, and roads to natural features like trees, water bodies, and agricultural fields. Object detection leverages advanced algorithms, often powered by machine learning and deep learning, to analyze vast amounts of remote sensing data efficiently.

### Key Features of Object Detection in Remote Sensing

- High Spatial Resolution: Remote sensing technologies provide high-resolution images that allow for the detection of small and detailed objects.
- Multispectral and Hyperspectral Imaging: These technologies capture data across various wavelengths, enhancing the ability to distinguish between different types of objects.
- Automated Analysis: Advanced algorithms can automatically process large datasets, identifying and categorizing objects with high accuracy and speed.
- Scalability: Object detection systems can handle images covering extensive geographic areas, making it feasible to monitor large regions consistently.

### Importance of Object Detection in Remote Sensing

- Environmental Monitoring: Detecting changes in natural environments, such as deforestation, desertification, and wetland degradation, helps in managing and protecting ecosystems.
- Urban Planning: Accurate detection of buildings, roads, and other infrastructure supports effective urban planning and development, ensuring sustainable growth.
- Disaster Management: In the aftermath of natural disasters like earthquakes, floods, and hurricanes, object detection helps in assessing damage, locating survivors, and planning recovery efforts.
- Agriculture: Identifying crop types, assessing crop health, and monitoring land use changes aid in improving agricultural practices and ensuring food security.
- Security and Defense: Detecting and monitoring military assets, illegal activities (such as smuggling or unauthorized deforestation), and strategic installations enhance national security and defense operations.
- Air Traffic and Infrastructure: Detects planes, helicopters, airports, and helipads to monitor air traffic, infrastructure development, and their impact on noise pollution and local ecosystems.
- Maritime Traffic and Pollution: Identifies ships and harbors to track maritime traffic, assess port activities, monitor pollution, and manage coastal resources.
- Industrial Monitoring: Identifies storage tanks to monitor industrial areas, potential pollution sources, and manage hazardous materials.
- Transportation and Traffic Patterns: Detects large and small vehicles and roundabouts to monitor traffic patterns and plan transportation infrastructure.
- Infrastructure Maintenance: Identifies bridges to inspect infrastructure, assess connectivity, and evaluate the impact of natural disasters.
- Urban Green Spaces: Detects recreational facilities such as baseball diamonds, tennis courts, basketball courts, ground-track fields, soccer fields, and swimming pools to monitor urban green spaces.
- Economic Activities: Identifies container cranes at ports to monitor economic activities.

The object detection API is available on [Object detection API](http://od-api-test.dev.apps.eo4eu.eu)

### Endpoints
### Object Detection API Endpoints

`POST api/v1/yolov8/obb/detect`
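
A hedged sketch of a detection request is shown below; only the endpoint path is taken from this page, and the payload fields are assumptions rather than documented parameters.

```python
import requests

BASE_URL = "http://od-api-test.dev.apps.eo4eu.eu"

# Hypothetical payload: an image reference and a confidence threshold.
# Field names are illustrative; consult the API documentation for the real schema.
payload = {
    "image_url": "https://example.com/aerial_scene.png",
    "confidence": 0.25,
}

# Run oriented-bounding-box detection with the YOLOv8 OBB model.
response = requests.post(f"{BASE_URL}/api/v1/yolov8/obb/detect", json=payload, timeout=120)
response.raise_for_status()
print(response.json())  # expected: detected classes with oriented bounding boxes
```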

104 changes: 89 additions & 15 deletions docs/_build/html/_sphinx_design_static/design-tabs.js
@@ -1,27 +1,101 @@
var sd_labels_by_text = {};
// @ts-check

// Extra JS capability for selected tabs to be synced
// The selection is stored in local storage so that it persists across page loads.

/**
* @type {Record<string, HTMLElement[]>}
*/
let sd_id_to_elements = {};
const storageKeyPrefix = "sphinx-design-tab-id-";

/**
* Create a key for a tab element.
* @param {HTMLElement} el - The tab element.
* @returns {[string, string, string] | null} - The key.
*
*/
function create_key(el) {
let syncId = el.getAttribute("data-sync-id");
let syncGroup = el.getAttribute("data-sync-group");
if (!syncId || !syncGroup) return null;
return [syncGroup, syncId, syncGroup + "--" + syncId];
}

/**
* Initialize the tab selection.
*
*/
function ready() {
const li = document.getElementsByClassName("sd-tab-label");
for (const label of li) {
syncId = label.getAttribute("data-sync-id");
if (syncId) {
label.onclick = onLabelClick;
if (!sd_labels_by_text[syncId]) {
sd_labels_by_text[syncId] = [];
// Find all tabs with sync data

/** @type {string[]} */
let groups = [];

document.querySelectorAll(".sd-tab-label").forEach((label) => {
if (label instanceof HTMLElement) {
let data = create_key(label);
if (data) {
let [group, id, key] = data;

// add click event listener
// @ts-ignore
label.onclick = onSDLabelClick;

// store map of key to elements
if (!sd_id_to_elements[key]) {
sd_id_to_elements[key] = [];
}
sd_id_to_elements[key].push(label);

if (groups.indexOf(group) === -1) {
groups.push(group);
// Check if a specific tab has been selected via URL parameter
const tabParam = new URLSearchParams(window.location.search).get(
group
);
if (tabParam) {
console.log(
"sphinx-design: Selecting tab id for group '" +
group +
"' from URL parameter: " +
tabParam
);
window.sessionStorage.setItem(storageKeyPrefix + group, tabParam);
}
}

// Check if a specific tab has been selected previously
let previousId = window.sessionStorage.getItem(
storageKeyPrefix + group
);
if (previousId === id) {
// console.log(
// "sphinx-design: Selecting tab from session storage: " + id
// );
// @ts-ignore
label.previousElementSibling.checked = true;
}
}
sd_labels_by_text[syncId].push(label);
}
}
});
}

function onLabelClick() {
// Activate other inputs with the same sync id.
syncId = this.getAttribute("data-sync-id");
for (label of sd_labels_by_text[syncId]) {
/**
* Activate other tabs with the same sync id.
*
* @this {HTMLElement} - The element that was clicked.
*/
function onSDLabelClick() {
let data = create_key(this);
if (!data) return;
let [group, id, key] = data;
for (const label of sd_id_to_elements[key]) {
if (label === this) continue;
// @ts-ignore
label.previousElementSibling.checked = true;
}
window.localStorage.setItem("sphinx-design-last-tab", syncId);
window.sessionStorage.setItem(storageKeyPrefix + group, id);
}

document.addEventListener("DOMContentLoaded", ready, false);

Large diffs are not rendered by default.


0 comments on commit 86712a6
