This Guidance demonstrates how to process remote sensing imagery using machine learning models that automatically detect and identify objects collected from satellites, unmanned aerial vehicles, and other remote sensing devices. Satellite images are often significantly larger than standard media files. This Guidance deploys highly scalable, highly available image processing services that support images of this size. These services collect, process, and analyze the images efficiently, giving you more time to assess and respond to what you discover in your imagery.
- Installation
- Deployment
- Model Runner Usage
- Supporting OSML Repositories
- Useful Commands
- Troubleshooting
- Support & Feedback
- Security
- License
If on a Mac without NPM/Node.js version 18 installed, run:

```sh
brew install npm
brew install node@18
```

Alternatively, NPM/Node.js can be installed through NVM:

```sh
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.5/install.sh | bash
source ~/.bash_profile
nvm install 18
```
If on a Mac without git-lfs installed, run:

```sh
brew install git-lfs
```

Otherwise, consult the official git-lfs installation documentation.

Clone the repository and pull LFS files for deployment:

```sh
git clone https://github.com/aws-solutions-library-samples/guidance-for-processing-overhead-imagery-on-aws.git
cd guidance-for-processing-overhead-imagery-on-aws
git-lfs pull
```
A bootstrap script is available in `./scripts/ec2_bootstrap_ubuntu.sh` that automatically installs all the dependencies needed to deploy the OSML demo on an Ubuntu EC2 instance. This requires an EC2 instance with internet connectivity. Insert the script into the EC2 User Data during instance configuration, or run it as root once the instance is running.
Known good configuration for the EC2 instance:

- Ubuntu 22.04 LTS (ami-08116b9957a259459)
- Instance type: t3.medium
- 50 GiB gp2 root volume
1. Create an AWS account.

2. Pull your latest credentials into `~/.aws/credentials` and run `aws configure`, following the prompts to set your default region.

3. Update the deployment configuration you want per the deployment guidance.

4. Optional: if you want to enable authentication, see Enabling Authentication in this README.

5. Go into the `guidance-for-processing-overhead-imagery-on-aws` directory and install the npm packages:

   ```sh
   npm i
   ```

6. If this is your first time deploying stacks to your account, run the following commands; if not, skip this step:

   ```sh
   npm install -g aws-cdk
   cdk synth
   cdk bootstrap
   ```

7. Make sure Docker is running on your machine:

   ```sh
   dockerd
   ```

8. Then deploy the stacks to your commercial account:

   ```sh
   npm run deploy
   ```

9. If you want to validate the deployment with integration tests:

   ```sh
   npm run setup
   npm run integ
   ```

10. When you are done, you can clean up the deployment:

    ```sh
    npm run destroy
    ```
By default, this package uses the `osml-cdk-constructs` package published to NPM. If you wish to make changes to the `lib/osml-cdk-constructs` submodule in this project and use those changes when deploying, follow these steps to swap the remote NPM package for the local package.
1. Pull down the submodules for development:

   ```sh
   git submodule update --recursive --remote
   git-lfs clone --recurse-submodules
   ```

   If you want to pull subsequent changes to the submodule packages, run:

   ```sh
   git submodule update --init --recursive
   ```

2. In `package.json`, locate `osml-cdk-constructs` under `devDependencies`. By default, it points to the latest NPM package version; swap the version number out for `"file:lib/osml-cdk-constructs"`. This tells `package.json` to use the local package instead. The dependency will now look like this:

   ```json
   "osml-cdk-constructs": "file:lib/osml-cdk-constructs",
   ```

3. Then cd into the `lib/osml-cdk-constructs` directory:

   ```sh
   cd lib/osml-cdk-constructs
   ```

4. Execute `npm i; npm run build` to make sure everything is installed and building correctly.

5. You can now follow the normal deployment steps to deploy your local changes to `osml-cdk-constructs`.
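The `package.json` edit above can also be scripted. A minimal Python sketch (the dependency name and file path match this repository; the helper function is mine):

```python
import json
from pathlib import Path

def use_local_constructs(package_json: str = "package.json") -> str:
    """Rewrite the osml-cdk-constructs devDependency to point at the local submodule."""
    path = Path(package_json)
    pkg = json.loads(path.read_text())
    pkg["devDependencies"]["osml-cdk-constructs"] = "file:lib/osml-cdk-constructs"
    path.write_text(json.dumps(pkg, indent=2) + "\n")
    return pkg["devDependencies"]["osml-cdk-constructs"]
```

Run `npm i` afterwards so the local path gets linked into `node_modules`.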
To start a job, place an `ImageRequest` on the `ImageRequestQueue`: in the AWS Console, go to Simple Queue Service > `ImageRequestQueue` > Send and receive messages, and enter the provided sample `ImageRequest`:
Sample ImageRequest:

```json
{
  "jobId": "<job_id>",
  "jobName": "<job_name>",
  "imageUrls": ["<image_url>"],
  "outputs": [
    {"type": "S3", "bucket": "<result_bucket_name>", "prefix": "<job_name>/"},
    {"type": "Kinesis", "stream": "<result_stream_name>", "batchSize": 1000}
  ],
  "imageProcessor": {"name": "<sagemaker_endpoint_name>", "type": "SM_ENDPOINT"},
  "imageProcessorTileSize": 512,
  "imageProcessorTileOverlap": 32,
  "imageProcessorTileFormat": "< NITF | JPEG | PNG | GTIFF >",
  "imageProcessorTileCompression": "< NONE | JPEG | J2K | LZW >"
}
```
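The tile format and compression fields accept only the enumerated values shown above. A quick sanity check before submitting a request can catch typos early; this validation sketch is illustrative and not part of OSML (the field names come from the sample above, the helper is mine):

```python
# Allowed values taken from the sample ImageRequest above.
VALID_TILE_FORMATS = {"NITF", "JPEG", "PNG", "GTIFF"}
VALID_TILE_COMPRESSIONS = {"NONE", "JPEG", "J2K", "LZW"}
REQUIRED_KEYS = ("jobId", "jobName", "imageUrls", "outputs", "imageProcessor")

def validate_image_request(request: dict) -> list:
    """Return a list of problems found in an ImageRequest (empty means it looks OK)."""
    problems = [f"missing required key: {key}" for key in REQUIRED_KEYS if key not in request]
    if request.get("imageProcessorTileFormat") not in VALID_TILE_FORMATS:
        problems.append("imageProcessorTileFormat must be one of: " + ", ".join(sorted(VALID_TILE_FORMATS)))
    if request.get("imageProcessorTileCompression") not in VALID_TILE_COMPRESSIONS:
        problems.append("imageProcessorTileCompression must be one of: " + ", ".join(sorted(VALID_TILE_COMPRESSIONS)))
    return problems
```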
Below are additional details about each key-value pair in the image request:
| key | value | type | details |
| --- | --- | --- | --- |
| jobId | `<job_id>` | string | Unique ID for the job, e.g. `testId1` |
| jobName | `<job_name>` | string | Name of the job, e.g. `jobtest-testId1` |
| imageUrls | `["<image_url>"]` | list[string] | List of S3 image paths, which can be found by going to your S3 bucket, e.g. `s3://test-images-0123456789/tile.tif` |
| outputs | `{"type": "S3", "bucket": "<result_bucket_name>", "prefix": "<job_name>/"}, {"type": "Kinesis", "stream": "<result_stream_name>", "batchSize": 1000}` | list[dict] | Once OSML has processed an image request, it writes its GeoJSON results to two services, S3 and Kinesis. Both are defined in the `osml-cdk-constructs` package. e.g. `"bucket": "test-results-0123456789"` and `"stream": "test-stream-0123456789"` |
| imageProcessor | `{"name": "<sagemaker_endpoint_name>", "type": "SM_ENDPOINT"}` | dict[string, string] | The model to run your image request against. Find the list of models in AWS Console > SageMaker Console > Inference (left sidebar) > Endpoints, and copy the name of an endpoint, e.g. `aircraft` |
| imageProcessorTileSize | 512 | integer | Width and height, in pixels, of the tiles the image is split into, e.g. `512` |
| imageProcessorTileOverlap | 32 | integer | How many pixels adjacent tiles overlap, e.g. `32` |
| imageProcessorTileFormat | NITF / JPEG / PNG / GTIFF | string | Tile format to use for tiling; one of the 4 formats, e.g. `GTIFF` |
| imageProcessorTileCompression | NONE / JPEG / J2K / LZW | string | Compression used for the tiles; one of the 4 options, e.g. `NONE` |
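To get a feel for how tile size and overlap interact: adjacent tiles advance by `size − overlap` pixels, so the tile count per dimension grows accordingly. The arithmetic below illustrates that relationship; it is not OSML's actual tiling code:

```python
import math

def tiles_per_dimension(image_pixels: int, tile_size: int = 512, overlap: int = 32) -> int:
    """Approximate tile count along one image dimension.
    Each new tile advances by (tile_size - overlap) pixels."""
    stride = tile_size - overlap
    return max(1, math.ceil((image_pixels - overlap) / stride))

# A 10240 x 10240 image with 512-pixel tiles and 32-pixel overlap:
per_side = tiles_per_dimension(10240)
print(per_side * per_side, "tiles in total")  # 22 x 22 = 484
```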
Here is an example of a complete image request:

Example ImageRequest:

```json
{
  "jobId": "testid1",
  "jobName": "jobtest-testid1",
  "imageUrls": [ "s3://test-images-0123456789/tile.tif" ],
  "outputs": [
    { "type": "S3", "bucket": "test-results-0123456789", "prefix": "jobtest-testid1/" },
    { "type": "Kinesis", "stream": "test-stream-0123456789", "batchSize": 1000 }
  ],
  "imageProcessor": { "name": "aircraft", "type": "SM_ENDPOINT" },
  "imageProcessorTileSize": 512,
  "imageProcessorTileOverlap": 32,
  "imageProcessorTileFormat": "GTIFF",
  "imageProcessorTileCompression": "NONE"
}
```
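Instead of pasting the JSON into the console, the same request can be queued programmatically with boto3. A sketch, assuming the queue is named `ImageRequestQueue` as shown above and your AWS credentials are configured:

```python
import json

# The example request from above, verbatim.
IMAGE_REQUEST = {
    "jobId": "testid1",
    "jobName": "jobtest-testid1",
    "imageUrls": ["s3://test-images-0123456789/tile.tif"],
    "outputs": [
        {"type": "S3", "bucket": "test-results-0123456789", "prefix": "jobtest-testid1/"},
        {"type": "Kinesis", "stream": "test-stream-0123456789", "batchSize": 1000},
    ],
    "imageProcessor": {"name": "aircraft", "type": "SM_ENDPOINT"},
    "imageProcessorTileSize": 512,
    "imageProcessorTileOverlap": 32,
    "imageProcessorTileFormat": "GTIFF",
    "imageProcessorTileCompression": "NONE",
}

def send_image_request(queue_name: str, request: dict) -> str:
    """Serialize the request and place it on the SQS queue; returns the SQS message ID."""
    import boto3  # AWS SDK for Python; needs configured credentials
    queue = boto3.resource("sqs").get_queue_by_name(QueueName=queue_name)
    return queue.send_message(MessageBody=json.dumps(request))["MessageId"]

# send_image_request("ImageRequestQueue", IMAGE_REQUEST)
```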
Here is some useful information about each of the OSML components:
- osml-cdk-constructs
- osml-cesium-globe
- osml-imagery-toolkit
- osml-model-runner
- osml-model-runner-test
- osml-models
- osml-tile-server
- osml-tile-server-test
- osml-data-intake
- `npm run build` compile typescript to js
- `npm run watch` watch for changes and compile
- `npm run deploy` deploy all stacks to your account
- `npm run integ` run integration tests against deployment
- `npm run clean` clean up build files and node modules
- `npm run synth` synthesize CloudFormation templates for deployment
This is a list of common problems / errors to help with troubleshooting:
If you encounter an issue where the deployment reports this error:

```
'MemorySize' value failed to satisfy constraint: Member must have value less than or equal to 3008
```

The restriction stems from a limit on your AWS account. To address it, go to your AWS account and request a quota increase:

1. Go to Service Quotas.
2. Select `AWS Services` in the left sidebar.
3. Find and select `AWS Lambda`.
4. Select `Concurrent executions`.
5. Click `Request increase at account-level` in the top-right corner.
6. Find the `Increase quota value` section and increase it to `1000`.
7. Submit the request.

This process may require up to 24 hours to complete. To access further details regarding this matter, please visit AWS Lambda Memory Quotas and AWS Service Quotas.
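The same increase can be requested through the Service Quotas API. A hedged boto3 sketch; the quota code `L-B99A9384` is what I believe identifies Lambda's `Concurrent executions` quota, so verify it (for example via `list_service_quotas`) before submitting:

```python
def lambda_concurrency_increase(desired: float = 1000.0) -> dict:
    """Build parameters for a Service Quotas increase request.
    QuotaCode L-B99A9384 is assumed to be Lambda's 'Concurrent executions' quota."""
    return {
        "ServiceCode": "lambda",
        "QuotaCode": "L-B99A9384",
        "DesiredValue": desired,
    }

# To submit (requires AWS credentials and Service Quotas permissions):
# import boto3
# boto3.client("service-quotas").request_service_quota_increase(**lambda_concurrency_increase())
```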
If you are facing a permission-denied issue when you try to run `git submodule update --init --recursive`, ensure that you have an SSH key set up.
If you are facing this error while trying to execute `npm run deploy`, it indicates that Docker is running out of memory and needs additional RAM. You can increase the memory by completing the following steps:

1. Open the Docker UI.
2. Click the `Settings` gear icon in the top-right corner.
3. Click `Resources` in the left sidebar menu.
4. Click `Advanced` in the left sidebar menu.
5. Find `Memory` and adjust it to 12 GB.
If running `npm i` leads to the error:

```
error TS2307: Cannot find module 'osml-cdk-constructs' or its corresponding type declarations.
```

execute the following command and try again:

```sh
npm install osml-cdk-constructs
```
If you encounter the following error during deployment of the OSML-DCDataplane stack:

```
OSML-DCDataplane failed: Error: The stack named OSML-DCDataplane failed creation, it may need to be manually deleted
from the AWS console: ROLLBACK_COMPLETE: Resource handler returned message: "Invalid request provided: Before you can
proceed, you must enable a service-linked role to give Amazon OpenSearch Service permissions to access your VPC.
(Service: OpenSearch, Status Code: 400, Request ID: 11ab9b5f-b59b-418a-9f89-98b1700bd248)"
```

it indicates that the deployment could not proceed because the service-linked role that Amazon OpenSearch Service requires to access your VPC is not enabled. This is actually an issue with a dependency on the custom CloudFormation resources used to provision the role; see link.

Resolution:

- Re-run the deployment: simply re-running your deployment should resolve the issue, as the service-linked role is enabled automatically during the subsequent deployment attempt.
To post feedback, submit feature ideas, or report bugs, please use the Issues section of this GitHub repo.
If you are interested in contributing to OversightML Model Runner, see the CONTRIBUTING guide.
See CONTRIBUTING for more information.
This project is licensed under the MIT No Attribution License. See LICENSE.