- Configuration
- Running as a Systemd Service
- Running in Containers
- Running Using AWS Instance Profile Credentials
- Troubleshooting
The following environment variables are used to configure the gateway when running as a container or as a Systemd service.
| Name | Required? | Allowed Values | Default | Description |
|---|---|---|---|---|
| `ALLOW_DIRECTORY_LIST` | Yes | `true`, `false` | `false` | Flag enabling directory listing |
| `AWS_SIGS_VERSION` | Yes | 2, 4 | | AWS Signatures API version |
| `AWS_ACCESS_KEY_ID` | Yes | | | Access key |
| `AWS_SECRET_ACCESS_KEY` | Yes | | | Secret access key |
| `AWS_SESSION_TOKEN` | No | | | Session token |
| `S3_BUCKET_NAME` | Yes | | | Name of the S3 bucket to proxy requests to |
| `S3_REGION` | Yes | | | Region associated with the API |
| `S3_SERVER_PORT` | Yes | | | SSL/TLS port to connect to |
| `S3_SERVER_PROTO` | Yes | `http`, `https` | | Protocol used to connect to the S3 server |
| `S3_SERVER` | Yes | | | S3 host to connect to |
| `S3_STYLE` | Yes | `virtual`, `path`, `default` | `default` | The S3 host/path method. `virtual` is the method that uses DNS-style bucket+hostname:port. This is the default value. `path` is a method that appends the bucket name as the first directory in the URI's path. This method is used by many S3-compatible services. See this AWS blog article for further information. |
| `DEBUG` | No | `true`, `false` | `false` | Flag enabling AWS signatures debug output |
| `APPEND_SLASH_FOR_POSSIBLE_DIRECTORY` | No | `true`, `false` | `false` | Flag enabling the return of a 302 with a `/` appended to the path. This is independent of the behavior selected in `ALLOW_DIRECTORY_LIST` or `PROVIDE_INDEX_PAGE`. |
| `DIRECTORY_LISTING_PATH_PREFIX` | No | | | In `ALLOW_DIRECTORY_LIST=true` mode, adds the defined prefix to links |
| `DNS_RESOLVERS` | No | | | DNS resolvers (separated by single spaces) to configure NGINX with |
| `PROXY_CACHE_MAX_SIZE` | No | | `10g` | Limits cache size |
| `PROXY_CACHE_INACTIVE` | No | | `60m` | Cached data not accessed during the time specified by this parameter are removed from the cache regardless of their freshness |
| `PROXY_CACHE_VALID_OK` | No | | `1h` | Sets caching time for response codes 200 and 302 |
| `PROXY_CACHE_VALID_NOTFOUND` | No | | `1m` | Sets caching time for response code 404 |
| `PROXY_CACHE_VALID_FORBIDDEN` | No | | `30s` | Sets caching time for response code 403 |
| `PROVIDE_INDEX_PAGE` | No | `true`, `false` | `false` | Flag which returns the index page, if there is one, when requesting a directory |
| `JS_TRUSTED_CERT_PATH` | No | | | Enables the `js_fetch_trusted_certificate` directive when retrieving AWS credentials and sets the path (on the container) to the trusted certificate |
| `HEADER_PREFIXES_TO_STRIP` | No | | | A list of HTTP header prefixes that excludes headers from client responses. The list should be specified in lower case, with a semicolon (`;`) used as the delimiter between values. For example: `x-goog-;x-something-` |
| `CORS_ENABLED` | No | `true`, `false` | `false` | Flag that enables CORS headers on GET requests and enables pre-flight OPTIONS requests. If enabled, this will add CORS headers for "fully open" cross-domain requests by default, meaning all domains are allowed, similar to the settings shown in this example. CORS settings can be fine-tuned by overwriting the `cors.conf.template` file. |
| `CORS_ALLOWED_ORIGIN` | No | | `*` | Value to be returned from the CORS `Access-Control-Allow-Origin` header. This value is only used if CORS is enabled. |
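To make the table concrete, here is a minimal sketch of a settings file that could be passed to `docker run --env-file` or copied to `/etc/nginx/environment`. Every value below is a hypothetical placeholder, not a recommendation:

```shell
# Minimal example settings file -- all values are placeholders
S3_BUCKET_NAME=my-example-bucket
S3_SERVER=s3.us-east-1.amazonaws.com
S3_SERVER_PORT=443
S3_SERVER_PROTO=https
S3_REGION=us-east-1
S3_STYLE=virtual
AWS_SIGS_VERSION=4
AWS_ACCESS_KEY_ID=AKIAEXAMPLE
AWS_SECRET_ACCESS_KEY=example-secret-key
ALLOW_DIRECTORY_LIST=false
```

Note that Docker's `--env-file` format is plain `KEY=value` lines with no quoting or `export` keywords, so a file in this form works both for Docker and for sourcing in a shell.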
If you are using AWS instance profile credentials, you will need to omit the `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN` variables from the configuration.
When running with Docker, the above environment variables can be set in a file passed with the `--env-file` flag. When running as a Systemd service, the environment variables are specified in the `/etc/nginx/environment` file. An example of the format of the file can be found in the `settings.example` file.
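For reference, one way a Systemd unit can pick up such a file is the `EnvironmentFile=` directive. The drop-in below is a hypothetical sketch; the unit name and paths are assumptions, and the install script may wire this up differently:

```ini
# /etc/systemd/system/nginx.service.d/s3-gateway.conf -- hypothetical drop-in
[Service]
EnvironmentFile=/etc/nginx/environment
```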
There are a few optional environment variables that can be used:

- `AWS_ROLE_SESSION_NAME` - (optional) The value will be used for the Role Session Name. The default value is `nginx-s3-gateway`.
- `STS_ENDPOINT` - (optional) Overrides the STS endpoint to be used in applicable setups. This is not required when running on EKS. See the EKS portion of the guide below for more details.
- `AWS_STS_REGIONAL_ENDPOINTS` - (optional) Allows a regional STS endpoint to be selected. When the regional model is selected, the generated STS endpoint is scoped to the current AWS region. This environment variable is ignored if `STS_ENDPOINT` is set. Valid options are `global` (default) or `regional`.
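As an illustration of the difference between the two models, these are the endpoint hostname forms AWS documents for STS. The region value below is a made-up example; the exact string the gateway generates is defined in its source:

```shell
# Illustration of the two AWS_STS_REGIONAL_ENDPOINTS models
AWS_REGION=us-west-2                                    # hypothetical region
sts_global="https://sts.amazonaws.com"                  # global (default) model
sts_regional="https://sts.${AWS_REGION}.amazonaws.com"  # regional model
echo "$sts_regional"
```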
Listing of S3 directories (folders) is supported when the `ALLOW_DIRECTORY_LIST` environment variable is set to `true`. Directory listing output can be customized by changing the XSL stylesheet: `common/etc/nginx/include/listing.xsl`.
If you are not using AWS S3 as your backend, you may see some inconsistency in how directory listing behaves with HEAD requests. Additionally, due to limitations in proxy response processing, invalid S3 folder requests will result in log messages like:

```
libxml2 error: "Extra content at the end of the document"
```

Another limitation is that when using v2 signatures with HEAD requests, the gateway will not return 200 for valid folders.
The gateway can be configured to prefix all list results with a given string. This is useful if you are proxying the gateway itself and wish to relocate the path of the files returned from the listing. Using the `DIRECTORY_LISTING_PATH_PREFIX` environment variable will allow one to add that prefix to the listing page's header and links.

For example, if one sets `DIRECTORY_LISTING_PATH_PREFIX='main/'` and then uses HAProxy to proxy the gateway with the `http-request set-path %[path,regsub(^/main,/)]` setting, the architecture will look like the following:
When the `PROVIDE_INDEX_PAGE` environment variable is set to `true`, the gateway will transform `/some/path/` to `/some/path/index.html` when retrieving from S3. The default of `index.html` can be edited in `s3gateway.js`.

If `APPEND_SLASH_FOR_POSSIBLE_DIRECTORY` is set, the gateway will also redirect `/some/path` to `/some/path/` when S3 returns 404 on `/some/path`. The path has to look like a possible directory: it must not start with a `.` and must not have an extension.
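The "looks like a possible directory" rule above can be sketched as a small shell check. This is an illustration of the heuristic as described, not the gateway's actual njs implementation, and `looks_like_directory` is a name made up here:

```shell
# Succeeds when a path could plausibly be a directory per the rule above:
# the final segment must not start with "." and must not have an extension.
looks_like_directory() {
  last=${1##*/}        # strip everything up to and including the final "/"
  case "$last" in
    .*)  return 1 ;;   # starts with a dot
    *.*) return 1 ;;   # a dot suggests a file extension
    *)   return 0 ;;
  esac
}

looks_like_directory /some/path && echo "could be a directory"
looks_like_directory /some/file.txt || echo "treated as a file"
```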
An install script for the gateway shows how to install NGINX from a package repository, check out the gateway source, and configure it using the supplied environment variables.

To run the script, copy it to your destination system, load the environment variables mentioned in the configuration section into memory, and then execute the script. The script takes one optional parameter that specifies the name of the branch to download files from.
For example:
```shell
sudo env $(cat settings.example) ./standalone_ubuntu_oss_install.sh
```
The latest builds of the gateway (that use open source NGINX) are available on the project's GitHub package repository.
To run with the public open source image, replace the `settings` file specified below with a file containing your settings, and run the following command:

```shell
docker run --env-file ./settings --publish 80:80 --name nginx-s3-gateway \
  ghcr.io/nginxinc/nginx-s3-gateway/nginx-oss-s3-gateway:latest
```
If you would like to run with the latest njs version, run:

```shell
docker run --env-file ./settings --publish 80:80 --name nginx-s3-gateway \
  ghcr.io/nginxinc/nginx-s3-gateway/nginx-oss-s3-gateway:latest-njs-oss
```
Alternatively, if you would like to pin your version to a specific point-in-time release, find the version with an embedded date and run:

```shell
docker run --env-file ./settings --publish 80:80 --name nginx-s3-gateway \
  ghcr.io/nginxinc/nginx-s3-gateway/nginx-oss-s3-gateway:latest-njs-oss-20220310
```
In order to build the NGINX OSS container image, run `docker build` as follows from the project root directory:

```shell
docker build --file Dockerfile.oss --tag nginx-s3-gateway:oss --tag nginx-s3-gateway .
```
Alternatively, if you would like to use the latest version of njs, you can build an image from the latest njs source by building this image after building the parent image above:

```shell
docker build --file Dockerfile.oss --tag nginx-s3-gateway --tag nginx-s3-gateway:latest-njs-oss .
```
After building, you can run the image by issuing the following command, replacing the path to the `settings` file with a file containing your specific environment variables:

```shell
docker run --env-file ./settings --publish 80:80 --name nginx-s3-gateway \
  nginx-s3-gateway:oss
```
In the same way, if you want to use the NGINX OSS container image as a non-root, unprivileged user, you can build it as follows:

```shell
docker build --file Dockerfile.unprivileged --tag nginx-s3-gateway --tag nginx-s3-gateway:unprivileged-oss .
```
And run the image, binding container port 8080 to port 80 on the host:

```shell
docker run --env-file ./settings --publish 80:8080 --name nginx-s3-gateway \
  nginx-s3-gateway:unprivileged-oss
```
It is worth noting that due to the way the startup scripts work, even the unprivileged container will not work with a read-only root filesystem or with a specific uid/gid set other than the default of `101`.
In order to build the NGINX Plus container image, copy your NGINX Plus repository keys (`nginx-repo.crt` and `nginx-repo.key`) into the `plus/etc/ssl/nginx` directory before building.
If you are using a version of Docker that supports Buildkit, then you can build the image as follows in order to prevent your private keys from being stored in the container image.
To build, run the following from the project root directory:
```shell
DOCKER_BUILDKIT=1 docker build \
  --file Dockerfile.buildkit.plus \
  --tag nginx-plus-s3-gateway --tag nginx-plus-s3-gateway:plus \
  --secret id=nginx-crt,src=plus/etc/ssl/nginx/nginx-repo.crt \
  --secret id=nginx-key,src=plus/etc/ssl/nginx/nginx-repo.key \
  --squash .
```
Otherwise, if you don't have Buildkit available, then build as follows. If you want to remove the private keys from the image, then you may need to do a post-build squash operation using a utility like docker-squash.
```shell
docker build --file Dockerfile.plus --tag nginx-plus-s3-gateway --tag nginx-plus-s3-gateway:plus .
```
Alternatively, if you would like to use the latest version of njs with NGINX Plus, you can build an image from the latest njs source by building this image after building the parent image above:
```shell
docker build --file Dockerfile.plus --tag nginx-plus-s3-gateway --tag nginx-plus-s3-gateway:latest-njs-plus .
```
After building, you can run the image by issuing the following command, replacing the path to the `settings` file with a file containing your specific environment variables:

```shell
docker run --env-file ./settings --publish 80:80 --name nginx-plus-s3-gateway \
  nginx-plus-s3-gateway:plus
```
AWS instance profiles allow you to assign a role to a compute instance so that other AWS services can trust the instance without authentication keys having to be stored on it. This is useful for the gateway because it allows us to run the gateway without storing an unchanging `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN` in a file on disk or in an easily read environment variable.
Instance profiles work by providing credentials to the instance via the AWS Metadata API. When the API is queried, it provides the keys allowed to the instance. Those keys regularly expire, so services using them must refresh frequently.
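For reference, the credentials document served by the metadata API (at the well-known path `/latest/meta-data/iam/security-credentials/<role-name>` on `169.254.169.254`) carries `AccessKeyId`, `SecretAccessKey`, `Token`, and `Expiration` fields. The sketch below parses a hypothetical payload with POSIX tools instead of making a live call; `json_field` is a helper made up for this example:

```shell
# Hypothetical credentials payload; a real query would hit
# http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>
creds='{"AccessKeyId":"AKIAEXAMPLE","SecretAccessKey":"example-secret","Token":"example-token","Expiration":"2030-01-01T00:00:00Z"}'

# Extract a single string field without assuming jq is installed
json_field() {
  printf '%s' "$2" | sed -n "s/.*\"$1\":\"\([^\"]*\)\".*/\1/p"
}

access_key=$(json_field AccessKeyId "$creds")
expiration=$(json_field Expiration "$creds")
echo "$access_key expires $expiration"
```

The `Expiration` field is why consumers must refresh: once it passes, the keys stop working and a fresh document must be fetched.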
Following the AWS documentation, we can create an IAM role and launch an instance associated with it. On that instance, if we run the gateway as a Systemd service, there are no additional steps: we just run the install script without specifying the `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN` environment variables.
However, if we want to run the gateway as a container instance on that EC2 instance, then we will need to run the following command using the AWS CLI tool to allow the metadata endpoint to be accessed from within a container.
```shell
aws ec2 modify-instance-metadata-options --instance-id <instance id> \
  --http-put-response-hop-limit 3 --http-endpoint enabled
```
After that has been run, we can start the container normally and omit the `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN` environment variables.
The commands below all reference the `deployments/ecs/cloudformation/s3gateway.yaml` file. This file will need to be modified.
1. Update the following four parameters in the `Parameters` section of the CloudFormation file for your specific AWS account:
   - `NewBucketName` - any S3 bucket name. Remember that S3 bucket names must be globally unique
   - `VpcId` - any VPC ID on your AWS account
   - `Subnet1` - any subnet ID in the VPC used above
   - `Subnet2` - any subnet ID in the VPC used above
2. Run the following command to deploy the stack (this assumes you have the AWS CLI and credentials set up correctly on your host machine and that you are running it from the project root directory):

   ```shell
   aws cloudformation create-stack \
     --stack-name nginx-s3-gateway \
     --capabilities CAPABILITY_NAMED_IAM \
     --template-body file://deployments/ecs/cloudformation/s3gateway.yaml
   ```
3. Wait for the CloudFormation stack deployment to complete (this can take about 3-5 minutes). You can query the stack status with this command:

   ```shell
   aws cloudformation describe-stacks \
     --stack-name nginx-s3-gateway \
     --query "Stacks[0].StackStatus"
   ```
4. Wait until the query above shows `"CREATE_COMPLETE"`.
5. Run the following command to get the URL used to access the service:

   ```shell
   aws cloudformation describe-stacks \
     --stack-name nginx-s3-gateway \
     --query "Stacks[0].Outputs[0].OutputValue"
   ```

6. Upload a file to the bucket first to prevent getting a `404` when visiting the URL in your browser:

   ```shell
   # i.e. aws s3 cp README.md s3://<bucket_name>
   ```
7. View the container logs in CloudWatch from the AWS web console.

8. Run the following command to delete the stack and all resources:

   ```shell
   aws cloudformation delete-stack \
     --stack-name nginx-s3-gateway
   ```
If you are planning to use the container image on an EKS cluster, you can use a service account which can assume a role using the AWS Security Token Service.

1. Create a new AWS IAM OIDC Provider. If you are using an AWS EKS cluster, the IAM OIDC Provider should already have been created as part of cluster creation, so validate it before you create a new one.
2. Configure a Kubernetes service account to assume an IAM role.
3. Annotate the service account with the IAM role created in the step above.
4. Configure your pods, Deployments, etc. to use the service account.
5. As soon as the pods/deployments are updated, you will see the following environment variables in the pods:
   - `AWS_ROLE_ARN` - Contains the IAM role ARN
   - `AWS_WEB_IDENTITY_TOKEN_FILE` - Contains the token which will be used to create temporary credentials using the AWS Security Token Service.
6. You must also set the `AWS_REGION` and `JS_TRUSTED_CERT_PATH` environment variables as shown below, in addition to the normal environment variables listed in the Configuration section.
The following is a minimal set of resources to deploy:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-s3-gateway
  annotations:
    eks.amazonaws.com/role-arn: "<role-arn>"
    # See https://docs.aws.amazon.com/eks/latest/userguide/configure-sts-endpoint.html
    eks.amazonaws.com/sts-regional-endpoints: "true"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-s3-gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-s3-gateway
  template:
    metadata:
      labels:
        app: nginx-s3-gateway
    spec:
      serviceAccountName: nginx-s3-gateway
      containers:
        - name: nginx-s3-gateway
          image: "ghcr.io/nginxinc/nginx-s3-gateway/nginx-oss-s3-gateway:latest-20220916"
          imagePullPolicy: IfNotPresent
          env:
            - name: S3_BUCKET_NAME
              value: "<bucket>"
            - name: S3_SERVER
              value: "s3.<aws region>.amazonaws.com"
            - name: S3_SERVER_PROTO
              value: "https"
            - name: S3_SERVER_PORT
              value: "443"
            - name: S3_STYLE
              value: "virtual"
            - name: S3_REGION
              value: "<aws region>"
            - name: AWS_REGION
              value: "<aws region>"
            - name: AWS_SIGS_VERSION
              value: "4"
            - name: ALLOW_DIRECTORY_LIST
              value: "false"
            - name: PROVIDE_INDEX_PAGE
              value: "false"
            - name: JS_TRUSTED_CERT_PATH
              value: "/etc/ssl/certs/Amazon_Root_CA_1.pem"
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /health
              port: http
          readinessProbe:
            httpGet:
              path: /health
              port: http
```
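The Deployment above is only reachable from inside the cluster once a Service selects its pods. A minimal sketch of such a Service follows; the name and port mapping here are assumptions to adjust for your environment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-s3-gateway
spec:
  selector:
    app: nginx-s3-gateway
  ports:
    - name: http
      port: 80
      targetPort: http
```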
The default behavior of the container is to return a `404` error message for any non-`200` response code. This is implemented as a security feature to sanitize any error response from the S3 bucket being proxied. For container debugging purposes, this sanitization can be turned off by commenting out the following lines within `default.conf.template`:

```nginx
proxy_intercept_errors on;
error_page 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 420 422 423 424 426 428 429 431 444 449 450 451 500 501 502 503 504 505 506 507 508 509 510 511 =404 @error404;
```
The REST authentication method used in this container does not work with AWS IAM roles that have MFA enabled for authentication. Please use AWS IAM role credentials that do not have MFA enabled.