Greenwood projects can be deployed to [**AWS**](https://aws.amazon.com/) for static hosting ([**S3**](https://aws.amazon.com/s3/) / [**CloudFront**](https://aws.amazon.com/cloudfront/)) and dynamic serverless hosting of SSR pages and API routes (on [**Lambda**](https://aws.amazon.com/lambda/)). Although static hosting is fairly trivial, for full-stack applications, or when using additional AWS services to complement your application, we recommend an [IaC (Infrastructure as Code)](https://en.wikipedia.org/wiki/Infrastructure_as_code) tool, as we will demonstrate later in this guide.
> You can see a complete hybrid project example in our [demonstration repo](https://github.com/ProjectEvergreen/greenwood-demo-adapter-aws).
## Static Hosting
If you only need static hosting (SSG, SPA), then you may benefit from just a little manual configuration to set up an S3 bucket and CloudFront distribution for your project.
1. Configure S3 by following [this guide](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.SimpleDistribution.html)
1. Once you have followed those steps, run `greenwood build` in your project and upload the contents of the _public/_ directory to the bucket (see the CLI example after this list).
1. Finally, set up CloudFront to use the bucket as an origin by following [these steps](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.SimpleDistribution.html#GettingStartedCreateDistribution).
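
If you prefer the AWS CLI over the console for the upload in step 2, a minimal sketch could look like the following (the bucket name is a placeholder, and it assumes an npm `build` script that runs `greenwood build`):

```shell
# build the site, then sync the static output to your (placeholder) bucket
npm run build
aws s3 sync ./public s3://your-bucket-name --delete
```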
You should now be able to access your site at _http://{your-dist}.cloudfront.net/_! 🏆

At this point, if you have any routes like `/search/`, you'll notice they do not work unless _index.html_ is appended to the path. To enable routing (URL rewriting) for cleaner URLs, follow the _Configure Trigger_ section of [this guide](https://aws.amazon.com/blogs/compute/implementing-default-directory-indexes-in-amazon-s3-backed-amazon-cloudfront-origins-using-lambdaedge/) to attach the Lambda as a [**Lambda@Edge**](https://aws.amazon.com/lambda/edge/) function that will run on every incoming request.

> Keep an eye out for prompts from AWS to enable IAM rules for your function and make sure to invalidate the CloudFront distribution between tests, since error pages / responses will get cached.

Below is a sample Edge function for doing the rewrites:
<!-- prettier-ignore-start -->

```js
exports.handler = (event, context, callback) => {
  const { request } = event.Records[0].cf;

  // re-write "clean" URLs to have index.html appended
  // to support routing for CloudFront <> S3
  if (request.uri.endsWith("/")) {
    request.uri = `${request.uri}index.html`;
  }

  callback(null, request);
};
```

<!-- prettier-ignore-end -->
> At this point, you'll probably want to use Route 53 to [put your domain in front of your CloudFront distribution](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-cloudfront-distribution.html).
## Serverless
If your Greenwood project has SSR pages and / or API routes that you would like to deploy to AWS Lambda functions, our recommendation is to install [our adapter plugin](https://github.com/ProjectEvergreen/greenwood/tree/master/packages/plugin-adapter-aws) and add it to your _greenwood.config.js_; at build time it will generate Lambda-compatible function code for all your dynamic pages and routes.
Just like Greenwood has its own [standard build output](/docs/reference/appendix/#build-output), this plugin also generates its own standard adapter output tailored for Lambda, so as to provide a consistent starting point for integrating with your preferred deployment tool.
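
As a rough sketch, enabling the adapter could look something like the snippet below; the import name here is an assumption based on Greenwood's usual adapter plugin naming convention, so check the plugin's README for the exact export and any options.

```js
// greenwood.config.js
// minimal sketch — the factory name below is assumed from Greenwood's
// adapter plugin naming convention; verify it against the plugin's README
import { greenwoodPluginAdapterAws } from "@greenwood/plugin-adapter-aws";

export default {
  plugins: [greenwoodPluginAdapterAws()],
};
```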
The adapted functions will be output to a folder called _.aws-output_ with the following two folders:
- `api/` - All API routes will be in this folder, with one folder per route
- `routes/` - All SSR pages will be in this folder, with one folder per route

Here is an example of what the structure of this folder looks like, as a directory listing:
```shell
.aws-output/
  api/
    event/
      event.js
      index.js
      package.json
    search/
      ...
  routes/
    admin/
      ...
    products/
      index.js
      package.json
      products.route.chunk.jgsTuvlz.js
      products.route.js
```
For **_each_** of the folders in the `api` or `routes` directories, deployment can be as simple as creating a zip file per folder / route, or pointing your IaC tooling at those output folders, as we'll get into in the next section.
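
For the manual zip route, packaging a couple of the adapted folders might look roughly like this (the route names are just the ones from the listing above):

```shell
# zip one adapted API route and one adapted SSR page for manual upload to Lambda
(cd .aws-output/api/search && zip -r ../../search-api.zip .)
(cd .aws-output/routes/products && zip -r ../../products-route.zip .)
```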
### SST (IaC) Example
Given the nature of AWS hosting and the plethora of related services that you can use to complement your application, the Greenwood AWS adapter is specifically designed to output standalone, Lambda-compatible functions, one per folder, that can be plugged into any IaC tool (or zipped up and deployed manually, if you prefer).

While there are many options for IaC tooling, [**SST**](https://sst.dev/) is a very powerful tool that lets you define your entire AWS infrastructure with TypeScript, combining as few or as many AWS services as you may need.
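
Below is a simple sketch of wiring one of the adapted functions into an SST config. This is not the exact configuration from our example repo; it assumes SST v3's `sst.aws.Function` component and that each adapted folder's _index.js_ exports a Lambda `handler`.

```typescript
// sst.config.ts — rough sketch only; adapt names and routes to your project
/// <reference path="./.sst/platform/config.d.ts" />

export default $config({
  app() {
    return { name: "greenwood-aws-demo", home: "aws" };
  },
  async run() {
    // hardcoded to the `search` API route from the listing above;
    // assumes the adapter's index.js exports a Lambda-compatible `handler`
    const searchApi = new sst.aws.Function("SearchApi", {
      handler: ".aws-output/api/search/index.handler",
      url: true,
    });

    return { searchUrl: searchApi.url };
  },
});
```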
Although the above example is hardcoded, you'll want to use the build output manifest from Greenwood, by following the [complete example repo we have](https://github.com/ProjectEvergreen/greenwood-demo-adapter-aws) for deploying a full-stack Greenwood application.
> We also have an [**Architect**](https://arc.codes/) example [for reference](https://github.com/ProjectEvergreen/greenwood-demo-adapter-aws/tree/feature/arc-adapter).
## GitHub Actions
If you're using GitHub, you can use GitHub Actions to deploy your project whenever commits are pushed to your repo. This can automate uploading your static assets, or, in the case of IaC, running your preferred IaC tool to deploy your application for you.
1. In your AWS account, create (or use) an AWS Secret Access Key (`AWS_SECRET_ACCESS_KEY`) and Access Key ID (`AWS_SECRET_ACCESS_KEY_ID`) and add them to your repository as [GitHub secrets](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions).
1. At the root of your repo, add a GitHub Actions workflow at _.github/workflows/publish.yml_ and adapt it as needed for your own branch, build commands, package manager, and tooling.
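
For the static hosting case, a minimal workflow sketch might look something like the following; the branch, region, bucket name, and distribution ID are placeholders, and it assumes an npm `build` script that runs `greenwood build`.

```yaml
# .github/workflows/publish.yml — minimal sketch for the static hosting case
name: Publish

on:
  push:
    branches: [main] # placeholder branch

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Install dependencies and build
        run: |
          npm ci
          npm run build

      - name: Sync to S3 and invalidate CloudFront
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_SECRET_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: us-east-1 # placeholder region
        run: |
          aws s3 sync ./public s3://your-bucket-name --delete
          aws cloudfront create-invalidation --distribution-id YOUR_DISTRIBUTION_ID --paths "/*"
```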