Mask S3 bucket for CMS uploaded assets so it looks like the files are on your own server #14
Closed
matthewstick
started this conversation in
Feature Requests & Enhancements
Replies: 1 comment
-
@matthewstick Closing as a duplicate of #9
-
We'd like the ability for a file to look like it's on our own server, even if it is actually hosted in S3. In other words,
https://mydomain.com/assets/pdfs/important-document.pdf
actually lives at
https://cdn.craft.cloud/759c0cc3-9a92-46d3-b2f9-90605dc01c35/assets/pdfs/important-document.pdf
Right now we have a site where the client uploads lots of PDFs, and they want to link to the documents in a friendly way. This would actually be even friendlier (we don't like the hard-coded nature of the `assets` prefix):
https://mydomain.com/pdfs/important-document.pdf
So even if the file is technically here:
https://cdn.craft.cloud/759c0cc3-9a92-46d3-b2f9-90605dc01c35/pdfs/important-document.pdf
the links on the website would appear as if it lives here:
https://mydomain.com/pdfs/important-document.pdf
We've done this a lot in Apache or nginx. ChatGPT (unverified) tells me it's possible on Lambda with something like this (below).
thanks!
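For reference, the nginx version of this kind of rewrite is a short `proxy_pass` block. This is a hedged sketch: the location path and CDN URL are taken from the examples above, and the upstream `Host`/SNI handling is my assumption about what the CDN expects, not something from this thread:

```nginx
# Serve /pdfs/... from the CDN bucket while visitors only ever see mydomain.com
location /pdfs/ {
    proxy_pass https://cdn.craft.cloud/759c0cc3-9a92-46d3-b2f9-90605dc01c35/pdfs/;
    # Assumption: the CDN routes by Host header / SNI; adjust for your setup
    proxy_set_header Host cdn.craft.cloud;
    proxy_ssl_server_name on;
}
```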
If you're using AWS Lambda to serve a website and you want to rewrite URLs so that files stored on S3 appear as if they're being served from your own domain, you can set up an AWS Lambda function to act as a reverse proxy. This involves using AWS API Gateway in combination with Lambda to intercept requests and fetch the appropriate file from S3. Here's a high-level overview of the steps involved:
1. Set Up AWS Lambda Function
Create a Lambda Function: In the AWS Management Console, create a new Lambda function. Choose an execution role that has permission to access your S3 bucket and CloudWatch Logs for logging.
Implement File Fetching Logic: Write the function's code to parse the incoming request's path, map it to the corresponding S3 object key, and then use the AWS SDK to fetch this object from S3. You might want to handle different content types and potential errors (like `NoSuchKey` when the file doesn't exist). Here's a basic example in Python using the Boto3 AWS SDK:
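The pasted answer refers to a Python/Boto3 example that didn't make it into the post. A minimal sketch of such a handler, assuming an API Gateway proxy integration and a `BUCKET_NAME` environment variable (both are my assumptions, not details from the thread):

```python
# Hedged sketch of a Lambda reverse proxy for S3, as described in the steps above.
# Assumes API Gateway passes the request path in event["path"] (proxy integration)
# and that the bucket name is provided via the BUCKET_NAME environment variable.
import base64
import mimetypes
import os


def path_to_key(path):
    """Map a request path like /pdfs/important-document.pdf
    to the S3 object key pdfs/important-document.pdf."""
    return path.lstrip("/")


def lambda_handler(event, context):
    # boto3 is imported lazily so the path-mapping logic is testable without AWS
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    bucket = os.environ["BUCKET_NAME"]
    key = path_to_key(event.get("path", ""))
    try:
        obj = s3.get_object(Bucket=bucket, Key=key)
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchKey":
            return {"statusCode": 404, "body": "Not found"}
        raise
    content_type = mimetypes.guess_type(key)[0] or "application/octet-stream"
    # Binary payloads through an API Gateway proxy integration must be
    # base64-encoded (and binary media types enabled on the API).
    return {
        "statusCode": 200,
        "headers": {"Content-Type": content_type},
        "isBase64Encoded": True,
        "body": base64.b64encode(obj["Body"].read()).decode("ascii"),
    }
```

For PDFs specifically, `mimetypes` resolves the `.pdf` extension to `application/pdf`, so the browser renders the file instead of downloading it as a generic binary.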
2. Set Up API Gateway
Create a New API Gateway: Go to the API Gateway service in the AWS Management Console and create a new API.
Configure a Resource and Method: Create a new resource (e.g., `/pdfs`) and a method (e.g., `GET`) that corresponds to how you want to access the files.
Integrate with Lambda: Set the integration type of your method to Lambda Function and select the Lambda function you created in the previous step.
Deploy the API: Create a new deployment for your API and select a stage (e.g., `prod`). This generates a URL for accessing your API.
3. Update DNS Configuration
To use your own domain (e.g., `https://mydomain.com/pdfs/...`), you need to configure a custom domain name in API Gateway and update your DNS settings:
Configure Custom Domain in API Gateway: In the Custom Domain Names section of API Gateway, add your domain and link it to your API deployment.
Update DNS Settings: Create a CNAME record (or an A record if using AWS Route 53 and an Alias) in your DNS configuration to point your domain to the API Gateway domain.
4. Testing and Debugging
After setting up your Lambda function, API Gateway, and DNS settings, test your configuration by accessing a PDF file via your custom URL. Monitor the logs in CloudWatch for any errors and adjust your Lambda function as necessary.
Conclusion
This approach allows you to serve files from S3 through your domain without revealing the S3 bucket's URL. It provides flexibility in handling requests and can serve as part of a serverless architecture, reducing the need for traditional server management.