
Static Website Deployment (AWS S3 & CloudFront)

Below is my guide to deploying a static site from GitHub to AWS making use of the following services: S3, CloudFront, Lambda@Edge and Route53.

My personal website is a static website created using Hugo (a fantastic static site generator), and I wanted to host it on one of the major cloud providers inexpensively. I also wanted to use HTTPS so I couldn’t just use an S3 bucket open to public access, I needed to use CloudFront. (If you only require HTTP, you can just enable static access to your S3 bucket from the properties page.)

S3

S3 is an object-based storage system; we'll use it to host the actual files that make up our website. The files will be pulled from a directory inside a GitHub repo.

  1. S3 bucket - create a brand new S3 bucket. The name needs to match that of your domain(s). You won't need to allow any public access, as you'll do all your uploading through GitHub (or through the API/web console), and CloudFront will be given specific permissions allowing access to S3.

  2. I'm using 'test.ashleycollinge.com' as the domain for my website, and I've chosen the AWS eu-west-2 (London) region (a different region might suit you better, I'm not sure!). You can leave the 'Block all public access' checkbox checked, as public access won't be required.

Image of s3 initial configuration page

  1. I also (optionally) added some tags to the bucket so I can reference the project's resources together later.

initial s3 config page, showing tags

  1. Upload your static content. Open your new S3 bucket and, under 'Objects', select the Upload button. Here you can upload a simple index.html for testing, as I'm doing (the GitHub integration can be set up later).

S3 upload status
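For a quick end-to-end test, the uploaded index.html can be as minimal as this (just a placeholder page, not part of the real site):

```html
<!DOCTYPE html>
<html>
  <head><title>Test</title></head>
  <body>
    <p>Hello world from S3!</p>
  </body>
</html>
```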

CloudFront

CloudFront delivers content from edge locations around the world; here, it puts a CDN in front of S3.

  1. Open the CloudFront console, go to Distributions, and click Create distribution.
  2. For the 'Origin domain', choose the S3 bucket you created earlier.
  3. Give the origin a name; I've left it as the default.
  4. For 'S3 bucket access', select 'Yes use OAI'.
  5. Select 'Create new OAI' and click Create to create a brand new identity that will grant CloudFront permission to access the bucket.
  6. Select 'Yes, update the bucket policy'; this will update the bucket policy with the new OAI.
  7. In my repo the static files are stored in /public/, so I add that to the Origin path. CloudFront will then prepend /public to every request it sends to S3.

Image of CloudFront Settings
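Selecting 'Yes, update the bucket policy' writes a statement roughly like the following to the bucket (the bucket name and OAI ID below are placeholders; the exact ARN will match your own OAI):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLEOAIID"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::test.ashleycollinge.com/*"
    }
  ]
}
```

This is why 'Block all public access' can stay enabled: only the OAI principal can read objects, not the public internet.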

  1. You can leave the 'Default cache behavior' and 'Function associations' sections as default (for now).
  2. Under ‘Settings’, you can choose whichever ‘Price class’ you want.
  3. Add your domain and any other subdomains as ‘Alternative domain names’, for me that’s test.ashleycollinge.com and www.test.ashleycollinge.com.

Image of added CNAMES

  1. We need our new domains to be included on any certificates presented to clients, so we need to request a new SSL cert.
  2. Click 'Request new certificate', which opens the AWS Certificate Manager straight into the Request a Certificate wizard.
  3. Under 'Add domain names', enter all of the domains you entered previously as CNAMEs (for me, test.ashleycollinge.com and www.test.ashleycollinge.com) and press Next.

Image here of entered domain names for new cert

  1. For the validation method, select DNS validation and press Next.
  2. Add any tags you need and click Next.

Image of added tags

  1. Review all of the options you’ve chosen and click ‘Confirm and request’.
  2. If you've added your domain(s) to Route53, you can simply select the 'Create record in Route53' button for each domain; this creates a CNAME record in Route53 that AWS Certificate Manager will use to verify you own the domain. If not, you'll need to create the CNAME records manually in whichever DNS control panel you have access to.

Image of create dns record button

  1. Once your records have been created, either in Route53 or elsewhere, press 'Continue'.
  2. Wait until the Validation status has changed to Success for all of your domains.

Image of validation statuses of certs

  1. Once Validated, you can close the AWS Cert Manager tab, and go back to the CloudFront wizard.
  2. Under 'Custom SSL certificate', click the refresh button and then select your new certificate from the drop-down menu.

Image of the custom CNAMES and newly added SSL cert

  1. I'm leaving the security policy as the default, but you can change it if required.
  2. I'm entering 'index.html' as the Default root object, since S3 won't redirect folder requests the way normal web servers do.

Image of default root object

  1. Once happy with your choices, select 'Create distribution'. The distribution will now deploy, which may take some time; you can check on the status in the CloudFront console.

  2. If you open up the distribution you’ve just created and copy the Distribution domain name into another tab you should be able to see your test HTML file you uploaded to the S3 bucket earlier. This is being accessed through CloudFront. (Client <-HTTPS-> CloudFront <-REST-> S3 Bucket).

Image of Hello world

Route53

All you need to do now is create CNAME records pointing your domains at the Distribution domain name. Since we've made Amazon aware of which domain names we're using, their servers will happily accept the requests (and the SSL cert will be valid too). You can use the alias option in Route53 if you prefer; it shouldn't make much difference.

  1. In Route53, click Add Records. Then choose ‘Simple Routing’.

Select Routing policy

  1. Define your records either as CNAME records pointing at your Distribution domain name, or as aliases to the CloudFront distribution.

Define simple record
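If you prefer the CLI, the alias variant of this record can be sketched as a Route53 change batch (the domain and distribution name below are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID that all CloudFront alias records use):

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "test.ashleycollinge.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d1234abcd.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

You would pass this to `aws route53 change-resource-record-sets` along with your hosted zone ID.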

  1. Once completed, click Add Records.

Configure records finished

  1. After a little while, test it out.

Tested working DNS change

Lambda@Edge

Most web servers will take a request for domain.com/ and internally serve domain.com/index.html (or whatever else has been set up). S3 will also do this, but only for the root of the bucket, not for deeper directories. So domain.com/post/ will not automatically resolve to domain.com/post/index.html, but will fail with either Not Found or Access Denied, because directories don't really exist in S3. To fix this, we will create a Lambda function that takes each request and appends index.html whenever the URI ends in a slash. We will also deploy the Lambda function to the edge, meaning these requests are dealt with further up the chain.

Here is the Lambda function I've used. Notice that it logs the old and new URI, which makes it easier to see whether it's working when testing.

'use strict';

exports.handler = async (event, context, callback) => {

    // Extract the request from the CloudFront event that is sent to Lambda@Edge
    const request = event.Records[0].cf.request;

    // Extract the URI from the request
    const olduri = request.uri;

    // Append index.html to any URI that ends with a slash
    const newuri = olduri.replace(/\/$/, '/index.html');

    // Log the URI as received by CloudFront and the new URI to be used to fetch from origin
    console.log("Old URI: " + olduri);
    console.log("New URI: " + newuri);

    // Replace the received URI with the URI that includes the index page
    request.uri = newuri;

    // Return to CloudFront
    return callback(null, request);
};
  1. Open the Lambda console and click Create function. (Note that Lambda@Edge functions must be created in the us-east-1, N. Virginia, region.)
  2. Choose a function name, select Node.js for the runtime, and select 'Create a new role with basic Lambda permissions'.

Create function window

  1. It will take a moment or two to create. Once done, you can go to Configuration, Tags, and click Manage Tags. Then you can add some optional tags if you need.

Manage tags

  1. Copy the code from the top into the code window. You must click the Deploy button for the code to be saved; the Save button itself doesn't do much.

Copy JS code in

Deploy changes

  1. Create a test event using the 'cloudfront-response-generation' template, which fills in most of the details for you; you just need to add the trailing slash to the test URL.

Create test event
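The relevant part of the test event looks roughly like this once the trailing slash has been added (fields trimmed for brevity; the exact template contents may differ slightly):

```json
{
  "Records": [
    {
      "cf": {
        "request": {
          "uri": "/post/",
          "method": "GET",
          "headers": {}
        }
      }
    }
  ]
}
```

A successful run should log `Old URI: /post/` and `New URI: /post/index.html`.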

  1. Confirm that the test has been successful.

Successful test of function

  1. Next, we can publish the function to the edge so it begins working. This creates a new version of the function (:1) and deploys it. If you make any changes, you must click Deploy to save them to the function and then deploy to the edge again. You'll be able to see the different version numbers in CloudFront.

Publish to Edge

  1. Ensure you have chosen the correct CloudFront distribution, set CloudFront event to be ‘Origin Request’ (when CloudFront requests the S3 origin). You don’t need to include the body.

Filled in deploy to edge window

  1. You may get an error here. This is because the IAM role used by the function can't yet be assumed by Lambda@Edge; we need to adjust its trust relationship before we can deploy to the edge.

Possible error, lack of permission

  1. Go to IAM, then Roles, and find the role you created earlier for the Lambda function. Then go to the Trust relationships tab and click 'Edit trust relationship'.

Open trust relationship of role

  1. Copy in the trust policy below and click Save. This adds edgelambda.amazonaws.com as a trusted service for the role, allowing it to be assumed by Lambda@Edge so the deploy to the edge can succeed.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "lambda.amazonaws.com",
          "edgelambda.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Code copied in to fix trust relationship

  1. Once done, go back to the function's Configuration, open Triggers, and you should see the trigger specific to your chosen CloudFront distribution.

Showing new trigger on function, cloudfront

  1. Back in CloudFront, choose your distribution, go to Behaviours, and open the default behaviour. Scroll to the bottom and you'll see the associated function, as seen below.

Showing function associated to cloudfront request
