Hosting SPAs on AWS using CloudFront and S3 couldn't be simpler. In this how-to I will show you how you can create and deploy your site using Terraform.

Everyone wants to host their web application on a reliable host, and you don’t get much more reliable than AWS. As well as their generous free tier, it’s possible to host a single-page application (SPA) using their architecture for very little money.

A few years ago, I started using AWS's own Cloud Development Kit (CDK) to deploy Hugo websites like this one to Amazon S3. I liked that I could write the constructs in TypeScript, and it was much easier than writing CloudFormation templates.

Roll forward a few years, and I decided to learn Terraform and use it to do the same. I found Terraform faster and simpler to work with, and it was a great addition to my CV. Despite having to learn HCL (HashiCorp Configuration Language), using it to deploy and host my client websites on Amazon S3 was a breeze. I've just deployed a new website using CDKTF, or Cloud Development Kit for Terraform, the TypeScript flavour of HashiCorp's infrastructure-as-code tooling, which is modelled closely on AWS's own CDK. Using TypeScript and all its bells and whistles meant I could return to the same programming language I use to write application code (CDKTF is also available in a few other languages).

Using CDKTF considerably flattens the learning curve, as developers can use the constructs, terminology, linters and test frameworks they already know to write the code that will eventually deploy the infrastructure their applications run on.
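Because CDKTF stacks are just TypeScript classes, you can synthesise them in a unit test. Here's a minimal sketch, assuming Jest and assuming the MyStack class we build later is exported from main.ts:

import { Testing } from "cdktf";
import { MyStack } from "../main"; // assumes MyStack is exported from main.ts

describe("MyStack", () => {
  it("synthesises without errors", () => {
    const app = Testing.app();
    const stack = new MyStack(app, "test");
    // Testing.synth returns the generated Terraform configuration as a JSON string
    const output = JSON.parse(Testing.synth(stack));
    expect(output).toBeDefined();
  });
});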

In this guide, I want to run through the process steps, the tools you’ll need and the code you can use to deploy your own IaC.

TL;DR: If you want to jump straight to the code and dig in, go here.

What is Terraform, and why use it

HashiCorp built Terraform, which is their most famous and most used product. Terraform allows you to manage and deploy infrastructure resources to many cloud providers, including, but not limited to, Amazon Web Services.

If you visit the Terraform Registry, you can find providers for all the major players in the cloud infrastructure space. Using a single product to deploy assets to AWS, Azure, Google Cloud and many more makes Terraform a good fit for a multi-cloud setup.

What is Terraform CDK

Terraform CDK (CDKTF) is HashiCorp's Cloud Development Kit, which allows you to use familiar programming languages to write IaC for your resources.

CDKTF is available in TypeScript, Python, Go, C# and Java. It translates your code into the configuration files Terraform needs to plan and apply your infrastructure.

Terraform State

Terraform stores the state of your current deployment. It keeps track of metadata and changes to deployed resources. When working as a single developer, it is usual to begin by storing the Terraform state files on your local computer. Storing state locally becomes unmanageable if you change computers or work with a team; at that point, you need to store the state files somewhere other people can access them, and where you can reach them from multiple machines.

Storing state files in the cloud is referred to as using a remote backend. There are many remote backends you can use, including Amazon S3 and Terraform Cloud, among others. It's important to keep state files secure and up to date. It's also important to note that, once resources are deployed with Terraform, you should make changes in the code rather than in the UI or console, as changing things manually causes the state to drift out of sync.
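As a rough sketch, configuring an S3 remote backend in CDKTF looks something like this, assuming you already have a state bucket (the bucket name below is hypothetical) and that the code sits inside your stack's constructor:

import { S3Backend } from "cdktf";

// Store the state file in S3 rather than on the local filesystem
new S3Backend(this, {
  bucket: "my-terraform-state", // hypothetical, pre-existing bucket
  key: "cdktf-first/terraform.tfstate", // path of the state file within the bucket
  region: "eu-west-2",
});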

Terraform Providers

Providers are pre-built plugins that give Terraform access to external APIs. They act as translators, allowing Terraform to interact with many service providers.

Terraform Stacks

Stacks represent a collection of resources that are deployed to your cloud. You can separate state management by using different stacks for different environments. See this example.
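A minimal sketch of that idea, reusing the MyStack class we define later with hypothetical "dev" and "prod" environment names, each of which gets its own state:

const app = new App();

// Each stack keeps its own state, so each environment can be deployed independently
new MyStack(app, "dev");
new MyStack(app, "prod");

app.synth();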

Tools and Resources

We will use a couple of things to get this project underway. Before getting into that, though, I will explain the resources we will deploy and why we need them to run our SPA.

AWS Route53

Route53 is the AWS Domain Name Service (DNS). Even if you don't host or register your primary domain on AWS, it is still possible to use Route53 for this project. This guide assumes that your nameservers are hosted on Route53; if they're not, you can still follow along, you'll just have to make a few adjustments.

AWS Certificate Manager

AWS Certificate Manager (ACM) is simple to use. This cloud service helps you create and manage SSL/TLS certificates for AWS services and resources:

  • certificates are issued at no cost to you
  • renewal of your certificates is managed for you
  • integration with AWS services such as CloudFront is frictionless

Using this guide, you will provision certificates for CloudFront, which will be the front door to your application.

AWS CloudFront

AWS CloudFront is a global Content Delivery Network (CDN), and it is what we use as the way into our application for visitors. Because the CDN is distributed worldwide, users all over the world can access your SPA with low latency and get a high-speed response. What's more, we can cache frequently accessed pages and data.

Like many CDNs, you can block access from specific geographical regions, and you can host origins on different providers while routing traffic from CloudFront to those origins efficiently.
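As a rough sketch, the restrictions block of the distribution we create later could block specific countries like this (the country codes are just examples):

restrictions: {
  geoRestriction: {
    restrictionType: "blacklist", // or "whitelist" to allow only the listed countries
    locations: ["RU", "KP"], // example ISO country codes
  },
},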

Origin Access Control (OAC)

Origin Access Identity (OAI) preceded OAC. Both provide secure access to the objects stored in your S3 bucket, but OAC gives you more security and control, and supports more access methods, encrypted objects, and access to S3 in all AWS regions.

AWS S3

AWS Simple Storage Service (S3) is a cloud object storage solution, which basically means it's a place to store files in the cloud.

Prerequisites

To successfully deploy your application, there are a few things you'll need and a few things you'll need to set up. They are:

  1. An AWS account.
  2. An IAM user that has an active programmatic access key and secret
  3. A Route53 Hosted Zone
  4. Your SPA

This guide assumes your SPA is a Node/JavaScript application.

Steps

Install the CDKTF CLI and Setup the project

What we will do involves the command line interface (CLI) for CDKTF. It's worth installing globally, because once you've done this for one project, you'll want to use it all the time.

npm i -g cdktf-cli@latest

Once installed and you’ve checked it using cdktf --version, we can begin by creating a new project directory. Now, it’s a debate as to whether to keep the terraform code close to your application or not; you will make that decision for yourself; for now, let’s keep it adjacent so it’s always close to the application. In the future, you can keep it in the same repository as your application, especially if you use a monorepo.

mkdir cdktf-first
cd cdktf-first
cdktf init --template=typescript

These commands will start setting up your project for you, asking you a few questions:

  ? Do you want to continue with Terraform Cloud remote state management? (Y/n) n
  ? Project Name (cdktf-first)
  ? Project Description (A simple getting started project for cdktf.)
  ? Do you want to start from an existing Terraform project? (y/N)
  ? Do you want to send crash reports to the CDKTF team? Refer to   
  https://developer.hashicorp.com/terraform/cdktf/create-and-deploy/configuration-file#enable-crash-reporting-for-the-cli 
  for more information (Y/n)

  Note: You can always add providers using 'cdktf provider add' later on
  ? What providers do you want to use? (Press <space> to select, <a> to toggle all, <i> to invert selection, and <enter> to proceed)
  Choose aws, local, null, random, 

CDKTF will then scaffold your project's package.json and install all the dependencies needed for your project.

Let’s get into the code

Once the installation has finished, you will see main.ts in your project folder. Most of what we do will live here; it starts off looking like this:

import { Construct } from "constructs";
import { App, TerraformStack } from "cdktf";

class MyStack extends TerraformStack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // define resources here
  }
}

const app = new App();
new MyStack(app, "cdktf-first");
app.synth();

First, we will tell it that we want to use the AWS provider. Replace the code in your main.ts with the following:

import { AwsProvider } from "@cdktf/provider-aws/lib/provider";
import { App, TerraformStack } from "cdktf";
import { Construct } from "constructs";
class MyStack extends TerraformStack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    new AwsProvider(this, "AWS", {
      region: "eu-west-2",
    });
  }
}

const app = new App();
new MyStack(app, "cdktf-first");
app.synth();

Because we will use CloudFront, we need ACM to create SSL certificates in the us-east-1 region, as CloudFront only accepts certificates created in that region. We need to set up a second provider in this region for this to work. We do this by repeating the provider block with the new region and an alias, so add:


const useProvider = new AwsProvider(this, "AWSEast", {
  region: "us-east-1",
  alias: "useProvider",
});

We will use this in the next section.

Set up SSL Certificates

We will create a certificate for your domain with AWS Certificate Manager (ACM). We will use example.com in this guide, but replace it with your own domain. The certificate will cover the apex example.com and, using a wildcard, any subdomains at the next level, e.g. blog.example.com or crm.example.com.

To take advantage of ACM-issued certificates, I suggest setting up and managing your domain from AWS Route53; if you are hosting your domain elsewhere and already have SSL certificates, skip this section. However, I'm not sure how CloudFront will behave in that case (I have not investigated this).

Firstly, import the ACM and Route53 classes.


import { AcmCertificate } from "@cdktf/provider-aws/lib/acm-certificate";
import { AcmCertificateValidation } from "@cdktf/provider-aws/lib/acm-certificate-validation";

import { DataAwsRoute53Zone } from "@cdktf/provider-aws/lib/data-aws-route53-zone";
import { Route53Record } from "@cdktf/provider-aws/lib/route53-record";

const cert = new AcmCertificate(this, "cert", {
  domainName: "example.com",
  // we're going to auto validate using Route53, alternative is to use 'EMAIL'
  validationMethod: "DNS",
  // this takes care of 1st level subdomains
  subjectAlternativeNames: ["*.example.com"], 
  lifecycle: {
    createBeforeDestroy: true,
  },
  provider: useProvider,
  
});

Then, look up the hosted zone information.

const zone = new DataAwsRoute53Zone(this, "zone", {
  name: "example.com",
});

We need this zoneId to create the validation records that enable DNS validation of the certificate, which we do with the following code, so insert this into main.ts.

const record = new Route53Record(this, "record", {
  name: cert.domainValidationOptions.get(0).resourceRecordName,
  type: cert.domainValidationOptions.get(0).resourceRecordType,
  records: [cert.domainValidationOptions.get(0).resourceRecordValue],
  zoneId: zone.zoneId,
  ttl: 60,
});

Finally, add the following code that will trigger the DNS validation.

 new AcmCertificateValidation(this, "cert-validation", {
  certificateArn: cert.arn,
  provider: useProvider,
  validationRecordFqdns: [record.fqdn],
});

S3 Bucket

The S3 bucket is what will hold your SPA files. All the assets that you build will be stored here and delivered from here to the CloudFront CDN.

import { S3Bucket } from "@cdktf/provider-aws/lib/s3-bucket";
import { S3BucketAcl } from "@cdktf/provider-aws/lib/s3-bucket-acl";
import { S3BucketVersioningA } from "@cdktf/provider-aws/lib/s3-bucket-versioning";
const bucket = new S3Bucket(this, "s3Bucket", {
  bucket: "example.com",
  forceDestroy: true, // set false to prevent accidental deletion in production
});

new S3BucketAcl(this, "s3BucketAcl", {
  bucket: bucket.id,
  acl: "private",
});
new S3BucketVersioningA(this, "s3BucketVersioning", {
  bucket: bucket.id,
  //It's a good idea to version the objects in your bucket
  versioningConfiguration: {
    status: "Enabled",
  },
});
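Optionally, because CloudFront will reach the bucket through OAC, you can block all public access to it as well. A sketch, assuming the import path follows the same pattern as the others:

import { S3BucketPublicAccessBlock } from "@cdktf/provider-aws/lib/s3-bucket-public-access-block";

// Keep the bucket fully private; only CloudFront (via OAC) needs to read it
new S3BucketPublicAccessBlock(this, "s3BucketPublicAccessBlock", {
  bucket: bucket.id,
  blockPublicAcls: true,
  blockPublicPolicy: true,
  ignorePublicAcls: true,
  restrictPublicBuckets: true,
});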

CloudFront distribution

import { CloudfrontDistribution } from "@cdktf/provider-aws/lib/cloudfront-distribution";
import { CloudfrontOriginAccessControl } from "@cdktf/provider-aws/lib/cloudfront-origin-access-control";
import { DataAwsCallerIdentity } from "@cdktf/provider-aws/lib/data-aws-caller-identity";
import { DataAwsIamPolicyDocument } from "@cdktf/provider-aws/lib/data-aws-iam-policy-document";
import { S3BucketPolicy } from "@cdktf/provider-aws/lib/s3-bucket-policy";

Now it’s time to add the Cloudfront distribution.

const distribution = new CloudfrontDistribution(this, "distribution", {
  aliases: ["example.com"], // your domain
  customErrorResponse: [
    {
      errorCode: 403,
      responseCode: 200,
      responsePagePath: "/",
    },
  ],
  enabled: true,
  defaultRootObject: "index.html",
  defaultCacheBehavior: {
    allowedMethods: ["GET", "HEAD"],
    cachedMethods: ["GET", "HEAD"],
    targetOriginId: bucket.id,
    forwardedValues: {
      queryString: true,
      cookies: {
        forward: "all",
      },
      headers: [
        "Host",
        "Accept-Datetime",
        "Accept-Encoding",
        "Accept-Language",
        "User-Agent",
        "Referer",
        "Origin",
        "X-Forwarded-Host",
      ],
    },
    viewerProtocolPolicy: "redirect-to-https",
    minTtl: 0,
    defaultTtl: 0,
    maxTtl: 0,
  },
  origin: [
    {
      originId: bucket.id,
      domainName: bucket.bucketRegionalDomainName,
    },
  ],
  restrictions: {
    geoRestriction: {
      restrictionType: "none",
    },
  },
  viewerCertificate: {
    acmCertificateArn: cert.arn,
    sslSupportMethod: "sni-only",
  },
});
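Depending on how your SPA handles routing, you may also want 404s from the origin to fall back to your index page so that deep links resolve client-side. A sketch of an alternative customErrorResponse block (the response page path is an assumption about your build output):

customErrorResponse: [
  { errorCode: 403, responseCode: 200, responsePagePath: "/index.html" },
  { errorCode: 404, responseCode: 200, responsePagePath: "/index.html" },
],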

Create a record in Route53 for your domain, with an alias pointing at the CloudFront distribution's domain name.

new Route53Record(this, "alias", {
  zoneId: zone.zoneId,
  name: "example.com", // the domain visitors will use, not the CloudFront domain
  type: "A",
  alias: {
    name: distribution.domainName,
    zoneId: distribution.hostedZoneId,
    evaluateTargetHealth: false,
  },
});

We use Origin Access Control (OAC) so CloudFront can access our private S3 bucket. Capture it in a variable so it can be attached to the distribution's origin, as shown after the next block.

const oac = new CloudfrontOriginAccessControl(this, "oac", {
  description: "oac",
  name: "oac",
  originAccessControlOriginType: "s3",
  signingBehavior: "always",
  signingProtocol: "sigv4",
});
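The OAC only takes effect once it is referenced from the distribution's origin, which is what lets CloudFront sign its requests to the private bucket. A sketch of the origin block with the OAC attached; note that this means the OAC needs to be declared before the CloudfrontDistribution in your main.ts:

origin: [
  {
    originId: bucket.id,
    domainName: bucket.bucketRegionalDomainName,
    originAccessControlId: oac.id, // lets CloudFront sign requests to the private bucket
  },
],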

Once that's in place, we want to grant CloudFront access to our S3 bucket. We create a policy document and attach it to the bucket.

const current = new DataAwsCallerIdentity(this, "current", {});
const oacPolicyDocument = new DataAwsIamPolicyDocument(
  this,
  "oacPolicyDocument",
  {
    statement: [
      {
        actions: ["s3:GetObject"],
        resources: [`${bucket.arn}/*`],

        principals: [
          {
            identifiers: ["cloudfront.amazonaws.com"],
            type: "Service",
          },
        ],
        condition: [
          {
            test: "StringEquals",
            variable: "AWS:SourceArn",
            values: [
              `arn:aws:cloudfront::${current.accountId}:distribution/${distribution.id}`,
            ],
          },
        ],
      },
    ],
  }
);

new S3BucketPolicy(this, "s3BucketPolicy", {
  bucket: bucket.id,
  policy: oacPolicyDocument.json, // already a JSON string, so no need to stringify it
});

That’s it!

Plan, Apply & Destroy

To deploy this code, run cdktf plan and then cdktf apply. When you've finished admiring what you've created and want to pull it all down again, use cdktf destroy.
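It can also be handy to print useful values after an apply, for example the CloudFront domain name so you can test the distribution before DNS has propagated. A small optional sketch, added inside the stack:

import { TerraformOutput } from "cdktf";

// Printed at the end of a deploy and also available via `cdktf output`
new TerraformOutput(this, "distributionDomainName", {
  value: distribution.domainName,
});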

Additional Steps to Upload Your SPA

I created a Next.js boilerplate and configured it to export a static SPA with the build command. The following code then uploads the static files to S3.

// Additional imports needed at the top of main.ts
import * as path from "path";
import { Fn, TerraformIterator } from "cdktf";
import { S3Object } from "@cdktf/provider-aws/lib/s3-object";

// Point at the SPA's build output and iterate over every file in it
const appFiles = path.join(__dirname, "../boilerplate/dist");
const fileList = TerraformIterator.fromList(Fn.fileset(appFiles, "**/*"));

new S3Object(this, "index", {
  forEach: fileList,
  bucket: bucket.id,
  key: fileList.value,
  source: `${appFiles}/${fileList.value}`,
  // re-upload a file whenever its contents change
  etag: Fn.filemd5(`${appFiles}/${fileList.value}`),
  lifecycle: {
    preventDestroy: false,
  },
});
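One thing the loop above doesn't handle is the Content-Type of each object; without it, S3 stores everything as application/octet-stream and browsers may download files rather than render them. A rough sketch of one way to derive it from the file extension using Terraform functions (the extension map is only an example, and the lookup expression is an assumption you should verify against your build output):

// Hypothetical extension-to-MIME map; extend it to cover your build output
const contentTypes = {
  html: "text/html",
  css: "text/css",
  js: "application/javascript",
  json: "application/json",
  svg: "image/svg+xml",
  png: "image/png",
};

// Then, inside the S3Object above, set the type per file, e.g.:
//   contentType: Fn.lookup(contentTypes, Fn.regex("[^.]+$", fileList.value), "application/octet-stream"),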

Conclusion

I hope you’ve found this guide helpful and that it starts you on the road to deploying the infrastructure for your SPA easily using Terraform and Typescript.

Feature image generated using Adobe Firefly
