Deploy a Static Website to Amazon Cloudfront with Terraform

Use the Terraform Infrastructure as Code tool to build production-ready infrastructure in AWS.

Sam Magura

Cloudfront  is AWS's Content Delivery Network (CDN) service. You can think of Cloudfront as your application's "front door", through which traffic enters and is then routed to the appropriate file or backend API.

One of the most basic ways to use Cloudfront is to serve a static website, with the website's HTML, CSS, and JS stored in Amazon S3. This post will demonstrate how to implement this use case, using the Terraform Infrastructure as Code language to create the AWS resources. While Cloudfront + S3 is definitely not the simplest way to deploy a static website, it makes a lot of sense if the rest of your infrastructure will be in AWS too. (If you are looking for the quickest way to deploy a static website, check out Vercel.)

To create our static website, we'll use Next.js, though you can use any framework that produces plain HTML, CSS, and JS code.

We'll also be using the Zero secrets manager to authorize Terraform to create resources in AWS. This streamlines the deployment process and keeps the AWS access key out of your codebase.

🔗 The full code for this example is available in the zerosecrets/examples  GitHub repository.

Why Terraform?

Before we jump into coding, it's worth asking why we should use Terraform to deploy our infrastructure. A few reasons:

  • It makes your deployments repeatable and consistent.
  • Since your infrastructure is defined using code, you don't have to rely on developers remembering how they manually set up resources in the AWS console.
  • And most importantly, Terraform improves the maintainability of your infrastructure as your project scales up in complexity.

These benefits allow Terraform to scale to large teams and have made it a go-to tool in the DevOps industry. Now, let's begin the walkthrough.

Creating a Static Website with Next.js

Next.js web apps typically have code that runs on the web server, for example to populate a list page with records from a database during server-side rendering. That said, Next.js is also awesome for creating completely static websites, where there is no backend — just HTML, CSS, and JavaScript files that can be served directly to the browser. Of course, with a static website, you won't be able to use the full feature set of Next. Visit the docs page on Static Exports  to learn which features are supported.

Configuring a Next.js project for static export is really simple. First, create the project as normal:

Terminal
npx create-next-app@latest my-app

After filling out the prompts according to your personal preferences, the project will be created. To enable static export, open next.config.js and set the output option:

next.config.js
const nextConfig = {
  output: 'export',
}

module.exports = nextConfig

Then simply run next build. Next will place the static assets in the out directory of your project. Soon, we'll upload these files to S3 so they can be served by Cloudfront.
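
Assuming Next.js is installed locally in the project (which create-next-app handles for you), the build command looks like this:

Terminal
npx next build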

Storing the AWS Credentials in Zero

Next up, we need to create an AWS access key and store it in Zero — this will allow us to authorize Terraform to create resources in our AWS account.

If you don't already have an AWS account, you can sign up for a free one here. Now, navigate to the IAM page in the AWS console. Click "My security credentials" on the right side of the page, and from there, create a new access key.

In Zero, you should create a new project to house the AWS credentials. Remember to copy the project's Zero token to a safe location on your local PC. Then paste the AWS access key details into a new Zero secret like so:

Storing the AWS credentials in Zero

Great! Now let's start building out the infrastructure with Terraform.

Getting Started with Terraform

The basic workflow with Terraform is to declaratively define your infrastructure in a .tf file, and then run terraform apply to deploy that infrastructure to the cloud. To get started, please install  the Terraform CLI. I also recommend the Terraform VS Code Extension  for syntax highlighting.

The Terraform code for this project will live in a new terraform directory, separate from the Next.js app. In that directory, create a main.tf file to house the infrastructure declarations. You can split Terraform projects up into multiple files, but a single file will be sufficient for our proof-of-concept.

Start by configuring Terraform and the AWS provider:

main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }

  required_version = ">= 1.8.4"
}

provider "aws" {
  region = "us-east-2"
}

Then run terraform init to complete the initial setup.
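
This is a single command, run from inside the terraform directory:

Terminal
terraform init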

We'll need an S3 bucket to hold our static website files, so let's go ahead and create our first resource:

main.tf
resource "aws_s3_bucket" "bucket" { bucket = "YOUR_USERNAME-terraform-cloudfront" }

Make sure to replace YOUR_USERNAME with a string that's unique to you.

Before deploying infrastructure changes, it's best to have Terraform show you a preview of what will be changed. To accomplish this, run

Terminal
terraform plan

This will actually produce an error, because we haven't defined the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables that Terraform needs to authenticate with AWS. The AWS credentials are stored in Zero, so we'll need to write some glue code to connect Zero and the Terraform CLI.

Integration with Zero

If you've been following along with this blog, you're familiar with the Zero TypeScript SDK  — this is how we typically retrieve secrets from Zero. This time, it's a bit less straightforward though because we aren't writing a Node.js application. Still, writing a Node.js wrapper script around the Terraform CLI is a simple way to get the secrets out of Zero and into Terraform.

Since our Node.js script will need to execute the Terraform CLI, I decided to use zx, a tool from Google that makes it very easy to write shell-like scripts in JavaScript. To install zx and the Zero SDK into your project, write

Terminal
npm install zx @zerosecrets/zero

Then we can create a zero-terraform.mjs script which exchanges the Zero token for the AWS credentials, populates the credentials into environment variables, and then invokes the Terraform CLI.

zero-terraform.mjs
import {zero} from '@zerosecrets/zero'

$.verbose = true

if (!process.env.ZERO_TOKEN) {
  throw new Error('Did you forget to set the ZERO_TOKEN environment variable?')
}

// Exchange the Zero token for the secrets in the project's "aws" section
const secrets = await zero({
  token: process.env.ZERO_TOKEN,
  pick: ['aws'],
}).fetch()

// Expose the AWS credentials to the Terraform CLI via environment variables
process.env.AWS_ACCESS_KEY_ID = secrets.aws.aws_access_key_id
process.env.AWS_SECRET_ACCESS_KEY = secrets.aws.aws_secret_access_key

// Forward this script's command-line arguments to Terraform
await $`terraform ${argv._.join(' ')}`

In the last line, we pass the script's command-line arguments to Terraform. This lets you do terraform plan or terraform apply without needing a separate script for each.

To test it out, type

Terminal
ZERO_TOKEN='YOUR_ZERO_TOKEN' npx zx zero-terraform.mjs plan

making sure to paste in your actual Zero token. If it worked, Terraform will report that it will create the S3 bucket. Now run zero-terraform.mjs apply to deploy the changes.
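
The apply invocation looks the same as before, just with a different Terraform subcommand:

Terminal
ZERO_TOKEN='YOUR_ZERO_TOKEN' npx zx zero-terraform.mjs apply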

Creating a Cloudfront Distribution

The rest of the code in this post is inspired by the Cloudfront + S3 example  in the Terraform AWS docs. The docs have examples of almost every resource, so I highly recommend becoming familiar with them.

The first step is to define the Cloudfront distribution itself:

main.tf
locals {
  s3_origin_id = "myS3Origin"
}

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = aws_s3_bucket.bucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id

    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.oai.cloudfront_access_identity_path
    }
  }

  enabled             = true
  is_ipv6_enabled     = true
  default_root_object = "index.html"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_200"

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

Next, define the Cloudfront Origin Access Identity (OAI):

main.tf
resource "aws_cloudfront_origin_access_identity" "oai" { }

This resource enables us to authorize Cloudfront to read from the S3 bucket. To make that work end-to-end, we need to update the S3 bucket's bucket policy, a JSON document that defines what's allowed to interact with the bucket. The policy can actually be defined directly in the .tf file, using the data keyword.

main.tf
resource "aws_s3_bucket_policy" "allow_access_from_cloudfront" { bucket = aws_s3_bucket.bucket.id policy = data.aws_iam_policy_document.allow_access_from_cloudfront.json } data "aws_iam_policy_document" "allow_access_from_cloudfront" { statement { principals { type = "AWS" identifiers = [aws_cloudfront_origin_access_identity.oai.iam_arn] } actions = [ "s3:GetObject" ] resources = [ "${aws_s3_bucket.bucket.arn}/*", ] } }

In plain English, this is saying that the Cloudfront Origin Access Identity can perform the GetObject action on any object within the bucket.

Now, run zero-terraform.mjs plan and zero-terraform.mjs apply again. This will create the Cloudfront distribution and connect it to S3.

Deploying the Static Website

Our infrastructure is complete, so the last step is to upload the static website to S3. To do this, navigate to the S3 bucket in the AWS console and click the "Upload" button. In the file picker, navigate to the out folder of your Next.js app and select the files. Make sure to select the files themselves, and not the out directory. The upload will take just a few seconds and then you'll see the files in the bucket:

The static website files in the S3 bucket

Uploading the files through the AWS console is great for a proof-of-concept, but in a real application, it's better to automate your deployment using a CI/CD platform. If you want to automate the upload to S3, it's easy to do that using the AWS CLI, specifically the aws s3 commands.
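
As a rough sketch, assuming the bucket name from earlier, that you run the command from the Next.js project root, and that AWS credentials are available in your shell, a sync could look like this:

Terminal
aws s3 sync out s3://YOUR_USERNAME-terraform-cloudfront --delete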

Viewing the Hosted Website

Each Cloudfront distribution has an automatically-generated domain name, which we can use to view our website. To find it, simply navigate to the Cloudfront distribution in the AWS console using the search bar.

The Cloudfront distribution in the AWS console

Copy the URL into your address bar and you'll see the static website!
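
If you'd rather not click through the console at all, one optional addition (not part of the original example code) is a Terraform output that prints the domain name after each apply:

main.tf
output "cloudfront_domain_name" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}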

If you're deploying a production app, you'll want the website to be hosted on a custom domain name. You can find the steps to do that in this official AWS tutorial  on hosting static websites using Cloudfront.

Cleaning Up

If you'd like to delete the AWS resources we created for this example project, first delete the contents of the S3 bucket through the console, then run

Terminal
ZERO_TOKEN='YOUR_ZERO_TOKEN' npx zx zero-terraform.mjs destroy

Final Thoughts

This tutorial walked you through getting started with Terraform on AWS. In my opinion, the tricky bits of this project were figuring out how to integrate Zero with a command-line tool (the Terraform CLI), and figuring out how to link the Cloudfront distribution to the S3 bucket. Cloudfront and S3 are both highly secure and configurable services, but that flexibility comes with a steeper learning curve. There's also an additional learning curve to creating the resources using Infrastructure as Code, rather than through the AWS console.

All that said, once you gain traction with Terraform and the AWS platform, you can be confident that your deployments will remain consistent and repeatable even if you build out your infrastructure at a rapid pace.

