[CSA]: Static site with AWS
Creating a basic static site with CloudFormation on AWS
Aug

Disclaimer:

This is partially based on the requirements of Coursera’s AWS Solutions Architect course and the exam itself. You can think of these more as notes for myself as I’ve gone through the course in preparation for the exam. Also, these are written from the CLI and using CloudFormation, which I prefer to the web console.

You can find the YAML file for CloudFormation in this repo, along with other files related to any posts on this site starting with [CSA].

Objective

In this article we create a static webpage using an S3 bucket. The objective is to automate the creation so the site can be brought up and down with a pair of scripts. This is not likely the best or easiest way to do things, but it is a way, and that’s good enough for me.

Before we get going, you’ll need your AWS CLI set up in order to do this. You can check out this post to get an idea of how to do that.

Setting up

Before we do anything related to the site itself, we’ll create an environment variable which will hold the name of our stack. A stack in AWS is a way to manage resources via CloudFormation such that we can create, update and delete them all together. It’s a group of things.
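
If you’re curious which stacks already exist in your account, a quick optional check from the CLI (nothing below depends on it) looks like this:

# List stacks that are currently up; purely informational.
aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE UPDATE_COMPLETE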

The .env is simple enough, and will look like this:

 
BASIC_SITE=your-site-name-which-must-be-unique

In the same folder create a YAML file which will hold our CloudFormation setup. We need a way to use the variable from our .env in the YAML, so we’ll do two things:

  1. Define a StackName variable in the YAML.
  2. Pass it in via the CLI when we create the stack.

The first part looks like this:

 
AWSTemplateFormatVersion: "2010-09-09"
Description: A template to create an S3 bucket for static website hosting with a custom bucket name and public access.

Parameters:
  StackName:
    Type: String
    Description: Enter the stack name to be prefixed to bucket name.
    Default: some-random-stack-name-1234

This will then need to be passed in at the command line when we create the initial stack. We’re going to manage everything via two bash scripts: up.sh and down.sh. Below is a shortened version of the up.sh script, in which you’ll see our call to AWS CloudFormation, which takes four parameters:

  • stack-name: This is pulled from our environment variable.
  • template-body: Our CloudFormation YAML, passed as a file:// path.
  • parameters: Here we associate the StackName parameter in the YAML with the environment variable BASIC_SITE in our .env.
  • capabilities: This is necessary in order to create a policy resource (more below).

We’ll add more to this up.sh later, but for now we have:

 
source .env

if [ -z "$BASIC_SITE" ]; then
  echo "Error: BASIC_SITE environment variable is not set."
else
  echo "Creating stack: $BASIC_SITE"
  aws cloudformation create-stack \
      --stack-name $BASIC_SITE \
      --template-body file://basic_s3_html_site.yaml \
      --parameters ParameterKey=StackName,ParameterValue=$BASIC_SITE \
      --capabilities CAPABILITY_IAM
fi

So what we’ve done is create an environment variable. This variable is passed via the CLI to our CloudFormation setup as the StackName parameter. We’re doing this so you can later change the environment variable and create a new site based on a different stack. You can also easily call the down.sh script, which will pull the name from the same .env to bring it all down. Essentially, we have named everything in one location. Yay!

Now, the script above gives us the setup we need in order to create our resources: a bucket and a policy.

Resources

The first of our resources, the bucket, will be created using the StackName variable, to which we’ll append the word bucket and the AWS account ID, which should make it uniquely named (all bucket names must be globally unique):

 
Resources:
  MyWebsiteBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub ${StackName}-bucket-${AWS::AccountId}
      WebsiteConfiguration:
        IndexDocument: index.html
        ErrorDocument: error.html
      PublicAccessBlockConfiguration:
        BlockPublicAcls: false
        IgnorePublicAcls: false
        BlockPublicPolicy: false
        RestrictPublicBuckets: false
      OwnershipControls:
        Rules:
          - ObjectOwnership: BucketOwnerPreferred
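
As a hypothetical example, if BASIC_SITE were my-basic-site and your account ID were 123456789012, the !Sub expression above would resolve to a bucket named my-basic-site-bucket-123456789012.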

PublicAccessBlockConfiguration and OwnershipControls are where we make the changes that affect the accessibility of the bucket. Setting the four public-access-block flags to false allows the bucket to be made public (via the policy we add next), while the ObjectOwnership rule means the bucket creator retains control over the bucket’s contents. This means that while everyone might be able to access the bucket and read the data, they cannot take ownership of the resource.
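
Once the stack is up, you can sanity-check these settings from the CLI if you like; this is just a quick verification sketch, assuming $BUCKET_NAME is set the way we set it later in up.sh:

# Check the public access block flags and the ownership rule on the new bucket.
aws s3api get-public-access-block --bucket "$BUCKET_NAME"
aws s3api get-bucket-ownership-controls --bucket "$BUCKET_NAME"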

Next, we create the policy for the bucket which allows read-only access to the general public:

 
Resources:
 …
  MyBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref MyWebsiteBucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Sid: PublicReadGetObject
            Effect: Allow
            Principal: "*"
            Action: "s3:GetObject"
            Resource: !Sub ${MyWebsiteBucket.Arn}/*

Breaking this down, you can see that Bucket refers via !Ref MyWebsiteBucket to the bucket resource itself (created above). The creation is broken into two parts: the bucket itself, and then the associated policy. This separation in AWS allows you to define a policy that can be reused in multiple places. The policy attributes do the following:

  • Sid: An identifier that can be anything you want.
  • Effect: Allow, meaning the listed action is permitted.
  • Principal: Open to all users, making our bucket public.
  • Action: We allow s3:GetObject only.
  • Resource: Using !Sub we get our bucket ARN, with /* appended so the policy covers every object in it.
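
If you want to confirm the policy actually landed on the bucket once the stack is up, one way to peek at it (again assuming $BUCKET_NAME, which we set later in up.sh) is:

# Print the bucket policy that CloudFormation attached.
aws s3api get-bucket-policy --bucket "$BUCKET_NAME" --query Policy --output text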

In total, we’ve created two resources and made them publicly readable. Next, we’ll want to get information about the bucket, such as the URL.

Output

The last thing we need for the YAML is to be able to get back the bucket name and URL of our newly created site:

 
Outputs:
  WebsiteURL:
    Value: !GetAtt MyWebsiteBucket.WebsiteURL
    Description: URL of the S3 bucket to host the website.
  BucketName:
    Value: !Ref MyWebsiteBucket
    Description: Name of the S3 bucket.

In this case we’re providing shortened names that are accessible via the CLI, and these will be used in the next section with our bash scripts to return the website URL.
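
If you want a quick look at both outputs at once after the stack is up, you can dump them as a table; this is the same describe-stacks call we’ll lean on in up.sh, just without narrowing it to a single key:

# Show all stack outputs; handy for a quick check.
aws cloudformation describe-stacks \
    --stack-name "$BASIC_SITE" \
    --query "Stacks[0].Outputs" \
    --output table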

Bringing it up

We round out our up.sh script to upload the index.html for the site and get back the site address. You’ll need to create or copy an index.html and put it alongside the YAML. This gives us the complete script:

 
#!/bin/bash

source .env

if [ -z "$BASIC_SITE" ]; then
  echo "Error: BASIC_SITE environment variable is not set."
  return
fi

echo "Creating stack: $BASIC_SITE"
aws cloudformation create-stack \
    --stack-name $BASIC_SITE \
    --template-body file://basic_s3_html_site.yaml \
    --parameters ParameterKey=StackName,ParameterValue=$BASIC_SITE \
    --capabilities CAPABILITY_IAM

echo "Waiting for stack to be created..."
aws cloudformation wait stack-create-complete --stack-name $BASIC_SITE

export BUCKET_NAME=$(aws cloudformation describe-stacks --stack-name $BASIC_SITE --query "Stacks[0].Outputs[?OutputKey=='BucketName'].OutputValue" --output text)

echo "Uploading index.html to bucket: $BUCKET_NAME"
aws s3 cp ./index.html s3://$BUCKET_NAME/index.html --acl public-read

WEBSITE_URL=$(aws cloudformation describe-stacks --stack-name $BASIC_SITE --query "Stacks[0].Outputs[?OutputKey=='WebsiteURL'].OutputValue" --output text)
echo "Website URL: $WEBSITE_URL"

It’s a bit complex, but to summarize what it’s doing:

  1. Loading our environment variable (stack name).
  2. Creating the stack using the environment variables as StackName.
  3. Waiting for stack to be created.
  4. Getting the newly created bucket name.
  5. Uploading our index.html to it.
  6. Getting the URL.

Let’s go through the parts not already covered above:

 
echo "Waiting for stack to be created..."
aws cloudformation wait stack-create-complete --stack-name $BASIC_SITE

This waits for the stack creation to complete. If we don’t use it, the upload will likely fail because the bucket won’t yet exist when the script reaches the upload statement. Next, we’ll query to determine our bucket name, though we should already know it because we assigned the name within the YAML:

 
export BUCKET_NAME=$(aws cloudformation describe-stacks --stack-name $BASIC_SITE --query "Stacks[0].Outputs[?OutputKey=='BucketName'].OutputValue" --output text)

Here we use describe-stacks to get the name of our bucket. If you run this command without the --query option you’ll see a long output describing everything in the stack. To get just the information we want, we use the JMESPath query language for JSON to pull out the BucketName, which we store in an environment variable to be used later.
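
As an aside, if the wait step ever seems to hang, a similar query will show you the stack’s current status; this is just a diagnostic and isn’t part of the script:

# Check where the stack currently is (e.g. CREATE_IN_PROGRESS, CREATE_COMPLETE).
aws cloudformation describe-stacks \
    --stack-name "$BASIC_SITE" \
    --query "Stacks[0].StackStatus" \
    --output text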

Next, we’ll upload our web page:

 
echo "Uploading index.html to bucket: $BUCKET_NAME"
aws s3 cp ./index.html s3://$BUCKET_NAME/index.html --acl public-read

This uses s3 cp, the copy command, to add our file. The --acl public-read flag is needed because, while we set the bucket to be readable, the individual files also seem to need to be set (I’m not 100% sure on this, though it seems to be the case).
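
If you want to test whether the ACL is actually required with this bucket policy in place, one hypothetical check is to re-upload without the flag and see whether the page still loads (this assumes you sourced up.sh, so $BUCKET_NAME and $WEBSITE_URL are still set):

# Re-upload without the ACL, then check the page over HTTP.
aws s3 cp ./index.html s3://$BUCKET_NAME/index.html
curl -I "$WEBSITE_URL"   # a 200 here suggests the bucket policy alone is enough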

 
WEBSITE_URL=$(aws cloudformation describe-stacks --stack-name $BASIC_SITE --query "Stacks[0].Outputs[?OutputKey=='WebsiteURL'].OutputValue" --output text)
echo "Website URL: $WEBSITE_URL"

We then output the URL for our site using a similar method to the one used above to get the BUCKET_NAME. You can then run the up.sh script using:

 
. up.sh

This might take a bit to run, but be patient as it should work. If not, you’ll see appropriate error messages telling you what went wrong.
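
If the error messages from create-stack itself aren’t enough, the stack’s event log usually shows which resource failed and why; this is a diagnostic sketch rather than part of up.sh:

# Show only the failed events with their reasons.
aws cloudformation describe-stack-events \
    --stack-name "$BASIC_SITE" \
    --query "StackEvents[?contains(ResourceStatus, 'FAILED')].[LogicalResourceId, ResourceStatusReason]" \
    --output table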

Taking it down

This site should be as easy to bring down as it was to bring up. I say easy, but in truth, this is quite a convoluted way to bring up a single HTML page. In the old days you could just use an FTP client, drag and drop the file into a folder, and be done with it. That said, there are some benefits to this setup, such as being able to bring up more sites on the fly in different buckets just by changing the environment variable.
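
For example, bringing up a second, independent site should in theory be as simple as pointing the .env at a new name and re-running the scripts; the name below is a made-up placeholder:

# Hypothetical: spin up a second site as its own stack and bucket (reusing the same index.html).
echo "BASIC_SITE=my-second-site-demo-1234" > .env
. up.sh

# ...and later tear it down again from the same .env.
. down.sh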

Getting back to taking the site down, we’ll look at the down.sh that you’ll find alongside the up.sh. I won’t go over sections where we’re using the query language to pull out the stack and bucket names, but I will go over bucket deletion:

 
echo "Emptying the bucket..."
aws s3 rm "s3://$BUCKET_NAME" --recursive

if [ $? -eq 0 ]; then
  echo "Bucket emptied successfully."
else
  echo "Error: Failed to empty the bucket. Proceeding with stack deletion anyway."
fi

# Not sure if the --force is necessary.
echo "Attempting to force delete the bucket..."
aws s3 rb "s3://$BUCKET_NAME" --force

Here, we recursively remove all the files in our bucket and then force remove the bucket itself. I wasn’t sure if the --force flag was necessary, as it worked without it, but I figured better safe than sorry.
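
For completeness, the last step of down.sh is deleting the stack itself; the lines below are a minimal sketch of what that looks like, assuming it mirrors the way up.sh created and waited on the stack, so check the repo version for the real thing:

echo "Deleting stack: $BASIC_SITE"
aws cloudformation delete-stack --stack-name "$BASIC_SITE"
aws cloudformation wait stack-delete-complete --stack-name "$BASIC_SITE"
echo "Stack deleted."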

Wrapping it up

And that should about do it. You can now bring up and down a site by simply changing an environment variable and running a script. AWS may change some things going forward that make these scripts error out, but let me know and I’ll fix them.

When creating these, I did find there was a lot of conflicting information on buckets and making them publicly available. Some of the attributes have changed in the last few years, which at first left the bucket inaccessible.