Using GitHub Workflows to automatically deploy a Jekyll website to Amazon S3
Intro
GitHub Actions is GitHub’s CI/CD tool for creating pipelines. In this article we will look at how to add a GitHub Actions workflow to your repo to:
- Automate compiling a Jekyll website each time you push code to your remote repo.
- Automate uploading the compiled website files to an S3 bucket where they will be served as a static website.
Prerequisites
- A GitHub account.
- An AWS account and S3 bucket set up to use for your static website.
- An IAM role for the workflow to assume, created by following GitHub's guide to configuring OpenID Connect in AWS (this sets up GitHub as an OIDC identity provider in your AWS account).
Create and configure the workflow
First, we will create the YAML file for our workflow.
GitHub reads workflows from the folder `.github/workflows/`, so we need to create our file in this folder. The name of the file can be whatever you like as long as it has a .yaml or .yml file extension, for example:
`.github/workflows/site-upload.yaml`
Next, add the following configuration to your YAML file:
```yaml
name: Build and upload Jekyll site to S3

on: # Configure when the workflow runs. Here we trigger on every push to main branch
  push:
    branches: [ main ]

env: # Sets variables available to jobs in the workflow
  AWS_REGION: ap-northeast-1

permissions: # Modifies permissions granted to the GITHUB_TOKEN, the secret used for accessing GitHub
  id-token: write
  contents: read
```
Let's break this configuration apart a little bit:
- `on` says when the workflow will be triggered. Here we trigger it on every push to the main branch, but you can also trigger on pull requests, etc.
- `env` sets the variables available to jobs in the workflow.
- `permissions` modifies the default permissions granted to the GITHUB_TOKEN (see GitHub's docs on assigning permissions to jobs).
- `id-token: write` lets the workflow request an OpenID Connect token, which will be used to authenticate to AWS.
- `contents: read` gives read permission for the contents of the git repo.
Note: GITHUB_TOKEN is a secret (essentially a short-lived access token) created automatically each time the workflow runs; restricting its permissions limits what the workflow can do.
Both env and permissions can be set at the workflow level or at the job level (env can also be set on individual steps).
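For example, if you only wanted these settings to apply to a single job rather than the whole workflow, you could move them into that job. A minimal sketch (the job name and echo step here are purely illustrative, not part of the workflow we build below):

```yaml
jobs:
  example-job:
    runs-on: ubuntu-latest
    permissions: # only this job gets these permissions
      id-token: write
      contents: read
    env:
      AWS_REGION: ap-northeast-1 # only visible to steps in this job
    steps:
      - run: echo "Deploying to region $AWS_REGION"
```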
Create the build job
Next, we will add our first job, Build, which will build our Jekyll site files and produce a build artifact.
This job does two things:
- Build - check out the repo, install Ruby, and build the Jekyll site.
- Archive production artifacts - upload the build output from the build step, so that we can use it in our second job.
```yaml
jobs:
  Build:
    runs-on: ubuntu-latest # Specify OS of the runner
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          ruby-version: '3.1.3'
      - run: gem install bundler # set up the Ruby environment
      - run: bundle install
      - run: bundle exec jekyll build
      - uses: actions/upload-artifact@v4
        with:
          name: build-output
          path: _site # where to find the files that are to be uploaded
```
Breaking this job apart:
- `runs-on` specifies which OS to use to run the job.
- `uses` runs a specified action, i.e. a part of a workflow that has been moved out to its own repo or file for easy reuse in many different workflows.
- `with` is a map of input parameters passed to the action.
- `run` runs command line programs.

We use two actions here:
- `actions/checkout@v4` to check out the git repo onto the runner machine.
- `actions/upload-artifact@v4` to upload build artifacts to GitHub for re-use in other jobs.
So after running this job, we will end up with a build artifact called `build-output`, which is what we will upload to our S3 bucket in the next job.
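If you need more control over how the artifact is handled, `actions/upload-artifact` also accepts a few optional inputs. A minimal sketch of the same upload step with two of them (the retention period shown is just an example, not something the rest of this tutorial depends on):

```yaml
      - uses: actions/upload-artifact@v4
        with:
          name: build-output
          path: _site
          if-no-files-found: error # fail the job if the Jekyll build produced no files
          retention-days: 5        # keep the artifact for 5 days instead of the repo default
```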
Create the S3 upload job
Now let's add our second job, which uploads our Jekyll site files to our AWS S3 bucket.
This job consists of 4 steps:
- Checkout repo code.
- Download build artifacts - the Jekyll site files created in the previous job.
- Configure AWS credentials.
- Upload Site Files - Upload the build artifacts to your S3 bucket.
```yaml
  UploadToS3:
    needs: Build # define jobs that must run before this one
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repo code # Name displayed in GitHub UI
        uses: actions/checkout@v4
      - name: Download build artifacts
        uses: actions/download-artifact@v4
        with:
          name: build-output
          path: ./site/
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::<id>:role/<role-name> # change to reflect your IAM role's ARN
          role-session-name: <name-of-role-session> # the name that will be shown e.g. in AWS logs when this workflow accesses AWS
          aws-region: ${{ env.AWS_REGION }}
      - name: Upload Site Files
        run: aws s3 sync ${SOURCE_DIR} s3://${AWS_S3_BUCKET}
        env:
          AWS_S3_BUCKET: www.lisajd.com
          SOURCE_DIR: ${{ github.workspace }}/site/
```
- `needs` specifies that this job depends on another job, and will only run after that job has completed.
- `actions/download-artifact@v4` is used to download the `build-output` artifact produced by the Build job.

Note that here we have an example of env being set below the workflow level: these variables are scoped to the Upload Site Files step.
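One thing to be aware of: by default `aws s3 sync` only copies new and changed files, so pages you delete from your site will remain in the bucket. If you want the bucket to mirror the build output exactly, you could add the `--delete` flag to the upload step. A sketch of that variant (same step as above, just with the extra flag, which removes any bucket objects not present in the build output):

```yaml
      - name: Upload Site Files
        run: aws s3 sync ${SOURCE_DIR} s3://${AWS_S3_BUCKET} --delete
        env:
          AWS_S3_BUCKET: www.lisajd.com
          SOURCE_DIR: ${{ github.workspace }}/site/
```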
Now when we push any commits to our GitHub repo, the workflow will be triggered automatically. The workflow output can be viewed from the Actions tab of the repo's page on GitHub.
Here we see Build runs first and UploadToS3 runs second. We can also see the artifact produced.
If we click on one of the jobs, we can get more details about each step of the job, and if we drill down further into the steps we can see exactly what the output of the commands on the runner were.
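Pushes are not the only way to start the workflow. If you also want to be able to run it by hand from the Actions tab (for example to re-deploy without making a commit), you could add a `workflow_dispatch` trigger alongside `push`. A minimal sketch of the amended trigger section:

```yaml
on:
  push:
    branches: [ main ]
  workflow_dispatch: # adds a "Run workflow" button in the Actions tab
```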
Conclusion
In this tutorial we set up a GitHub Actions workflow to automate building our Jekyll website and uploading it to an AWS S3 bucket. This saves the time and effort of doing these steps manually each time we make changes to the website.
Other concepts / further reading
- More info about GitHub workflows: https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions
- More info about OIDC: https://auth0.com/docs/authenticate/protocols/openid-connect-protocol. OpenID Connect (OIDC) is an identity layer built on top of the OAuth framework that simplifies user authentication. For example, when a website lets you log in using your Facebook or Google account, that login typically uses OIDC/OAuth.