
Ghost blogging on Azure Container Apps


Introduction

Hosting a blog these days can easily be done without spending anything. There are plenty of solutions in the likes of Medium, Weebly, Wix, and so on. But as the more technology-minded people who like to go the extra mile, we didn't pick the easiest option. We chose to run our blog on Azure Container Apps using the Ghost blogging platform.

In this post, I'll go deeper into how the site is hosted and how the deployment is currently done; it is not yet automated.

The software

Ghost is a well-known blogging solution in the self-hosted community. It's available as a SaaS offering, but you can also self-host it as a containerized application, with container images provided on Docker Hub or built yourself from the source code.

There are lots of frameworks similar to Ghost in the self-hosted space. One of the best-known is WordPress, but it has to be maintained closely because it is an attractive target for people searching for vulnerabilities. Then there are the static site generators, which are based on Markdown (a lightweight markup language). These have the advantage of being easy to import into other tools and to version via Git. However, we also wanted to be able to write articles on the move and to let readers comment. Another advantage is that Ghost is a very nice package as-is; Robbe did some customization with custom JavaScript and HTML, but that's not necessary if you just want to get started!

Architecture

Now that the decision on the software has been made, the bigger question is: where do we host it? We were contemplating hosting it on an on-premises Kubernetes cluster, somewhat as a challenge, but then a new container solution came along in Azure. This container platform gives us the best of all the other Azure offerings for our use case. However, just to be sure, let's look at our requirements.

Requirements

I needed a solution that allows the following:

  • Scalable, both for performance & pricing
  • Low idle pricing, if possible serverless
  • No warm-up time (scaling to zero is not an option)
  • Support for containers, as the prebuilt Docker Hub image will be deployed
  • No support for databases required, as we will use PaaS for that

Azure Container Solutions

1. Azure Container Instances
2. Docker Runtime in Azure App Services
3. Azure Container Apps
4. Azure Kubernetes Services
5. Red Hat OpenShift / Service Fabric

For good measure, a few sentences on why one would choose Azure Container Apps over the other solutions currently available in Azure. There are lots of articles out there that go into great detail about the numerous differences and use cases, but I'll keep it specific to our blog:

  • NEW = COOL (technical fact)
  • It has a lot of K8S features baked in, some of which are being backported to AKS native (KEDA autoscaler, Envoy proxy, versioning, ...)
  • Completely serverless, although you can use Workload Profiles with dedicated hardware
  • Scalable down to zero, though in our case scaled to a minimum of 1 to avoid a cold start
  • As it is built on AKS, it supports all types of containers

Azure Architecture

[Architecture diagram: Azure File Share, Storage Account (General Purpose V2), Azure Container App, Azure Container Apps Environment, MySQL DB, MySQL Server, AKS (managed by Microsoft), Log Analytics for ContainerInsights, Mailgun integration, Recovery Services Vault]

Above is a design I made which has evolved over time. There are 4 big parts in the architecture:

  • Storage
  • Database
  • Compute
  • External services

Below you'll get a short overview of how to configure these items. As all things start, I deployed everything the first time via the Azure Portal and then, once the setup was tested and verified, reverse-engineered it into code.

Storage

The software requires some persistent storage, mostly for static data: among other things, the images, themes, and uploaded files shown on the blog. Since this is normally mapped to a regular folder in a Docker environment, we decided to use a volume mount (equivalent to an AKS Persistent Volume with the Azure File CSI driver, but abstracted away). In this case, the simplest and most cost-effective solution was a General Purpose V2 storage account with a single file share for all the content, as there is no real reason to split it into multiple file shares. This is by far the easiest component of the whole setup!

Database

The database is already a lot trickier: the only officially supported database at this moment is MySQL 8.0. However, when we started this at the end of 2022, MySQL 5.7 was still the required version. The upgrade proved to be a problem because the collation changed between versions, which meant doing a database migration! The migration path required us to execute commands on the Ghost container, which we were not too keen on, so we chose the easy way: we took a MySQL export, did some string replacements where the collation was defined, and hoped that we didn't violate any of the collation rules (string lengths, etc.).

The reason we had to move to MySQL 8.0 in the first place is the end of life (EOL) of MySQL 5.7, announced by Oracle for October 2023. I'll also append the script that I used to replace the collation definitions manually, but use it at your own risk!

		
Show on Github

As you will see in the Bicep code, we had to disable secure transport (SSL); sadly, it was impossible to make it work, although according to the documentation it should be doable.

Compute

For compute we of course chose the state-of-the-art Azure Container Apps, which, after a year, finally has the most-requested features implemented: Key Vault integration, storage mounts, workload profiles, and much more!
For scaling we use the built-in KEDA autoscaler, configured with the default HTTP scaler, which is a good fit since we're hosting a website. One thing to note: set your minimum number of replicas to 1 for websites, otherwise you'll have a warm-up time, as the service scales to 0 by default.
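To make that concrete, here is a minimal sketch of such a Container App with the HTTP scaler and a minimum of one replica. The names, image tag, resource sizes and request threshold are my illustrative choices, not necessarily our production values:

```bicep
// Minimal sketch: a Ghost Container App with the default KEDA HTTP scaler
// and minReplicas set to 1 so the site never scales to zero.
// Names, image tag and threshold are illustrative assumptions.
param location string = resourceGroup().location
param environmentId string // resource ID of the Container Apps Environment

resource blog 'Microsoft.App/containerApps@2023-05-01' = {
  name: 'ca-ghost-blog'
  location: location
  properties: {
    managedEnvironmentId: environmentId
    configuration: {
      ingress: {
        external: true
        targetPort: 2368 // Ghost listens on 2368 by default
      }
    }
    template: {
      containers: [
        {
          name: 'ghost'
          image: 'docker.io/library/ghost:5' // official Docker Hub image
          resources: {
            cpu: json('0.5')
            memory: '1Gi'
          }
        }
      ]
      scale: {
        minReplicas: 1 // avoid cold starts on a website
        maxReplicas: 3
        rules: [
          {
            name: 'http-rule'
            http: {
              metadata: {
                concurrentRequests: '50' // scale out above 50 concurrent requests
              }
            }
          }
        ]
      }
    }
  }
}
```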

External mailing

Last but not least is the Ghost mailing integration. By default, Ghost advertises its integration with Mailgun for both login-confirmation mails and bulk newsletter mails. However, Mailgun is a paid service, and since this is a hobby project, we tried to cheap out. Connecting to Exchange Online via SMTP worked fine. However, even after adding the container app environment's outbound IP to an SPF record and enabling DKIM, we were unable to send the verification mails without some of them landing in the junk folder. So we decided to use Mailgun after all. The configuration went without a hitch; some things to pay attention to (a configuration sketch follows the list):

  • Use the flex plan; it's not shown on their site and has to be requested via support, but it has these features (credit card required):
    • 1,000 messages per month for free and $1 per 1,000 extra messages
    • Custom domains
    • 5 routes
    • Log data retention of 5 days
  • Don't use the sending API key, but a general account API key
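As a sketch of how the mail settings end up in the Container App: Ghost maps environment variables with double underscores onto its nested configuration keys (mail.transport, mail.options.auth.user, and so on). The values below are illustrative, and the bulk-newsletter Mailgun API key itself is configured in the Ghost admin UI rather than via environment variables; this env array belongs in the container definition sketched earlier:

```bicep
// Illustrative sketch: Ghost transactional mail settings as environment
// variables on the Ghost container. The secret 'mailgun-smtp-password'
// is assumed to be defined under configuration.secrets of the app.
env: [
  { name: 'mail__transport', value: 'SMTP' }
  { name: 'mail__options__service', value: 'Mailgun' }
  { name: 'mail__options__host', value: 'smtp.eu.mailgun.org' } // EU region endpoint
  { name: 'mail__options__port', value: '465' }
  { name: 'mail__options__auth__user', value: 'postmaster@mg.example.com' }
  { name: 'mail__options__auth__pass', secretRef: 'mailgun-smtp-password' }
]
```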

Infrastructure as Code

As I am most proficient in Azure Bicep and our solution is hosted in the Microsoft Azure cloud, we chose Bicep as our language of choice. I will go over each module of the code and add some annotations. The code supports private endpoints; in our case, however, they haven't been enabled, to save on costs (I know, not best practice 😄).

Network

		
Show on Github

A very simple module creating one subnet for all services, as this is a very small application. Microsoft tells us that you should use one subnet per private endpoint resource type; however, that seems like a lot of overhead for our use case (and for a lot of customers as well!).
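As a rough sketch of such a module, assuming one shared subnet (names and address ranges are illustrative):

```bicep
// Sketch of the network module: one VNet with a single subnet shared by
// all private endpoints. Names and address ranges are illustrative.
param location string = resourceGroup().location
param vnetName string = 'vnet-blog'

resource vnet 'Microsoft.Network/virtualNetworks@2023-05-01' = {
  name: vnetName
  location: location
  properties: {
    addressSpace: {
      addressPrefixes: [ '10.10.0.0/24' ]
    }
    subnets: [
      {
        name: 'snet-private-endpoints' // one subnet for every endpoint type
        properties: {
          addressPrefix: '10.10.0.0/27'
        }
      }
    ]
  }
}

output subnetId string = vnet.properties.subnets[0].id
```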

Storage

		
Show on Github

Half of our meat and potatoes: this is the storage for all the static data, mostly images and some files embedded in the blog articles. We are using Standard GRS, which might be overkill, as our database and compute aren't region-resilient! The data is stored on a file share called websitecontent, which is transaction optimized. At this moment we are at 190 MB used, so these settings don't matter too much. If required, a private endpoint can be enabled via the bicepparam file.
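A minimal sketch of that module could look like this (account name and quota are illustrative):

```bicep
// Sketch of the storage module: a GPv2 account with one transaction-
// optimized file share for all Ghost content. Names are illustrative.
param location string = resourceGroup().location
param storageAccountName string = 'stghostblog'

resource storage 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: storageAccountName
  location: location
  kind: 'StorageV2' // General Purpose V2
  sku: {
    name: 'Standard_GRS' // geo-redundant; LRS would be the cheaper option
  }
}

resource fileService 'Microsoft.Storage/storageAccounts/fileServices@2023-01-01' = {
  parent: storage
  name: 'default'
}

resource share 'Microsoft.Storage/storageAccounts/fileServices/shares@2023-01-01' = {
  parent: fileService
  name: 'websitecontent'
  properties: {
    accessTier: 'TransactionOptimized'
    shareQuota: 100 // GiB; far above the ~190 MB actually in use
  }
}
```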

Database

		
Show on Github

The database is where things get interesting. The storage account can be called a true Azure-native service, but MySQL is a "third-party" database offered in Azure as a PaaS service. As you can see, we run it on a B1ms machine, the smallest tier available, with a very low 360 IOPS and a retention of 7 days. We are now on MySQL 8.0, as opposed to 5.7 in the past. As with the storage module, the private endpoint can be enabled in the bicepparam file. That same file also contains the setting that disables SSL connections; a great amount of time was spent trying to make SSL work, but in the end I couldn't get it working. However, traffic runs over the Azure backbone network, so interception would be quite hard, and the impact of the traffic being listened to is not that much of a concern either. The public endpoint is protected by a firewall rule, so only the public IP of the Azure Container Apps environment is allowed.
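A sketch of the essential parts, with illustrative names; note the require_secure_transport override and the single firewall rule:

```bicep
// Sketch of the database module: the smallest Burstable MySQL Flexible
// Server, SSL enforcement off, and a firewall rule that only allows the
// Container Apps environment's outbound IP. Names are illustrative.
param location string = resourceGroup().location
param administratorLogin string
@secure()
param administratorPassword string
param containerAppsOutboundIp string

resource mysql 'Microsoft.DBforMySQL/flexibleServers@2023-06-30' = {
  name: 'mysql-ghost-blog'
  location: location
  sku: {
    name: 'Standard_B1ms' // smallest available tier
    tier: 'Burstable'
  }
  properties: {
    version: '8.0.21'
    administratorLogin: administratorLogin
    administratorLoginPassword: administratorPassword
    storage: {
      storageSizeGB: 20
      iops: 360 // the low baseline mentioned above
    }
    backup: {
      backupRetentionDays: 7
    }
  }
}

// Ghost could not be made to connect over SSL, so enforcement is disabled.
resource secureTransport 'Microsoft.DBforMySQL/flexibleServers/configurations@2023-06-30' = {
  parent: mysql
  name: 'require_secure_transport'
  properties: {
    value: 'OFF'
    source: 'user-override'
  }
}

// Only the Container Apps environment may reach the public endpoint.
resource allowAca 'Microsoft.DBforMySQL/flexibleServers/firewallRules@2023-06-30' = {
  parent: mysql
  name: 'allow-container-apps'
  properties: {
    startIpAddress: containerAppsOutboundIp
    endIpAddress: containerAppsOutboundIp
  }
}
```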

Backup

		
Show on Github

Backup for the storage account is also included via a Recovery Services Vault: a daily backup policy with 30 days of retention for the file share is enabled by default, so no action is required on the user's side to make this work!
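For reference, a sketch of the vault and the daily policy; registering the storage account and protecting the share itself are omitted here, and names and schedule times are illustrative:

```bicep
// Sketch of the backup module: a Recovery Services Vault with a daily
// Azure Files policy keeping 30 days of restore points. Registering the
// storage account and the protected item are omitted for brevity.
param location string = resourceGroup().location

resource vault 'Microsoft.RecoveryServices/vaults@2023-04-01' = {
  name: 'rsv-blog'
  location: location
  sku: {
    name: 'RS0'
    tier: 'Standard'
  }
  properties: {}
}

resource policy 'Microsoft.RecoveryServices/vaults/backupPolicies@2023-04-01' = {
  parent: vault
  name: 'daily-fileshare-30d'
  properties: {
    backupManagementType: 'AzureStorage'
    workLoadType: 'AzureFileShare'
    schedulePolicy: {
      schedulePolicyType: 'SimpleSchedulePolicy'
      scheduleRunFrequency: 'Daily'
      scheduleRunTimes: [ '2024-01-01T02:00:00Z' ] // 02:00 UTC daily
    }
    retentionPolicy: {
      retentionPolicyType: 'LongTermRetentionPolicy'
      dailySchedule: {
        retentionTimes: [ '2024-01-01T02:00:00Z' ]
        retentionDuration: {
          count: 30
          durationType: 'Days'
        }
      }
    }
    timeZone: 'UTC'
  }
}
```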

Container App (Environment)

		
Show on Github

And finally the compute side: the Container Apps Environment and the app itself. At this moment we use the Ghost image straight from the Docker Hub registry; for production environments, a Container Registry is recommended, so as not to depend on a third-party registry. As you can see, we group all our vars, secrets, volumes, and so on in a variable, because that makes it easier to change properties. First we deploy the Container Apps Environment, in which we redirect logs to the Log Analytics Workspace. After that, we define the volume mount, which in our case is an Azure File Share; in the background this uses the AKS CSI driver for Azure Files. Something to note is the customDomains property, where we define the SNI (Server Name Indication) settings. This basically means you can put multiple applications behind the same ingress controller, with TLS offloading done for free by Microsoft. When we first set up the blog, this feature was very unstable; in the last few months, however, it has been working well! As you can see in the code for the managedCertificate, we use HTTP validation, which means that on first deployment you need to add the asuid record to your DNS zone and point an A record to the public IP of your Container Apps Environment.
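A condensed sketch of the environment side, with illustrative names and a placeholder domain (the app itself was sketched back in the Compute section):

```bicep
// Sketch of the environment: logs to Log Analytics, an Azure Files volume,
// and a free managed certificate for the custom domain. Names, the domain
// and the secret wiring are illustrative assumptions.
param location string = resourceGroup().location
param logAnalyticsCustomerId string
@secure()
param logAnalyticsSharedKey string
param storageAccountName string
@secure()
param storageAccountKey string
param customDomain string = 'blog.example.com'

resource env 'Microsoft.App/managedEnvironments@2023-05-01' = {
  name: 'cae-blog'
  location: location
  properties: {
    appLogsConfiguration: {
      destination: 'log-analytics'
      logAnalyticsConfiguration: {
        customerId: logAnalyticsCustomerId
        sharedKey: logAnalyticsSharedKey
      }
    }
  }
}

// Azure File share mounted into the environment; under the hood this maps
// to the AKS CSI driver for Azure Files.
resource envStorage 'Microsoft.App/managedEnvironments/storages@2023-05-01' = {
  parent: env
  name: 'websitecontent'
  properties: {
    azureFile: {
      accountName: storageAccountName
      accountKey: storageAccountKey
      shareName: 'websitecontent'
      accessMode: 'ReadWrite'
    }
  }
}

// Free managed certificate; HTTP validation requires the asuid TXT record
// and an A record pointing at the environment's public IP.
resource cert 'Microsoft.App/managedEnvironments/managedCertificates@2023-05-01' = {
  parent: env
  name: 'cert-${replace(customDomain, '.', '-')}'
  location: location
  properties: {
    subjectName: customDomain
    domainControlValidation: 'HTTP'
  }
}
```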

Miscellaneous

		
Show on Github

One of the most basic resources, next to the Network Watcher, is the Log Analytics Workspace; it receives all container logs from the Azure Container Apps Environment.
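A sketch of such a workspace (name and retention are illustrative):

```bicep
// Sketch of the Log Analytics workspace receiving the container logs.
param location string = resourceGroup().location

resource logAnalytics 'Microsoft.OperationalInsights/workspaces@2022-10-01' = {
  name: 'log-blog'
  location: location
  properties: {
    sku: {
      name: 'PerGB2018' // pay-as-you-go
    }
    retentionInDays: 30
  }
}

output customerId string = logAnalytics.properties.customerId
```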

		
Show on Github

The main file calls all the modules above; as you can see, the network part is conditionally deployed, as are all private endpoints in the separate modules. Our Azure Key Vault is referenced here to retrieve the MySQL secret and the Mailgun API secret.
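A condensed sketch of how such a main.bicep wires things together, with illustrative module paths and secret names; note that getSecret() can only be used to feed @secure() module parameters:

```bicep
// Sketch of main.bicep: conditional network deployment and Key Vault
// references for the secrets. Paths, params and names are illustrative.
param location string = resourceGroup().location
param deployNetwork bool = false
param keyVaultName string
param containerAppsOutboundIp string

resource kv 'Microsoft.KeyVault/vaults@2023-02-01' existing = {
  name: keyVaultName
}

// The network (and the private endpoints in the other modules) only
// exists when explicitly enabled.
module network 'modules/network.bicep' = if (deployNetwork) {
  name: 'network'
  params: {
    location: location
  }
}

module database 'modules/database.bicep' = {
  name: 'database'
  params: {
    location: location
    administratorLogin: 'ghost'
    // getSecret() may only be passed to @secure() module parameters
    administratorPassword: kv.getSecret('mysql-admin-password')
    containerAppsOutboundIp: containerAppsOutboundIp
  }
}

// The Mailgun API secret is passed to the container app module the same
// way, e.g. kv.getSecret('mailgun-api-key').
```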

		
Show on Github

Our last file is the bicepparam file, which is coupled with the main.bicep file and contains all non-secret values. With this file we conclude the Bicep code for our blog infrastructure. Lots of improvement is possible, but the budget is sadly limited; more to come on optimizations and backup!
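For completeness, a sketch of what such a bicepparam file looks like, with illustrative values:

```bicep
// Sketch of main.bicepparam: the non-secret values for main.bicep.
using 'main.bicep'

param location = 'westeurope'
param deployNetwork = false // private endpoints disabled to save costs
param keyVaultName = 'kv-blog-prod'
param containerAppsOutboundIp = '20.23.45.67' // the environment's outbound IP
```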

Conclusion

This article has been a year in the making; we have had some issues in the past with MPN subscriptions being deallocated/deactivated, but in the end all turned out well. We have been very happy with the stability of both the Azure Container Apps solution and the Ghost blogging platform. The image versions have been updated automatically over the past year, keeping the software up to date without causing downtime, so that's awesome. There are lots of improvements still to be made, and more information to come on how to back up the storage account and database data to another subscription, but that's for the next article!