This month AWS announced App Runner, a fully managed, container-native service designed to make it easier for developers to quickly deploy APIs, microservices, and web applications from source code repositories or container images.

Connect your code repo or ECR-hosted container to App Runner and it automatically builds and deploys your web application on scalable resources with load balancing and encryption. There are no servers to build and maintain, no orchestrators to configure, no build pipelines to create, no TLS certificates to rotate, and no load balancers or scaling policies to worry about.

The intention is to free you from managing servers and infrastructure, leaving you more time to focus on building your application and bringing the new features on your roadmap to life.

The service provides a URL endpoint to access the application, along with managed TLS certificates that auto-renew. There's no need to define a VPC or set up load balancing or autoscaling, and if you deploy from source code you don't need to build a Docker image yourself, as App Runner does all of this behind the scenes. You simply choose a base virtual CPU and memory size, give App Runner the location of your container image or code repo, and App Runner builds the container and infrastructure and creates the application endpoint on a port you nominate. Currently Python 3 and Node.js are the supported application runtimes.
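To make the "port you nominate" concrete, here is a minimal sketch of the kind of web application App Runner runs: an HTTP server that listens on a configurable port. The handler, the `PORT` environment variable, and the default of 8080 are illustrative assumptions; App Runner's only real requirement is that the application accepts connections on the port you configure.

```python
# Minimal Python web app of the kind App Runner deploys.
# Assumptions for illustration: the handler contents, the PORT
# environment variable, and the 8080 default.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer


class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond 200 on every path; a TCP health check only needs
        # the port to accept connections.
        body = b"Hello from App Runner"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


def make_server(port: int) -> HTTPServer:
    # Bind to all interfaces so the container's published port is reachable.
    return HTTPServer(("0.0.0.0", port), HelloHandler)


if __name__ == "__main__":
    # Read the port from the environment so it matches the port
    # nominated in the App Runner service configuration.
    port = int(os.environ.get("PORT", "8080"))
    make_server(port).serve_forever()
```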

Depending on the code repo you use, App Runner can automatically deploy new code as it is pushed to the branch you connected. You can bring your own containers or use the integrated container build service to go directly from code repo to deployed application.

Deploying a Container with App Runner

You will need to log into the App Runner console.

Then create an App Runner service.

On this page, you can choose whether you want to use a container registry or a source code repository like GitHub. For this example, we'll choose an existing public container.

Enter the container image location in the container image URI box. This can be an image in your private Amazon ECR account or one from the public ECR gallery; a private image URI takes the form <account-id>.dkr.ecr.<region>.amazonaws.com/<repository>:<tag>.

Then you can choose to manually deploy images going forward, or you can select automatic, which sets up App Runner to monitor your container registry and trigger a deployment whenever a new image version is detected.

The next step is to configure the service.

Here you enter a unique service name and select the number of virtual CPUs and the amount of initial memory. At the time of writing you can select one or two CPUs and two, three or four GB of memory.

You also need to nominate a port for the service to use.

App Runner will autoscale in response to traffic loads. As concurrent requests hit a nominated threshold, new instances are provisioned and when traffic subsides they are scaled back.

The default autoscaling settings are that 100 concurrent requests will trigger the provisioning of a new instance, up to a total of 25 instances. If your application's performance dictates a lower (or higher) number of concurrent requests per instance, you can choose custom configuration and enter your desired autoscaling settings for App Runner to adhere to.
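The default policy above can be sketched as simple arithmetic: divide the concurrent request load by the per-instance concurrency limit and cap the result at the maximum instance count. The function name and exact rounding here are illustrative assumptions, not App Runner's published algorithm.

```python
# Illustrative sketch of concurrency-based scaling with App Runner's
# default settings: 100 concurrent requests per instance, up to 25
# instances. This is an approximation for intuition, not the service's
# actual scaling implementation.
import math


def instances_needed(concurrent_requests: int,
                     max_concurrency: int = 100,
                     max_size: int = 25,
                     min_size: int = 1) -> int:
    """Estimate how many instances the defaults would provision."""
    if concurrent_requests <= 0:
        return min_size
    wanted = math.ceil(concurrent_requests / max_concurrency)
    return max(min_size, min(wanted, max_size))


print(instances_needed(250))     # 250 concurrent requests -> 3 instances
print(instances_needed(10_000))  # huge load -> capped at 25 instances
```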

App Runner will continually poll the nominated TCP port to ensure the service remains healthy. The health check's interval, timeout, and thresholds can be configured.

App Runner Security

By default, App Runner will utilise an AWS-owned KMS key. In the security settings you can nominate an IAM role to associate with the service, and also a customer-managed KMS key should you have specific policies and permissions you would like to assign to the application.

App Runner then allows you to review and edit all the settings before deployment.

Once you are happy with the configuration settings you click the orange “Create and Deploy” button.

After a few minutes the service will be built and available at the URL endpoint shown under “Default domain”. Obviously that’s not ideal from a user perspective, so App Runner makes it very simple to attach a Custom Domain name to the service.

Until your domain is connected, you can reach your App Runner application deployment via the generated *.awsapprunner.com URL. In this instance, the running test application pulled from the AWS public ECR looks like this:

When you are finished with the App Runner service instance, you can delete it from the App Runner services console using the actions button.

Deploying an Application using GitHub with App Runner

Connect to your GitHub repo and specify the branch to pull the code from. Once connected, you can tell App Runner to monitor the branch and deploy new code as it appears (or you can leave it set to manual deployment).

Next you specify the build settings:

You need to nominate a build runtime; at the time of writing you can select either Python 3 or Node.js 12.

You can add a build command that is run in the root directory of your repository when a new code version is deployed. This can be used to install dependencies or compile code, for instance.

You also get to enter any service start commands, which run when the service is started and can access environment variables that either you or App Runner have created.

And finally you get to specify the port that will be exposed to access the application once the service is running.

It is also possible to add a YAML file called apprunner.yaml to your repo to define what happens during deployment.
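A minimal apprunner.yaml covering the build settings described above might look like the sketch below. The build command, start command, port, and environment variable are placeholder values for a hypothetical Python app; check the exact schema against the App Runner configuration file documentation before relying on it.

```yaml
# Illustrative apprunner.yaml for a hypothetical Python application.
# Values (commands, port, env) are assumptions, not App Runner defaults.
version: 1.0
runtime: python3
build:
  commands:
    build:
      - pip install -r requirements.txt   # build command: install dependencies
run:
  command: python app.py                  # service start command
  network:
    port: 8080                            # port exposed by the service
  env:
    - name: STAGE                         # example environment variable
      value: "production"
```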

So that wraps up our first look at AWS App Runner. It's serverless insofar as you don't have to provision any hardware instances. You also don't need to worry about setting up VPCs, choosing appropriate EC2 instances, or configuring autoscaling and load balancing, as the service takes care of all of that for you.

The service also manages and updates TLS certificates automatically and can be set up to automate deployment of application updates as new container images are detected in ECR or new code is pushed to the attached GitHub code branch.

It’s early days but on the whole, App Runner looks to be an impressive addition to the AWS Services catalogue and has lots of new language support and features on the roadmap.

If you are building or managing applications hosted on AWS, Azure or GCP, Hava.io can automate your network topology diagrams, capture and store version history as changes are detected, and let you export your fully interactive diagrams in a number of formats for presentations or for external editing, without compromising the integrity of the diagrams generated and stored within the application.

You can try Hava for yourself here: https://www.hava.io

Originally published at https://www.hava.io.

Tech Writer, Developer, Marketer and Generator of Leads.