Deploying IriusRisk to AWS high availability infrastructure

Enterprise organisations are turning to high availability (HA) infrastructure to maximise uptime and speed, withstand crashes and failures, and secure access to critical applications.

In this article, we describe how a large financial services organisation approached us wanting to deploy IriusRisk on high availability infrastructure. Our Engineering team built a solution on AWS infrastructure that enabled the client to support several thousand projects and concurrent users while meeting acceptable service levels.

The challenge
IriusRisk is a web application that runs as a set of servlets within an application container, such as Tomcat. We usually divide the application's deployment into three main components:

  • A reverse proxy that lets us offload HTTPS and gives us flexibility to redirect specific application endpoints or customise error pages.
  • A Tomcat application server that runs the IriusRisk core application.
  • A PostgreSQL database that holds all the data IriusRisk needs to work.
These three components can initially be deployed onto the same server or onto separate ones.

To deploy our infrastructure we chose Amazon Web Services (AWS), using plain EC2 instances to run the IriusRisk SaaS service. This allowed us to scale both horizontally and vertically to support the organisation's intensive use of the application.

The solution: mapping our deployment to AWS services

Reverse proxy: AWS offers two components that can play the role of the reverse proxy: the Classic Load Balancer (CLB) and the Application Load Balancer (ALB). The first operates at the connection and request levels, but it does not support path-based routing. As path-based routing is required for our deployment, we opted for the second option, the ALB, which operates at the request level.

Tomcat application server: we use standard EC2 instances to run the Tomcat server, which we package as a Docker container, so each EC2 instance runs a Docker container executing the Tomcat server. This gives us greater flexibility when we need to perform application upgrades. The EC2 instances belong to an autoscaling group that we can size according to the load the application is facing.
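As a sketch of what that container layer could look like, the Dockerfile below builds a Tomcat image carrying the application. The base image tag and the WAR file name are illustrative assumptions, not the exact production setup:

```dockerfile
# Hypothetical Dockerfile for the Tomcat application tier.
# Base image tag and WAR file name are illustrative only.
FROM tomcat:9-jdk11

# Deploy the application WAR into Tomcat's webapps directory
# so it is served as the root web application.
COPY iriusrisk.war /usr/local/tomcat/webapps/ROOT.war

# Tomcat listens on 8080; the ALB terminates HTTPS in front of it.
EXPOSE 8080
```

Baking the application into an image like this is what makes upgrades flexible: rolling out a new version becomes replacing the container, not reconfiguring the instance.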

PostgreSQL database: this database is replaced by an Amazon Relational Database Service (RDS) instance, which gives us several advantages:

  • We measured the performance of the service: it handled a large number of users with concurrent connections and remained stable.
  • A hot swap instance can be configured on a different availability zone which provides greater service reliability.
  • As we store the application data within the RDS service, we can add or terminate server instances without impacting the service.
  • We can automate backups and recovery in a simple way.
This target architecture, leveraging the ALB, also allows us to create dedicated autoscaling groups for high-load paths. A good example is our API: it is served under the domain's /API/ path, which means we can create a new autoscaling group that handles all the API calls using ALB path-based redirection, without impacting UI performance.
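The routing decision described above can be sketched in a few lines. The target group names here are hypothetical stand-ins; in reality the rule lives in the ALB listener configuration:

```python
def route_request(path: str) -> str:
    """Return the target group an ALB-style path rule would select.

    Hypothetical sketch of path-based routing: target group names
    are illustrative, not the production configuration.
    """
    # Rule: API calls go to a dedicated autoscaling group, so heavy
    # API traffic never competes with the UI-serving instances.
    if path.startswith("/API/"):
        return "api-autoscaling-group"
    # Default rule: everything else is served by the UI instances.
    return "ui-autoscaling-group"


print(route_request("/API/projects"))  # routed to the API group
print(route_request("/login"))         # routed to the UI group
```

This is why the ALB (and not the CLB) was required: the CLB cannot inspect the request path to make this kind of decision.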

The target architecture can also be easily expressed as an AWS CloudFormation template (an infrastructure-as-code implementation). This allows us to deploy or recover HA deployments in hours, and to automate their creation from external sources such as a Slack integration.
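A fragment of such a template might look like the sketch below. The resource names, listener reference, and rule priority are assumptions for illustration; this does not reproduce the production template:

```yaml
# Hypothetical CloudFormation excerpt: an ALB plus a path-based
# listener rule forwarding /API/* traffic to its own target group.
Resources:
  AppLoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Scheme: internet-facing
      Subnets: !Ref PublicSubnets

  ApiListenerRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      ListenerArn: !Ref HttpsListener
      Priority: 10
      Conditions:
        - Field: path-pattern
          Values: ["/API/*"]
      Actions:
        - Type: forward
          TargetGroupArn: !Ref ApiTargetGroup
```

Keeping the whole stack in a template like this is what makes "recover in hours" realistic: the same definition can be redeployed from scratch or triggered automatically from an external source.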

We are proud to have a number of high availability clients that are able to scale IriusRisk both up and across their organisations, enabled and supported by AWS infrastructure.