System Administration & Network Administration
amazon-web-services autoscaling
Updated Thu, 11 Aug 2022 23:19:15 GMT

AWS autoscaling stack of multiple servers

I have a stack of 2 AWS instances running 3 servers in a chain:

  • (Server 1) Apache2 and PHP-FPM
  • (Server 2) Varnish + Nginx for SSL termination

These servers are separate because I expect the Apache server to be CPU bound and the Varnish/Nginx server to be memory bound.

I now want to make use of auto scaling for redundancy and cost efficiency. A shared EFS file system will hold the website files as well as configuration, as suggested here. Server configuration is already managed with Ansible.

Currently I am leaning towards using a network load balancer to eliminate the need to map each of 25 SSL certificates onto a load balancer.

I cannot find any documented way to launch multiple servers as part of auto scaling, so that my stack of 2 servers is recreated when scaling up or on failure.

If there is no such way, am I better off:

a) Combining the 2 instances into one, with the risk of Varnish and PHP competing for memory


b) Creating a fixed number of SSL/Varnish servers that don't autoscale, and autoscaling only the Apache servers?


There's no way in AWS to have a single Auto Scaling Group (ASG) scale two different server types at the same time. For that you would need two different ASGs, which would scale independently of each other (the two ASGs could be driven by the same set of alarms, so in theory they would scale at the same times).
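The "two ASGs driven by the same alarms" idea can be sketched in CloudFormation. This is a minimal, hypothetical fragment, not a complete template: the launch templates, subnet IDs, and names are placeholders you would replace with your own, and the single alarm simply triggers a scale-out policy on both groups.

```yaml
Resources:
  ApacheASG:                          # ASG for the Apache/PHP-FPM tier
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "1"
      MaxSize: "4"
      LaunchTemplate:
        LaunchTemplateId: !Ref ApacheLaunchTemplate      # assumed to exist
        Version: !GetAtt ApacheLaunchTemplate.LatestVersionNumber
      VPCZoneIdentifier: ["subnet-aaaa", "subnet-bbbb"]  # placeholders

  VarnishASG:                         # ASG for the Varnish/Nginx tier
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "1"
      MaxSize: "4"
      LaunchTemplate:
        LaunchTemplateId: !Ref VarnishLaunchTemplate     # assumed to exist
        Version: !GetAtt VarnishLaunchTemplate.LatestVersionNumber
      VPCZoneIdentifier: ["subnet-aaaa", "subnet-bbbb"]  # placeholders

  ApacheScaleOut:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AutoScalingGroupName: !Ref ApacheASG
      AdjustmentType: ChangeInCapacity
      ScalingAdjustment: 1

  VarnishScaleOut:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AutoScalingGroupName: !Ref VarnishASG
      AdjustmentType: ChangeInCapacity
      ScalingAdjustment: 1

  HighLoadAlarm:                      # one alarm, two scaling actions
    Type: AWS::CloudWatch::Alarm
    Properties:
      MetricName: CPUUtilization
      Namespace: AWS/EC2
      Statistic: Average
      Period: 300
      EvaluationPeriods: 2
      Threshold: 70
      ComparisonOperator: GreaterThanThreshold
      AlarmActions:
        - !Ref ApacheScaleOut
        - !Ref VarnishScaleOut
```

Note that the groups still scale independently in terms of health replacement, so a failed Varnish instance is replaced without touching the Apache group.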

One big advantage of keeping everything on the same instance is that you will likely need two load balancers if you don't: one front-end load balancer going to the Apache ASG instances, and an internal load balancer those instances use to reach the second set of ASG instances. However, as you said, ideally the applications would be separated onto their own instances to avoid resource contention.

For the load balancer type: a Network Load Balancer (NLB) would let you avoid adding the certificates to it, since you can configure a TCP listener and have clients connect directly to the backend instances, with the certs configured there. However, I would recommend adding the certs to the load balancer instead (NLBs support this now, but you could also use an Application Load Balancer (ALB)). That way SSL is decrypted on the ALB/NLB, the connection to the instance is plain HTTP, and you don't have to worry about running Nginx on the instances (if SSL termination is all it was doing).
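The two options above correspond to two listener configurations. A hedged AWS CLI sketch, with all ARNs and variable names as placeholders:

```shell
# Option 1: TCP passthrough -- the NLB forwards raw bytes on port 443
# and the backend Nginx instances keep the certificates themselves.
aws elbv2 create-listener \
    --load-balancer-arn "$NLB_ARN" \
    --protocol TCP --port 443 \
    --default-actions Type=forward,TargetGroupArn="$TG_ARN"

# Option 2: terminate TLS on the load balancer itself (protocol TLS on
# an NLB, or HTTPS on an ALB), so the instances can listen on plain HTTP.
aws elbv2 create-listener \
    --load-balancer-arn "$NLB_ARN" \
    --protocol TLS --port 443 \
    --certificates CertificateArn="$ACM_CERT_ARN" \
    --default-actions Type=forward,TargetGroupArn="$TG_ARN"
```

With option 2 the target group behind the listener would typically point at port 80 on the instances.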

There is a default limit of 25 certs per load balancer, so if you're planning on adding more, you might want to look into multi-domain (SAN) certificates, which put several domain names on a single cert; the load balancer then uses SNI (Server Name Indication) to pick the right certificate for each hostname.
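To make the multi-domain idea concrete, here is a self-contained openssl example (not AWS-specific, and the domain names are made up) that generates a self-signed certificate covering three hostnames via the Subject Alternative Name extension:

```shell
# Generate a throwaway self-signed SAN certificate (OpenSSL 1.1.1+
# for the -addext flag). The hostnames are placeholders.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
    -keyout san-key.pem -out san-cert.pem \
    -subj "/CN=example.com" \
    -addext "subjectAltName=DNS:example.com,DNS:www.example.com,DNS:shop.example.com"

# Inspect the SAN list on the resulting certificate:
openssl x509 -in san-cert.pem -noout -ext subjectAltName
```

A real certificate from a CA (or ACM) works the same way: one cert, multiple DNS names, which is what keeps you under the per-load-balancer limit.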

Also, AWS will give you free certs, with renewal handled automatically, if you use the cert on a load balancer through the ACM (AWS Certificate Manager) service.

Comments (1)

  • +0 – Thanks for your answer. I already have 70 certs to deploy (35 if I combine them) and cannot afford for one website's cert to be associated with the certificate of another site. But deciding what to combine is already too complex and even harder to automate. I have also found some additional issues with keeping the machines separate, as Varnish does not route based on hostname, only IP address. I will flesh this out in a separate info answer later — Mar 28, 2019 at 22:06