Implementing a simple microservices architecture using the Netflix OSS stack

● AWS Elastic Load Balancer

● AWS Elastic Container Service

● Netflix Eureka

● Netflix Zuul

Containerizing the microservices:

The first task in setting up the infrastructure is to containerize the microservices and run them on ECS as Docker containers. Assuming the microservices are written in Java and built as JAR files, a sample Dockerfile would be:
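A minimal sketch of such a Dockerfile; the base image, port, and JAR name (sample-service.jar) are assumptions, not values from the original article:

```dockerfile
# Base image with a Java runtime (Java 11 is an assumption)
FROM openjdk:11-jre-slim

# Copy the built JAR into the image; "sample-service.jar" is a placeholder name
COPY target/sample-service.jar /app/sample-service.jar

# The service's HTTP port (8080 is an assumption)
EXPOSE 8080

# Run the Spring Boot JAR
ENTRYPOINT ["java", "-jar", "/app/sample-service.jar"]
```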

Amazon ECS is preferred because we can leverage its service auto-scaling to scale the microservice up and down based on the incoming request load.

An alternative to ECS is Docker Swarm.

Assuming we are load balancing across two nodes, we will set up Eureka and Zuul on both nodes. Eureka and Zuul are both Spring Boot applications, whose codebases can be pre-configured and downloaded from Spring Initializr.

Configuring Eureka for High Availability / Peer-Awareness mode:

Copy the Eureka codebase to both nodes. The only configuration changes needed are in the src/main/resources/application.yml file on each node.

On node 1, make the following changes in the application.yml:
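A sketch of the peer-awareness configuration, assuming the two hosts are named node1 and node2 (hostnames, port, and application name are assumptions) — each server registers with the other as its peer:

```yaml
spring:
  application:
    name: eureka-server

server:
  port: 8761            # Eureka's conventional port

eureka:
  instance:
    hostname: node1     # hostname assigned in /etc/hosts
  client:
    registerWithEureka: true   # register this server with its peer
    fetchRegistry: true        # fetch the registry from the peer
    serviceUrl:
      defaultZone: http://node2:8761/eureka/   # point at the peer on node 2
```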

On node 2, make the following changes in the application.yml:
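The node 2 configuration is the mirror image of node 1 (same assumed hostnames and port), pointing back at node 1 as its peer:

```yaml
spring:
  application:
    name: eureka-server

server:
  port: 8761

eureka:
  instance:
    hostname: node2     # this node's hostname from /etc/hosts
  client:
    registerWithEureka: true
    fetchRegistry: true
    serviceUrl:
      defaultZone: http://node1:8761/eureka/   # point back at the peer on node 1
```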

Also, make sure the following dependencies are added in both nodes in the pom.xml file (assuming we’re using Maven):
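A sketch of the relevant pom.xml fragments, assuming the Spring Cloud Netflix Eureka server starter with its version managed by the spring-cloud-dependencies BOM (the version property is a placeholder to be set for your Spring Cloud release):

```xml
<dependencies>
    <!-- Eureka server starter from Spring Cloud Netflix -->
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
    </dependency>
</dependencies>

<dependencyManagement>
    <dependencies>
        <!-- BOM that pins compatible Spring Cloud versions -->
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring-cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
```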

Note: edit the /etc/hosts file on both nodes to assign hostnames, so the nodes can reach each other. It should look similar to this:
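A sketch of the entries on node 1; the hostnames node1/node2 and the private IP addresses are placeholders for your actual values:

```
# /etc/hosts — map each node's private IP to a hostname (IPs are placeholders)
10.0.0.1    node1
10.0.0.2    node2
```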

Do the same for node 2.

When the Eureka server instances boot up, they will discover each other and register as peers. All microservices register with both instances automatically, so if one server goes down, the other is still available. On both Eureka instances you will be able to see all the registered microservices. In the same way, you can scale up to multiple server instances in a production environment.

Setting up Zuul API Gateway:

Like Eureka, Zuul is also a Spring Boot application, and it will act as the API gateway for all incoming requests. Download and build the codebase from Spring Initializr, then run it as a standalone JAR or as a container on both nodes.
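A minimal application.yml sketch for the Zuul gateway; the application name, port, and the node1/node2 hostnames are assumptions carried over from the Eureka setup, and the main class would additionally be annotated with @EnableZuulProxy. With no explicit routes configured, Zuul automatically creates a route for every service registered in Eureka, keyed by its service ID:

```yaml
spring:
  application:
    name: zuul-gateway   # application name is an assumption

server:
  port: 8080             # gateway port assumed in the examples below

eureka:
  client:
    serviceUrl:
      # Register with both Eureka peers (hostnames from /etc/hosts)
      defaultZone: http://node1:8761/eureka/,http://node2:8761/eureka/
```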


Just send a request to http://localhost:8080/sample-service/api/sample-api. The Zuul proxy will route this request to sample-service/api/sample-api and, by using the service registry, resolve the exact address of sample-service, which is http://localhost:port.

Set up AWS ELB:

From your AWS account, provision an Elastic Load Balancer and configure it to distribute incoming requests across the two nodes.

Every company is a software company, even if it's not in the software business.