Second Biweekly Project Update
- Nelson Gomez Bohorquez
- Mar 7
- 5 min read
1. PLANNED ACTIVITIES:
Design the test scenario.
AWS Environment Setup
Deploy a basic AWS environment for initial tests with the AWS Classic Load Balancer.
Integrate monitoring into the basic AWS environment.
Load Testing Implementation: Review JMeter load scenarios to simulate varying levels of concurrency.
2. PROGRESS UPDATE:
2.1. DESIGN THE TEST SCENARIO.
The test scenario has been designed to evaluate the existing load-balancing algorithms within a data-center networking environment, while taking into account the credits and resources available in the AWS cloud.
2.1.1. Region and availability zone selection.
The initial step involved selecting the appropriate AWS region and availability zones. According to AWS documentation, the US East (N. Virginia) "us-east-1" region was chosen primarily for its cost-effectiveness, broader service availability, and better compatibility with AWS Free Tier benefits, making it the most suitable option for the project.
Lower Free Tier and On-Demand Costs: AWS provides the most cost-effective pricing in us-east-1, making it ideal for deploying non-critical academic testing without unnecessary expenses [1].
Broader Service Availability: The us-east-1 region offers a complete range of AWS services, ensuring that the load balancers and monitoring needed for testing can be deployed without setbacks due to missing functionality or support [2].
Regions such as Canada (Central) and US West (Oregon), despite their geographical proximity, offer more limited service availability and higher resource pricing [3].
2.1.2. Testing Virtual Machines (VMs)
Amazon Elastic Compute Cloud (EC2) enables users to deploy virtual machines tailored to specific workload requirements within the AWS cloud. As part of the AWS Free Tier, 750 hours per month of free usage are available for eligible EC2 instances such as the “t2.micro” instance, which has the following technical specifications [4]:
Name | vCPU | RAM (GiB) | CPU Credits / hr |
t2.micro | 1 | 1 | 6 |

2.1.3. EC2 VM deployment testing scenario.
The deployment testing scenario includes:
One EC2 (t2.micro) instance dedicated to traffic generation, simulating HTTP user requests to evaluate the performance of the different types of load balancers.
Two or three EC2 (t2.micro) instances functioning as backend servers behind the different types of AWS Load Balancer.
Both types of VMs will run on the Ubuntu Server 24.04 OS image, which is also part of the free tier offered by AWS.
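As a rough illustration, the instances in this scenario could be launched programmatically with boto3. This is only a sketch: the AMI ID, key pair name, and security group ID below are placeholders, not the values used in the actual deployment.

```python
# Sketch: launching the testing EC2 instances with boto3 (assumed AMI ID,
# key pair, and security group; adjust to the real account values).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical Ubuntu Server 24.04 AMI ID for us-east-1; look up the real one
# in the AWS console or via describe_images before running.
UBUNTU_24_04_AMI = "ami-0123456789abcdef0"

def launch_instances(count, name_tag):
    """Launch `count` t2.micro instances tagged with `name_tag`."""
    response = ec2.run_instances(
        ImageId=UBUNTU_24_04_AMI,
        InstanceType="t2.micro",
        MinCount=count,
        MaxCount=count,
        KeyName="project-key",                        # assumed key pair name
        SecurityGroupIds=["sg-0123456789abcdef0"],    # assumed security group ID
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": name_tag}],
        }],
    )
    return [i["InstanceId"] for i in response["Instances"]]

# One traffic generator and two backend HTTP servers, as in the scenario above.
generator_ids = launch_instances(1, "LoadGenJmeter1")
backend_ids = launch_instances(2, "BackendHTTPServer")
```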

2.1.4. Testing App Tools.
Apache JMeter V 5.6.3.
The traffic generator VM uses Apache JMeter to apply different load conditions, testing the responsiveness of the different types of load balancers in distributing traffic among the backend instances. JMeter CLI 5.6.3 (Ubuntu) will be installed on the EC2 (t2.micro) VM to run the HTTP load tests, while the Apache JMeter GUI 5.6.3 (Windows) will be used to configure the different load tests (.jmx files).
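For reference, a minimal sketch of how a prepared test plan could be launched from the traffic generator VM in non-GUI mode is shown below; the install path and file names are assumptions, not the exact ones used in the project.

```python
# Sketch: running a JMeter test plan in non-GUI (CLI) mode on the traffic
# generator VM. The .jmx file is prepared beforehand in the JMeter GUI on
# Windows; the paths and file names here are placeholders.
import subprocess

JMETER_BIN = "/opt/apache-jmeter-5.6.3/bin/jmeter"   # assumed install path
TEST_PLAN = "clb_load_test.jmx"                      # assumed test plan file
RESULTS_LOG = "clb_load_test_results.jtl"            # per-request results log

subprocess.run(
    [JMETER_BIN,
     "-n",               # non-GUI mode
     "-t", TEST_PLAN,    # test plan created in the JMeter GUI
     "-l", RESULTS_LOG], # write results to a .jtl log file
    check=True,
)
```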
HTTP Server – NGINX
NGINX runs on the Ubuntu EC2 VMs, serving as the backend web servers that handle incoming requests.
Amazon CloudWatch
The CloudWatch service will be used to monitor key performance metrics such as CPU utilization, response times, network usage, and instance health status for both the EC2 VMs and the different types of load balancers, in order to analyze the system’s behaviour under varying workloads.
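As an illustrative sketch, the CPU utilization of one backend instance could be retrieved from CloudWatch with boto3 as follows; the instance ID and time window are placeholders.

```python
# Sketch: pulling average and maximum CPU utilization for one backend EC2
# instance from CloudWatch (instance ID and time window are placeholders).
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)          # last hour of the test run

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=start,
    EndTime=end,
    Period=60,                            # one-minute granularity
    Statistics=["Average", "Maximum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], point["Maximum"])
```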
2.2. AWS ENVIRONMENT SETUP
The AWS environment setup involves deploying a cloud infrastructure to test load-balancing algorithms. As the first step, we started with the AWS Classic Load Balancer (CLB), which mainly uses the round-robin algorithm.
The first deployed scenario on the AWS cloud is shown in the following diagram:

The traffic generator EC2 instance, running Apache JMeter on Ubuntu, was deployed in the Availability Zone (AZ) “us-east-1a”, while the HTTP backend servers were deployed in the AZ “us-east-1c”. The reason for deploying in different AZs is to simulate a real scenario, where the clients are not in the same location as the servers, which usually sit in a data center. Moreover, in the AWS cloud, every AZ corresponds to one or more physically separate data centers.
The connectivity of the EC2 instances and the load balancer was controlled using two Security Groups, which act as logical firewalls in the AWS cloud, allowing inbound traffic for SSH (remote management) and HTTP (testing traffic).
Every EC2 instance has a public IP for management and a private IP for internal traffic, which will be used in the testing scenario.
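For illustration, the two inbound rules described above could be added to a Security Group with boto3 roughly as follows; the group ID and allowed CIDR ranges are placeholders and would need to match the real VPC.

```python
# Sketch: opening SSH (22) and HTTP (80) inbound on a security group with
# boto3; group ID and CIDR ranges are placeholders, not deployment values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # placeholder security group ID
    IpPermissions=[
        {   # SSH for remote management
            "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
            "IpRanges": [{"CidrIp": "0.0.0.0/0",
                          "Description": "SSH management"}],
        },
        {   # HTTP for the internal testing traffic
            "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
            "IpRanges": [{"CidrIp": "172.31.0.0/16",
                          "Description": "Internal HTTP test traffic"}],
        },
    ],
)
```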

Classic Load Balancer (CLB) deployment and verification.
The CLB (Classic Load Balancer) was deployed using private IP addressing, so the EC2 traffic generator VM was configured to send the HTTP requests directly to this CLB interface.
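A hedged sketch of how such an internal CLB could be created and the two backend instances registered with boto3 is shown below; the subnet, security group, and instance IDs are placeholders, not the actual deployment values.

```python
# Sketch: creating an internal Classic Load Balancer and registering the two
# backend instances (subnet, security group, and instance IDs are placeholders).
import boto3

elb = boto3.client("elb", region_name="us-east-1")

elb.create_load_balancer(
    LoadBalancerName="CLBTest2",
    Listeners=[{
        "Protocol": "HTTP", "LoadBalancerPort": 80,
        "InstanceProtocol": "HTTP", "InstancePort": 80,
    }],
    Subnets=["subnet-0123456789abcdef0"],       # placeholder subnet in us-east-1c
    SecurityGroups=["sg-0123456789abcdef0"],    # placeholder security group
    Scheme="internal",                          # private IP addressing only
)

elb.register_instances_with_load_balancer(
    LoadBalancerName="CLBTest2",
    Instances=[
        {"InstanceId": "i-0aaaaaaaaaaaaaaa1"},  # first backend (placeholder ID)
        {"InstanceId": "i-0bbbbbbbbbbbbbbb2"},  # second backend (placeholder ID)
    ],
)
```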

In summary, the IP addressing used in this first deployment is:
Host Name | AWS Service Type | Private IPv4 |
First Backend HTTP Server | EC2 – T2.micro | 172.31.16.152 |
Second Backend HTTP Server | EC2 – T2.micro | 172.31.20.199 |
LoadGenJmeter1 | EC2 – T2.micro | 172.31.6.113 |
CLBTest2 | Classic Load Balancer | 172.31.17.128 |
The verification of the correct operation of the CLB is shown in the following figure:

Thus, we can see how the load balancer distributes the HTTP requests between the two backend servers on each request.
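Outside JMeter, a quick manual check of this round-robin behaviour could look like the following sketch, assuming each NGINX backend returns a page that identifies it (for example, its hostname); the CLB address is taken from the table above.

```python
# Sketch: manually confirming that requests sent to the CLB alternate between
# the two backends. Assumes each NGINX backend serves a page identifying itself.
import requests

CLB_ADDRESS = "http://172.31.17.128"   # private IP of CLBTest2 (see table above)

for i in range(6):
    response = requests.get(CLB_ADDRESS, timeout=5)
    # With round robin, the responding backend should alternate on each request.
    print(f"Request {i + 1}: {response.status_code} -> {response.text.strip()[:60]}")
```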
2.3. LOAD TESTING IMPLEMENTATION: Review JMeter load scenarios to simulate varying levels of concurrency.
During this second biweekly period, a load testing plan was implemented using JMeter to evaluate backend performance under different concurrency levels. The tests aimed to analyze system stability, request throughput, response times, and error rates before and after integrating a Classic Load Balancer with two EC2 instances.
2.3.1. Baseline Test (Single EC2 Instance, No Load Balancer)
The following load test parameters were obtained through progressive load testing:
Concurrent Users: 400.
Ramp-Up Period: 60s.

Results after running the test:
Max Throughput: ~752 req/s
Max Response Time: 12.9s (severe latency under load spikes)
Error Rate: 30 errors (0.02%), mostly due to timeouts

In summary, the single EC2 instance became a bottleneck, impacting HTTP service performance and causing high response times under increased concurrency.
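For completeness, the summary figures reported in these tests (peak throughput, maximum response time, and error rate) could be derived from the JMeter .jtl results file with a short script along these lines; the file name is a placeholder, and the column names follow JMeter's default CSV output.

```python
# Sketch: summarizing a JMeter .jtl results file (CSV format) into the figures
# reported above. Uses JMeter's default columns: timeStamp, elapsed, success.
import csv
from collections import Counter

def summarize(jtl_path):
    timestamps, elapsed, errors, total = [], [], 0, 0
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            timestamps.append(int(row["timeStamp"]))   # epoch milliseconds
            elapsed.append(int(row["elapsed"]))         # response time in ms
            if row["success"].lower() != "true":
                errors += 1
    requests_per_second = Counter(ts // 1000 for ts in timestamps)
    print(f"Total requests:    {total}")
    print(f"Peak throughput:   {max(requests_per_second.values())} req/s")
    print(f"Max response time: {max(elapsed) / 1000.0:.1f} s")
    print(f"Errors:            {errors} ({100.0 * errors / total:.2f}%)")

summarize("baseline_400users.jtl")   # placeholder results file name
```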
2.3.2. CLB with Two EC2 Backend Instances
Using the same baseline test configuration as before, but now with the complete scenario (two backend servers and a CLB):
Concurrent Users: 400
Ramp-Up Period: 60s
Results after running the test:
Max Throughput: ~781 req/s
Max Response Time: 1.6s (reduced from 12.9s, improved stability)
Error Rate: 0 (No errors detected)

Therefore, the CLB successfully distributed the traffic between the two instances, reducing peak response times and eliminating timeouts.
2.3.3. Stress Test on the CLB
After various tests with a progressive load, we obtained the following configuration load test parameters:
Concurrent Users: 1,000
Ramp-Up Period: 90s
Results after running the test:
Max Throughput: 2161 req/s (significantly increased)
Max Response Time: 14s (high latency under heavy load)
Error Rate: 31 errors (0.00%)

Despite the high load (2287 req/s), the error rate remained low (31 errors total), indicating that the system remained stable, although some requests experienced extreme latencies (~14s) due to instance resource exhaustion.
The load testing results indicate that while the Classic Load Balancer improved request distribution, latency still increased under high concurrency, highlighting the resource limitations of the t2.micro EC2 instances. The next step is to deploy an Application Load Balancer (ALB), which could provide more efficient load-balancing mechanisms compared to the CLB’s algorithm.
3. NEXT STEPS:
The next steps for our project involve:
Continuing AWS Environment Setup:
Deploy an AWS environment for tests with the AWS Application Load Balancer (ALB).
Analyze the differences in performance and the improvements between the CLB algorithm and the ALB algorithm.
Load Testing Implementation: Continue JMeter load scenarios to simulate varying levels of concurrency with other load-balancing algorithms.
Research relevant metrics in Amazon CloudWatch and JMeter that could be valuable for analyzing load balancer performance.
References:
[1] Amazon Web Services, "COST07-BP02 Choose Regions Based on Cost," AWS Well-Architected Framework, 2025.
[2] Amazon Web Services, "AWS Services by Region," 2025. https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/
[3] E. Márquez, "Choose Your AWS Region Wisely," Concurrency Labs, 2024. https://www.concurrencylabs.com/blog/choose-your-aws-region-wisely/
[4] Amazon Web Services, "T2 Instances," AWS Documentation, 2025. https://aws.amazon.com/ec2/instance-types/t2/