Explore what round-robin load balancing is, how it works, its pros and cons, and how it compares with related load balancing techniques.

Round robin is one of the most extensively used load balancing techniques. Despite the development of many advanced load balancing methods, round robin is still relevant because it’s easy to understand and implement.

It’s a relatively simple way of cyclically distributing client requests across multiple servers. This is particularly useful in high-traffic situations, where spreading requests evenly across servers helps prevent overload or complete server failure.

What is round robin load balancing?

Round robin load balancing is a technique that cyclically forwards client requests across a group of servers to balance the server load effectively. It works best when these servers have similar computational and storage capabilities.

Round robin is one of the simplest ways to balance the server load and provide basic fault tolerance. In this method, multiple identical servers are set up to deliver the same services or applications. Although they all share the same internet domain name, each server has a unique IP address. A load balancer keeps a list of all the unique IP addresses linked with the internet domain name.

How does round robin load balancing work?

Round robin operates on a simple mechanism. With round robin network load balancing, connection requests are cyclically moved between servers. The requests are sequenced based on the order they’re received. Let’s take an example to help you understand how this works.

When session requests associated with an internet domain name arrive, they are assigned to servers in a rotating sequence. For example, the first request is sent to server 1’s IP address, the second to server 2’s, and so on. Once every server has been assigned a request, the rotation resumes at server 1. This sequential rotation of client requests helps keep the load balanced even during high traffic.

In a nutshell, round-robin network load balancing rotates connection requests among web servers in the order they’re received. Consider an organization with a cluster of three servers: A, B, and C.

  • Server A receives the first request
  • Server B receives the second request
  • Server C receives the third request

The load balancer keeps sending requests to the servers in this order. This distributes the server load evenly, allowing the balancer to manage high traffic.
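
As a rough illustration (not any particular load balancer’s implementation), here is a minimal Python sketch of that rotation; the server names and the request loop are hypothetical.

```python
from itertools import cycle

# Hypothetical pool of identical backend servers behind one domain name.
servers = ["server-a", "server-b", "server-c"]

# cycle() yields the servers in order and starts over after the last one,
# which is the round-robin rotation described above.
rotation = cycle(servers)

if __name__ == "__main__":
    for request_id in range(1, 7):
        print(f"request {request_id} -> {next(rotation)}")
    # Requests 1-3 go to servers a, b, c; requests 4-6 repeat the cycle.
```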

What is weighted round robin load balancing?

The weighted round robin load balancing approach is based on the round robin load balancing method. In a weighted round robin, the network administrator assigns a pre-set numerical weight to each server in the pool. The most efficient and top-performing server is given a weighted score of 100. A server with half the processing capability is given a weight of 50, and so on for the rest of the farm’s servers.

More requests are sent to servers with higher weights. For example, a server with a weight of 100 receives twice as many requests as a server with a weight of 50, and four times as many as a server with a weight of 25. Requests are still assigned cyclically; the servers with higher weights simply receive more sessions in each cycle.

Weighted load balancing vs. round robin load balancing

The biggest drawback of the round-robin algorithm is that it assumes all servers are similar enough to handle equivalent loads. The algorithm has no way to send extra requests to servers with more CPU, RAM, or other resources. As a result, lower-capacity servers may become overloaded and fail more frequently, while the capacity of the stronger servers goes partly unused.

The weighted round-robin load-balancing method allows site managers to assign a weight to each server based on criteria such as traffic-handling capacity. Servers with higher weights receive a proportionally larger share of client requests. Consider a cluster of three servers in an organization:

  • Server A can handle 15 requests per second on average
  • Server B can handle 10 requests per second on average
  • Server C can handle 5 requests per second on average

Suppose the load balancer receives six requests in a row. Server A receives three, Server B two, and Server C one. This is how requests are distributed under a weighted round-robin algorithm.
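
Here is a minimal Python sketch of this idea, assuming the 15/10/5 capacities above are reduced to weights of 3, 2, and 1; the server names are placeholders.

```python
from itertools import cycle

# Hypothetical weights derived from each server's average capacity:
# 15, 10, and 5 requests per second reduce to a 3:2:1 ratio.
weights = {"server-a": 3, "server-b": 2, "server-c": 1}

# Build one cycle in which each server appears as many times as its weight.
weighted_sequence = [server for server, weight in weights.items() for _ in range(weight)]
rotation = cycle(weighted_sequence)

if __name__ == "__main__":
    # Six requests in a row land 3 on A, 2 on B, and 1 on C, matching the example.
    for request_id in range(1, 7):
        print(f"request {request_id} -> {next(rotation)}")
```

Production balancers usually spread the higher-weight servers more evenly within each cycle (so the sequence looks more like A, B, A, C, A, B), but the per-cycle proportions are the same.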

Load balancer sticky session vs. round robin load balancing

A load balancer that keeps sticky sessions creates a session object for each client. Every request from the same client is sent to the same web server, which stores and updates the session data for as long as the session is active. Sticky sessions can be more efficient because no session-related data needs to be transferred from server to server.

On the other hand, sticky sessions can become inefficient if a single server collects multiple sessions with heavy workloads, disrupting the server balance.

When a sticky load balancer distributes traffic in a round-robin manner, the round-robin algorithm routes a user’s first request to a web server. Subsequent requests are then forwarded to that same server until the sticky session expires, at which point a new sticky session is created via the round-robin method. If the load balancer is non-sticky, the round-robin strategy is applied to every request, regardless of whether it comes from the same client.
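
A rough Python sketch of that behaviour, using a made-up in-memory session table rather than any real load balancer’s data structures (session expiry is left out for brevity):

```python
from itertools import cycle

servers = ["server-a", "server-b", "server-c"]
rotation = cycle(servers)

# Hypothetical sticky-session table: client id -> server chosen for that session.
sticky_sessions = {}

def route(client_id, sticky=True):
    """Pick a server for one request from client_id."""
    if sticky and client_id in sticky_sessions:
        # Active sticky session: keep sending this client to the same server.
        return sticky_sessions[client_id]
    # First request from the client (or a non-sticky balancer): plain round robin.
    server = next(rotation)
    if sticky:
        sticky_sessions[client_id] = server  # remembered until the session expires
    return server

if __name__ == "__main__":
    for client in ["alice", "bob", "alice", "carol", "bob"]:
        print(f"{client} -> {route(client)}")
    # alice and bob keep hitting the servers they were first assigned to.
```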

Round robin DNS vs. round robin

Instead of requiring a dedicated hardware load balancer like round robin, round-robin DNS uses a DNS server to balance traffic with the round-robin method. Each website or service is hosted on a cluster of redundant web servers, often geographically distributed. Every server has a unique IP address, but all of them are mapped to the same domain name.

The DNS server distributes the load across the servers by rotating the order of IP addresses it returns for each lookup, following the round-robin approach.
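
You can observe the client side of this with Python’s standard resolver; the hostname below is a placeholder, and whether the returned address order actually rotates depends on the authoritative DNS server and any caching resolvers in between.

```python
import socket

# Placeholder hostname; substitute a domain that actually publishes several A records.
hostname = "www.example.com"

# getaddrinfo returns every address the resolver reports for the name. With
# round-robin DNS, repeated lookups may return the same addresses in a rotated
# order, so new connections get spread across the servers.
addresses = {info[4][0] for info in socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)}
print(f"{hostname} resolves to: {sorted(addresses)}")
```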

Round robin DNS vs. network load balancing

Round-robin DNS is a load-balancing mechanism employed by DNS servers, as previously indicated. On the other hand, network load balancing is a broad term that refers to network traffic management without using complicated routing protocols like the Border Gateway Protocol (BGP).

Network load balancing commonly uses the least connections method, which delivers requests to the servers with the fewest active connections and thus lowers the risk of server overload. Round-robin load balancing, on the other hand, keeps rotating server requests even if some servers have more active connections than others.
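
For contrast, here is a minimal least-connections sketch in Python; the connection counts live in a plain dictionary purely for illustration.

```python
# Hypothetical count of active connections per server.
active_connections = {"server-a": 0, "server-b": 0, "server-c": 0}

def route_least_connections():
    """Send the next request to whichever server currently has the fewest active connections."""
    server = min(active_connections, key=active_connections.get)
    active_connections[server] += 1
    return server

def finish(server):
    """Call when a request completes so the counts reflect real load."""
    active_connections[server] -= 1

if __name__ == "__main__":
    # Unlike round robin, a long-running request on one server naturally
    # steers new traffic toward the less busy servers.
    print(route_least_connections())  # server-a
    print(route_least_connections())  # server-b
    print(route_least_connections())  # server-c
    finish("server-b")
    print(route_least_connections())  # server-b again: it now has the fewest connections
```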

Benefits of round robin load balancing

Round robin load balancing is simple to understand and apply. The round-robin algorithm’s simplicity is also its main drawback, which is why many load balancers utilize weighted round-robin or more complicated algorithms.

The critical advantage of round-robin load balancing is that it’s straightforward to set up. However, many round-robin load balancers presume that all servers are the same: currently up, currently handling the same load, and with the same storage and compute capabilities. As a result, it may not always produce the most accurate or efficient traffic distribution.

The following round-robin versions take into consideration additional criteria and can result in improved load balancing:

  • Weighted round-robin: Each server is given a weight based on the site administrator’s criteria; the most common criterion is the server’s traffic-handling capacity. The higher the weight, the more client requests the server receives. If server A is given a weight of three and server B a weight of one, the load balancer delivers three requests to server A for each one sent to server B.
  • Dynamic round-robin: Each server is given a weight based on real-time information about its current load and idle capacity, as in the sketch below.
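
A toy Python sketch of the dynamic variant, where the weighted cycle is rebuilt from hypothetical idle-capacity readings rather than fixed administrator-assigned weights:

```python
from itertools import cycle

def build_rotation(idle_capacity):
    """Rebuild the weighted cycle from fresh (hypothetical) idle-capacity readings."""
    sequence = [server for server, weight in idle_capacity.items() for _ in range(max(weight, 1))]
    return cycle(sequence)

if __name__ == "__main__":
    # Pretend a monitoring probe reports these idle-capacity scores right now.
    rotation = build_rotation({"server-a": 3, "server-b": 1, "server-c": 2})
    print([next(rotation) for _ in range(6)])  # A gets 3 of the next 6 requests
    # Later readings change, so the cycle is rebuilt and the proportions shift.
    rotation = build_rotation({"server-a": 1, "server-b": 3, "server-c": 2})
    print([next(rotation) for _ in range(6)])  # now B gets 3 of the next 6 requests
```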

Drawbacks of round-robin load balancing

Round robin load balancing is easy to set up, works on a straightforward mechanism, and has a simple framework. However, many round robin load balancers assume that all servers have the same storage and compute capabilities, so they may not always provide accurate or efficient traffic distribution. Some of the common drawbacks of round robin load balancing are:

  • It lacks built-in fault detection or tolerance. It also doesn’t offer dynamic load balancing.
  • Although round robin remains one of the most widely used load balancing methods, its simplicity leaves little room for fine-tuning how traffic is distributed.
  • You can’t guarantee that a client connects to the same server twice when needed.
  • It doesn’t report server status, i.e., whether a server is working or down.
  • It can’t account for users whose resolvers hold cached DNS data with varying time-to-live (TTL) values. As a result, visitors can be directed to the “wrong” server even after the TTL has expired.
  • Since it can’t determine the amount of server load, the load may not be distributed fairly.
  • A public IP address is required for each server.

Round robin load balancing combined with faster and more efficient cloud load balancing can go a long way toward balancing server loads and protecting your business from annoying disruptions. If you are still not convinced, here’s a deep dive article about why your business application needs a load balancer!