VergeCloud’s Load Balancer intelligently manages and distributes incoming traffic across multiple origin servers, providing seamless multi-cloud load balancing. This ensures high availability, stable performance, and consistently low response times for users across different regions. Acting as a smart traffic distribution layer in front of your infrastructure, it prevents any single server from receiving more load than it can handle and ensures that every request is routed to the optimal server at that moment.
The system continuously evaluates several factors such as server health, geographic proximity, real-time performance indicators, and your custom load distribution rules. Based on these signals, it determines the most suitable destination for each individual request. VergeCloud enhances traditional load balancing with two major capabilities: Active Health Check and Geo-Steering, both working together to maintain uninterrupted service.
Active Health Check performs automatic, real-time monitoring of server responsiveness through an intelligent server health check framework. If a server becomes slow, returns unexpected status codes, or fails to respond, it is temporarily removed from traffic rotation until it recovers. This automated isolation protects your application from outages and degraded performance. It is especially critical for workloads that demand continuous uptime, such as OTT platforms, e-commerce storefronts, multiplayer gaming backends, and SaaS applications.
Geo-Steering adds another layer of intelligence by directing users to the nearest healthy server pool based on their physical location. This significantly reduces latency and improves load times for global applications with users distributed across regions. Together, these capabilities ensure fast delivery, optimal resource utilization, and resilience during traffic surges or regional disruptions.
Pools consist of multiple origin servers identified by IP addresses or hostnames. You can assign each pool a name and description and choose a load distribution method.
Round Robin distributes requests sequentially across servers.
Client IP Hash ensures session persistence by routing the same client to the same server.
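The two distribution methods can be illustrated with a minimal in-memory sketch (server addresses and the hashing scheme here are illustrative, not VergeCloud's internal implementation):

```python
import hashlib
from itertools import cycle

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # illustrative origin IPs

# Round Robin: each request goes to the next server in sequence.
rr = cycle(servers)
def round_robin():
    return next(rr)

# Client IP Hash: the same client IP always maps to the same server,
# which is what preserves session persistence.
def ip_hash(client_ip):
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]
```

Round Robin spreads load evenly but gives no affinity; Client IP Hash trades perfect balance for stable client-to-server mapping.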
Within each pool, you can enable the origin server health check feature. VergeCloud continuously monitors server status using configurable endpoints and removes unhealthy servers from rotation to ensure reliability.
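Conceptually, the health check behaves like a filter over the pool's origins: servers whose probe responses are unacceptable are dropped from rotation. The sketch below assumes a generic `probe` callable standing in for VergeCloud's actual monitoring:

```python
# Sketch: keep only origins whose health probe returns an acceptable code.
# probe() is a stand-in for the real health check; it returns the HTTP
# status code observed for the configured endpoint on that origin.
def healthy_origins(origins, probe, ok_codes=frozenset({200})):
    """Return the subset of origins currently eligible for traffic."""
    return [o for o in origins if probe(o) in ok_codes]
```

Once an excluded server starts responding acceptably again, it simply passes the filter and rejoins the rotation.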
In the Geo-Steering step, geolocations are mapped to pools. For example, you may route Indian traffic to an Asia pool and US traffic to a North America pool. If a region is not explicitly mapped, the primary pool is used as the fallback. Geo-Steering significantly reduces latency, since the nearest pool almost always provides faster responses.
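The mapping-with-fallback behavior amounts to a simple lookup. Pool names and country codes below are illustrative:

```python
# Sketch of Geo-Steering pool selection with a primary-pool fallback.
GEO_POOLS = {                  # illustrative region-to-pool mapping
    "IN": "asia-pool",
    "US": "north-america-pool",
}
PRIMARY_POOL = "primary-pool"  # used for any region not explicitly mapped

def select_pool(country_code):
    return GEO_POOLS.get(country_code, PRIMARY_POOL)
```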
Monitoring is a crucial part of ensuring application resilience. Within the Edit Monitoring panel, you can configure the protocol, request path, HTTP method, acceptable response codes, and the regions from which monitoring nodes will test your servers. Results may place pools in a healthy, warning, or unhealthy state.
If monitoring is disabled, VergeCloud sends no checks and assumes all servers are healthy. In Non-Critical mode, health checks run normally and email alerts are sent for unhealthy servers, but traffic routing remains unchanged. In Critical mode, servers that fail health checks are automatically removed from distribution after alerts are sent.
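The three modes differ only in whether a failed health check affects routing. A sketch of that decision logic (mode names mirror the dashboard labels; the function is illustrative):

```python
# Whether a server keeps receiving traffic, given its health and the
# configured monitoring mode (Disabled / Non-Critical / Critical).
def in_rotation(server_healthy, mode):
    if mode == "disabled":      # no checks sent; all servers assumed healthy
        return True
    if mode == "non-critical":  # checks run and alerts fire, routing unchanged
        return True
    if mode == "critical":      # failing servers are removed from distribution
        return server_healthy
    raise ValueError(f"unknown mode: {mode}")
```

Only Critical mode ties routing to health status; the other two modes never take a server out of rotation.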
It is essential to whitelist VergeCloud edge IPs within your firewall to avoid health check failures due to blocked traffic.
All configuration steps follow a simple sequence. Create a load balancer, configure pools, assign geolocation rules, and fine-tune monitoring settings. You can modify or update any of these elements at any time from the VergeCloud dashboard.
You can automate Load Balancer configuration and management with VergeCloud’s complete Load Balancer API. The API allows you to create and manage load balancers, configure server pools, add or remove origins, update routing settings, view health check details, and streamline traffic distribution directly through your automation pipelines.
You can integrate these endpoints into your CI/CD workflows, infrastructure scripts, or custom tools to fully control how traffic is routed across your origin servers.
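As a sketch of what such an integration might look like, the snippet below builds an authenticated JSON request with only the Python standard library. The endpoint path, payload fields, and token are hypothetical placeholders; consult the API reference linked below for the actual routes and schemas:

```python
import json
from urllib import request

API_BASE = "https://api.vergecloud.com"  # real paths are in the API docs
TOKEN = "YOUR_API_TOKEN"                 # placeholder credential

# Hypothetical payload shape for creating a server pool.
payload = {
    "name": "asia-pool",
    "method": "round_robin",
    "origins": ["10.0.0.1", "10.0.0.2"],
}

def build_request(path, body):
    """Build (but do not send) an authenticated JSON POST request."""
    return request.Request(
        API_BASE + path,
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("/load-balancer/pools", payload)  # path is illustrative
```

A script like this slots naturally into a CI/CD pipeline step that provisions or updates pools alongside application deployments.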
Explore all available endpoints here: https://api.vergecloud.com/docs#tag/load-balancer