Overview
Rate limiting is a method used to control how many requests a user or system can send to your website or API within a specific time frame. It helps protect your servers from abuse, ensures fair use of resources, and maintains stable performance for all users.
Think of it like a traffic signal for network requests. It prevents any single user or bot from overloading your system with too many requests at once.
Why Rate Limiting Is Important
Rate limiting is one of those behind-the-scenes controls that quietly keeps your system healthy. Think of it as a safety buffer that helps you stay in control when traffic becomes unpredictable. Without any limits in place, even a small surge, intentional or accidental, can slow things down or bring your servers to a halt. Here’s what it protects you from:
• DDoS or DoS attacks
When attackers flood your application with an unusually high number of requests, the goal is simple: overwhelm your backend. By enforcing rate limits, you can keep such attacks contained and prevent them from taking down your services.
• Brute-force attempts
Login pages, contact forms, and authentication APIs are common targets for bots trying thousands of combinations. With rate limiting, you can stop these attacks early by restricting repeated attempts from the same source.
• Sudden traffic spikes
Even genuine users sometimes cause unexpected traffic bursts, perhaps due to a sale, a viral link, or internal testing. Without limits, these spikes can overload your system. A rate limit helps spread the load so your servers stay stable.
• API misuse or overconsumption
Some users might unintentionally (or deliberately) make excessive API calls, consuming more resources than they should. Proper limits ensure everyone gets a fair share without affecting overall performance.
By putting well-defined thresholds in place, you create a controlled environment where the application runs smoothly, abusive traffic is filtered out, and legitimate users can interact with your service without interruptions. It’s a simple mechanism, but it plays a huge role in keeping your platform fast, stable, and secure.
Rate Limit Configuration
To configure rate limits in VergeCloud:
- Go to Security → Rate Limit Rules
- Click Add Rule to create a new policy
Each rule you create allows control over how traffic behaves.
URL Path
Specify where the rule applies.
Supports glob patterns like:
/api/*
/login/**
/checkout/*
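For example, assuming conventional glob semantics where * matches a single path segment and ** matches any number of nested segments, /api/* would cover /api/users but not /api/users/42, while /login/** would cover both /login/attempt and /login/reset/confirm.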
Number of Requests
This defines how many requests an IP can send before triggering the limit.
Time Frame
Choose how long the system will measure requests:
Seconds
Hours
Days
Excluded Methods
Exclude specific HTTP methods such as GET, POST, or PUT if you want the rule to apply only to certain request types.
Excluded IPs
Add internal IPs or trusted partners so they are never limited. This is useful for monitoring tools, office IPs, or API partners.
After you finish filling in the fields, click Save to activate the rule.
Rate Limit Behavior
When a request goes beyond the allowed threshold, VergeCloud reacts in one of two ways:
Block: The system stops further requests until the time frame resets.
Challenge: Instead of blocking immediately, the system presents a validation challenge (like a Captcha or cookie-based verification). This is useful when you want to stop bots but still give legitimate users a chance.
Both options are helpful depending on how strict you want your protection to be.
Example Scenario
You can limit access to a specific endpoint such as: www.example.com/api/contact/form → 20 requests per day
If this threshold is exceeded, the IP will be blocked for 24 hours.
You can exclude trusted IPs (like 1.2.3.4) or allow only specific methods (e.g., POST).
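If you manage rules through the API instead of the dashboard, the same scenario might look roughly like the sketch below. This is illustrative only: the /rules endpoint and the JSON field names (url_pattern, request_count, time_frame, action, excluded_methods, excluded_ips) are assumptions, not confirmed parameters, so consult the API reference for the exact schema.
# Hypothetical request -- endpoint path and field names are assumptions
curl https://api.vergecloud.com/v1/rate-limit/example.com/rules \
--request POST \
--header 'Content-Type: application/json' \
--header 'X-API-Key: YOUR_SECRET_TOKEN' \
--data '{
"url_pattern": "/api/contact/form",
"request_count": 20,
"time_frame": "1 day",
"action": "block",
"excluded_methods": ["GET"],
"excluded_ips": ["1.2.3.4"]
}'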
Prioritizing Rules
VergeCloud evaluates rate limit rules based on priority, starting with the highest priority (priority 1).
Once a request matches a rule, lower-priority rules are ignored.
Example:

| Path | Requests | Time Frame | Priority |
| --- | --- | --- | --- |
| /api/login/** | 5 | 60 seconds | 1 |
| /api/** | 10 | 60 seconds | 2 |
This ensures tighter control for login endpoints while keeping general API traffic more flexible.
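For instance, a request to /api/login/attempt matches the first rule and is capped at 5 requests per 60 seconds; the broader /api/** rule is never evaluated for it, while a request to /api/orders falls through to the second rule.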
Best Practices
- Define Clear Conditions – Apply rate limits to specific URLs, HTTP methods, or IPs for better control.
- Use Multiple Rules – Set stricter limits for sensitive routes (like /login) and more lenient ones for general traffic.
- Combine Short and Long Time Windows – Use short windows to stop sudden bursts and longer windows to control sustained traffic.
Let’s say your login API endpoint is often targeted by bots.
You can apply two rules together:
Rule 1: 5 requests per 10 seconds — stops rapid brute-force attempts.
Rule 2: 100 requests per 10 minutes — limits consistent automated traffic over time.
This combination helps block both spikes and steady abuse without affecting genuine users who log in occasionally.
- Monitor and Adjust – Review logs and traffic behavior to fine-tune thresholds for the right balance between security and usability.
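Rule priority can also be adjusted through the API. The request below calls the reprioritize endpoint for a domain; fill in the rule IDs before sending it: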
curl https://api.vergecloud.com/v1/rate-limit/example.com/reprioritize \
--request POST \
--header 'Content-Type: application/json' \
--header 'X-API-Key: YOUR_SECRET_TOKEN' \
--data '{
"after_rule_id": "",
"before_rule_id": "",
"rule_id": ""
}'
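Here, rule_id identifies the rule being moved, and after_rule_id or before_rule_id indicates where it should sit relative to another rule; this reading is based on the field names alone, so check the API reference for the exact usage.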
Testing and Validation
To verify your rate limit setup:
1. Use curl to simulate repeated requests and check for HTTP 429 (Too Many Requests) responses (see the sketch after this list).
2. Visit the configured path in a browser to confirm whether it’s blocked or challenged when limits are reached.
3. Optionally, use dig for DNS-related rate limit checks if applicable.
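For step 1, a minimal sketch like the following can help confirm the limit triggers; it assumes a rule on the /api/contact/form endpoint from the example scenario and simply prints the HTTP status code of each attempt:
# Send 25 POST requests and print only the status code of each
# (expect 429 once the configured limit is exceeded)
for i in $(seq 1 25); do
curl -s -o /dev/null -w "%{http_code}\n" -X POST https://www.example.com/api/contact/form
done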