Bots, also called crawlers or spiders, are automated programs used by search engines and online services to scan and index websites. They play a key role in making your content appear in search results, powering AI models, and gathering performance insights. While most are safe and useful, it is important to recognize and manage them to protect your site's security and SEO performance.
VergeCloud automatically whitelists the IP addresses of well-known crawlers such as Googlebot and Bingbot to ensure seamless indexing and better SEO performance. These trusted bots are verified against official sources, helping you maintain a secure and search-friendly environment without extra configuration.
Note: VergeCloud references the IP2Location database to identify and verify bot traffic where no official source is available.
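Where a crawler does publish official guidance, the standard verification method is a reverse-plus-forward DNS check, which Google and Bing both document for their bots. The sketch below illustrates the general technique from a shell; the sample IP and hostname come from Google's published Googlebot examples, and this is not a description of VergeCloud's internal verification process.

# Step 1: reverse-DNS the visiting IP; a genuine Googlebot resolves
# to a hostname under googlebot.com or google.com.
host 66.249.66.1
# e.g. "1.66.249.66.in-addr.arpa domain name pointer crawl-66-249-66-1.googlebot.com."

# Step 2: forward-resolve that hostname and confirm it maps back
# to the original IP; spoofed user agents fail this round trip.
host crawl-66-249-66-1.googlebot.com
# e.g. "crawl-66-249-66-1.googlebot.com has address 66.249.66.1"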
If you'd like to fully manage crawler access yourself, you can disable the global whitelist through VergeCloud's API by setting the skip_global_whitelist field to true.
Example API call to disable the global whitelist:
curl --location --request PATCH 'https://api.vergecloud.com/cdn/4.0/domains/example.com/firewall' \
  --header 'Authorization: API KEY' \
  --header 'Content-Type: application/json' \
  --data '{"skip_global_whitelist": true}'
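Setting the field back to false with the same PATCH call should re-enable the global whitelist; this is an inference from the boolean field in the example above, so treat the API reference as authoritative.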
You can create custom firewall rules to selectively allow or block specific crawler IPs; a hypothetical rule-creation call is sketched below this list. Read more about firewall rules. For example, you can:

- Allow verified search engine crawlers such as Googlebot and Bingbot while restricting everything else
- Block a misbehaving crawler's IP range without disabling the global whitelist
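As an illustration only, a rule-creation request might look like the following. The /firewall/rules path and the request body fields are assumptions made for this sketch, not the documented VergeCloud schema; consult the firewall rules reference for the exact endpoint and payload.

# Hypothetical sketch: block one crawler IP range with a custom rule.
# The endpoint path and body fields below are illustrative assumptions.
curl --location --request POST 'https://api.vergecloud.com/cdn/4.0/domains/example.com/firewall/rules' \
  --header 'Authorization: API KEY' \
  --header 'Content-Type: application/json' \
  --data '{"name": "block-bad-bot", "action": "block", "ip": "203.0.113.0/24"}'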
Below are the supported crawler bots with their official IP verification sources:
[Table of supported crawler bots, their sample user-agent strings, and links to each bot's official IP verification source]
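For crawlers that publish an official IP list, you can also cross-check traffic directly against the published ranges. As a minimal sketch, Google serves Googlebot's ranges as JSON at the URL linked from its "Verifying Googlebot" documentation, and jq can extract the prefixes (assuming jq is installed):

# Fetch Googlebot's officially published IP ranges and list the prefixes.
curl -sL 'https://developers.google.com/static/search/apis/ipranges/googlebot.json' \
  | jq -r '.prefixes[] | .ipv4Prefix // .ipv6Prefix'

Bing publishes a similar list for Bingbot (bingbot.json), so the same pattern applies to other crawlers in the table above.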