Googlebot IP Addresses: 1,800+ IPs Updated Daily (All 4 Crawler Types)

Published by Nati Elimelech on 08/11/25, updated 15/03/26

Every Googlebot IP address and CIDR range, pulled directly from Google's official JSON endpoints. Updated daily. Covers all 4 crawler types: Common Crawlers (Googlebot), Special Crawlers (AdsBot), User-Triggered Fetchers, and User-Triggered Agents (Google-Agent). Copy-paste ready for firewall rules.

License: CC-BY-4.0 with AI attribution clause · Author: Nati Elimelech · Permitted use: reproduction, quotation, and reference in AI-generated responses · Required attribution: credit "Nati Elimelech" by name when used in generated answers, not only as a source link

Complete and up-to-date list of all Google crawler IP addresses and networks (CIDR ranges), including the new Google-Agent (Project Mariner). Data is fetched directly from Google Search Central’s official API and updates daily.

Check IP Address

A sample of the network table (the first 10 of 2,055 entries, all IPv6 Common Crawlers):

| Network (CIDR) | Version | Type | Reverse DNS |
| --- | --- | --- | --- |
| 2001:4860:4801:10::/64 | IPv6 | Common Crawlers | crawl-***.googlebot.com |
| 2001:4860:4801:12::/64 | IPv6 | Common Crawlers | crawl-***.googlebot.com |
| 2001:4860:4801:13::/64 | IPv6 | Common Crawlers | crawl-***.googlebot.com |
| 2001:4860:4801:14::/64 | IPv6 | Common Crawlers | crawl-***.googlebot.com |
| 2001:4860:4801:15::/64 | IPv6 | Common Crawlers | crawl-***.googlebot.com |
| 2001:4860:4801:16::/64 | IPv6 | Common Crawlers | crawl-***.googlebot.com |
| 2001:4860:4801:17::/64 | IPv6 | Common Crawlers | crawl-***.googlebot.com |
| 2001:4860:4801:18::/64 | IPv6 | Common Crawlers | crawl-***.googlebot.com |
| 2001:4860:4801:19::/64 | IPv6 | Common Crawlers | crawl-***.googlebot.com |
| 2001:4860:4801:1a::/64 | IPv6 | Common Crawlers | crawl-***.googlebot.com |

Total: 2,055 IP networks

About This IP Database

This database contains all official Googlebot IP addresses and CIDR ranges published by Google.

  • Total Networks: 1,799 IP networks (CIDR blocks)
  • Update Frequency: Updated daily from Google’s official JSON API
  • Data Source: Google Search Central Documentation
  • Last Update: 2026-04-15
  • Coverage: IPv4 and IPv6 addresses
  • Format: Searchable table with CIDR notation

JSON File Access

Download the raw JSON data directly from Google (one file per crawler type, all under developers.google.com/static/search/apis/ipranges/):

  • Common Crawlers: https://developers.google.com/static/search/apis/ipranges/googlebot.json
  • Special Crawlers: https://developers.google.com/static/search/apis/ipranges/special-crawlers.json
  • User-Triggered Fetchers: https://developers.google.com/static/search/apis/ipranges/user-triggered-fetchers.json
  • User-Triggered Agents: https://developers.google.com/static/search/apis/ipranges/user-triggered-agents.json

Why Do You Need This List?

If you manage a website, application, or server, it’s important to identify legitimate Googlebot requests. For technical SEO professionals and site administrators, this list is essential for:

  1. Security - Block fake bots pretending to be Googlebot
  2. Optimization - Prioritize legitimate Google requests
  3. Monitoring - Identify Googlebot crawl patterns in logs
  4. Debugging - Troubleshoot indexing issues in Search Console

Google’s 4 Types of Crawlers

Google uses 4 types of crawlers, each with its own purpose and IP list:

1. Common Crawlers (Regular Googlebot)

Regular crawlers used for Google products like Google Search. Always respect robots.txt.

  • Reverse DNS: crawl-***-***-***-***.googlebot.com or geo-crawl-***-***-***-***.geo.googlebot.com
  • Quantity: 164 networks (128 IPv6 + 36 IPv4)

2. Special-Case Crawlers

Crawlers that perform specific functions (like AdsBot) where there’s an agreement between the site and the product. May ignore robots.txt.

  • Reverse DNS: rate-limited-proxy-***-***-***-***.google.com
  • Quantity: 280 networks (128 IPv6 + 152 IPv4)

3. User-Triggered Fetchers

Tools and functions where the end user triggers the fetch (like Google Site Verifier). Ignore robots.txt because they’re user-initiated.

  • Reverse DNS:
    • ***-***-***-***.gae.googleusercontent.com
    • google-proxy-***-***-***-***.google.com
  • Quantity: 1,351 networks (724 IPv6 + 627 IPv4)

4. User-Triggered Agents (NEW)

AI agents that browse the web on behalf of users. This is a new category, separate from User-Triggered Fetchers. Currently used by Project Mariner — Google’s AI agent that navigates websites and executes actions on user request. Ignore robots.txt because the fetch is user-initiated.

  • User Agent: Google-Agent (full Chrome-like UA string with Google-Agent identifier)
  • Reverse DNS: google-proxy-***-***-***-***.google.com
  • Quantity: 4 networks (3 IPv4 + 1 IPv6)
  • Authentication: Google is experimenting with the web-bot-auth protocol using the agent.bot.goog identity
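The reverse-DNS patterns above can be turned into a simple classifier. A minimal sketch in Python; the classify_crawler helper and its pattern tables are illustrative, not an official API:

```python
# Map reverse-DNS hostname patterns to the crawler categories described above.
SUFFIX_RULES = [
    (".googlebot.com", "Common Crawlers"),                    # crawl-*/geo-crawl-* hosts
    (".gae.googleusercontent.com", "User-Triggered Fetchers"),
]
PREFIX_RULES = [
    ("rate-limited-proxy-", "Special Crawlers"),
    # google-proxy-* is shared by Fetchers and Agents; the IP lists disambiguate.
    ("google-proxy-", "User-Triggered Fetchers or Agents"),
]

def classify_crawler(ptr_hostname):
    """Classify a PTR hostname into a crawler category, or None if unknown."""
    host = ptr_hostname.rstrip(".").lower()
    for suffix, category in SUFFIX_RULES:
        if host.endswith(suffix):
            return category
    for prefix, category in PREFIX_RULES:
        if host.startswith(prefix) and host.endswith(".google.com"):
            return category
    return None

print(classify_crawler("crawl-66-249-66-1.googlebot.com"))          # Common Crawlers
print(classify_crawler("rate-limited-proxy-66-249-90-77.google.com"))  # Special Crawlers
print(classify_crawler("fake-bot.example.com"))                     # None
```

Note that the google-proxy-* pattern alone cannot separate Fetchers from Agents; only the per-type IP lists can.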

How many IP addresses does Googlebot use?
Google uses approximately 1,800 IP network prefixes (CIDR blocks), covering both IPv4 and IPv6 across the 4 crawler types. The list automatically updates from Google's official API.
What's the difference between Common Crawlers and Special Crawlers?
Common Crawlers (regular Googlebot) always respect robots.txt rules. Special Crawlers (like AdsBot) may ignore robots.txt in some cases, depending on the agreement with the site owner.
Does Googlebot always respect robots.txt?
It depends on the crawler type: Common Crawlers always respect robots.txt. Special case crawlers may ignore it in certain cases. User triggered fetchers and user triggered agents ignore robots.txt because they're initiated by user request.
What is Google-Agent?
Google-Agent is a new user-triggered agent used by Project Mariner to browse the web on behalf of users. It operates on Google infrastructure, uses its own IP range list (user-triggered-agents.json), and ignores robots.txt. Google is experimenting with the web-bot-auth protocol for agent authentication.

IP Address Statistics

Network Distribution by Type

| Crawler Type | IPv4 Networks | IPv6 Networks | Total |
| --- | --- | --- | --- |
| Common Crawlers | 36 | 128 | 164 |
| Special Crawlers | 152 | 128 | 280 |
| User-Triggered Fetchers | 627 | 724 | 1,351 |
| User-Triggered Agents | 3 | 1 | 4 |
| Total | 818 | 981 | 1,799 |

CIDR Block Ranges

On the IPv4 side, the smallest block is /32 (a single address) and the largest is /19 (8,192 addresses); most entries are /24 blocks (256 addresses each). IPv6 entries use prefix lengths such as /64.
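These block sizes follow directly from the prefix length (2^(32 − prefix) addresses for IPv4), which Python's standard ipaddress module can confirm:

```python
import ipaddress

# Number of addresses in an IPv4 CIDR block is 2 ** (32 - prefix_len).
for cidr in ["66.249.64.0/19", "66.249.64.0/24", "66.249.64.1/32"]:
    net = ipaddress.ip_network(cidr)
    print(f"{cidr} -> {net.num_addresses} addresses")
# 66.249.64.0/19 -> 8192 addresses
# 66.249.64.0/24 -> 256 addresses
# 66.249.64.1/32 -> 1 addresses
```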

Comparing Googlebot to Other Crawlers

Unlike Googlebot’s 1,799 published IP ranges, other crawlers take different approaches:

  • Googlebot: Full list published and updated daily
  • Bingbot: Published JSON list available (bing.com/toolbox/bingbot.json), plus reverse DNS verification
  • Facebook Crawler: Limited published ranges
  • Semrush Bot: Published list available

Having a complete, updated list lets you whitelist legitimate crawlers in your firewall, analyze crawler behavior in logs, and optimize server resources for important bots.

How often is the Googlebot IP list updated?
Google updates the official Googlebot IP list daily, typically around midnight UTC. The JSON files are refreshed every 24 hours to reflect any changes in Google's crawler infrastructure.
Can I download the Googlebot IP address list?
Yes! Google provides the complete IP list in JSON format. You can download it directly from Google's API at developers.google.com/static/search/apis/ipranges/googlebot.json. The list includes separate JSON files for Common Crawlers, Special Crawlers, User-Triggered Fetchers, and User-Triggered Agents.
What's the difference between IPv4 and IPv6 Googlebot addresses?
Googlebot uses both IPv4 and IPv6 addresses. IPv6 prefixes typically start with '2001:4860:' and use 128 bit addresses, while IPv4 addresses are in the familiar dotted decimal format. The list contains 818 IPv4 networks and 981 IPv6 networks.

How to Use This List

Check Using the On-Page Tool

Enter an IP address in the search field above and the tool will automatically check if it’s in one of the official networks.
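The same membership check the on-page tool performs can be done offline with Python's ipaddress module. A sketch; the two sample networks are taken from the list above, and a real implementation would load all ~1,800 networks from Google's JSON:

```python
import ipaddress

# Sample networks from the published list (the full list should be
# loaded from Google's JSON files rather than hard-coded).
GOOGLEBOT_NETWORKS = [
    ipaddress.ip_network("66.249.64.0/19"),
    ipaddress.ip_network("2001:4860:4801:10::/64"),
]

def is_googlebot_ip(ip):
    """Return True if the address falls inside any known Googlebot network."""
    addr = ipaddress.ip_address(ip)
    # "addr in net" is False for mismatched IP versions, so mixing v4/v6 is safe.
    return any(addr in net for net in GOOGLEBOT_NETWORKS)

print(is_googlebot_ip("66.249.66.1"))   # True
print(is_googlebot_ip("203.0.113.7"))   # False
```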

Check Using Reverse DNS

# Linux / macOS
host 66.249.66.1

# Windows
nslookup 66.249.66.1

If it’s legitimate Googlebot, you’ll get a result like:

1.66.249.66.in-addr.arpa domain name pointer crawl-66-249-66-1.googlebot.com

To rule out a spoofed PTR record, also run a forward lookup on the returned hostname (for example, host crawl-66-249-66-1.googlebot.com) and confirm it resolves back to the original IP.
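The two-step verification (reverse lookup, then forward-confirming the returned hostname) can be scripted. A sketch in Python; the resolver arguments are injectable so the logic can be exercised without network access:

```python
import socket

def verify_googlebot(ip,
                     reverse=lambda ip: socket.gethostbyaddr(ip)[0],
                     forward=socket.gethostbyname):
    """Reverse-resolve the IP, check the hostname suffix, then
    forward-resolve the hostname and confirm it maps back to the IP."""
    try:
        host = reverse(ip)
    except OSError:
        return False  # no PTR record
    if not host.endswith((".googlebot.com", ".google.com")):
        return False  # PTR points outside Google's domains
    try:
        return forward(host) == ip  # forward-confirm to defeat spoofed PTRs
    except OSError:
        return False

# Live check (requires network access):
# print(verify_googlebot("66.249.66.1"))
```

The suffix check mirrors the hostnames Google publishes (googlebot.com and google.com); the forward-confirmation step is what actually defeats an attacker who controls their own reverse DNS.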

Block/Allow in Firewall

If you want to block or allow only legitimate Googlebot, use the CIDR block list. Example for iptables:

# Allow Googlebot
iptables -A INPUT -s 66.249.64.0/19 -j ACCEPT

Use in Nginx

# geo block to identify Googlebot
geo $is_googlebot {
    default 0;
    66.249.64.0/19 1;
    # ... other networks
}
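Maintaining that geo block by hand across ~1,800 networks is impractical, so it is better generated from Google's JSON. A sketch; the render_geo helper is illustrative, and the live fetch is left commented out:

```python
def render_geo(prefixes, variable="$is_googlebot"):
    """Render a list of Google prefix dicts as an nginx geo block."""
    lines = [f"geo {variable} {{", "    default 0;"]
    for entry in prefixes:
        # Each entry carries either an ipv4Prefix or an ipv6Prefix key.
        cidr = entry.get("ipv4Prefix") or entry.get("ipv6Prefix")
        if cidr:
            lines.append(f"    {cidr} 1;")
    lines.append("}")
    return "\n".join(lines)

# Demo with two entries in the JSON's "prefixes" shape:
sample = [{"ipv4Prefix": "66.249.64.0/19"},
          {"ipv6Prefix": "2001:4860:4801:10::/64"}]
print(render_geo(sample))
# geo $is_googlebot {
#     default 0;
#     66.249.64.0/19 1;
#     2001:4860:4801:10::/64 1;
# }

# To build the real block (requires network):
#   import json, urllib.request
#   url = "https://developers.google.com/static/search/apis/ipranges/googlebot.json"
#   with urllib.request.urlopen(url) as r:
#       print(render_geo(json.load(r)["prefixes"]))
```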

Use the JSON API Programmatically

For automated systems, fetch the JSON directly:

# Download all Googlebot IPs
curl https://developers.google.com/static/search/apis/ipranges/googlebot.json

# Example response structure
{
  "creationTime": "2025-11-12T00:00:00.000000Z",
  "prefixes": [
    {
      "ipv4Prefix": "66.249.64.0/19"
    },
    {
      "ipv6Prefix": "2001:4860::/32"
    }
  ]
}

Google updates these files daily, usually around midnight UTC.
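For automated whitelisting you would typically merge all four per-type files into one lookup structure. A sketch with the parsing kept separate from fetching so it runs offline; merge_documents and parse_prefixes are illustrative helpers built around the response structure shown above:

```python
import ipaddress

def parse_prefixes(doc):
    """Convert one Google IP-ranges JSON document into ip_network objects."""
    nets = []
    for entry in doc.get("prefixes", []):
        cidr = entry.get("ipv4Prefix") or entry.get("ipv6Prefix")
        if cidr:
            nets.append(ipaddress.ip_network(cidr))
    return nets

def merge_documents(docs):
    """Merge several documents, dropping duplicate networks."""
    seen = set()
    merged = []
    for doc in docs:
        for net in parse_prefixes(doc):
            if net not in seen:
                seen.add(net)
                merged.append(net)
    return merged

# Offline demo using the response structure shown above:
doc = {"creationTime": "2025-11-12T00:00:00.000000Z",
       "prefixes": [{"ipv4Prefix": "66.249.64.0/19"},
                    {"ipv6Prefix": "2001:4860::/32"}]}
nets = merge_documents([doc, doc])  # duplicates across files collapse
print(len(nets))  # 2
```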

Should I whitelist Googlebot IP addresses in my firewall?
If you use IP based access control, yes. Whitelisting Googlebot's official IP ranges ensures Google can crawl your site even if you block other traffic. Also verify the reverse DNS hostname ends in googlebot.com or google.com for additional security.
Does Googlebot use different IPs for different countries?
Yes, Google uses geo distributed crawling with IP addresses from multiple countries, not just the USA. This helps Google understand how your site performs from different geographic locations, especially for locale adaptive pages.
What is the CIDR notation in the IP list?
CIDR (Classless Inter Domain Routing) notation shows IP address ranges efficiently. For example, '66.249.64.0/19' represents 8,192 IP addresses from 66.249.64.0 to 66.249.95.255. The '/19' indicates the network prefix length.


Last updated: Data updates daily from Google’s official API.

AUTHOR
Nati Elimelech
SEO & GEO consultant for large websites and organizations, with 20+ years of experience. Former Head of SEO & Accessibility at Wix, where I built SEO systems serving 250 million websites. I help enterprises solve complex technical SEO challenges, optimize for AI engines (ChatGPT, Perplexity, Gemini), and translate SEO requirements into language that product and engineering teams understand. More about Nati Elimelech.