Product

AI-bot audit.

Check which AI crawlers a website allows. One API call. No robots.txt parsing.

What you get

GET /v1/robots-policy?url=https://example.com returns a JSON map of known AI crawlers and their status: allowed, blocked, unspecified, or silently_allowed.

Sample call

curl -G "https://api.crawlcrawl.com/v1/robots-policy" \
  --data-urlencode "url=https://example.com" \
  -H "Authorization: Bearer crk_..."

{
  "GPTBot": "allowed",
  "ClaudeBot": "blocked",
  "PerplexityBot": "unspecified",
  "Google-Extended": "allowed",
  "Applebot-Extended": "silently_allowed",
  "Bytespider": "blocked",
  "CCBot": "unspecified",
  "FacebookBot": "allowed",
  "Amazonbot": "blocked"
}
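
The same call from a script. A minimal Python sketch, using only the standard library: the endpoint and response shape come from the sample above, while the CRAWLCRAWL_API_KEY environment variable and the robots_policy() helper are assumptions for illustration, not part of any official client.

import json
import os
import urllib.parse
import urllib.request

# Assumed convention: the key from your dashboard, exported as an env var.
API_KEY = os.environ["CRAWLCRAWL_API_KEY"]

def robots_policy(url: str) -> dict:
    # Build GET /v1/robots-policy?url=... with the bearer key, as in the curl sample.
    query = urllib.parse.urlencode({"url": url})
    req = urllib.request.Request(
        "https://api.crawlcrawl.com/v1/robots-policy?" + query,
        headers={"Authorization": "Bearer " + API_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

policy = robots_policy("https://example.com")
blocked = [bot for bot, status in policy.items() if status == "blocked"]
print("Blocked AI crawlers:", ", ".join(blocked) or "none")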

Use case

Site owners updating robots.txt or llms.txt need to know how their current policy evaluates against every known crawler. New AI crawlers and user-agent tokens appear frequently, so a policy that was complete last month can miss today's bots.
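
One way to run that check, sketched with the hypothetical robots_policy() helper from the sample above; the expected map here is illustrative, not API output.

# Compare the live policy against what you intend after editing robots.txt.
expected = {"GPTBot": "allowed", "ClaudeBot": "blocked"}

policy = robots_policy("https://example.com")
for bot, want in expected.items():
    got = policy.get(bot, "unspecified")
    if got != want:
        print(f"{bot}: wanted {want}, policy currently evaluates to {got}")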

Scale

Combine with /v1/cloud/search to audit multiple sites in one batch.
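
The /v1/cloud/search request shape isn't shown in this section, so the sketch below simply loops the policy endpoint over an illustrative site list with the same hypothetical helper.

# Audit several sites in one pass; swap the loop for /v1/cloud/search
# once you have its request shape.
sites = ["https://example.com", "https://example.org"]

for site in sites:
    policy = robots_policy(site)
    blocked = sum(1 for status in policy.values() if status == "blocked")
    print(f"{site}: {blocked} of {len(policy)} known AI crawlers blocked")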

Audit any URL in one call.

Get a free API key to check AI crawler permissions.
