Check which AI crawlers a website allows. One API call. No robots.txt parsing.
GET /v1/robots-policy?url=https://example.com returns a JSON object mapping each known AI crawler user agent to one of four statuses: allowed, blocked, unspecified, or silently_allowed.
curl -X GET \
"https://api.crawlcrawl.com/v1/robots-policy?url=https://example.com" \
-H "Authorization: Bearer crk_..."
{
"GPTBot": "allowed",
"ClaudeBot": "blocked",
"PerplexityBot": "unspecified",
"Google-Extended": "allowed",
"Applebot-Extended": "silently_allowed",
"Bytespider": "blocked",
"CCBot": "unspecified",
"FacebookBot": "allowed",
"Amazonbot": "blocked"
}
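For programmatic use, a minimal Python sketch of the same call (the endpoint and response shape are as shown above; the CRAWLCRAWL_API_KEY environment variable and the helper name are placeholders, not part of the API):

import os
import requests

API_URL = "https://api.crawlcrawl.com/v1/robots-policy"

def get_robots_policy(url: str) -> dict[str, str]:
    # Fetch the crawler-to-status map for one site.
    resp = requests.get(
        API_URL,
        params={"url": url},
        headers={"Authorization": f"Bearer {os.environ['CRAWLCRAWL_API_KEY']}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

policy = get_robots_policy("https://example.com")
blocked = [bot for bot, status in policy.items() if status == "blocked"]
print("Blocked AI crawlers:", ", ".join(blocked) or "none")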
Site owners updating robots.txt or llms.txt need to know how their current policy evaluates for each crawler, and the roster of AI crawler user agents changes frequently.
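One hedged sketch of that workflow, reusing get_robots_policy from the example above: snapshot the map before an edit, then diff it afterward.

def diff_policy(before: dict[str, str], after: dict[str, str]) -> dict[str, tuple[str, str]]:
    # Report crawlers whose status changed between two snapshots;
    # crawlers absent from a snapshot are treated as "unspecified".
    bots = set(before) | set(after)
    return {
        bot: (before.get(bot, "unspecified"), after.get(bot, "unspecified"))
        for bot in bots
        if before.get(bot, "unspecified") != after.get(bot, "unspecified")
    }

# e.g. after editing robots.txt:
# changes = diff_policy(snapshot_before, get_robots_policy("https://example.com"))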
Combine with /v1/cloud/search to audit multiple sites in one batch.
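A hedged sketch of such a batch audit; the site list here is a stand-in, since the /v1/cloud/search response shape isn't documented in this section.

# sites could come from /v1/cloud/search; this hardcoded list is illustrative.
sites = ["https://example.com", "https://example.org"]

for site in sites:
    policy = get_robots_policy(site)
    blocked = sum(1 for status in policy.values() if status == "blocked")
    print(f"{site}: {blocked}/{len(policy)} known AI crawlers blocked")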