Free Robots.txt Checker
Enter a URL to fetch and analyse its robots.txt file. We'll flag any Disallow rules that block Googlebot from crawling important pages.
What is robots.txt?
The robots.txt file sits at the root of your domain (e.g. yoursite.com/robots.txt) and tells search engine crawlers which pages they are allowed to visit. It's the first file Google reads when it discovers your site.
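For example, a simple robots.txt might look like this (the paths and sitemap URL are illustrative, not from your site):

```
User-agent: *
Disallow: /admin/

Sitemap: https://yoursite.com/sitemap.xml
```

This tells every crawler it may visit anything except paths under /admin/, and points crawlers at the sitemap.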
A misconfigured robots.txt can silently block Google from crawling your entire site or your most important pages, often without any warning in Search Console until rankings have already dropped.
What we check
- Whether a robots.txt file exists at your domain root
- Whether any Disallow rules block Googlebot from crawling key pages
- Whether a blanket Disallow: / rule is accidentally blocking your entire site
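If you want to run a similar check yourself, Python's standard library includes a robots.txt parser. This is a minimal sketch, not our checker's actual implementation; the rules string, domain, and page paths are made-up examples:

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt content (in practice you would fetch
# https://yoursite.com/robots.txt over HTTP).
rules = """\
User-agent: *
Disallow: /admin/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Would Googlebot be allowed to crawl these key pages?
for path in ["/", "/pricing", "/admin/settings"]:
    allowed = parser.can_fetch("Googlebot", "https://yoursite.com" + path)
    print(path, "allowed" if allowed else "BLOCKED")
```

Here "/" and "/pricing" come back allowed, while "/admin/settings" is blocked by the Disallow: /admin/ rule.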
Common mistakes
The most dangerous mistake is Disallow: / under User-agent: *, which blocks all crawlers from the entire site. This is often added during development and accidentally left in place after launch.
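A robots.txt left over from a staging environment typically looks like this:

```
User-agent: *
Disallow: /
```

That single Disallow: / line blocks every compliant crawler from every page. The fix is to remove the rule or leave its value empty (Disallow: with no path allows everything), then re-verify in Search Console.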