WordPress Robots.txt Generator
Generate the optimal robots.txt file for WordPress. Block wp-admin, allow Googlebot, include your sitemap URL — ready to upload in one click.
Settings guide
Always disallow:
- `/wp-admin/` — admin interface, not for indexing
- `/?s=` — search result pages (thin, duplicate-like content)
- `/trackback/` — legacy WordPress feature, no SEO value
Always allow (even inside the wp-admin block):
- `/wp-admin/admin-ajax.php` — required by many plugins and themes for front-end functionality
Conditionally disallow:
- `/tag/` — if your tags are not keyword-focused, blocking prevents thin tag archives from consuming crawl budget
- `/author/` — on single-author blogs, author archives duplicate post content
- `/page/` — paginated archives; Google can crawl these, but low-traffic sites may benefit from restricting them
Sitemap line:
End your robots.txt with `Sitemap: https://yourdomain.com/sitemap.xml`. WordPress serves a sitemap at this URL if you are using Yoast SEO, Rank Math, or WordPress's built-in sitemap feature.
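Putting the guide above together, a generated file might look like this sketch (yourdomain.com and the sitemap path are placeholders; keep the conditional rules only if they apply to your site):

```
User-agent: *
# Always disallow
Disallow: /wp-admin/
Disallow: /?s=
Disallow: /trackback/
# Always allow, even inside the wp-admin block
Allow: /wp-admin/admin-ajax.php

# Conditional rules: remove any that don't fit your site
Disallow: /tag/
Disallow: /author/
Disallow: /page/

# Sitemap discovery (placeholder URL)
Sitemap: https://yourdomain.com/sitemap.xml
```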
Format comparison
Robots.txt vs WordPress SEO plugin noindex settings:
Robots.txt controls crawling — it tells bots which URLs to visit. A noindex meta tag controls indexing — it tells bots not to include the page in search results. For low-value pages like tag archives, you have two options: disallow in robots.txt (saves crawl budget but the page can still be indexed if linked) or noindex in meta tags (page is crawled but not indexed). For most WordPress sites, noindex on tag and author archives is better than robots.txt disallow.
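To make the distinction concrete, here is what each option looks like for a tag archive (the Disallow line lives in robots.txt; the meta tag, typically emitted by an SEO plugin, lives in the page's `<head>`):

```
# Option 1, in robots.txt: stops crawling, but the URL can
# still appear in results if other pages link to it
Disallow: /tag/

<!-- Option 2, in the page's <head>: the page is crawled
     but excluded from search results -->
<meta name="robots" content="noindex, follow">
```

The two should not be combined on the same URL: a robots.txt disallow prevents crawlers from ever fetching the page, so they never see the noindex tag.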
WordPress's virtual robots.txt vs a real file:
WordPress generates a virtual robots.txt file via PHP if no physical file exists at the root. An SEO plugin like Yoast overrides this virtual file with its own output. If you upload a physical robots.txt file to the server root, it takes precedence over both — giving you full control.
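For the curious, this is roughly how plugins hook into the virtual file. A minimal sketch (not this generator's output), assuming no physical robots.txt exists at the root, using WordPress's core `robots_txt` filter from a theme's functions.php:

```php
<?php
// Sketch: append rules to WordPress's virtual robots.txt.
// The 'robots_txt' filter only runs when no physical file
// exists at the site root.
add_filter( 'robots_txt', function ( $output, $public ) {
	// $public mirrors the "discourage search engines" setting;
	// the string '0' means discouraged, so skip custom rules then.
	if ( $public ) {
		$output .= "Disallow: /?s=\n";
		$output .= "Disallow: /trackback/\n";
		// Placeholder sitemap path; adjust to your plugin's actual URL.
		$output .= 'Sitemap: ' . home_url( '/sitemap.xml' ) . "\n";
	}
	return $output;
}, 10, 2 );
```

Uploading a physical file bypasses this filter entirely, which is why it gives you full control.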
How it works
Enter your WordPress URL
Add your domain so the sitemap URL and canonical paths generate correctly.
Select WordPress-specific blocks
Choose which WordPress paths to block: wp-admin, search pages, tag archives, author pages.
Review and download
Get the complete robots.txt file with all WordPress-optimized rules and your sitemap URL.
Upload to your server
Upload the file to your WordPress root directory via FTP or your host's file manager, replacing any existing robots.txt.
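After uploading, you can confirm the physical file is being served (rather than the virtual one) with a quick request from the command line; yourdomain.com is a placeholder:

```
# Should print your uploaded rules, not WordPress's virtual output
curl -s https://yourdomain.com/robots.txt
```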
About this format
WordPress generates a default virtual robots.txt that is functional but minimal. Modern versions block `/wp-admin/` (allowing admin-ajax.php) and, since WordPress 5.5, reference the built-in wp-sitemap.xml, but the default does not restrict low-value content like tag pages, author archives, or search results pages from being crawled and potentially indexed.
This generator produces a WordPress-optimized robots.txt that follows best practices: blocks all crawlers from `/wp-admin/` (while allowing admin-ajax.php which some plugins require), disallows low-value URL patterns like `/?s=` (search results), `/tag/`, and `/page/`, includes your sitemap URL for crawler discovery, and keeps all important content paths open to Googlebot.
WordPress sites have specific crawl budget concerns that generic robots.txt templates do not address. Thin content pages — pagination, tag archives, author pages on single-author sites, and search result pages — consume crawl budget without adding SEO value. Blocking these with robots.txt frees up Googlebot to spend more time on your actual content pages.