robots.txt Generator

Generate robots.txt files with custom User-agent rules, Allow/Disallow paths, and Sitemap URLs.

Example output:
User-agent: *
Disallow: /admin/
Disallow: /private/

Sitemap: https://example.com/sitemap.xml

What Is a robots.txt Generator?

A robots.txt generator helps you create a valid robots.txt file without writing the syntax by hand. The robots.txt file sits at the root of your website and tells web crawlers — such as Googlebot, Bingbot, and others — which paths they may crawl. This tool lets you define multiple User-agent rule groups, set Allow and Disallow paths, add Sitemap URLs, and configure a Crawl-delay. The output updates in real time and can be downloaded directly as robots.txt.

How to Use the robots.txt Generator

  1. Set the User-agent for the first rule group. Use * to target all crawlers, or enter a specific bot name (e.g., Googlebot).
  2. Enter Disallow paths — one per line — for URLs you want to block.
  3. Optionally add Allow paths to override a broader Disallow rule.
  4. Click Add Rule Group to add rules for additional bots.
  5. Add your Sitemap URL in the Global Options section.
  6. Click Download to save the file, or Copy to copy it to your clipboard.
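Under the hood, generating the file is straightforward string assembly. The sketch below shows one way to render rule groups into robots.txt text; the dictionary structure and field names are illustrative, not the tool's actual internals:

```python
# Minimal sketch of robots.txt assembly from rule groups.
# The data structure below is illustrative, not this tool's internals.

def build_robots_txt(groups, sitemaps=()):
    """Render rule groups and sitemap URLs into robots.txt text."""
    blocks = []
    for group in groups:
        lines = [f"User-agent: {group['user_agent']}"]
        lines += [f"Allow: {path}" for path in group.get("allow", [])]
        lines += [f"Disallow: {path}" for path in group.get("disallow", [])]
        if "crawl_delay" in group:
            lines.append(f"Crawl-delay: {group['crawl_delay']}")
        blocks.append("\n".join(lines))
    # Sitemap lines are not tied to any User-agent group.
    blocks += [f"Sitemap: {url}" for url in sitemaps]
    return "\n\n".join(blocks) + "\n"

groups = [
    {"user_agent": "*", "disallow": ["/admin/", "/private/"]},
    {"user_agent": "Googlebot", "disallow": ["/drafts/"], "crawl_delay": 5},
]
print(build_robots_txt(groups, sitemaps=["https://example.com/sitemap.xml"]))
```

Blank lines between groups are what separate one User-agent block from the next, which is why the sketch joins blocks with a double newline.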

Features

  • Multiple User-agent rule groups — target all bots or specific crawlers
  • Allow and Disallow paths with one-per-line input
  • Sitemap URL field (supports multiple URLs)
  • Optional Crawl-delay setting
  • Live preview updates as you type
  • One-click download as robots.txt
  • Copy to clipboard

FAQ

What is a robots.txt file?

A robots.txt file is a plain text file placed at the root of a website (e.g., https://example.com/robots.txt) that tells web crawlers which pages or sections they may crawl. It follows the Robots Exclusion Protocol, standardized as RFC 9309.
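You can sanity-check a generated file with Python's standard-library parser for the Robots Exclusion Protocol, for example:

```python
import urllib.robotparser

# The rules from the example output above, as a string.
robots = """\
User-agent: *
Disallow: /admin/
Disallow: /private/

Sitemap: https://example.com/sitemap.xml
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots.splitlines())

# Blocked and permitted paths according to the rules above.
print(rp.can_fetch("*", "https://example.com/admin/settings"))  # False
print(rp.can_fetch("*", "https://example.com/blog/post"))       # True
```

The same parser can also fetch a live file directly with `set_url()` and `read()`, which is handy for verifying the file after you upload it.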

Does robots.txt block all bots?

No. robots.txt is a voluntary standard. Well-behaved crawlers like Googlebot and Bingbot respect it, but malicious bots may ignore it entirely. For sensitive content, use authentication or server-level access controls instead.

What is the difference between Allow and Disallow?

Disallow tells a crawler not to access a path. Allow explicitly permits access to a path that would otherwise be blocked by a broader Disallow rule. When both match a URL, crawlers that follow RFC 9309 (including Googlebot) apply the longest (most specific) matching rule, and Allow wins a tie.
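The interaction can be checked with Python's `urllib.robotparser`. One caveat: the standard-library parser applies rules in file order (first match wins), so the narrower Allow is listed before the broader Disallow here; RFC 9309 crawlers such as Googlebot pick the longest matching rule regardless of order.

```python
import urllib.robotparser

# Narrow Allow listed before the broad Disallow, because the
# stdlib parser uses first-match semantics (RFC 9309 crawlers
# use longest-match, so order would not matter for them).
robots = """\
User-agent: *
Allow: /private/public/
Disallow: /private/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots.splitlines())

print(rp.can_fetch("*", "/private/public/report.html"))  # True
print(rp.can_fetch("*", "/private/secret.html"))         # False
```

Everything under /private/ is blocked except the /private/public/ subtree, which the Allow rule carves back out.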

Where should I place the robots.txt file?

The robots.txt file must be placed at the root of your domain — for example, https://example.com/robots.txt. It cannot be placed in a subdirectory, and it applies only to the host it is served from: a subdomain such as blog.example.com needs its own robots.txt file.