# This is the robots.txt file for my website. It is used to communicate with web
# crawlers and other web robots that visit my site.

# The "User-agent" line specifies which web robots the rules below apply to. Here, the
# asterisk (*) indicates that the rules apply to all web robots.
User-agent: *

# The "Disallow" lines tell web robots which pages or resources they should not access.
# Here, the path "/private/" tells web robots not to access any pages or resources
# under the /private/ directory on my website. You can specify multiple "Disallow"
# lines to exclude multiple directories or pages.
Disallow: /private/

# The "Allow" lines tell web robots which pages or resources they may access, and are
# most useful for carving out exceptions to a broader "Disallow" rule. Here, "/public/"
# explicitly permits access to everything under the /public/ directory on my website.
# You can specify multiple "Allow" lines for multiple directories or pages. Note that
# "Allow" is an extension to the original robots.txt standard, but most major crawlers
# support it.
Allow: /public/

# The "Crawl-delay" line asks web robots to wait between requests to your site. The
# value is given in seconds, so the line below asks robots to wait 5 seconds between
# requests. Support varies: some crawlers honor it, while others, such as Googlebot,
# ignore it.
Crawl-delay: 5

# The "Sitemap" line tells web robots where to find your site's sitemap: a file that
# lists the pages and resources on your site and helps web crawlers and other web
# robots discover and index your content. The value must be a full, absolute URL.
Sitemap: https://www.gigglepocket.com/sitemap.xml
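
# As an illustrative sketch (the crawler name "ExampleBot" below is hypothetical, not
# a real robot), you can give a specific crawler its own group of rules. A group whose
# "User-agent" matches a crawler's name takes precedence over the "*" group for that
# crawler. The lines are commented out so they do not affect this live file:
#
# User-agent: ExampleBot
# Disallow: /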
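
# Another sketch, assuming a crawler that supports pattern matching: most major
# crawlers (for example Googlebot and Bingbot) understand "*" wildcards and the "$"
# end-of-URL anchor in paths, although these are also extensions to the original
# standard. The path below is illustrative only, and commented out; it would block
# any URL ending in ".pdf":
#
# Disallow: /*.pdf$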