
  • Can I use my robots.txt to block absolute URLs?

    Posted by seohelper on July 26, 2021 at 12:12 pm

    I want to prevent crawlers from accessing dev.mywebsite.com. In my main property on GSC, I can see that all the pages on that subdomain show as "Crawled – currently not indexed", and I just want to stop wasting crawl budget.

    Would the following robots.txt work?

    User-agent: *
    Disallow: https://dev.mywebsite.com

  • 2 Replies
  • morphalex90

    Guest
    July 26, 2021 at 12:17 pm

    How about adding .htpasswd protection?
    It’s 100% effective.
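
    For anyone unfamiliar, here's a minimal sketch of what that looks like on Apache (paths and the username are just examples, adjust for your setup). First create the password file:

    ```shell
    htpasswd -c /etc/apache2/.htpasswd devuser
    ```

    Then require it in an .htaccess file (or the vhost config) for the dev site:

    ```apache
    # .htaccess at the document root of dev.mywebsite.com (hypothetical path)
    AuthType Basic
    AuthName "Dev site"
    AuthUserFile /etc/apache2/.htpasswd
    Require valid-user
    ```

    Crawlers get a 401 and can't fetch anything, so crawl budget stops being an issue entirely.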

  • _squik

    Guest
    July 26, 2021 at 12:31 pm

    You need to add a separate robots.txt file to dev.mywebsite.com. A robots.txt must live at the root of each subdomain, and it only covers URLs on that subdomain. Disallow rules take URL paths, not absolute URLs, so the example in your post would not work.
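
    Concretely, you would serve this at https://dev.mywebsite.com/robots.txt to block the whole subdomain:

    ```
    User-agent: *
    Disallow: /
    ```

    Note this only blocks crawling, not indexing: URLs that Google already knows about can still appear in the index without a description.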

    If it doesn’t need to be public, though, I would recommend basic password protection or IP blocking. It’s more effective.
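
    For IP blocking, a sketch of what that could look like on nginx (the server block and IP range here are examples, not your actual config):

    ```nginx
    # Hypothetical server block for dev.mywebsite.com
    server {
        listen 80;
        server_name dev.mywebsite.com;

        allow 203.0.113.0/24;  # your office/VPN range (documentation-range example)
        deny  all;             # everyone else, crawlers included, gets a 403

        root /var/www/dev;
    }
    ```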
