
  • Blocked by robots.txt BUT the page is not blocked…

    Posted by seohelper on March 9, 2020 at 10:12 am

    Hi,

    I have been investigating an issue with a page on a website where Google won’t index the page. Search Console tells me:

    **Crawl allowed?** No: blocked by robots.txt

    **Page fetch:** Failed: Blocked by robots.txt

    Yet, if I do a live test on the URL, the test says:

    **Crawl allowed?** Yes

    **Page fetch:** Successful

    The page code and screenshot preview are also correct in the live test.

    I have double-checked the robots.txt and there is nothing in it blocking this specific page, there is no nofollow or noindex in the code, and nothing in the HTTP header.

    Anyone got an idea of what could possibly be causing this issue?
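    One quick way to rule out a rule I might have missed is to test the exact URL against the robots.txt with Python's built-in `urllib.robotparser` (the rules and URLs below are placeholders, a sketch only):

    ```python
    from urllib.robotparser import RobotFileParser

    # Hypothetical robots.txt content -- paste in the site's actual file
    robots_txt = """\
    User-agent: *
    Disallow: /private/
    """

    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())

    # Check the specific page Search Console reports as blocked
    print(rp.can_fetch("Googlebot", "https://example.com/private/page.html"))  # False
    print(rp.can_fetch("Googlebot", "https://example.com/public/page.html"))   # True
    ```

    If this also says the page is allowed, Search Console may simply be reporting against an older, cached copy of robots.txt.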

  • 3 Replies
  • alex_3410

    Guest
    March 9, 2020 at 11:29 am

    Is it in a subfolder? If so, have you checked to make sure there is nothing in that directory that may be interfering with it?

  • boostedgts

    Guest
    March 9, 2020 at 1:58 pm

    Caching.

  • WoogieDG

    Guest
    March 9, 2020 at 7:34 pm

    If it's WordPress, did you have “Discourage Search Engines” turned on? Once you turn it off, it takes a while for the cached robots.txt to clear.
