Post by account_disabled on Feb 27, 2024 4:22:14 GMT -5
Then, do a site: search to see all the pages that Google is indexing from your site, in order to discover pages you forgot about. Clean those out of the average grade Google is going to give your site by setting meta robots noindex,follow on them, or by blocking them in robots.txt. Generally, the weakest pages that still made the index are going to be listed last in a site: search.
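As a quick illustration (example.com is just a placeholder for your own domain), the search query and the tag you would drop into the head of each forgotten page look something like this:

site:example.com

<meta name="robots" content="noindex,follow">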
Noindex vs. robots.txt

There's an important but subtle difference between using meta robots and using robots.txt to prevent indexation of a page. Using meta robots noindex,follow allows the link equity going to that page to flow out to the pages it links to. If you block the page with robots.txt, you're just flushing that equity down the toilet. In the example above I'm blocking pages that aren't real pages (they're tracking scripts), so I'm not losing link equity, as those pages do NOT have the header with the main menu links, etc. Now think of a page like a Contact Us page or a Privacy Policy page, which is probably linked to by every single page on your site via either the main menu or the footer menu. There's a ton of link juice going to those pages; do you just want to throw that away? Or would you rather let that link equity flow out to everything in your main menu? Easy question to answer, isn't it?
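To make the contrast concrete (the path below is hypothetical), the two options look like this. The meta tag goes in the head of the page itself; the page still gets crawled, drops out of the index, and its links keep passing equity:

<meta name="robots" content="noindex,follow">

The robots.txt rule, by contrast, keeps Googlebot from fetching the page at all, so Google never sees the links on it and whatever equity flows into that URL goes nowhere:

User-agent: *
Disallow: /privacy-policy/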
Crawl bandwidth management

When might you actually want to use robots.txt instead? Perhaps if you're having crawl bandwidth issues and Googlebot is spending lots of time fetching your utility pages, only to discover meta robots noindex,follow in them and having to bail out. If you've got so many of these that Googlebot isn't getting to your important pages, then you may have to block via robots.txt. I've seen sites get ranking improvements across the board just by cleaning up their XML sitemaps and noindexing their utility pages.
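A minimal robots.txt sketch for that situation, assuming hypothetical /tracking/ and /scripts/ paths for the utility URLs:

User-agent: *
# Don't waste crawl budget fetching pages that would only return a noindex anyway
Disallow: /tracking/
Disallow: /scripts/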