There are a lot of URLs that Google keeps in its cache after we update them. The older versions of these URLs are not easily removed from Google's cache. In order to eliminate these URLs from Google's SERP results, we can implement three different methods:
- 404 or 410 HTTP status codes
- robots.txt file
- noindex meta tag
404 errors: A requested URL usually returns this error when the page is not present on the server. This error appears when the requested page has been removed from the server, when a wrong rule is written in the .htaccess file, or when some function calls an old page that no longer exists on the server.
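If you want the old page to return a 410 (Gone) instead of a 404, and your site runs on Apache with mod_alias enabled (an assumption here), a single rule in the .htaccess file is enough. The path /test/old-page.html below is only a placeholder:

Redirect gone /test/old-page.html

The 'gone' keyword makes Apache answer with HTTP 410, which tells Google the page was removed on purpose rather than simply missing.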
robots.txt file: The robots.txt file is present under the root folder of the site, and Google usually starts its crawling program by reading the rules inside robots.txt. If you place a rule inside the robots.txt file to exclude a page from crawling, the Googlebot will not request the respective URL.
For example:
User-agent: Googlebot
Disallow: /test/test.html
This means that the file 'test.html' under the folder 'test' should not be crawled.
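In the same way you can block a whole folder for every crawler; the folder name /old-pages/ below is just an example:

User-agent: *
Disallow: /old-pages/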
Noindex meta tag: This tag is meant to prevent crawlers from indexing the respective file. The syntax is as follows:
- <meta name="robots" content="noindex"> - To prevent all robots from indexing the file
- <meta name="googlebot" content="noindex"> - To prevent Googlebot from indexing the file
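The tag has to be placed inside the <head> section of the page you want to keep out of the index. A minimal sketch, assuming a plain HTML page:

<html>
<head>
<meta name="robots" content="noindex">
<title>Old test page</title>
</head>
<body>
Page content goes here.
</body>
</html>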
First of all, you can go to Google Webmaster Tools at https://www.google.com/webmasters/tools/removals and make a removal request there by adding the respective URLs you want to remove from the SERP results. Sometimes this will take up to 90 days for complete removal while the request is being processed. If you are not satisfied with this tool, you can go for any of the methods discussed earlier.