txt file is then parsed and will instruct the robot as to which pages are not to be crawled. Because a search engine's crawler may keep a cached copy of this file, it may occasionally crawl pages a webmaster does not wish crawled. Pages typically prevented from being crawled include login-specific pages such as shopping carts and user
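
A minimal sketch of this parsing step, using Python's standard urllib.robotparser module. The robots.txt rules, the user-agent wildcard, and the example.com URLs below are illustrative assumptions, not the rules of any real site:

import urllib.robotparser

# A hypothetical robots.txt disallowing crawlers from cart and
# account pages (the kind of login-specific pages noted above).
sample_robots_txt = """\
User-agent: *
Disallow: /cart/
Disallow: /account/
"""

parser = urllib.robotparser.RobotFileParser()
# Parse the rules directly from the lines of the file.
parser.parse(sample_robots_txt.splitlines())

# A well-behaved crawler consults the parsed rules before fetching.
print(parser.can_fetch("*", "https://example.com/cart/checkout"))    # False
print(parser.can_fetch("*", "https://example.com/products/widget"))  # True

In practice a crawler fetches robots.txt once per host and caches the parsed rules, which is exactly why a stale cached copy can lead it to crawl pages the webmaster has since disallowed.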