Googlebot (or any search engine spider) crawls the web to discover and process information. Until Google can absorb the web through osmosis, this discovery phase will always be essential. Based on data gathered during crawl-time discovery, Google sorts and analyzes URLs in real time to make indexing decisions.
A web crawler (also known as a web spider or search engine robot) is a programmed script that browses the World Wide Web in a methodical, automatic manner. This process is called web crawling or spidering.
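To make the "methodical, automatic" browsing concrete, here is a minimal sketch of a breadth-first crawler using only the Python standard library. It is illustrative, not production-grade: real crawlers like Googlebot also honor robots.txt, throttle requests, and prioritize URLs, none of which is shown here. The function names (`extract_links`, `crawl`) and the `max_pages` limit are invented for this example.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkParser(HTMLParser):
    """Collects href values from <a> tags on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def extract_links(base_url, html):
    """Return absolute URLs for every link found in the HTML."""
    parser = LinkParser()
    parser.feed(html)
    return [urljoin(base_url, link) for link in parser.links]


def crawl(seed_url, max_pages=10):
    """Breadth-first crawl: fetch a page, queue its links, repeat."""
    seen = {seed_url}
    queue = deque([seed_url])
    while queue and len(seen) <= max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="replace")
        except Exception:
            continue  # skip unreachable pages instead of aborting the crawl
        for link in extract_links(url, html):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return seen
```

The `seen` set is what prevents the crawler from revisiting the same URL forever; the queue gives the breadth-first (level-by-level) traversal order that most spiders approximate.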
Here are some web crawling sites: