Many people think a search engine is just a simple page: you type your word or phrase into the search box, click the button, wait a few seconds, and hundreds of references to related pages appear. In fact, there is much more to it.
A search engine is a piece of sophisticated software that collects data and information about websites. This collected information usually consists of the keywords and phrases that indicate what a web page contains as a whole, and the sources of those keywords may include the page's URL, its source code, and the links into and out of the page.
This software usually has a user interface where seekers enter their search terms to find specific information, and the results are ordered by a complicated algorithm that ranks websites in the result pages according to many criteria.
But how do they collect all this overwhelming amount of information? The short answer is: by crawlers.
Crawlers look for every URL on the web, collect the keywords and phrases from each page, and store them in a huge database that powers the search engine.
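The idea of following links and building a keyword database can be sketched in a few lines. This is only a toy illustration: the tiny in-memory "web" below (the `PAGES` dictionary and its example.com URLs) is made up, whereas a real crawler would fetch live pages over HTTP and parse their HTML.

```python
from collections import defaultdict

# Hypothetical mini-web: URL -> (page text, outgoing links).
PAGES = {
    "http://example.com/": ("search engines use crawlers",
                            ["http://example.com/crawler"]),
    "http://example.com/crawler": ("crawlers collect keywords", []),
}

def crawl(start_url):
    """Visit every reachable URL once and index its words."""
    index = defaultdict(set)          # keyword -> set of URLs
    queue, seen = [start_url], set()
    while queue:
        url = queue.pop()
        if url in seen or url not in PAGES:
            continue
        seen.add(url)
        text, links = PAGES[url]
        for word in text.split():     # each word becomes a keyword
            index[word].add(url)
        queue.extend(links)           # follow outgoing links
    return index

index = crawl("http://example.com/")
print(sorted(index["crawlers"]))      # both pages mention "crawlers"
```

The resulting inverted index maps each keyword to the set of pages that contain it, which is exactly the kind of lookup structure a search box queries.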
The major engines you are probably familiar with include Google, Yahoo, Bing, WebCrawler, Lycos, InfoSeek, AltaVista, Inktomi, Ask Jeeves, and more. Of course, not all of these search engines give you the same results for the same query, and you probably know the reason: it is each search engine's algorithm that controls how web pages are ranked.
The main task of the algorithm in any search engine is to give you the most relevant pages for your keyword or phrase, so you can find the specific information you need in just a fraction of a second.
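To make "ranking by relevance" concrete, here is one deliberately simple scoring rule: count how often the query terms appear in each page and sort by that count. The sample pages are invented for illustration, and real engines combine this kind of text signal with many others (links between pages, freshness, and hundreds more criteria).

```python
def rank(pages, query):
    """Order page URLs by a naive relevance score: query-term frequency."""
    terms = query.lower().split()
    scores = {}
    for url, text in pages.items():
        words = text.lower().split()
        # score = total occurrences of all query terms in the page
        scores[url] = sum(words.count(t) for t in terms)
    return sorted(scores, key=scores.get, reverse=True)  # best first

# Hypothetical result pages for the query "ranking".
pages = {
    "a.example": "search engine ranking ranking",
    "b.example": "ranking of web pages",
    "c.example": "unrelated cooking recipes",
}
print(rank(pages, "ranking"))  # → ['a.example', 'b.example', 'c.example']
```

Even this crude rule already produces an ordered result page; the competition between engines is essentially about how much smarter their scoring is than this.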
In the end, if you consider that the number of websites on the internet is estimated in the billions and is growing every day by thousands of pages, the situation may be summarized in just one word: overwhelming.