Thursday, 10 January 2013

How a search engine works


What is a search engine:

A search engine is a program that finds information stored on a computer system, such as the World Wide Web or a personal computer. The search engine accepts a single word or phrase and retrieves a list of references that match it. Search engines generally use regularly updated indexes so they can answer queries quickly and efficiently.
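The matching step described above is usually done with an inverted index: a map from each word to the documents that contain it. A minimal sketch in Python (the document texts and function names here are made up for illustration):

```python
def build_index(docs):
    """docs: {doc_id: text}. Returns {word: set of doc_ids containing it}."""
    index = {}
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(doc_id)
    return index

def search(index, query):
    """Return the doc ids that contain every word of the query."""
    results = None
    for word in query.lower().split():
        hits = index.get(word, set())
        results = hits if results is None else results & hits
    return results or set()

# Three toy "pages" standing in for crawled web documents.
docs = {
    "page1": "search engines crawl the web",
    "page2": "the web is large",
    "page3": "engines index the web",
}
index = build_index(docs)
print(search(index, "web engines"))  # only the pages containing both words
```

Because the index is built ahead of time, answering a query is a couple of dictionary lookups rather than a scan of every page.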


There are three parts to a search engine:

  1. The search engine uses spiders, also known as robots or crawlers, to scan the internet for websites.
  2. The results of the robots' travels are put into a database, which is then indexed by the keywords found and where those keywords were found.
  3. Users of the search engine enter keywords or phrases related to what they are looking for, and the search engine index returns related sites.
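The three steps above can be sketched end to end in Python. The tiny in-memory "web", the URLs, and the function names below are invented for the example, not a real search engine's internals:

```python
from collections import deque

# A toy web: each URL maps to (page text, list of outgoing links).
WEB = {
    "a.com": ("home of the crawling demo", ["b.com", "c.com"]),
    "b.com": ("crawling and indexing demo", ["c.com"]),
    "c.com": ("search demo page", []),
}

def crawl(start):
    """Step 1: the spider follows links breadth-first from a seed URL."""
    seen, queue, pages = set(), deque([start]), {}
    while queue:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        text, links = WEB[url]
        pages[url] = text
        queue.extend(links)
    return pages

def index_pages(pages):
    """Step 2: build a keyword -> set-of-URLs database from crawled pages."""
    idx = {}
    for url, text in pages.items():
        for word in text.split():
            idx.setdefault(word, set()).add(url)
    return idx

# Step 3: the user's query is answered from the index, not the live web.
pages = crawl("a.com")
idx = index_pages(pages)
print(idx["demo"])
```

The key design point the sketch shows is that crawling and querying are decoupled: the spider runs ahead of time, and user searches only touch the prebuilt index.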


A web crawler is a program that browses the World Wide Web in a methodical, automated manner. A web crawler is also known as a web spider, and it is one type of bot. The spider keeps a copy of every page it visits for later processing.

How web crawlers work:

Web crawlers use a process called crawling the web. They start with heavy-traffic, popular web pages.
The web crawler sets out from the search engine's base computer system looking for websites to index.
The crawler collects information about each website and its links: the website URL, the web page title, meta tag information, the page content, and the links on the page and where they lead.

Here I am also adding flow chart images to give a good idea of what a search engine is and how it works.

