What are Web crawlers?
To find out what content is on your Web site, search engines like Google and MSN Search use programs called Web crawlers (also known as Web spiders or Web robots). These programs analyze millions of Web pages, then decide which sites and pages are most relevant for various search terms. With good SEO, you can show those Web crawlers that your site is a great one for people who are interested in vintage telephones, as opposed to office telephones or a thousand other telephone-related search possibilities.
A Web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
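The seed-and-frontier process described above can be sketched in a few lines of Python. This is a minimal illustration, not a production crawler: the link graph is a hardcoded in-memory dictionary standing in for real HTTP fetching and HTML parsing, and the URLs are invented for the example.

```python
from collections import deque

# A hypothetical in-memory "Web": each URL maps to the hyperlinks found on
# that page. A real crawler would fetch each URL over HTTP and parse its HTML.
LINK_GRAPH = {
    "https://example.com/":  ["https://example.com/a", "https://example.com/b"],
    "https://example.com/a": ["https://example.com/b"],
    "https://example.com/b": ["https://example.com/c"],
    "https://example.com/c": [],
}

def crawl(seeds):
    """Visit pages breadth-first, starting from the list of seed URLs."""
    frontier = deque(seeds)   # the crawl frontier: URLs waiting to be visited
    visited = set()
    order = []
    while frontier:
        url = frontier.popleft()
        if url in visited:
            continue          # one simple policy: visit each URL only once
        visited.add(url)
        order.append(url)
        # Identify the hyperlinks on the page and add them to the frontier.
        for link in LINK_GRAPH.get(url, []):
            if link not in visited:
                frontier.append(link)
    return order

print(crawl(["https://example.com/"]))
# → ['https://example.com/', 'https://example.com/a',
#    'https://example.com/b', 'https://example.com/c']
```

Real crawlers layer many more policies on top of this loop: which pages to visit, how often to revisit them, and how to avoid overloading the sites being crawled.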
Every search engine (Google, MSN, and so on) has its own formula for ranking Web sites, and each keeps that formula secret. Generally, search engines look at a number of factors about each Web site to judge its relevance to a topic and its overall importance. The words and phrases that you use on each page of your site are an important factor. The engines measure the keyword weight of the phrases you use (the concentration of keywords on your pages) to determine each page’s relevance to a search term.
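To make "concentration of keywords" concrete, here is one simple way to compute such a figure: phrase occurrences as a share of the total words on the page. The formula and the sample text are invented for illustration; the engines' real formulas are, as noted, secret and far more sophisticated.

```python
import re

def keyword_weight(text, phrase):
    """Toy keyword weight: words belonging to the phrase, divided by
    the total word count of the page. Illustrative only."""
    words = re.findall(r"[a-z']+", text.lower())
    phrase_words = phrase.lower().split()
    n = len(phrase_words)
    # Count every position where the whole phrase appears in sequence.
    hits = sum(1 for i in range(len(words) - n + 1)
               if words[i:i + n] == phrase_words)
    return hits * n / len(words) if words else 0.0

page = ("We restore vintage telephones. Every vintage telephone we sell "
        "is tested, and our vintage telephones ship worldwide.")
print(round(keyword_weight(page, "vintage telephones"), 3))  # → 0.235
```

Here "vintage telephones" appears twice among 17 words, so 4 of the 17 words (about 24%) belong to the phrase.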
Keyword weight isn’t something that you need to engineer, though: some Web publishers try to game the search engines by stuffing pages with certain keywords, increasing the site’s weight for those keywords. Don’t do this; write for people, not search engines.
Also, search engines factor in the links from other sites to your site, including the number of links to your site (that is, its popularity), the text in those links, and the quality of the sites that link to yours.
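Two of those link signals, popularity and link text, are easy to picture as a simple tally over the links a crawler has discovered. The link data and site names below are invented for illustration; real engines combine these signals with many others, including their own judgments of site quality.

```python
from collections import Counter

# Hypothetical crawl results: (linking site, link text, target site).
links = [
    ("collector-blog.example", "vintage telephones", "mysite.example"),
    ("phone-forum.example",    "old phones",         "mysite.example"),
    ("directory.example",      "click here",         "othersite.example"),
]

# Popularity: how many links point at each site.
inbound = Counter(target for _source, _text, target in links)

# The text in the links pointing at one particular site.
anchor_texts = [text for _source, text, target in links
                if target == "mysite.example"]

print(inbound["mysite.example"])  # → 2
print(anchor_texts)               # → ['vintage telephones', 'old phones']
```

Link text matters because it is another site's description of your page: a link that says "vintage telephones" tells the engine more than one that says "click here".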