
How Search Works

Discover how search engines operate. Understanding the basics will help you see how SEO factors into your website’s ranking.

Search Engine Structure

The structure of search engines is built on three main elements:

  1. Web Crawlers
  2. Indexes (Catalogs)
  3. Search Engine Software (Algorithms)

Web Crawlers (Spiders)

“Spiders” travel (“crawl”) across the internet, inspecting websites and all of the content on them. They read the content of a site and trace every link on the domain to create site maps. These maps are stored and refreshed over time, maintaining the databases that search engines query for results.

Crawlers are capable of reading all front-facing code (HTML, images, CSS, and JavaScript) in order to derive the basis for ranking and categorizing the sites that they observe. Through the reading of sites, the crawler is able to determine headings, keyword patterns, and a basic understanding of the site’s subject matter.

This information is then compiled and cataloged for later use in algorithmic searching. These catalogs are updated on a frequent basis by crawlers in order to give accurate search results.
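The crawl-and-map process described above can be sketched in a few lines. This is a toy illustration, not a real crawler: the in-memory `FAKE_WEB` dictionary stands in for pages fetched over HTTP, and the URLs in it are made up for the example.

```python
from html.parser import HTMLParser

# Toy in-memory "web": URL -> HTML. A real spider would fetch these
# pages over HTTP; the dictionary keeps the sketch self-contained.
FAKE_WEB = {
    "/": '<h1>Home</h1><a href="/about">About</a><a href="/blog">Blog</a>',
    "/about": '<h1>About</h1><a href="/">Home</a>',
    "/blog": '<h1>Blog</h1><a href="/about">About</a>',
}

class LinkExtractor(HTMLParser):
    """Collects every href found in anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def crawl(start_url):
    """Breadth-first crawl: visit each page once, follow its links,
    and return a site map of page -> outgoing links."""
    site_map = {}
    queue = [start_url]
    while queue:
        url = queue.pop(0)
        if url in site_map:
            continue  # already visited this page
        parser = LinkExtractor()
        parser.feed(FAKE_WEB.get(url, ""))
        site_map[url] = parser.links
        queue.extend(parser.links)  # schedule newly discovered pages
    return site_map

site_map = crawl("/")
print(site_map)
```

The resulting `site_map` is the kind of structure a spider hands off for indexing: every reachable page, plus the links that connect them.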


Indexes (Catalogs)


The information gathered by web crawlers is stored in a catalog; a page is not available there until a “Spider” has viewed it. Without this index, results would take a substantial amount of time to produce, because the “Spider” would have to visit and catalog each website at query time before it could appear in the results.

The index contains all websites that have been crawled by spiders. This information is categorized into databases that can be queried by search algorithms to provide results for the user.
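The idea of an index that algorithms query can be illustrated with a minimal inverted index: each page's text is tokenized once, and every word maps to the set of pages containing it, so a search becomes a fast lookup rather than a crawl. The pages and text here are invented for the example.

```python
# Hypothetical crawled pages: URL -> extracted text.
pages = {
    "/": "seo basics for search engines",
    "/about": "about our seo agency",
    "/blog": "search engine ranking tips",
}

# Build the inverted index: word -> set of pages containing it.
index = {}
for url, text in pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

def search(word):
    """Query the prebuilt index; no crawling happens at search time."""
    return sorted(index.get(word, set()))

print(search("seo"))
```

Because the expensive work (crawling and tokenizing) happened before the query arrived, `search` only performs a dictionary lookup, which is why real engines can answer in milliseconds.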

Search Algorithms

The search engine software (each search engine has its own unique algorithm) ranks and displays pages based on various components. Each page is given a rank based on the search engine’s calculation of the site’s relevance to the consumer’s search parameters. While each search engine uses a different algorithm, or method for calculating the ranking of each result, and these algorithms are closely held secrets, the results are primarily derived from the following:

  • Location and frequency of keywords (and key phrases) on a web page
    • Title tags are considered more relevant
    • Keywords are found in the bodies of text (patterns of use are more relevant)
  • Off-page ranking
    • Link analysis – How pages link together and how much “authority” each has
      • Artificial link spamming is discounted (i.e., paid links)
    • Click-through measurement (transaction rate)
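The factors above can be sketched as a weighted score. The weights, pages, and formula here are illustrative assumptions only; no real engine publishes its values. The sketch simply shows how title matches can count for more than body matches, with inbound-link “authority” added on top.

```python
# Hypothetical pages with on-page text and an off-page link count.
pages = {
    "/": {"title": "seo guide", "body": "learn seo step by step", "inbound_links": 2},
    "/blog": {"title": "our blog", "body": "seo tips and tricks", "inbound_links": 5},
}

def score(page, keyword):
    """Toy relevance score: title hits outweigh body hits, plus a
    small bonus per inbound link standing in for 'authority'."""
    title_hits = page["title"].split().count(keyword)
    body_hits = page["body"].split().count(keyword)
    # Illustrative weights, not any real engine's values.
    return 3 * title_hits + 1 * body_hits + 0.5 * page["inbound_links"]

# Rank all pages for a query, highest score first.
results = sorted(pages, key=lambda url: score(pages[url], "seo"), reverse=True)
print(results)
```

Here the home page outranks the blog despite having fewer inbound links, because the keyword appears in its title tag, which mirrors the “title tags are considered more relevant” point above.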


Here’s a quick recap of how search works. The video is short but covers the important facts we discussed above.