To the young ones among us, it may seem like search engines and the Internet have been around forever, but they have really only been around for about 20 years. According to Wikipedia, the first tool for searching the Internet, created in 1990, was called “Archie”. It downloaded directory listings of all files located on public anonymous FTP servers, creating a searchable database of file names. A year later “Gopher” was created; it indexed plain text documents. “Veronica” and “Jughead” came along to search Gopher’s index systems. The first actual Web search engine, developed by Matthew Gray in 1993, was called “Wandex.”

There are basically three types of search engines: those powered by robots (called crawlers, ants, or spiders), those powered by human submissions, and those that are a hybrid of the two.

Crawler-based search engines use automated software agents (called crawlers) that visit a Web site, read the information on the site itself, read the site’s meta tags, and follow the links the site connects to, indexing all linked Web sites as well. The crawler returns all of that information to a central repository, where the data is indexed. The crawler periodically returns to the sites to check for information that has changed; how often this happens is determined by the administrators of the search engine.
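To make that process concrete, here is a minimal sketch of a crawler-based indexer in Python. The seed URL, the in-memory word-to-URLs index, and the page limit are all illustrative assumptions; a real engine would store results in a central repository, respect robots.txt, and schedule periodic re-crawls.

```python
# A minimal sketch of a crawler-based indexer, assuming a hypothetical
# seed URL and a simple in-memory index. Uses only the standard library.
from collections import defaultdict, deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class PageParser(HTMLParser):
    """Collects meta tags, visible text, and outgoing links from one page."""

    def __init__(self):
        super().__init__()
        self.meta = {}
        self.links = []
        self.text = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") and attrs.get("content"):
            self.meta[attrs["name"]] = attrs["content"]  # e.g. keywords, description
        elif tag == "a" and attrs.get("href"):
            self.links.append(attrs["href"])

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())


def crawl(seed_url, max_pages=10):
    """Breadth-first crawl from seed_url, building a word -> set-of-URLs index."""
    index = defaultdict(set)
    queue, seen = deque([seed_url]), {seed_url}
    pages = 0
    while queue and pages < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue  # unreachable page: skip it and move on
        pages += 1
        parser = PageParser()
        parser.feed(html)
        # Index the visible text plus any meta-tag content.
        words = " ".join(parser.text + list(parser.meta.values())).lower().split()
        for word in words:
            index[word].add(url)
        # Follow outgoing links so linked sites get indexed as well.
        for href in parser.links:
            absolute = urljoin(url, href)
            if absolute.startswith(("http://", "https://")) and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return index


if __name__ == "__main__":
    idx = crawl("https://example.com")  # hypothetical seed URL
    print(sorted(idx.get("example", set())))
```

The breadth-first queue mirrors how a crawler fans out from a site to the sites it links to, and the “seen” set keeps it from revisiting pages within one pass.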

Human-powered search engines rely on humans to submit information that is subsequently indexed and catalogued. Only information that is submitted is put into the index.
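By contrast, a human-powered engine is essentially a curated catalogue. Below is a small sketch, again in Python, of a directory where everything searchable comes from an explicit submission; the field names and sample entry are hypothetical.

```python
# A minimal sketch of a human-powered (directory-style) index, assuming
# hypothetical submission fields; only what humans submit is searchable.
from collections import defaultdict


class Directory:
    def __init__(self):
        self._by_keyword = defaultdict(list)

    def submit(self, url, title, keywords):
        """A human submits a site; nothing is crawled automatically."""
        entry = {"url": url, "title": title}
        for kw in keywords:
            self._by_keyword[kw.lower()].append(entry)

    def search(self, keyword):
        """Only submitted entries can ever be returned."""
        return self._by_keyword.get(keyword.lower(), [])


directory = Directory()
directory.submit("https://example.com", "Example Site", ["demos", "examples"])
print(directory.search("demos"))  # [{'url': 'https://example.com', 'title': 'Example Site'}]
```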

TheSearchEnginelist.com lists 247 commercial search engines under 28 different categories. We are all familiar with the general search engines like Google, Yahoo and Bing, but there are a number […]