Search engine

A search engine is a computer program designed to help a person find files stored on a computer, for example on a public server on the World Wide Web (WWW). A search engine lets the user request media content that meets specific criteria (typically content containing a given word or phrase) and returns a list of files that match those criteria. Unlike an index document, which organizes files in a way determined in advance, a search engine performs its search only after the user has specified the search criteria.

In the context of the Internet, a search engine usually refers to the WWW rather than to other protocols or areas. In addition, search engines gather data that is available in newsgroups, large databases, or open directories such as DMOZ.org. Because they collect their data automatically, search engines differ from Web directories, which are curated by humans.

The majority of search engines are run by private companies that use proprietary algorithms and closed databases; the most popular of them is Google (with MSN Search and Yahoo! trailing slightly behind). There have been several attempts to create open-source search engines, for example Htdig, Nutch, Egothor and OpenFTS. [1]

How Search Engines Work

Web search engines work by storing information about a large number of web pages, which they retrieve from the WWW itself. These pages are retrieved by a web crawler — an automated web browser which follows every link it sees. The contents of each page are then analyzed to determine how it should be indexed (for example, words are extracted from the titles, headings, or special fields called meta tags). Data about web pages is stored in an index database for use in later queries. Some search engines, such as Google, store all or part of the source page (referred to as a cache) as well as information about the web pages.
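
As a rough, minimal sketch of the indexing step described above (not any particular engine's pipeline), the Python snippet below takes a handful of already-crawled pages, tokenizes their text, and folds them into an inverted index that maps each word to the pages containing it. The sample URLs, page texts, and tokenization rule are illustrative assumptions.

```python
# Minimal sketch: turn already-crawled pages into an inverted index
# (word -> set of page URLs). Sample data is illustrative only.
import re
from collections import defaultdict

# Stand-in for content a web crawler would have fetched.
crawled_pages = {
    "http://example.org/a": "Search engines index web pages for later queries",
    "http://example.org/b": "A web crawler follows every link it sees",
    "http://example.org/c": "Pages are analyzed and words are extracted",
}

def tokenize(text):
    """Lower-case the text and split it into simple word tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(pages):
    """Build an inverted index mapping each word to the pages containing it."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in tokenize(text):
            index[word].add(url)
    return index

index = build_index(crawled_pages)
print(sorted(index["web"]))   # pages whose text contains the word "web"
```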

When a user comes to the search engine and makes a query, typically by giving key words, the engine looks up the index and provides a listing of best-matching web pages according to its criteria, usually with a short summary containing the document's title and sometimes parts of the text.
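
Continuing in the same spirit, the following sketch answers a keyword query against such an inverted index: it looks up each keyword, ranks pages by how many of the query's words they contain, and attaches a crude snippet. The hard-coded index, page store, and match-count ranking are assumptions for illustration; real engines use far more elaborate ranking criteria.

```python
# Minimal sketch of query handling against an inverted index.
from collections import Counter

# Tiny hand-made page store and index standing in for what a
# crawler/indexer would have produced (illustrative data only).
pages = {
    "http://example.org/a": "Search engines index web pages for later queries",
    "http://example.org/b": "A web crawler follows every link it sees",
}
index = {
    "web": {"http://example.org/a", "http://example.org/b"},
    "pages": {"http://example.org/a"},
    "crawler": {"http://example.org/b"},
}

def search(query, limit=10):
    """Rank pages by how many of the query's keywords they contain."""
    scores = Counter()
    for word in query.lower().split():
        for url in index.get(word, ()):
            scores[url] += 1                 # one point per matching keyword
    # Return (url, score, short snippet) for the best matches.
    return [(url, s, pages[url][:60]) for url, s in scores.most_common(limit)]

print(search("web pages"))
```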

There is another main type: real-time search engines, such as Orase. These search engines do not use an index; the information the engine needs is collected only when a new query is started. Compared with the index-based systems of Google-like search engines, this real-time approach has some advantages: the information is always up to date, there are (almost) no dead links, and fewer system resources are needed (Google uses almost 100,000 computers, Orase only one). But there are disadvantages too: a search takes longer to complete, for example.
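
A toy illustration of this index-less approach is sketched below: nothing is stored ahead of time, and the candidate pages are fetched and scanned only when a query arrives, so results are as fresh as the pages themselves and dead links are simply skipped. The seed URLs and the plain substring test are assumptions made for the example; they do not describe Orase's actual technology.

```python
# Toy sketch of query-time (index-less) search: pages are fetched and
# scanned only after the query arrives. Seed URLs are placeholders.
import urllib.request

SEED_URLS = [
    "http://example.org/",
    "http://example.com/",
]

def realtime_search(query):
    """Fetch each candidate page now and keep those containing the query."""
    hits = []
    for url in SEED_URLS:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                text = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue                # dead or unreachable link: simply skipped
        if query.lower() in text.lower():
            hits.append(url)
    return hits

print(realtime_search("example"))
```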

The usefulness of a search engine depends on the relevance of the results it gives back. While there may be millions of Web pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results to provide the "best" results first. How a search engine decides which pages are the best matches, and what order the results should be shown in, varies widely from one engine to another. The methods also change over time as Internet usage changes and new techniques evolve.

Most Web search engines are commercial ventures supported by advertising revenue and, as a result, some employ the controversial practice of allowing advertisers to pay money to have their listings ranked higher in search results.

History

The first Web search engine was "Wandex", a now-defunct index collected by the World Wide Web Wanderer, a web crawler developed by Matthew Gray at MIT in 1993. Another very early search engine, Aliweb, also appeared in 1993 and still runs today. One of the first engines to later become a major commercial endeavor was Lycos, which started at Carnegie Mellon University as a research project in 1994.

Soon after, many search engines appeared and vied for popularity. These included WebCrawler, Hotbot, Excite, Infoseek, Inktomi, and AltaVista. In some ways they competed with popular directories such as Yahoo!. Later, the directories integrated or added on search engine technology for greater functionality.

In 2002, Yahoo! acquired Inktomi and in 2003, Yahoo! acquired Overture, which owned AlltheWeb and AltaVista. In 2004, Yahoo! launched its own search engine based on the combined technologies of its acquisitions, providing a service that gave pre-eminence to the Web search engine over the directory.

In December 2003, Orase published the first version of its new real-time search technology. It came with many new functions, and performance increased considerably.

Search engines were also known as some of the brightest stars in the Internet investing frenzy of the late 1990s. Several companies entered the market spectacularly, recording record gains during their initial public offerings. Some have taken their public search engine offline entirely and now market enterprise-only editions, such as Northern Light, which used to be one of the eight or nine early search engines that appeared after Lycos came out.

Before the advent of the Web, there were search engines for other protocols or uses, such as the Archie search engine for anonymous FTP sites and the Veronica search engine for the Gopher protocol.

Osmar R. Zaïane's From Resource Discovery to Knowledge Discovery on the Internet details the history of search engine technology prior to the emergence of Google.

Recent additions to the list of search engines include a9.com, AlltheWeb, Ask Jeeves, Clusty, Gigablast, Teoma, Wisenut, GoHook, Kartoo, and Vivisimo.

Google

Around 2001, the Google search engine rose to prominence. Its success was based in part on the concept of link popularity and PageRank. Each page is ranked by how many pages link to it, on the premise that good or desirable pages are linked to more than others. The PageRank of linking pages and the number of links on these pages contribute to the PageRank of the linked page. This makes it possible for Google to order its results by how many web sites link to each found page. Google's minimalist user interface was very popular with users, and has since spawned a number of imitators.
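
The link-popularity idea can be sketched in a few lines of Python: a page's score is repeatedly redistributed over its outgoing links, with a damping term, until the ranks settle. The tiny example graph, the damping factor of 0.85, and the fixed iteration count are illustrative assumptions, not Google's actual parameters or data.

```python
# Minimal sketch of the link-popularity (PageRank-style) idea:
# each page passes a share of its rank to the pages it links to.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}      # start with uniform ranks
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue                              # dangling page: passes nothing on
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share             # each link passes on rank
        rank = new_rank
    return rank

graph = {
    "A": ["B", "C"],   # A links to B and C
    "B": ["C"],
    "C": ["A"],
}
print(pagerank(graph))  # C ends up ranked highest: it receives the most links
```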

Researchers at NEC Research Institute claim to have improved upon Google's patented PageRank technology by using web crawlers to find "communities" of websites. Instead of ranking pages, this technology uses an algorithm that follows links on a webpage to find other pages that link back to the first one and so on from page to page. The algorithm "remembers" where it has been and indexes the number of cross-links and relates these into groupings. In this way virtual communities of webpages are found.
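
A rough sketch of that backlink idea, under the assumption that a "cross-link" means a reciprocated link, is shown below: only links that point back are kept, and pages connected through such mutual links are grouped into communities. The sample graph and the grouping rule are illustrative; they are not NEC's published algorithm.

```python
# Rough sketch: keep only reciprocated links, then group pages that are
# connected through such mutual links (connected components).
def mutual_link_communities(links):
    """links: dict page -> set of pages it links to."""
    # An undirected edge exists only where both pages link to each other.
    mutual = {p: {q for q in links.get(p, set()) if p in links.get(q, set())}
              for p in links}
    seen, groups = set(), []
    for start in links:
        if start in seen:
            continue
        group, stack = set(), [start]
        while stack:                      # simple depth-first traversal
            page = stack.pop()
            if page in group:
                continue
            group.add(page)
            stack.extend(mutual[page] - group)
        seen |= group
        groups.append(group)
    return groups

graph = {
    "A": {"B"}, "B": {"A", "C"}, "C": {"B"},   # A, B, C cross-link one another
    "D": {"A"},                                # D links to A, but A does not link back
}
print(mutual_link_communities(graph))          # [{'A', 'B', 'C'}, {'D'}]
```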

Challenges Faced by Search Engines

  • The web is growing much faster than any present-technology search engine can possibly index (see distributed crawling).
  • Many web pages are updated frequently, which forces the search engine to revisit them periodically.
  • The queries one can make are currently limited to searching for key words, which may result in many false positives.
  • Dynamically generated sites may be slow or difficult to index, or may produce an excessive number of results from a single site.
  • Many dynamically generated sites are not indexable by search engines; this phenomenon is known as the invisible web.
  • Some search engines do not order the results by relevance, but rather according to how much money the sites have paid them.
  • Some sites use tricks to manipulate the search engine to display them as the first result returned for some keywords. This can lead to some search results being polluted, with more relevant links being pushed down in the result list.

See Also

External Links

Links to Search Engines

  • Google (http://www.google.com)
  • Yooci (http://www.yooci.com)
  • giveRAMP (http://www.giveramp.com)
  • Yahoo! (http://www.yahoo.com)

MetaSearch Engines

A Short Tutorial on MetaSearching