Googlebot is Google's web crawling bot (sometimes also called a "spider"). Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index.

We use a huge set of computers to fetch (or "crawl") billions of pages on the web. Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site.

Googlebot's crawl process begins with a list of webpage URLs, generated from previous crawls and augmented with Sitemap data provided by webmasters. As Googlebot visits each of these websites, it detects links (in SRC and HREF attributes) on each page and adds them to its list of pages to crawl. New sites, changes to existing sites, and dead links are noted and used to update the Google index.
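The crawl-frontier idea described above (extract HREF and SRC links from each page, add unseen ones to the list of pages to crawl) can be sketched roughly like this. This is an illustrative toy, not Google's implementation; the class and function names are invented for the example.

```python
# Sketch of a crawl frontier: parse a fetched page, collect HREF/SRC links,
# and append links not seen before to the list of pages to crawl.
from html.parser import HTMLParser
from urllib.parse import urljoin


class LinkExtractor(HTMLParser):
    """Collects the href/src attribute values found in a page."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                # Resolve relative links against the page's own URL.
                self.links.append(urljoin(self.base_url, value))


def extend_frontier(frontier, seen, page_url, page_html):
    """Add newly discovered links to the crawl list, skipping known URLs."""
    parser = LinkExtractor(page_url)
    parser.feed(page_html)
    for link in parser.links:
        if link not in seen:
            seen.add(link)
            frontier.append(link)
```

A real crawler would additionally fetch pages over the network, respect robots.txt, and rate-limit per host; this only shows the link-discovery step.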

For webmasters: Googlebot and your site

How Googlebot accesses your site

For most sites, Googlebot shouldn't access your site more than once every few seconds on average. However, due to network delays, it's possible that the rate will appear to be slightly higher over short periods. In general, Googlebot should download only one copy of each page at a time. If you see that Googlebot is downloading a page multiple times, it's probably because the crawler was stopped and restarted.

Googlebot was designed to be distributed on several machines to improve performance and scale as the web grows. Also, to cut down on bandwidth usage, we run many crawlers on machines located near the sites they're indexing. Therefore, your logs may show visits from several machines at google.com, all with the user-agent Googlebot. Our goal is to crawl as many pages from your site as we can on each visit without overwhelming your server's bandwidth. You can request a change in the crawl rate.

Blocking Googlebot from content on your site

It's almost impossible to keep a web server secret by not publishing links to it. As soon as someone follows a link from your "secret" server to another web server, your "secret" URL may appear in the Referer header and can be stored and published by the other web server in its referrer log. Similarly, the web has many outdated and broken links. Whenever someone publishes an incorrect link to your site or fails to update links to reflect changes on your server, Googlebot will try to crawl that incorrect URL on your site.

If you want to prevent Googlebot from crawling content on your site, you have a number of options, including using robots.txt to block access to files and directories on your server.

Once you've created your robots.txt file, there may be a small delay before Googlebot discovers your changes. If Googlebot is still crawling content you've blocked in robots.txt, check that the robots.txt is in the correct location. It must be in the top directory of the server (e.g., www.myhost.com/robots.txt); placing the file in a subdirectory won't have any effect.
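One convenient way to sanity-check robots.txt rules before deploying them is Python's standard urllib.robotparser module. The rules below are example content for a hypothetical site, not a recommendation:

```python
# Check how a crawler identifying as "Googlebot" would interpret a set of
# robots.txt rules, using Python's standard-library parser.
from urllib.robotparser import RobotFileParser

rules = [
    "User-agent: Googlebot",
    "Disallow: /private/",
]

parser = RobotFileParser()
parser.parse(rules)

# A URL under the disallowed directory is blocked; others are allowed.
print(parser.can_fetch("Googlebot", "http://www.myhost.com/private/page.html"))  # False
print(parser.can_fetch("Googlebot", "http://www.myhost.com/public/page.html"))   # True
```

Note that this checks parsing logic only; it does not verify that the file is actually reachable at the top directory of your server.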

If you just want to prevent the "file not found" error messages in your web server log, you can create an empty file named robots.txt. If you want to prevent Googlebot from following any links on a page of your site, you can use the nofollow meta tag. To prevent Googlebot from following an individual link, add the rel="nofollow" attribute to the link itself.
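In markup, the two forms described above look like this (the URL and link text are placeholders):

```html
<!-- Page-level: ask crawlers not to follow any links on this page -->
<meta name="robots" content="nofollow">

<!-- Link-level: ask crawlers not to follow this one link -->
<a href="http://www.example.com/signin" rel="nofollow">Sign in</a>
```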

Here are some additional tips:

Test that your robots.txt is working as expected. The Test robots.txt tool on the Blocked URLs page (under Health) lets you see exactly how Googlebot will interpret the contents of your robots.txt file. The Google user-agent is (appropriately enough) Googlebot.
The Fetch as Google tool in Webmaster Tools helps you understand exactly how your site appears to Googlebot. This can be very useful when troubleshooting problems with your site's content or discoverability in search results.
Making sure your site is crawlable

Googlebot discovers sites by following links from page to page. The Crawl errors page in Webmaster Tools lists any problems Googlebot found when crawling your site. We recommend reviewing these crawl errors regularly to identify any problems with your site.

If you're running an AJAX application with content that you'd like to appear in search results, we recommend reviewing our proposal on making AJAX-based content crawlable and indexable.

If your robots.txt file is working as expected, but your site isn't getting traffic, here are some possible reasons why your content is not performing well in search.

Problems with spammers and other user-agents

The IP addresses used by Googlebot change from time to time. The best way to identify accesses by Googlebot is to use the user-agent (Googlebot). You can verify that a bot accessing your server really is Googlebot by using a reverse DNS lookup.
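The reverse DNS check can be sketched as follows: look up the hostname for the visiting IP, check that it is a Google hostname, then forward-resolve the hostname and confirm it points back to the same IP. The function names are illustrative, and the full check requires live DNS access to run:

```python
# Sketch of reverse-then-forward DNS verification for a claimed Googlebot
# visit. Googlebot hosts resolve to names under googlebot.com or google.com.
import socket


def is_google_hostname(hostname):
    """True if the hostname belongs to Google's crawler domains."""
    return hostname.endswith(".googlebot.com") or hostname.endswith(".google.com")


def verify_googlebot(ip_address):
    """Verify a visiting IP via reverse DNS plus forward confirmation."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip_address)  # reverse lookup
    except socket.herror:
        return False
    if not is_google_hostname(hostname):
        return False
    # Forward-confirm: the hostname must resolve back to the same IP,
    # otherwise the reverse record could be spoofed.
    try:
        return socket.gethostbyname(hostname) == ip_address
    except socket.gaierror:
        return False
```

The forward-confirmation step matters because anyone who controls reverse DNS for their own IP range can make it return a Google-looking name; only Google controls what those names resolve back to.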

Googlebot and all respectable search engine bots will respect the directives in robots.txt, but some nogoodniks and spammers do not. Report spam to Google.

Google has several other user-agents, including Feedfetcher (user-agent Feedfetcher-Google). Since Feedfetcher requests come from explicit action by human users who have added the feeds to their Google home page or to Google Reader, and not from automated crawlers, Feedfetcher does not follow robots.txt guidelines. You can prevent Feedfetcher from crawling your site by configuring your server to serve a 404, 410, or other error status message to user-agent Feedfetcher-Google. More information about Feedfetcher.
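The blocking approach described above amounts to choosing a response status based on the User-Agent header. A minimal sketch of that request-handling logic (illustrative only, not a drop-in configuration for any particular web server):

```python
# Decide which HTTP status to serve based on the request's User-Agent,
# answering Feedfetcher requests with an error status as described above.

def status_for_request(user_agent, default_status=200):
    """Return the HTTP status code to serve for a given User-Agent."""
    if user_agent and "Feedfetcher-Google" in user_agent:
        return 404  # or 410 ("Gone") to signal permanent removal
    return default_status
```

In practice you would express the same rule in your web server's own configuration (for example, a rewrite or access rule keyed on the User-Agent header) rather than in application code.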