I. INTRODUCTION
The self-organizing map (SOM), proposed by Kohonen [1], has been widely used in industrial applications such as pattern recognition, biological modeling, data compression, signal processing, and data mining [2]. It is an unsupervised, nonparametric neural network approach. The success of the SOM algorithm lies in its simplicity, which makes it easy to understand, to simulate, and to use in many applications.
The basic SOM consists of neurons usually arranged in a two-dimensional structure such that there are neighborhood relations among the neurons. After training is completed, each neuron is attached to a feature vector of the same dimension as the input space. By assigning each input vector to the neuron with the nearest feature vector, the SOM divides the input space into regions (clusters) that share a common nearest feature vector. This process can be considered as performing vector quantization (VQ) [3].
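For illustration, this assignment step can be sketched in a few lines of Python; the function name and array shapes below are illustrative, not taken from the paper:

    import numpy as np

    def map_to_neurons(X, W):
        # X: (n, d) array of input vectors; W: (m, d) array of neuron feature vectors.
        # Returns, for each input, the index of the nearest feature vector,
        # i.e., the vector-quantization assignment described above.
        dists = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
        return np.argmin(dists, axis=1)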
Moreover, because of the neighborhood relations contributed by the interconnections among neurons, the SOM exhibits another important property: topology preservation.
Clustering algorithms attempt to organize unlabeled input vectors into clusters such that points within a cluster are more similar to each other than to vectors belonging to different clusters [4]. Clustering methods fall into five types: hierarchical, partitioning, density-based, grid-based, and model-based clustering [5].
In this paper, a new two-level clustering algorithm is proposed. The idea is that the first level trains the data with the SOM neural network, and the second level applies a rough-set-based incremental clustering approach [6] to the output of the SOM, requiring only a single scan of the neurons. The optimal number of clusters can be found by rough set theory, which groups the given neurons into a set of possibly overlapping clusters (and thereby clusters the mapped data as well).
This paper is organized as follows. Section II outlines the basics of the SOM algorithm. Section III describes the basics of incremental clustering and the rough-set-based approach. Section IV presents the proposed algorithm. Section V is dedicated to experimental results, and Section VI provides a brief conclusion and future work.
II. SELF-ORGANIZING MAP AND CLUSTERING
Competitive learning is an adaptive process in which the neurons in a neural network gradually become sensitive to different input categories, that is, to sets of samples in specific domains of the input space. After training, a division of the neural nodes emerges in the network to represent different patterns of the inputs.
The division is enforced by competition among the neurons: when an input x arrives, the neuron that is best able to represent it wins the competition and is allowed to learn it even better. If there exists an ordering among the neurons, i.e., the neurons are located on a discrete lattice, the competitive learning algorithm can be generalized: not only the winning neuron but also its neighboring neurons on the lattice are allowed to learn. The overall effect is that the final map becomes an ordered map in the input space. This is the essence of the SOM algorithm.
The SOM consists of m neurons located on a regular low-dimensional grid, usually one- or two-dimensional. The lattice of the grid is either hexagonal or rectangular.
The basic SOM algorithm is iterative. Each neuron i has a d-dimensional feature vector w_i = [w_{i1}, \dots, w_{id}]. At each training step t, a sample data vector x(t) is randomly chosen from the training set. The distances between x(t) and all feature vectors are computed. The winning neuron, denoted by c, is the neuron whose feature vector is closest to x(t):

c = \arg\min_{i} \|x(t) - w_i\|, \quad i \in \{1, \dots, m\}.  (1)
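In code, (1) is a single nearest-neighbor query. A minimal NumPy sketch, with illustrative names, is:

    import numpy as np

    def winner(x, W):
        # Equation (1): c = argmin_i ||x(t) - w_i||, i in {1, ..., m}.
        # W is an (m, d) array of feature vectors; x is the d-dimensional sample x(t).
        return int(np.argmin(np.linalg.norm(W - x, axis=1)))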
A set of neighboring nodes of the winning node is denoted as N_c. We define h_{ic}(t) as the neighborhood kernel function around the winning neuron c at time t. The neighborhood kernel function is a non-increasing function of time and of the distance of neuron i from the winning neuron c. The kernel can be taken as a Gaussian function:

h_{ic}(t) = \exp\left(-\frac{\|r_c - r_i\|^2}{2\sigma^2(t)}\right),  (2)

where r_i and r_c are the positions of neurons i and c on the grid and \sigma(t) is the neighborhood radius, which decreases monotonically with time.
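A minimal sketch of the kernel in (2) follows, together with the standard Kohonen update step that uses it; the learning rate alpha(t) belongs to the standard formulation [1] and is assumed here, as it is not defined in the excerpt above, and the winner computation of (1) is repeated inline so the sketch is self-contained:

    import numpy as np

    def gaussian_kernel(grid_pos, c, sigma):
        # grid_pos: (m, 2) array of neuron positions r_i on the lattice.
        # Equation (2): h_ic(t) = exp(-||r_c - r_i||^2 / (2 * sigma(t)^2)).
        d2 = np.sum((grid_pos - grid_pos[c]) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def som_step(x, W, grid_pos, sigma, alpha):
        # One training iteration: pick the winner by (1), then move every
        # neuron toward x in proportion to its neighborhood kernel value.
        c = int(np.argmin(np.linalg.norm(W - x, axis=1)))
        h = gaussian_kernel(grid_pos, c, sigma)
        W += alpha * h[:, None] * (x - W)
        return W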