and 105GB. We need to remember that these initial sizes are collective overall figures, so
we need to divide them among the six files. Set the growth increment to 20 to 25 percent
of the initial sizes to minimize the number of SQL Server autogrow operations. Remember
that we need to maintain the size of the data and log files manually (say every six months,
with monitoring every month); we should use autogrowth only for emergencies.
• If the expected daily load is 5GB to 10GB and we keep five days of data on the stage, set
the stage database’s initial size to 50GB with a growth increment of 10GB. For the
metadata database, we expect it to be about 10GB in a year and 20GB in two years, so
let’s set the initial size to 10GB with a 5GB increment. The principle in setting the
increment is that it is only for emergencies; we should never have to use autogrowth,
because we increase the file size manually. The increment for the stage database above
was set to 20 to 25 percent (10GB of 50GB), as with the DDS and the NDS. This large
percentage is chosen to minimize fragmentation if the database file does become full.
The increment for the metadata database is set to 50 percent (5GB) because the metadata
database contains audit and usage metadata that could fluctuate by significant amounts
depending on the ETL processes. The file-sizing sketch after this list shows these
settings in T-SQL.
• The log file size depends on the size of the daily load, the recovery model, and the loading
method (ETL or ELT, with or without a stage; I’ll discuss this in the next chapter), as well
as index operations. Let’s set it to a 1GB initial size with a 512MB increment for both the
DDS and the NDS. For the stage, set a 2GB initial size with a 512MB increment. For
metadata, set it to 100MB with a 25MB increment. The transaction log contains the
database changes, so the transaction log space required depends on how much data we
load into the database. One way to estimate how much log space is required is to use the
ETL processes to load one day’s data into the stage and then into the NDS and then into
the DDS. If we set the initial size and the increment of these three databases’ log files to
a small amount (say 1MB to 2MB), the log files will grow during this process, so after the
ETL processes are completed, the transaction log sizes of these three databases will
indicate the required log sizes. The log-sizing sketch after this list shows these settings
and how to read the resulting log sizes.
• For the recovery model, choose simple rather than full or bulk-logged. All the changes in
the data warehouse come from the ETL processes. When recovering from a failure, we can
roll forward using ETL by reapplying the extracted source system data for the particular
day. We don’t need the full or bulk-logged recovery model to roll forward to a certain
point in time using differential or transaction log backups. We fully control the data
upload in the ETL, and the ETL process is the only process that updates the data
warehouse. For the stage, the NDS, and the DDS, the full recovery model is not necessary
and causes overhead: it requires log backups, whereas the simple recovery model doesn’t,
and the simple recovery model reclaims the log space automatically. The full recovery
model is suitable for OLTP systems, where inserts and updates happen frequently
throughout the day from many users and we want to be able to recover the database to a
certain point in time. In data warehousing, we can recover a data store by restoring the
last full backup, applying the differential backups, and then applying the daily ETL loads.
The recovery-model sketch after this list shows the corresponding statements.
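
The file-sizing sketch referred to in the list follows. It is only an illustration: the database
names (Stage, Meta, NDS) and logical file names are assumptions, and this excerpt does not
show which database the 105GB collective figure belongs to, so NDS is used purely as a
placeholder; substitute the names and sizes from your own environment. Note that
ALTER DATABASE ... MODIFY FILE can only increase a file’s size.

-- Stage database: 50GB initial size, 10GB growth increment (for emergencies only).
ALTER DATABASE Stage MODIFY FILE (NAME = N'Stage_Data', SIZE = 50GB, FILEGROWTH = 10GB);

-- Metadata database: 10GB initial size, 5GB (50 percent) growth increment.
ALTER DATABASE Meta MODIFY FILE (NAME = N'Meta_Data', SIZE = 10GB, FILEGROWTH = 5GB);

-- DDS and NDS data files: give each file its share of the collective figure
-- (for example, 105GB across six files is 17.5GB, or 17920MB, per file) with a
-- 20 to 25 percent growth increment; repeat for each of the six files.
ALTER DATABASE NDS MODIFY FILE (NAME = N'NDS_Data1', SIZE = 17920MB, FILEGROWTH = 25%);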
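
The log-sizing sketch follows. It sets the log sizes from the list and then reads the log
sizes after the trial one-day ETL load, using DBCC SQLPERF(LOGSPACE) or sys.database_files;
the logical log file names are again assumptions.

-- Log files: DDS and NDS 1GB with a 512MB increment, stage 2GB with a 512MB
-- increment, metadata 100MB with a 25MB increment.
ALTER DATABASE DDS   MODIFY FILE (NAME = N'DDS_Log',   SIZE = 1GB,   FILEGROWTH = 512MB);
ALTER DATABASE NDS   MODIFY FILE (NAME = N'NDS_Log',   SIZE = 1GB,   FILEGROWTH = 512MB);
ALTER DATABASE Stage MODIFY FILE (NAME = N'Stage_Log', SIZE = 2GB,   FILEGROWTH = 512MB);
ALTER DATABASE Meta  MODIFY FILE (NAME = N'Meta_Log',  SIZE = 100MB, FILEGROWTH = 25MB);

-- To estimate the required log space, start the stage, NDS, and DDS with small
-- log files, run one day's ETL load, and then see how far each log has grown.
DBCC SQLPERF(LOGSPACE);                  -- log size and percent used, all databases
SELECT name, size * 8 / 1024 AS size_MB  -- or per file, from within the current database
FROM sys.database_files
WHERE type_desc = 'LOG';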
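
The recovery-model sketch follows. It applies the simple recovery model to the stage, the
NDS, and the DDS, as described in the last bullet, and then verifies the result; again, the
database names are assumptions.

-- Simple recovery model: log space is reclaimed automatically and no log
-- backups are required.
ALTER DATABASE Stage SET RECOVERY SIMPLE;
ALTER DATABASE NDS   SET RECOVERY SIMPLE;
ALTER DATABASE DDS   SET RECOVERY SIMPLE;

-- Verify the current recovery model of each database.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name IN (N'Stage', N'NDS', N'DDS');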