[Translation] Why you should not shrink your data files
Recently I have seen many articles about shrinking SQL Server data files, and I had been planning to write an article about shrinking the log, but I suddenly had the impulse to translate a classic article I had read: Paul Randal's "Why You Should Not Shrink Your Data Files". For passages that are difficult to translate clearly, I will post the original text. Well, no more rambling; here is the translation.
One of my biggest hot-button issues is shrinking data files. Although I wrote the data file shrink code back when I was at Microsoft, I never had the chance to rewrite it to make it a more palatable operation. I really don't like shrink.
Now, don't confuse shrinking the transaction log file with shrinking data files. Shrinking the log may be necessary when a transaction log file has grown out of control, or to remove excessive VLF fragmentation (see Kimberly's excellent articles here and here). However, shrinking the transaction log should be a rare operation and should not be part of any regular maintenance plan you run.
Shrinking data files should be performed even more rarely. Here's why: a data file shrink causes massive index fragmentation. Let me demonstrate with a simple script you can run yourself. The script below will create a data file, create a 10MB "filler" table, create a 10MB "production" table with a clustered index, and then analyze the fragmentation of the new clustered index.
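A sketch of the kind of demo script the article describes. This is my own reconstruction, not necessarily the author's exact script; the names DBMaint2008, FillerTable, ProdTable, and prod_cl are placeholders I chose.

```sql
USE master;
GO
CREATE DATABASE DBMaint2008;
GO
USE DBMaint2008;
GO
-- A ~10MB "filler" table at the front of the data file
CREATE TABLE FillerTable (c1 INT IDENTITY, c2 CHAR(8000) DEFAULT 'filler');
INSERT INTO FillerTable DEFAULT VALUES;
GO 1250  -- repeat the batch 1250 times: 1250 x 8KB pages is about 10MB

-- A ~10MB "production" table with a clustered index
CREATE TABLE ProdTable (c1 INT IDENTITY, c2 CHAR(8000) DEFAULT 'production');
CREATE CLUSTERED INDEX prod_cl ON ProdTable (c1);
INSERT INTO ProdTable DEFAULT VALUES;
GO 1250

-- Analyze logical fragmentation of the new clustered index (index_id = 1)
SELECT avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
    DB_ID('DBMaint2008'), OBJECT_ID('ProdTable'), 1, NULL, 'LIMITED');
GO
```

The `GO n` batch-repeat syntax works in SSMS and sqlcmd; with a different client you would loop the inserts instead.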
The results are as follows:
Before the data file shrink, the logical fragmentation of the clustered index is close to 0.4%. [Translator's note: my test result was 0.54%, as shown above, which is also close to 0.4%.]
Now I delete the "filler" table, run a data file shrink command, and re-analyze the fragmentation of the clustered index.
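That step might look like the following sketch, assuming a demo setup with the placeholder names FillerTable, ProdTable, and DBMaint2008 (here also assumed to be the logical name of the data file):

```sql
-- Remove the filler table, leaving a ~10MB hole at the front of the file
DROP TABLE FillerTable;
GO
-- Shrink the data file as far as possible (target size 0 MB);
-- this is what moves the production table's pages around
DBCC SHRINKFILE (DBMaint2008, 0);
GO
-- Re-check logical fragmentation of the clustered index
SELECT avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
    DB_ID('DBMaint2008'), OBJECT_ID('ProdTable'), 1, NULL, 'LIMITED');
GO
```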
Here are my execution results:
Wow! After the shrink, the logical fragmentation is almost 100%. The shrink operation *completely* fragmented the index, removing any chance of efficient range scans on it by ensuring that all range-scan readahead I/Os will be single-page I/Os.
Wow, really terrible! After the data file shrink, the logical fragmentation of the index is nearly 100%. Shrinking the data file completely fragmented the index, eliminating any chance of efficient range scans on it by ensuring that all range-scan readahead I/Os will be single-page I/O operations.
How does this happen? A data file shrink operation works on a single file at a time. It uses the GAM bitmaps to find the highest allocated page in the data file, then moves it as far toward the front of the file as it can, and so on, page by page. In the example above, this completely reversed the order of the clustered index, taking it from perfectly defragmented to perfectly fragmented.
The same code underlies DBCC SHRINKFILE, DBCC SHRINKDATABASE, and auto-shrink, so they are all equally bad. Besides fragmenting the indexes, a data file shrink also generates a lot of I/O, consumes a lot of CPU, and generates *loads* of transaction log, because everything it does is fully logged.
Data file shrink should never be part of regular maintenance, and you should never enable the "auto-shrink" database option. I tried to have it removed from the product for SQL 2005 and SQL 2008; the only reason it still exists is backward compatibility. Don't fall into this trap: creating a maintenance plan that rebuilds all the indexes and then shrinks the data files to reclaim the space the rebuilds used. That is a zero-sum game in which all you do is generate a lot of transaction log for no actual gain in performance.
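To find databases with auto-shrink enabled and turn it off, something like the following works (YourDatabase is a placeholder name):

```sql
-- List databases that currently have auto-shrink enabled
SELECT name
FROM sys.databases
WHERE is_auto_shrink_on = 1;

-- Disable auto-shrink on a specific database
ALTER DATABASE [YourDatabase] SET AUTO_SHRINK OFF;
```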
So why would you ever run a shrink? For example, if you've deleted a considerable proportion of a very large database and the database is unlikely to grow, or if you need to empty a data file before removing it?
I would like to recommend the following method:
- Create a new filegroup
- Move all affected tables and indexes into the new filegroup using the CREATE INDEX ... WITH (DROP_EXISTING = ON) syntax, moving the tables and removing fragmentation from them at the same time
- Drop the old filegroup that you were going to shrink anyway (or shrink it way down if it's the primary filegroup)
Basically you need to provision some more space before you can shrink the old files, but it's a much cleaner mechanism.
The method I like to recommend is as follows:
- Create a new filegroup
- Move all affected tables and indexes into the new filegroup using the CREATE INDEX … WITH (DROP_EXISTING = ON) ON syntax, to move the tables and remove fragmentation from them at the same time
- Drop the old filegroup that you were going to shrink anyway (or shrink it way down if it's the primary filegroup)
Basically you need to provision some more space before you can shrink the old files, but it’s a much cleaner mechanism.
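A sketch of that mechanism, where MyDB, NewFG, NewFGFile, ProdTable, prod_cl, and the file path are all placeholder names of my own, not from the article:

```sql
-- 1. Create a new filegroup with its own data file
ALTER DATABASE MyDB ADD FILEGROUP NewFG;
ALTER DATABASE MyDB ADD FILE (
    NAME = NewFGFile,
    FILENAME = 'C:\SQLData\MyDB_NewFG.ndf',
    SIZE = 500MB
) TO FILEGROUP NewFG;
GO

-- 2. Rebuild the clustered index onto the new filegroup;
--    DROP_EXISTING = ON replaces the old index in one operation,
--    and the trailing ON clause targets the new filegroup.
--    Moving the clustered index moves the table's data with it.
CREATE CLUSTERED INDEX prod_cl
ON ProdTable (c1)
WITH (DROP_EXISTING = ON)
ON NewFG;
GO
```

Repeat step 2 for each affected table, then drop (or shrink way down) the old filegroup's files.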
If you have no choice and must run a data file shrink, be aware that the operation will cause index fragmentation, and you should take steps afterwards to remove it, or it may cause performance problems. The only way to remove index fragmentation without causing data file growth again is to use DBCC INDEXDEFRAG or ALTER INDEX ... REORGANIZE. These commands require only a single extra 8KB page of space, instead of building a whole new index as an index rebuild operation does.
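The cleanup after an unavoidable shrink might look like this (ProdTable and prod_cl are placeholder names):

```sql
-- REORGANIZE compacts and reorders leaf pages in place, needing only
-- minimal workspace, so it won't grow the file the way REBUILD can.
ALTER INDEX prod_cl ON ProdTable REORGANIZE;

-- Older, deprecated equivalent (0 = current database):
-- DBCC INDEXDEFRAG (0, 'ProdTable', 'prod_cl');
```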
The bottom line: avoid running data file shrink at all costs!
------------------------------------------------ dividing line ----------------------------------------
So, to those friends who still regularly shrink data files as part of their work, or who have the "auto-shrink" option enabled on their databases, please correct your mistake!
Please support the original author, and I hope everyone will also support my hard work on this translation: when reposting, please include a link to Xiaoxiang Hermit's blog.
Posted by Quintion at November 22, 2013 - 8:43 PM