[Translation] Why you should not shrink your data files

Recently I have seen many articles about shrinking SQL Server data files, and I was getting ready to write one about shrinking the transaction log, when I suddenly had the urge to translate a classic article I had read. What follows is a translation of Paul Randal's "Why You Should Not Shrink Your Data Files". For the passages that are hard to translate clearly, I will include the original English text. Enough rambling; here is the translation.


One of my biggest hot-button issues is shrinking data files. Although I owned the data file shrink code while I was at Microsoft, I never had the chance to rewrite it to make it a more palatable operation. I really don't like shrink.

Now, don't confuse shrinking a transaction log file with shrinking a data file. Shrinking the log may be necessary when it has grown out of control, or to remove excessive VLF fragmentation (see Kimberly's excellent articles here and here). However, shrinking the log file should be a rare operation and should not be part of any regular maintenance you perform.
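[Translator's note: if you do find yourself needing a one-off log shrink, a minimal sketch is shown below; the database name and the logical log file name (YourDatabase, YourDatabase_log) are placeholders, not anything from the article.]

    USE [YourDatabase];
    GO
    -- Look at the current VLF layout: one row per virtual log file
    DBCC LOGINFO;
    GO
    -- One-off shrink of the log file back to a sensible fixed size (here 256MB);
    -- the logical file name is a placeholder, check sys.database_files for the real one
    DBCC SHRINKFILE (N'YourDatabase_log', 256);
    GO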

Shrinking data files should be performed even more rarely. Here is why: a data file shrink causes massive index fragmentation. Let me demonstrate with a simple script you can run yourself. The script below creates a data file, creates a 10MB "filler" table and a 10MB "production" table with a clustered index, and then analyzes the fragmentation of the new clustered index.

Code Snippet
    USE [master];
    GO
    IF DATABASEPROPERTYEX(N'DBMaint2008', N'Version') IS NOT NULL
        DROP DATABASE [DBMaint2008];
    GO
    CREATE DATABASE DBMaint2008;
    GO
    USE [DBMaint2008];
    GO
    SET NOCOUNT ON;
    GO
    -- Create the 10MB filler table at the 'front' of the data file
    CREATE TABLE [FillerTable](
        [c1] INT IDENTITY,
        [c2] CHAR (8000) DEFAULT 'filler');
    GO
    -- Fill up the filler table
    INSERT INTO [FillerTable] DEFAULT VALUES;
    GO 1280
    -- Create the production table, which will be 'after' the filler table in the data file
    CREATE TABLE [ProdTable](
        [c1] INT IDENTITY,
        [c2] CHAR (8000) DEFAULT 'production');
    CREATE CLUSTERED INDEX [prod_cl] ON [ProdTable]([c1]);
    GO
    INSERT INTO [ProdTable] DEFAULT VALUES;
    GO 1280
    -- Check the fragmentation of the production table
    SELECT
        [avg_fragmentation_in_percent]
    FROM sys.dm_db_index_physical_stats(
        DB_ID(N'DBMaint2008'), OBJECT_ID(N'ProdTable'), 1, NULL, 'LIMITED');
    GO

The results are as follows:


[image: avg_fragmentation_in_percent before the shrink, approximately 0.54%]

The logical fragmentation of the clustered index before the shrink is close to 0.4%. [Translator's note: my test result was 0.54%, as shown above, which is still roughly 0.4%.]


Now I'll drop the filler table, run a data file shrink, and re-analyze the fragmentation of the clustered index.

SQL Code Two
    -- Drop the filler table, creating 10MB of free space at the 'front' of the data file
    DROP TABLE [FillerTable];
    GO
    -- Shrink the database
    DBCC SHRINKDATABASE([DBMaint2008]);
    GO
    -- Check the index fragmentation again
    SELECT
        [avg_fragmentation_in_percent]
    FROM sys.dm_db_index_physical_stats(
        DB_ID(N'DBMaint2008'), OBJECT_ID(N'ProdTable'), 1, NULL, 'LIMITED');
    GO

Here are my execution results:

[image: avg_fragmentation_in_percent after the shrink, close to 100%]

The original:

Wow! After the shrink, the logical fragmentation is almost 100%. The shrink operation *completely* fragmented the index, removing any chance of efficient range scans on it by ensuring that all range-scan readahead I/Os will be single-page I/Os.

How does this happen? A data file shrink operation works on a single file at a time and uses the GAM bitmaps to find the highest page allocated in the file. It then moves that page as far towards the front of the file as it can, and repeats, page by page. In the example above, this completely reversed the order of the clustered index, taking it from perfectly defragmented to perfectly fragmented.
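[Translator's note: if you want to see the page movement for yourself, the undocumented but long-standing DBCC IND command lists every page an index uses; running it before and after the shrink shows the leaf pages of the clustered index being relocated towards the front of the file. A small sketch:]

    USE [DBMaint2008];
    GO
    -- List every page belonging to the clustered index (index_id = 1) of ProdTable.
    -- PagePID is the page's position in the file; NextPagePID/PrevPagePID give the
    -- logical order of the leaf level, so comparing the output before and after the
    -- shrink shows how the shrink reverses the physical order of the index.
    DBCC IND (N'DBMaint2008', N'ProdTable', 1);
    GO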

The same code is used by DBCC SHRINKFILE, DBCC SHRINKDATABASE, and auto-shrink, so they are all equally bad. As well as introducing index fragmentation, a data file shrink also generates a lot of I/O, uses a lot of CPU, and generates *loads* of transaction log, because everything it does is fully logged.
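[Translator's note: not part of the original article, but worth knowing: if all you need is to give unused space at the end of a data file back to the operating system, DBCC SHRINKFILE with the TRUNCATEONLY option releases that free tail without moving any pages, so it does not introduce the fragmentation described above. A minimal sketch against the demo database:]

    USE [DBMaint2008];
    GO
    -- TRUNCATEONLY only releases free space from the end of the file;
    -- no pages are relocated, so no fragmentation is introduced.
    -- 'DBMaint2008' is the default logical name of the primary data file here.
    DBCC SHRINKFILE (N'DBMaint2008', TRUNCATEONLY);
    GO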

A data file shrink should never be part of regular maintenance, and you should NEVER, NEVER have the "auto shrink" option enabled. I tried to have it removed from the product for SQL Server 2005 and SQL Server 2008; the only reason it still exists is backwards compatibility. Don't fall into this trap: creating a maintenance plan that rebuilds all the indexes and then tries to reclaim the space the rebuilds required by shrinking the data files. That is a zero-sum game in which all you do is generate a lot of transaction log without actually improving performance.
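[Translator's note: a quick way to act on that advice is to check which databases still have auto-shrink turned on and switch it off; the database name in the ALTER DATABASE line is only an example.]

    -- Find databases that still have auto-shrink enabled
    SELECT name
    FROM sys.databases
    WHERE is_auto_shrink_on = 1;
    GO
    -- Turn auto-shrink off for one of them (example name)
    ALTER DATABASE [DBMaint2008] SET AUTO_SHRINK OFF;
    GO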

So when would you want to run a shrink? For example, if you have deleted a considerable proportion of a very large database and the database is unlikely to grow, or if you need to empty out a data file before removing it?


The original:

The method I like to recommend is as follows:

Basically you need to provision some more space before you can shrink the old files, but it's a much cleaner mechanism.
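[Translator's note: the concrete steps of the recommended method are not reproduced above, but one approach that fits "provision some more space, then shrink the old files" is to create a new filegroup, move the table there by rebuilding its clustered index with DROP_EXISTING, and then shrink or remove the old file. A rough sketch against the demo database; the file path and sizes are placeholders:]

    USE [master];
    GO
    -- 1. Provision a new filegroup with its own data file (path and size are placeholders)
    ALTER DATABASE [DBMaint2008] ADD FILEGROUP [FG_New];
    ALTER DATABASE [DBMaint2008]
        ADD FILE (NAME = N'DBMaint2008_New',
                  FILENAME = N'C:\SQLData\DBMaint2008_New.ndf',
                  SIZE = 20MB)
        TO FILEGROUP [FG_New];
    GO
    USE [DBMaint2008];
    GO
    -- 2. Rebuild the clustered index onto the new filegroup, which moves the table
    --    there and removes its fragmentation in one operation
    CREATE CLUSTERED INDEX [prod_cl] ON [ProdTable]([c1])
        WITH (DROP_EXISTING = ON)
        ON [FG_New];
    GO
    -- 3. The old primary file is now mostly empty and can be shrunk right down
    DBCC SHRINKFILE (N'DBMaint2008', 5);
    GO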

If you really have no choice and must shrink a data file, be aware that the operation will cause index fragmentation, and you should take steps to remove that fragmentation afterwards if it is going to cause performance problems. The only way to remove the fragmentation without causing the data file to grow again is to use DBCC INDEXDEFRAG or ALTER INDEX ... REORGANIZE. These commands only need a single spare 8KB page of space, instead of building a whole new index the way an index rebuild operation does.
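[Translator's note: for the demo above, removing the shrink-induced fragmentation in place would look something like the following sketch, followed by re-checking the fragmentation:]

    USE [DBMaint2008];
    GO
    -- Reorganize compacts and reorders the leaf level in place,
    -- needing only a single spare 8KB page rather than space for a whole new index
    ALTER INDEX [prod_cl] ON [ProdTable] REORGANIZE;
    GO
    -- Verify that the fragmentation has dropped
    SELECT [avg_fragmentation_in_percent]
    FROM sys.dm_db_index_physical_stats(
        DB_ID(N'DBMaint2008'), OBJECT_ID(N'ProdTable'), 1, NULL, 'LIMITED');
    GO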

The bottom line: try to avoid running a data file shrink at all costs!

------------------------------------------------ dividing line ----------------------------------------

So, to those of you who still shrink data files as part of regular maintenance, or still have the "auto shrink" option enabled on your databases: please correct that mistake!

Please support the original author, and if you found this hard-won translation useful, please add a link back to the Xiaoxiang Hermit blog.

Posted by Quintion at November 22, 2013 - 8:43 PM