Systems Engineering and RDBMS

Auto-Shrink Issues

Posted by decipherinfosys on November 23, 2007

This is one of the most common problems we have seen in our client engagements: the database files are not sized appropriately and the client relies on the auto-grow feature instead – this can cause several issues, which we had discussed here.

In almost all of those implementations, we had also seen that the client had enabled the “Auto-Shrink” feature in order to get a handle on the file sizes. The combination of auto-grow and auto-shrink leads to a multitude of issues. Besides the ones mentioned in the link above, either operation can also kick in right in the middle of production hours. In this post, we will cover another reason why auto-shrink should not be used on any of your production databases: it causes index fragmentation. Let’s see that with an example:

We will create a brand new database for this exercise, create a dummy table and an actual table in it, and populate both with a bunch of data.  The reason for creating the dummy table is that we can then drop it to free up space, so that when we manually shrink the database we can show how the fragmentation takes place.  The dummy table and the actual table are identical in structure.

USE MASTER;
GO

CREATE DATABASE DEC_TEST_SHRINK;
GO

ALTER DATABASE DEC_TEST_SHRINK SET AUTO_SHRINK OFF;
GO

/********************************
Switch to the demo database
*********************************/
USE DEC_TEST_SHRINK;
GO

/********************************
Hide the messages
*********************************/
SET NOCOUNT ON;
GO

/**************************************************************************************************
Let us now create a table and populate it with records – we will fill it enough so
that dropping it later frees up space and crosses the threshold for the shrink
***************************************************************************************************/

CREATE TABLE TEST_TBL_DROP (COL1 INT IDENTITY NOT NULL, COL2 NVARCHAR(4000) NOT NULL)
GO

/*Now, run the script to populate the data for 10000 records*/

DECLARE @I INT;
SELECT @I = 1;
WHILE (@I <= 10000)
BEGIN
INSERT INTO TEST_TBL_DROP (COL2) VALUES (REPLICATE ('X', 1000));
SELECT @I = @I + 1;
END;
GO

/**************************************************************************************************
Create and populate the test table which is the actual table we will check the fragmentation on
***************************************************************************************************/
CREATE TABLE TEST_TBL (COL1 INT IDENTITY NOT NULL, COL2 NVARCHAR(4000) NOT NULL)
GO
/*Create the clustered index*/
CREATE UNIQUE CLUSTERED INDEX TEST_TBL_IND_1 ON TEST_TBL (COL1)
GO

/*Now, run the same script to populate it with the data for 10000 records*/

DECLARE @I INT;
SELECT @I = 1;
WHILE (@I <= 10000)
BEGIN
INSERT INTO TEST_TBL (COL2) VALUES (REPLICATE ('X', 1000));
SELECT @I = @I + 1;
END;
GO

/* Now, let us check the fragmentation level BEFORE shrinking the database */
SELECT
avg_fragmentation_in_percent,
fragment_count
FROM sys.dm_db_index_physical_stats (DB_ID ('DEC_TEST_SHRINK'), OBJECT_ID ('TEST_TBL'), 1, NULL, 'LIMITED');
GO

avg_fragmentation_in_percent fragment_count
---------------------------- --------------
0.36                         10

As expected, the fragmentation is pretty low right now on TEST_TBL.

/*Now, drop the dummy table to free up the space*/
DROP TABLE TEST_TBL_DROP
GO

/* Now, let’s just use the SHRINKDATABASE command to shrink the database*/
DBCC SHRINKDATABASE (DEC_TEST_SHRINK);
GO

DbId FileId CurrentSize MinimumSize UsedPages EstimatedPages
---- ------ ----------- ----------- --------- --------------
10   1      2656        144         2648      2648
10   2      63          63          56        56

/* Now, let us check the fragmentation level again AFTER shrinking the database */
SELECT
avg_fragmentation_in_percent,
fragment_count
FROM sys.dm_db_index_physical_stats (DB_ID ('DEC_TEST_SHRINK'), OBJECT_ID ('TEST_TBL'), 1, NULL, 'LIMITED');
GO

avg_fragmentation_in_percent fragment_count
---------------------------- --------------
99.96                        2500

As you can see from above, after the shrinking of the database the fragmentation went from 0.36% to 99.96%. The shrink operation starts at the very end of the data file, picks up one page at a time, and moves it to the first free space it finds below the target shrink threshold. In doing so, it reverses the physical order of the pages that make up the leaf level of the index. That is what causes the fragmentation.
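If you do have to shrink a database (say, after a one-time archival purge), the fragmentation that the shrink introduces can be corrected afterwards by rebuilding the affected indexes – keeping in mind that the rebuild itself needs free space and will grow the file again. A sketch for our demo table:

/* Rebuild the clustered index to undo the fragmentation caused by the shrink */
ALTER INDEX TEST_TBL_IND_1 ON TEST_TBL REBUILD;
GO

/* Verify the fragmentation level again */
SELECT
avg_fragmentation_in_percent,
fragment_count
FROM sys.dm_db_index_physical_stats (DB_ID ('DEC_TEST_SHRINK'), OBJECT_ID ('TEST_TBL'), 1, NULL, 'LIMITED');
GO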

So, in summary, here are the problems that auto-shrink can cause:

1) It could kick in at peak production times, causing I/O and CPU contention issues.
2) When it kicks in, it can easily flush your pages out of the buffer pool, causing performance issues – it can also result in I/O timeouts.
3) When used in combination with auto-grow, it works like a see-saw: any operational database requires free space, so auto-shrink releases the space, the file then has to grow again, auto-shrink releases it again, and so on and so forth.
4) It causes fragmentation, which you then need to fix in order to avoid performance issues.

So, our recommendation is to never use this option for your production environments.
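As a quick check, you can list the databases that currently have this option enabled and turn it off – a minimal sketch using the sys.databases catalog view (SQL Server 2005 and up):

/* Find all databases with auto-shrink enabled */
SELECT name
FROM sys.databases
WHERE is_auto_shrink_on = 1;
GO

/* Turn the option off for a given database */
ALTER DATABASE DEC_TEST_SHRINK SET AUTO_SHRINK OFF;
GO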
