Systems Engineering and RDBMS

The 900 byte index limitation in SQL Server

Posted by decipherinfosys on November 6, 2007

One restriction that you might have faced at times with SQL Server is that the maximum size of an index key is 900 bytes. It is important to note that it is 900 bytes and not 900 characters. Here is an easy way to prove this:

CREATE TABLE TEST_INDX (COL1 NVARCHAR(451))
GO
CREATE INDEX TEST ON TEST_INDX (COL1)
GO

and you will get a warning message from the SQL Server Engine:

Warning! The maximum key length is 900 bytes. The index 'TEST' has maximum length of 902 bytes. For some combination of large values, the insert/update operation will fail.

As the warning states, any insert or update statement that causes the index key to exceed the 900 byte limit will fail. You can imagine that if you need to create an index on a column or a set of columns on which you also need to enforce uniqueness, this particular restriction becomes a big issue. There are a couple of ways to enforce such a uniqueness constraint in a scenario where the column(s) that make up the uniqueness criteria exceed the 900 byte limit. A quick demonstration of the failure follows below.
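To see that run-time failure against the single-column table above, here is a quick sketch (the exact wording of the error message can vary by SQL Server version):

-- 450 characters = 900 bytes as NVARCHAR, so this key still fits:
INSERT INTO TEST_INDX (COL1) VALUES (REPLICATE(N'A', 450))
GO
-- 451 characters = 902 bytes, which exceeds the 900 byte key limit,
-- so this statement fails at run time:
INSERT INTO TEST_INDX (COL1) VALUES (REPLICATE(N'A', 451))
GO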

One such option is the usage of triggers which, upon an insert or an update, check that set of columns and prevent the offending data entry from happening. I am not a big fan of this approach, especially since in a high transaction system it can create performance issues. Another method of enforcing such a constraint is by using a computed column along with the CHECKSUM() function. If you want to know the difference between the checksum functions, you can refer to our previous blog post here.
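For illustration only, here is a minimal sketch of such a trigger, written against the three-column TEST_INDX table that is created further below (the trigger name is hypothetical, and NULL handling is simplified for brevity):

CREATE TRIGGER TRG_TEST_INDX_UNIQUE ON TEST_INDX
AFTER INSERT, UPDATE
AS
BEGIN
    -- Roll back any statement that leaves more than one row with the
    -- same (COL1, COL2, COL3) combination
    IF EXISTS (SELECT T.COL1, T.COL2, T.COL3
                 FROM TEST_INDX T
                INNER JOIN INSERTED I
                   ON T.COL1 = I.COL1
                  AND T.COL2 = I.COL2
                  AND T.COL3 = I.COL3
                GROUP BY T.COL1, T.COL2, T.COL3
               HAVING COUNT(*) > 1)
    BEGIN
        RAISERROR('This statement would violate the uniqueness of (COL1, COL2, COL3).', 16, 1)
        ROLLBACK TRANSACTION
    END
END
GO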

Let’s follow that up with an example:

In the example given below, you can see that the computed column is created as a combination of three columns. One thing to remember is that CHECKSUM() ignores columns of noncomparable data types, so if the uniqueness criteria involve such columns, they get left out of the check altogether. So, ensure that you handle this, by casting to a comparable type, at the time of the creation of the computed column itself. The code below creates such a table and then enforces a unique constraint via a unique index on the computed column.

The code then demonstrates the insert scenario: it tries to insert duplicate data into the table, and the unique constraint on the computed column gets violated. This is a much simpler approach as compared to using a trigger to enforce the uniqueness. Do keep in mind, though, that CHECKSUM() is a hash function, so two different rows can occasionally produce the same checksum value, in which case the unique index would reject a row that is not a true duplicate. Hopefully, such a restriction will be lifted in a future version of SQL Server.

-- Drop the demo table from the first example before re-creating it
DROP TABLE TEST_INDX
GO
CREATE TABLE TEST_INDX
(
COL1 INT,
COL2 NVARCHAR(2000),
COL3 BIT,
COL4 AS CHECKSUM(CAST(COL1 AS NVARCHAR(10)) + COL2 + CAST(COL3 AS NVARCHAR(1)))
)
--FILEGROUP CLAUSE
GO

CREATE UNIQUE INDEX TEST_INDX_IND_1 ON TEST_INDX (COL4)
--FILEGROUP CLAUSE
GO

INSERT INTO TEST_INDX (COL1, COL2, COL3) VALUES (1, 'ABRACADABRA-1', 0)
GO
INSERT INTO TEST_INDX (COL1, COL2, COL3) VALUES (2, 'ABRACADABRA-2', 0)
GO
INSERT INTO TEST_INDX (COL1, COL2, COL3) VALUES (2, 'ABRACADABRA-2', 0)

The last insert will fail with this error:

Msg 2601, Level 14, State 1, Line 1
Cannot insert duplicate key row in object 'dbo.TEST_INDX' with unique index 'TEST_INDX_IND_1'.
The statement has been terminated.
