Systems Engineering and RDBMS

Archive for September, 2007

Optimistic Concurrency in SQL Server

Posted by decipherinfosys on September 28, 2007

In one of our earlier posts, we talked about using ORA_ROWSCN to implement optimistic concurrency in Oracle. In this post, we will cover how to implement it in SQL Server. Read that previous blog post to see what optimistic concurrency is and when it is used. Optimistic concurrency in SQL Server can be implemented in two ways:

Optimistic Concurrency with Values:  When using optimistic concurrency with values, as the name suggests, the current value of each column in the row is compared to the value from when the row was last fetched.  If any of the values have changed, some other session has modified the row.  This surfaces as a “no data” condition which the client application must trap; it can then refresh the cursor and let the end user decide whether they still wish to perform their modifications based on the new values.
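
As a minimal sketch of the value-based check (hypothetical table, column and variable names – this is not the example table we use below), the update simply re-states the previously fetched values in the WHERE clause:

/*Hypothetical sketch - @old_col2 holds the value of COL2 read when the row was last fetched*/
UPDATE dbo.SOME_TABLE
SET COL2 = @new_col2
WHERE COL1 = @pk_value
AND COL2 = @old_col2

IF (@@ROWCOUNT = 0)
PRINT 'Some other session changed the row - re-read and retry.'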

Optimistic Concurrency with row versioning:  This is the more common approach for implementing optimistic concurrency in SQL Server.  In this scheme, we add a versioning column to the tables.  One can choose either a plain integer column or the timestamp data-type.  The timestamp data-type has nothing to do with date or time – it is an 8-byte binary number that reflects the sequence of modifications made to records in the database.  Each database has a current timestamp value, which can be obtained via @@DBTS – think of it as a running counter for the database.  So, if a table has a column, say opt_version_id, with a data-type of timestamp, any time a modification is made to a record, SQL Server automatically increments the database counter and stores the new value in that row.  Let’s see this using an example:

CREATE TABLE dbo.DECIPHER_TEST
(
COL1                INT    IDENTITY NOT NULL,
COL2                INT             NOT NULL,
OPT_VERSION_ID        TIMESTAMP     NOT NULL,
CONSTRAINT PK_DECIPHER_TEST PRIMARY KEY (COL1)
)
GO
/*Let us create some data – we will populate only COL2*/
INSERT INTO dbo.DECIPHER_TEST (COL2) VALUES (10)
INSERT INTO dbo.DECIPHER_TEST (COL2) VALUES (20)
INSERT INTO dbo.DECIPHER_TEST (COL2) VALUES (30)

/*Check the data now*/

select * from dbo.DECIPHER_TEST

COL1        COL2        OPT_VERSION_ID
----------- ----------- ------------------
1           10          0x00000000000007D1
2           20          0x00000000000007D2
3           30          0x00000000000007D3

As you can see from above, SQL Server automatically populated the opt_version_id column.  Let’s look at the value of @@DBTS right now:

select @@DBTS

------------------
0x00000000000007D3

So, as you can see, @@DBTS returns the last value it used for the column.  If another record is inserted or updated, the next higher value will be used.  Now, let’s see how this can be used to enforce optimistic concurrency – we will walk through it using T-SQL first and then cover what goes on when using ADO or ADO.Net:

Session 1:  Suppose that Session 1 reads the current value into a variable and is going to use it for its update.  Say that up to time T1, it has executed these commands:

declare @x varbinary(8)
select @x = OPT_VERSION_ID from dbo.DECIPHER_TEST where COL1 = 1

select @x

0x00000000000007D1

Then, say at time T2, another session goes ahead and updates the record:

Session 2:

update dbo.DECIPHER_TEST set COL2 = 1000 where COL1 = 1

Now, say at time T3, Session 1 tries to make the update using the value that it read before:

Session 1:

update dbo.DECIPHER_TEST
SET COL2 = 100
WHERE COL1 = 1
AND OPT_VERSION_ID = @x

if (@@rowcount = 0)
begin
select 'some other session updated the value.  Updated value is:', OPT_VERSION_ID
from dbo.DECIPHER_TEST
where COL1 = 1
end

You will get:

OPT_VERSION_ID
--------------------------------------------------------- ------------------
some other session updated the value.  Updated value is: 0x00000000000007D4

This is because the other session already went in and updated the record, and SQL Server automatically incremented the version counter.  As a result, Session 1 updated zero rows – the “no data” condition.  This concurrency methodology is used in scenarios where there is very little chance of other sessions updating the row in the interval between a read and a write, i.e. the cost of the occasional clash is small and a re-read and a re-try is acceptable to the business.

The advantage of using optimistic concurrency with row versioning is that the engine does not have to compare every column’s before and after values – all it needs to check is the timestamp column value.  In ADO, one can specify adLockOptimistic for the LockType.  One thing to note is that if the table does not have a column with a timestamp data-type and the application has specified optimistic concurrency with row versioning, the cursor defaults to values-based optimistic concurrency control instead.  And obviously, one table can have only one timestamp data-type column.  In future posts, we will discuss the pessimistic and multi-value concurrency models.

Posted in SQL Server | Leave a Comment »

DDL Triggers in SQL Server 2005

Posted by decipherinfosys on September 27, 2007

We are all familiar with DML (Data Manipulation Language) triggers, which fire in response to DML operations (insert, update and delete statements).  We can achieve auditing functionality using DML triggers, but that does not help us audit whether someone has altered the definition of a table, dropped a table, etc.  We can achieve this by using DDL (Data Definition Language) triggers, introduced in SQL Server 2005.

As the name suggests, DDL triggers do not fire for DML operations but fire for CREATE, ALTER, DROP, GRANT, DENY, REVOKE, and UPDATE STATISTICS DDL statements. Some of the system stored procedures which perform CREATE-like operations also fire DDL triggers. One exception is sp_rename: it does not invoke a DDL trigger even though it renames an object. Most of the time, DDL triggers are used to prevent modifications in the database or to audit DDL changes in the database.

DDL triggers can be created at the database level for a specific database or at the server/instance level. For creating a trigger at the database level, we have to use the ‘ON DATABASE’ clause, and for creating a trigger at the server/instance level, we have to use the ‘ON ALL SERVER’ clause. When created at the server level, the trigger keeps track of DDL changes for any database on that specific instance.
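
As a minimal sketch of a server-scoped trigger (hypothetical trigger name – adapt the event list to your own policy), here is one that prevents anyone from dropping a database on the instance:

CREATE TRIGGER BLOCK_DROP_DATABASE
ON ALL SERVER
FOR DROP_DATABASE
AS
PRINT 'Dropping databases is not allowed on this instance.'
ROLLBACK
GO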

Let us create a table first.

CREATE TABLE dbo.AUDIT_EVENT
(
AUDIT_EVENT_ID INT IDENTITY(1,1),
EVENT_TEXT     VARCHAR(MAX),
ACTION_LOGIN   VARCHAR(100),
ACTION_DATE    DATETIME,
CONSTRAINT PK_AUDIT_EVENT PRIMARY KEY(AUDIT_EVENT_ID)
)
GO

In the above table, we will record the information pertaining to any DDL operation.  Let us create a trigger to record the changes at the database level.

CREATE TRIGGER AUDIT_TRIG
ON DATABASE
FOR DROP_TABLE, ALTER_TABLE, CREATE_TABLE
AS
INSERT INTO dbo.Audit_Event(Event_Text, Action_Login, Action_date)
SELECT EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','nvarchar(max)'), original_login(), getDate()
GO

In the trigger shown above, we are using the EVENTDATA() function to obtain the actual statement text. We will cover the EVENTDATA() function in more detail in a future blog post; we have adopted the idea of using it from the BOL example. Once the trigger is in place, we can start testing it by creating, altering and dropping a table. Let us create the table first, then alter it to add a new column, and then drop it.

CREATE TABLE TEST(ID INT IDENTITY(1,1))
GO
ALTER TABLE TEST ADD TEST_DESC VARCHAR(30)
GO
DROP TABLE TEST
GO

Once any of the above statements executes successfully, the trigger fires. Let us check our audit table now.

SELECT Event_Text, Action_Login, Action_Date FROM AUDIT_EVENT
GO

Here is the result set:

Event_Text                                  Action_login  Action_Date
------------------------------------------- ------------- -----------------------
CREATE TABLE TEST(ID INT IDENTITY(1,1))     sa            2007-09-26 13:43:57.177
ALTER TABLE TEST ADD TEST_DESC VARCHAR(30)  sa            2007-09-26 13:43:57.223
DROP TABLE TEST                             sa            2007-09-26 13:43:57.223

From the above result set, we know what command got executed, at what time and by whom. This prevents unwanted surprises during the development phase and, in addition, we can keep track of database changes made within the application. Similar to the example shown above, we can also create a trigger for the CREATE_PROCEDURE and DROP_PROCEDURE event types to audit procedure changes.
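
As a minimal sketch reusing the same audit table and pattern (hypothetical trigger name), a procedure-auditing trigger would look like this:

CREATE TRIGGER AUDIT_PROC_TRIG
ON DATABASE
FOR CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE
AS
INSERT INTO dbo.Audit_Event(Event_Text, Action_Login, Action_date)
SELECT EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','nvarchar(max)'), original_login(), getDate()
GO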

Posted in SQL Server | 2 Comments »

Documenting the OS of the DB Server in Oracle

Posted by decipherinfosys on September 26, 2007

Since Oracle runs on multiple operating systems, an auditing script may need to document the OS of the database server from within PL/SQL (the rest of such an auditing suite is typically a combination of PL/SQL scripts, shell scripts and, on Windows, WMI scripts), and Oracle’s in-built packages make this easy.  There are various ways of finding the operating system of the database server; one of the easiest is the DBMS_UTILITY package, whose PORT_STRING function returns the platform/OS string of the database server.  Please keep in mind that this is different from the client’s OS (e.g. the machine running the SQL*Plus session from which we are connecting to the database server).  Following is a small code snippet which displays the OS of the database server.

SET SERVEROUTPUT ON

DECLARE
v_os VARCHAR2(100);
BEGIN
v_os := dbms_utility.port_string;  -- platform/OS string of the database server
dbms_output.put_line(v_os);
END;
/

Here is the output.

IBMPC/WIN_NT-8.1.0

PL/SQL procedure successfully completed.

We are running our database on a Windows server and hence it shows up as WIN_NT.

Posted in Oracle | Leave a Comment »

Enabling and Disabling an Index in SQL Server 2005

Posted by decipherinfosys on September 25, 2007

In SQL Server 2005, Microsoft introduced the “ALTER INDEX” command. This command can be used to enable or disable indexes, rebuild indexes or re-organize indexes. One can also set specific options like ‘allow_row_locks’, ‘allow_page_locks’ etc. at the index level.  In this blog post, we will talk about disabling and enabling an index. Let us first create a table along with some indexes and populate it with some data.

CREATE TABLE TEST(TEST_ID INT,TEST_DESC VARCHAR(20),TEST_DATE DATETIME)
GO
CREATE UNIQUE CLUSTERED INDEX TEST_IND_1 ON TEST(TEST_ID)
GO
CREATE INDEX TEST_IND_2 ON TEST(TEST_DESC)
GO
CREATE INDEX TEST_IND_3 ON TEST(TEST_DATE)
GO

INSERT INTO TEST(TEST_ID, TEST_DESC) VALUES(1,'Oracle')
INSERT INTO TEST(TEST_ID, TEST_DESC) VALUES(2,'SQLServer')
INSERT INTO TEST(TEST_ID, TEST_DESC) VALUES(3,'DB2 LUW')

Disabling a non-clustered index:

Whenever a non-clustered index is disabled, only access to the index is disabled – the optimizer will ignore the disabled index even if your query has the perfect filter criteria for it. Also, whenever a non-clustered index or an index on a view is disabled, the index data physically gets deleted.  Execute the following query and look at the execution plan.

SET SHOWPLAN_TEXT ON
GO
SELECT TEST_ID,TEST_DESC FROM TEST WHERE TEST_DESC = 'ORACLE'
GO

|--Index Seek(OBJECT: ([DECIPHER].[dbo].[TEST].[TEST_IND_2]), SEEK: ([DECIPHER].[dbo].[TEST].[TEST_DESC]=[@1]) ORDERED FORWARD)

The plan text indicates that an index seek operation is performed on the table.  Also, selecting the row count for the index will show that there are 3 rows in it.

SELECT rowcnt FROM sys.sysindexes WHERE name = 'TEST_IND_2'
GO

Now let us disable the index and run the same queries again to check the plan and the row count. Following is the syntax to disable the index.

ALTER INDEX TEST_IND_2 ON TEST DISABLE
GO

Re-executing the two queries we ran prior to the disable command now yields the following results.

|--Clustered Index Scan(OBJECT: ([DECIPHER].[dbo].[TEST].[TEST_IND_1]), WHERE: ([DECIPHER].[dbo].[TEST].[TEST_DESC]=[@1]))

Instead of performing an index seek operation on TEST_IND_2, the optimizer performs a clustered index scan because we disabled the index. Also, checking the row count now returns 0, since the data from the index pages got deleted when we disabled the index.

Disabling the clustered index:

Whenever we disable a clustered index, all the non-clustered indexes of the table are disabled as well. Once the clustered index is disabled, users cannot access the underlying table data, but the index definition remains in the system catalog. Another important point to note is that, even though users cannot access the data, the data remains in the B-Tree until the index is dropped or rebuilt – it is just no longer maintained. Let us disable the clustered index now.

ALTER INDEX TEST_IND_1 ON TEST DISABLE
GO

Warning: Index 'TEST_IND_2' on table 'TEST' was disabled as a result of disabling the clustered index on the table.
Warning: Index 'TEST_IND_3' on table 'TEST' was disabled as a result of disabling the clustered index on the table.

Now let us try to select the data.

select * from test

Issuing the select statement results in the following error. The same message is displayed if one tries to perform any DML operation.

Msg 8655, Level 16, State 1, Line 1
The query processor is unable to produce a plan because the index 'TEST_IND_1' on table or view 'test' is disabled.

Enabling an index:

Indexes can be enabled using the ‘ALTER INDEX REBUILD’ command or the ‘CREATE INDEX WITH DROP_EXISTING’ command. An index can also be rebuilt online, but we will cover that separately. We can enable an individual index on the table, or we can enable all the indexes on the table using the ‘ALL’ clause. One thing to keep in mind is that enabling the clustered index does not enable the non-clustered indexes – you have to enable them individually. Following is the syntax to enable a specific index.

ALTER INDEX TEST_IND_1 ON TEST REBUILD
GO

or to enable all indexes on the table

ALTER INDEX ALL ON TEST REBUILD
GO
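
Alternatively, here is a minimal sketch of the DROP_EXISTING route, re-creating the clustered index from our example in place:

CREATE UNIQUE CLUSTERED INDEX TEST_IND_1 ON TEST(TEST_ID) WITH (DROP_EXISTING = ON)
GO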

We can check whether an index is disabled or not by querying the sys.indexes view:

SELECT name, is_disabled FROM sys.indexes WHERE name like 'TEST%'

Posted in SQL Server | 3 Comments »

Universal Data Links

Posted by decipherinfosys on September 24, 2007

Universal Data Links or UDLs are a quick way to test out a connection and put together a connection string without typing in the parameters by hand.  Here is how you can go about creating a UDL and then saving its contents as a connection string that you can use in your applications.

1) Right click within any folder in Windows explorer and select New and then text document.

2) Name it test.udl.  When it prompts you about whether you want to change the extension, click Yes.

3) This will create a test.udl file for you.  Double click on the file and the Data Link Properties screen will come up:

The first tab presents the different OLEDB providers from which you can choose.  For this post, we will select the Microsoft OLEDB Provider for SQL Server.  One can also choose the SQL Native Client (SQL Server 2005), the Oracle OLEDB Provider etc., depending upon the data source and the needs of the application.

4) Then click next.  You will come to the connection tab.

On this tab, you can specify the connection parameters for connectivity to the data source.  In the case of SQL Server, when using the OLEDB Provider for SQL Server, you can choose between Windows and SQL authentication and then choose your database as well.  For the sake of generating the connection string with the UDL, you can choose to store the password, and you can test your connection.  It is a quick and dirty way to test your connectivity to the data source as well.

5) On the next tab, you have some of the advanced settings like specifying a connection time out.

6) And on the last tab, one gets a listing of all the parameters for that connection string.  One can choose to specify additional attributes here, like the application name, the workstation ID etc.  All this helps when troubleshooting where the connections are coming from, since this information is then reflected in the sysprocesses view on the server side.

Once you have tested the connection string and are happy with the settings, you can save it, then right click on the file and open it with Notepad.  For the UDL created above, you will get this entry:

[oledb]
; Everything after this line is an OLE DB initstring
Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;User ID=sa;Initial Catalog=ReportServer;Data Source=(local)

As you can see above, the connection string is now available for you.  This avoids having to type out connection strings and running into typing errors – in addition, it is a quick and dirty way of verifying your connections as well.

Posted in DB2 LUW, Oracle, SQL Server, Technology | 1 Comment »

Using vhdmount to access the virtual disk files

Posted by decipherinfosys on September 23, 2007

In the release of Virtual Server 2005 R2 SP1, Microsoft included a tool called VHDMount, which allows one to mount a VHD disk on the host operating system as if it were a separate physical disk. It makes sharing of files a lot simpler. Our partner company, Cerkitek, has a good blog post on the usage of this tool. You can access that post here:

http://cerkitek.wordpress.com/2007/08/12/accessing-virtual-disk-files-with-vhdmount/

Posted in Virtual Server | Leave a Comment »

SAP enters on-demand software market

Posted by decipherinfosys on September 22, 2007

Following in the footsteps of companies like Salesforce.com, NetSuite etc., SAP is looking to move towards the web-based software market and make its software available over the internet as a service. You can read more on this at the Reuters site here.

Posted in Technology | Leave a Comment »

Counting number of records for all the tables – Oracle

Posted by decipherinfosys on September 21, 2007

In some of our previous posts, we gave SQL scripts to get the counts of the records in all the tables of a database schema in SQL Server. You can access those posts here and here. In this post, we will give a script that does the same thing in Oracle.  Connect to SQL*Plus with proper credentials and run the following PL/SQL block to see the tables and their corresponding record counts.

/*Script to count the number of rows in tables*/

set serveroutput on

declare
  numrows integer;
  cursor c1 is select table_name from user_tables order by table_name;

  function rowcount(tablename in user_tables.table_name%type)
  return integer is
    cursornum integer;
    numrows   integer;
    ignore    integer;
  begin
    cursornum := dbms_sql.open_cursor;
    -- build and execute "select count(*)" dynamically for each table
    dbms_sql.parse(cursornum,
                   'select count(*) from ' || tablename,
                   dbms_sql.v7);
    dbms_sql.define_column(cursornum, 1, numrows);
    ignore := dbms_sql.execute(cursornum);
    ignore := dbms_sql.fetch_rows(cursornum);
    dbms_sql.column_value(cursornum, 1, numrows);
    dbms_sql.close_cursor(cursornum);
    return numrows;
  end;
begin
  dbms_output.enable(100000);
  dbms_output.put_line('Table                           Rows      ');
  dbms_output.put_line('------------------------------  ----------');
  for c1rec in c1 loop
    numrows := rowcount(c1rec.table_name);
    dbms_output.put_line(rpad(c1rec.table_name, 32) || numrows);
  end loop;
end;
/

Here is the output from our schema.

Table                           Rows
------------------------------  ----------
BONUS                           0
DEPT                            4
EMP                             14
EXTERNAL_CITY                   4
INVOICE                         4
SALGRADE                        5
TEMP_INV                        0
TEST                            0
TEST_TABLE                      4

PL/SQL procedure successfully completed.

There are other ways of counting the number of records in a table as well, such as getting the NUM_ROWS column value from the USER_TABLES view, or the NUM_ROWS value from the USER_INDEXES view for the primary key index. But for those numbers to be accurate, we have to make sure the statistics are up to date, otherwise they can be wrong. This is the same caveat that applies to getting the number from the sysindexes table in SQL Server.
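
For example, the statistics-based counts can be pulled with a simple dictionary query – just keep in mind that the numbers are only as fresh as the last statistics gathering:

select table_name, num_rows
from user_tables
order by table_name;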

Posted in Oracle | 2 Comments »

Bulk Collect and Multiple Exceptions

Posted by decipherinfosys on September 20, 2007

In one of our previous posts, we covered the scenario of getting a no data found exception when using BULK COLLECT in PL/SQL.  In this post, we will look at how to handle multiple exceptions when using BULK COLLECT.  The BULK COLLECT construct is used to work with batches of data rather than a single record at a time. Whenever we have to deal with large amounts of data, bulk collect can provide a considerable performance improvement.

The ‘BULK COLLECT INTO’ clause improves the performance of select statements. To apply DML to the records collected with bulk collect, we can use the FORALL syntax: the FORALL clause executes DML statements in batches and is much faster than a regular FOR loop construct. In this blog post, we are going to talk about FORALL exceptions and the %BULK_EXCEPTIONS attribute. This attribute allows the process to continue even if a DML exception is raised for some record along the way – basically, it allows us to complete the whole batch without stopping at the first error.

All exceptions raised during execution are saved in the %BULK_EXCEPTIONS attribute, which stores a collection of records similar to BULK COLLECT. Each record has two fields, and the collection also exposes a count:

•    %BULK_EXCEPTIONS(i).ERROR_CODE holds the corresponding Oracle error code.
•    %BULK_EXCEPTIONS(i).ERROR_INDEX holds the iteration number of the FORALL statement for which the exception was raised.
•    %BULK_EXCEPTIONS.COUNT holds the total number of exceptions encountered.

In order to collect the exceptions, we have to use the FORALL clause with the SAVE EXCEPTIONS keyword. Let us see this with an example.

Connect to SQL*Plus with proper credentials and run the following statements to create the table and populate it with some data.

CREATE TABLE TEST
(
Test_ID     NUMBER(9) NOT NULL PRIMARY KEY,
Test_Desc   VARCHAR(30),
New_Desc    VARCHAR(30)
)
/
SQL> INSERT INTO TEST(TEST_ID,TEST_DESC)
2  SELECT ROWNUM, TABLE_NAME
3    FROM USER_TABLES;

9 rows created.

Run the following PL/SQL block, which bulk collects the rows and then updates them, to see the exception behavior. We have created this PL/SQL block based on the example shown in the Oracle manual.

DECLARE

TYPE ga_Test_ID   IS TABLE OF TEST.TEST_Id%TYPE;
TYPE ga_Test_Desc IS TABLE OF TEST.TEST_DESC%TYPE;
va_Test_ID    ga_Test_ID;
va_Test_Desc  ga_Test_Desc;
V_Err_count   NUMBER;

BEGIN

SELECT Test_ID, Test_Desc
BULK COLLECT INTO va_Test_ID, va_Test_Desc
FROM Test;

FORALL i IN va_test_ID.FIRST..va_test_ID.LAST SAVE EXCEPTIONS
UPDATE TEST
SET NEW_DESC = TEST_DESC || LPAD(' ',22,'A')
WHERE TEST_ID = va_test_ID(i);

EXCEPTION

WHEN OTHERS THEN

v_Err_Count := SQL%BULK_EXCEPTIONS.COUNT;
DBMS_OUTPUT.PUT_LINE('Number of statements that failed: ' || v_Err_Count);

FOR i IN 1..v_Err_Count
LOOP
DBMS_OUTPUT.PUT_LINE('Error #' || i || ' occurred during ' ||
'iteration #' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX);
DBMS_OUTPUT.PUT_LINE('Error message is ' ||
SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
END LOOP;
END;
/

If you execute the above PL/SQL block, it will display the following information. Your output may differ since the data will be different.

Number of statements that failed: 2
Error #1 occurred during iteration #6
Error message is ORA-12899: value too large for column
Error #2 occurred during iteration #9
Error message is ORA-12899: value too large for column

From the above results, we know that iterations #6 and #9 failed because the new value exceeded the column length. This is a really useful feature for error handling when we are dealing with a larger data set: instead of exiting at the first error, we can process the entire set and then deal with the erroneous records later.
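
As a small sketch of that follow-up (this fragment belongs inside the WHEN OTHERS handler of the block above, where the va_test_ID collection is still in scope), ERROR_INDEX can be mapped back to the key values of the offending rows:

FOR i IN 1..SQL%BULK_EXCEPTIONS.COUNT
LOOP
-- ERROR_INDEX is the FORALL iteration number, which indexes into va_test_ID
DBMS_OUTPUT.PUT_LINE('Failed Test_ID: ' || va_test_ID(SQL%BULK_EXCEPTIONS(i).ERROR_INDEX));
END LOOP;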

Posted in Oracle | 1 Comment »

Procedure cache in SQL Server 2005

Posted by decipherinfosys on September 19, 2007

In one of our previous posts, we covered the importance of using bind variables (parameterized queries). In response to that post, one of the readers asked whether it is possible to look at what is in memory for the RDBMS. Yes, it is pretty easy to get to that information. In this post, we will cover how to do that in the case of SQL Server 2005 and will cover Oracle and DB2 LUW in future posts.  In the case of SQL Server, memory is used for the buffer cache (storing the data) and the procedure cache (storing the query plans).  The cache is stored as 8KB pages (Oracle has more options on this size).  Let’s see how we can find out what is in the procedure cache and how often those plans are getting used.

In SQL Server 2005, there is a DMV that can be used to get this information – the SQL is shown below:

SELECT  top 5
name,
type,
(single_pages_kb + multi_pages_kb) AS cache,
entries_count as cnt
FROM sys.dm_os_memory_cache_counters
ORDER BY cache desc

On our test system, this is the output:

name                     type                   cache                cnt
------------------------ ---------------------- -------------------- ------
Object Plans             CACHESTORE_OBJCP       24352                46
Bound Trees              CACHESTORE_PHDR        20648                252
SQL Plans                CACHESTORE_SQLCP       19432                292
TokenAndPermUserStore    USERSTORE_TOKENPERM    14488                31167
SchemaMgr Store          USERSTORE_SCHEMAMGR    10584                0

In the output above, you will see CACHESTORE_OBJCP, CACHESTORE_PHDR and CACHESTORE_SQLCP as the top three cache-related entries.  Each has its own significance: CACHESTORE_OBJCP represents the compiled plans for stored procedures, triggers and functions; CACHESTORE_SQLCP represents cached SQL statements and batches that are not part of stored procedures/triggers/functions; and CACHESTORE_PHDR represents the bound trees (parsed SQL text).  On our test system, we have a few stored procedures that are used by the test harness, and there are a lot of dynamic SQL queries fired off by the test application through an ORM layer.  That is why the CACHESTORE_OBJCP count is 46 and the CACHESTORE_SQLCP count is 292.

While I was writing this post, a colleague of mine pointed out that all this information is also available through the performance monitor.  Here is an image that shows you which counter you can use to get that information:

[Image: sql_plans.jpg]

Once you get the counts, the next logical step is to look at the actual queries that are in the cache.  In order to do that, we will make use of two more DMVs in SQL Server 2005 along with the new “OUTER APPLY” functionality:

SELECT
cache_plan.objtype,
cache_plan.size_in_bytes,
cache_plan.cacheobjtype,
cache_plan.usecounts,
sql_text.text
FROM sys.dm_exec_cached_plans as cache_plan
outer apply sys.dm_exec_sql_text (cache_plan.plan_handle) as sql_text
ORDER BY cache_plan.usecounts DESC

One can look at the output and see how much space is being occupied by the different plans.  Since SQL Server does not provide a configuration option to put a cap on the procedure cache, if the application is not using parameterized queries, you will see this cache get bloated. Hopefully, like Oracle, Microsoft will provide a configuration option in the future to keep that in check.  Of course, there is no alternative to a well designed application; however, as consultants brought in to tune an environment in production, re-designing or fixing the fundamental building blocks of the application is rarely an option that we have.
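
As a quick follow-up sketch using the same DMV, one can get a rough measure of the single-use ad hoc plans that typically point to non-parameterized SQL (the interpretation is ours – a high count here usually means the application is not using bind variables):

SELECT COUNT(*) AS single_use_plans,
SUM(cache_plan.size_in_bytes) AS total_bytes
FROM sys.dm_exec_cached_plans AS cache_plan
WHERE cache_plan.usecounts = 1
AND cache_plan.objtype = 'Adhoc'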

Posted in SQL Server | 2 Comments »