Systems Engineering and RDBMS


Archive for February 9th, 2007

Implicit data-type conversions in queries

Posted by decipherinfosys on February 9, 2007

Beware of the implicit data type conversions that can occur in queries. If you are not careful, they can lead to performance issues. Consider this example:

If an indexed column in a table is of a string data type – and hence can hold numbers as well as strings and alphanumerics – and you then specify:

where col1 = 100

in the where clause, this implicit conversion will lead to a scan, since the engine does not know whether each stored value is a number, a string, or an alphanumeric…see the examples below (of course, DB2 will complain as soon as you do this, since DB2 does not allow implicit conversions, which in my opinion is a very good feature):


SQL Server:

create table my_test (col1 varchar(10) primary key)
/* I use table_x, which is a very large table in one of our client's schemas – name changed for proprietary reasons */

select identity(int, 1, 1) as row_num, * into #temp from table_x
insert into my_test select row_num from #temp
insert into my_test select locn_id from #temp
set showplan_text on
select * from my_test where col1 = 100

|–Parallelism(Gather Streams)
|–Clustered Index Scan(OBJECT:([Decipher].[dbo].[my_test].[PK__my_test__5749A5B5]), WHERE:(Convert([my_test].[col1])=Convert([@1])))

select * from my_test where col1 = '100'
|–Clustered Index Seek(OBJECT:([Decipher].[dbo].[my_test].[PK__my_test__5749A5B5]), SEEK:([my_test].[col1]=[@1]) ORDERED FORWARD)


Oracle:

create table my_test (col1 varchar2(10) primary key)

create table temp_test as select rownum as row_num, locn_id from table_x

insert into my_test select row_num from temp_test
/*using analyze for quick and dirty test – use DBMS_STATS in real production*/

Analyze table my_test compute statistics for table for all indexes for all indexed columns;

set autotrace traceonly

select * from my_test where col1 = 100

1 0 INDEX (FULL SCAN) OF 'SYS_C0090776' (UNIQUE) (Cost=157 Card=1 Bytes=5)

select * from my_test where col1 = '100'

1 0 INDEX (UNIQUE SCAN) OF 'SYS_C0090776' (UNIQUE) (Cost=1 Card=1 Bytes=5)

Now, if on the other hand the indexed column is of a number data type and you provide a quoted string to it in the where-clause comparison, then since the only values the column can ever hold are numbers, the optimizer is intelligent enough to treat the number in the string as a number and still use the index after the conversion, i.e. the conversion is then applied to the constant value rather than to the indexed column.
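To sketch that reverse scenario (the table and data below are hypothetical), note that the quoted constant is what gets converted, so the index on the numeric column can still be used:

```sql
-- Hypothetical example: this time the indexed column is numeric
create table my_test_num (col1 int primary key);  -- use number(10) on Oracle

insert into my_test_num values (100);

-- The string constant '100' is converted to a number; the conversion
-- happens on the constant, not the column, so the optimizer can still
-- do an index seek (SQL Server) / unique scan (Oracle):
select * from my_test_num where col1 = '100';
```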

Posted in DB2 LUW, Oracle, SQL Server | Leave a Comment »

Changing the IP Subnet of a Microsoft Cluster

Posted by decipherinfosys on February 9, 2007

This entry ties in with another entry: the cluster that was the basis for these blog posts was migrated to a new domain and a new IP subnet all at the same time. The other entry discusses how to resolve the domain migration issues within Microsoft Cluster Services (MSCS), and this entry will deal with the topic of the new IP subnet.

New IP Addresses – Now What Do I Do?

Prior to moving cluster nodes and their corresponding SAN to a new IP subnet, there is some preliminary configuration that can be done. The IP addresses of the SAN's Storage Processors (SPs) can be changed through the SAN administration console – Navisphere, for example – but the exact process varies slightly depending on the manufacturer.

Note: Please make sure to always review your product’s operations manuals prior to making any changes.

Once the SP addresses have been changed, you will lose connectivity to them until the SAN network cables have been physically moved to the new subnet. After everything has been re-cabled, boot the cluster's nodes back up and change the IP addresses of their LAN network cards to the new subnet; connectivity to the SAN administration console will then be restored. The LAN network cards on each node should be the only ones whose IP addresses need to be changed – all cluster heartbeat IP address configurations can remain unchanged.

Network Path Could Not Be Found

Once the Cluster service is running on the nodes again, opening Cluster Administrator will result in the following error: "The cluster service on node ' cannot be started. The network path was not found." Error ID 53. What this error means in this case is that some references to the old IP subnet are still lingering in the cluster's configuration: even though the storage processor IP addresses have been changed, the virtual IP (VIP) address information is still configured for the old subnet. Now, many of you might assume that something has gone horribly wrong and that the cluster will have to be destroyed and completely rebuilt. Not so! Time to go into our old friend, the registry.

Regedit – An Administrator’s Best Friend

The sections within the registry where the IP address changes need to be made are all under HKEY_LOCAL_MACHINE -> Cluster -> Resources. The sub-entries are cryptic numbers, so some digging will be required to find the IP addresses – there will be about a dozen or so entries that you will need to look through. Depending on your configuration, there are two or three resources whose IP addresses need to be changed:

  • Virtual IP address
  • Cluster IP address
  • MSDTC IP address

The root of each entry will have a Name key that tells you what each resource is. For example, the following screenshot shows what the root of the cluster IP address entry looks like. Notice the Name key (Cluster IP Address in this case) second from the bottom:

Cluster IP Address Resource Screenshot

Drilling one level down will take you to the Parameters directory; this is where the IP address itself resides, as shown here:

Cluster IP Address Resource Screenshot

Double-click on the Address key and change the IP address to what is appropriate for the new subnet. Remember to also change the subnet mask if applicable. Repeat these steps for the Virtual IP and MSDTC IP addresses, and keep in mind the registry changes need to be performed on all nodes of the cluster.
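If you prefer the command line, the same edit can be sketched with reg.exe (the GUID-style resource subkey and the new address below are hypothetical placeholders; use the actual resource keys and addresses found on your own nodes):

```
rem Back up the cluster registry hive before touching anything
reg export HKLM\Cluster cluster-backup.reg

rem Hypothetical resource subkey and new subnet address
reg add "HKLM\Cluster\Resources\{12345678-...}\Parameters" /v Address /t REG_SZ /d 10.20.30.40 /f
```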

Note: Always make a backup of your current registry settings prior to making any changes.

Once the registry changes have been completed you should be able to successfully reconnect to your existing cluster via Cluster Administrator. There may be some additional tweaks and modifications necessary to get your cluster back to 100% status, but resolving the issues related to the domain and IP address migrations will have you well on your way towards getting your MSCS cluster back to normal.

Posted in Windows | Leave a Comment »


SAN vs. NAS

Posted by decipherinfosys on February 9, 2007

SAN stands for Storage Area Network

NAS stands for Network Attached Storage

The two are very different. A SAN is an external storage system that allows multiple computer systems to access the same storage. The fibre channel controller inside the external storage system is able to take requests for different logical volumes from different HBAs (Host Bus Adapters). A NAS is similar; however, unlike a SAN, where the storage is connected via a fibre channel connection, a NAS system is accessed via a network connection. Of the two choices, I prefer a SAN over a NAS for a production environment. The problem with a NAS is that it is IP-based and data must travel over your network, which is limited by the top speed of the network and by other shared network traffic. This can greatly increase I/O latency and harm performance.

There is also the question of whether to go with individual RAID arrays or with a SAN. SANs do outperform individual RAID arrays. Servers can have more than two fibre connections to a SAN, so you can increase bandwidth to the SAN when necessary. Another big value point for SANs is efficient disk usage. In a well-tuned RAID configuration, you might need half a terabyte of disk to support a 100GB database just so you have enough heads reading and writing data. SANs give systems only what they need and do not over-allocate disk space the way RAID does. In addition, you can use a SAN to support multiple servers simultaneously, and dual-path and multi-path SANs match the performance of RAID 1 and RAID 10 volumes. The drawbacks of using a SAN include the expense and the difficulty of setup. In addition, properly sizing and configuring a SAN requires you to measure the bandwidth and I/O requirements of all systems that will simultaneously use it.

Posted in Hardware | Leave a Comment »

Model Database – SQL Server

Posted by decipherinfosys on February 9, 2007

The model database in SQL Server is used as the template for all databases created on a system. When a CREATE DATABASE statement is issued, the first part of the database is created by copying in the contents of the model database, and then the remainder of the new database is filled with empty pages. If there are any settings that your new user databases should always have, you can set those up in the model database. Also, please remember that creating a new database requires exclusive access to the model database, else your create scripts will fail with an error. And if the model database files have been sized to, say, 500MB for the data file and 100MB for the log file, then any attempt to create a new database with sizes smaller than these will fail with another cryptic error message stating that there is not enough space available on the disk, even though there might be plenty. So, you need to ensure that the settings for your model database are what you need and that your create database scripts are written to take care of such scenarios.
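As a sketch of how a create script can avoid the size pitfall (the database name, file paths, and sizes below are hypothetical), check model's file sizes first and then specify explicit sizes at least as large:

```sql
-- Check the current sizes of the model database files first
exec sp_helpdb 'model';

-- Hypothetical example: size the new database explicitly so it is
-- never smaller than model's data and log files
create database my_new_db
on primary
( name = my_new_db_data,
  filename = 'C:\data\my_new_db.mdf',
  size = 500MB )
log on
( name = my_new_db_log,
  filename = 'C:\data\my_new_db.ldf',
  size = 100MB );
```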

Posted in SQL Server | 2 Comments »

Lock Conversion and Lock Escalation

Posted by decipherinfosys on February 9, 2007

Lock conversion and lock escalation are different, though people do get confused by the terminology and mix them up. Lock conversion means that a particular lock mode gets converted to a higher lock mode during the course of execution. For example, in an update statement's lifecycle, the engine first has to read the rows that it plans to update, and thus takes a shared lock on the resource(s); it then does a lock conversion to an exclusive lock prior to doing the update. This process is common across RDBMS.
A lock escalation is a process wherein many fine-grained locks are escalated into fewer coarse-grained locks, i.e. many row-level or page-level locks can be escalated to table-level locks.

Does lock escalation occur in every RDBMS? No, it doesn't. Oracle does not need to do this because of its wonderful locking architecture: there is no concept of a lock manager in Oracle – locking is controlled at the block level using ITLs (Interested Transaction Lists). Lock escalation issues arise in those RDBMS where locking is controlled via a lock manager, as in the case of SQL Server and DB2 LUW.

Every lock is a memory structure, and too many locks means more memory being occupied by locks. When a transaction requests rows from a table, SQL Server/DB2 LUW automatically acquires locks on the rows affected and places higher-level intent locks on the pages and the table, or index, which contain those rows. When the number of locks held by the transaction exceeds its threshold, the engine then attempts to change the intent lock on the table to a stronger lock (for example, an intent exclusive (IX) lock would change to an exclusive (X) lock). After acquiring the stronger lock, all page- and row-level locks held by the transaction on the table are released, reducing lock overhead. The database engine may choose to do both row and page locking for the same query, for example placing page locks on the index (if enough contiguous keys in a nonclustered index node are selected to satisfy the query) and row locks on the data.
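As a rough sketch in SQL Server (big_table is a hypothetical table assumed to hold enough rows to cross the escalation threshold), you can watch escalation happen by updating many rows in one transaction and then inspecting the locks held by your session:

```sql
begin tran;

-- Touch a large number of rows in a single transaction
update big_table set some_col = some_col;

-- Inspect the locks held by the current session; after escalation you
-- should see a table-level X lock instead of thousands of row locks
exec sp_lock @@spid;

rollback;
```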

In one of the future posts, we will look at how lock escalation can be prevented in SQL Server and DB2 LUW.

Posted in DB2 LUW, Oracle, SQL Server | 1 Comment »