Systems Engineering and RDBMS

Archive for February 2nd, 2007

Replacing temp tables with table variables

Posted by decipherinfosys on February 2, 2007

If your application frequently creates temporary tables, consider using a table variable or a permanent table instead. A table variable holds a row set for the duration of the function, stored procedure, or batch in which it is declared and is cleaned up automatically at the end of that scope. Frequent requests to create temporary tables can cause contention in both the tempdb database and its system tables. Very large temporary tables are also problematic; if you find yourself creating many large temporary tables, consider a permanent table that can be truncated between uses. Temp tables can also lead to re-compilation: SQL Server 2005 recompiles at the statement level, but in SQL Server 2000 the entire stored procedure is recompiled, which can cause high CPU utilization. If you do use temp tables, read the Microsoft KB article on recompilation to ensure that they are not causing this problem.
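As a sketch of the swap described above (the table and column names here are hypothetical, not from any particular application):

```sql
-- Temp-table version: allocated in tempdb, dropped explicitly,
-- and a potential source of recompiles in SQL 2000:
CREATE TABLE #OrderTotals (OrderID INT PRIMARY KEY, Total DECIMAL(12,2));
INSERT INTO #OrderTotals (OrderID, Total)
SELECT OrderID, SUM(Qty * Price) FROM dbo.OrderLines GROUP BY OrderID;
-- ... use #OrderTotals ...
DROP TABLE #OrderTotals;

-- Table-variable version: scoped to this batch/procedure and
-- cleaned up automatically when the scope ends:
DECLARE @OrderTotals TABLE (OrderID INT PRIMARY KEY, Total DECIMAL(12,2));
INSERT INTO @OrderTotals (OrderID, Total)
SELECT OrderID, SUM(Qty * Price) FROM dbo.OrderLines GROUP BY OrderID;
-- ... use @OrderTotals ...
```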

Large table variables use the tempdb database in much the same fashion as temp tables do, so avoid large table variables as well. Also, the optimizer does not maintain statistics on table variables, which can lead to poor execution plans, and queries that modify them do not run in parallel. For smaller data-sets, however, table variables can prove more beneficial than temp tables.

Every application is different – you should test temporary table and table variable usage for performance in your environment. Test with many concurrent users to determine which approach scales best for each situation. Also be aware that concurrency issues can arise when many temporary tables and table variables are competing for resources in the tempdb database.

Posted in SQL Server | 1 Comment »

Sharing the instance for OLTP and OLAP applications

Posted by decipherinfosys on February 2, 2007

I have seen client implementations where both the OLTP and the OLAP databases were placed on the same instance of SQL Server. Many issues can arise from such a configuration.

The very nature of OLTP and OLAP applications is totally different. OLTP applications rely on transaction-based processing and concentrate on fast data retrieval, data integrity, and concurrency, whereas OLAP applications revolve around large data volumes and heavy data crunching. The tuning as well as the maintenance routines differ just as much.

If the data-mart databases and the OLTP database are sharing the same instance, then these issues can arise:

a) The tempdb database gets shared – contention can arise, and the proper number of files and their placement needs to be worked out.
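The tempdb file work mentioned in (a) might look like the sketch below; the logical name tempdev is SQL Server's default, but the second file's name, path, and the sizes are placeholders for your environment:

```sql
-- Pre-size the existing tempdb data file so it does not autogrow
-- in small increments under load:
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB);

-- Add a second, equally sized data file (ideally on a separate
-- spindle/LUN) to reduce allocation-page contention:
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\tempdb\tempdev2.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);
```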

b) Plans will get evicted from the cache much sooner – which is very bad for the OLTP applications. More memory and careful configuration of the memory parameters will be needed, constrained by the memory resources on the box. If memory pressure appears, lock escalations can start occurring, leading to increased blocking. Memory pressure can also lead to increased re-compilation.

c) Parallelism settings for the instance will conflict, since OLTP query plans generally should not go parallel, whereas OLAP queries benefit from parallel execution.
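The conflict in (c) can be made concrete: max degree of parallelism is an instance-wide setting, so one value cannot suit both workloads. A sketch (the values and the fact-table name are illustrative only):

```sql
-- Instance-wide setting, typically tuned for the OLTP workload:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;  -- favors OLTP
RECONFIGURE;

-- OLAP-style queries on the same instance would then need a
-- per-query override to get parallelism back:
SELECT CustomerID, SUM(SalesAmount)
FROM dbo.FactSales          -- hypothetical data-mart fact table
GROUP BY CustomerID
OPTION (MAXDOP 8);
```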

d) If the two applications require separate editions of SQL Server, that is going to be an issue. Even when the editions match, the same conflict applies to upgrades, since some of the hot-fixes (or even service packs) might make more sense for the data-marts than for the OLTP apps – an example being SP4, which has a known issue related to AWE configuration.
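When auditing a shared instance for the edition and service-pack concerns in (d), the documented SERVERPROPERTY built-in gives a quick answer:

```sql
-- Report the edition, build, and service-pack level of this instance:
SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ServicePackLevel;  -- e.g. 'SP4'
```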

e) Maintenance tasks are very different for OLTP vs. OLAP and can cause resource constraints.

f) Failover and HA issues: memory requirements for the failover machine increase.

If you do not share one instance but instead run two separate instances on the same machine – one for OLTP and the other for the data-marts serving the OLAP application – then:

a) Maintenance tasks are very different for OLTP vs. OLAP and can cause resource constraints.

b) The two instances will need to share the resources on the machine; fine-tuning of the AWE/PAE/3GB switches needs to be done and closely monitored, and fencing of resources might have to be done as well.
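One common form of the resource fencing mentioned in (b) is capping each instance's buffer pool so the two do not fight for memory; the values below are placeholders, not recommendations:

```sql
-- Run against the OLTP instance:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 6144;  -- illustrative cap
RECONFIGURE;

-- Run against the OLAP instance, with a cap sized so the two
-- instances together fit within physical RAM with headroom for the OS:
-- EXEC sp_configure 'max server memory (MB)', 4096;
-- RECONFIGURE;
```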

c) Failover and HA issues: if using clustering, this can result in more hardware requirements since the failover box now needs to account for all the failover instances; if using log shipping, that will add to the resource utilization as well. The same goes for replication.

d) Hardware: If the file-layouts are not done properly on RAID/SAN, IO contentions can arise (this is regardless of same instance or multiple instances).

The only time that I can see this making sense is when the client uses OLTP during the day (and that load and processing is very light) and loads data only during the night (doing their OLAP work only at night), i.e. their OLTP and OLAP operations are totally segregated. It can still lead to issues (plan re-use goes out of the window, re-compilations, poor performance for OLTP, etc.), but because of the low load they could still live with that. The size of the OLTP and OLAP databases matters only for the maintenance routines, filegroups and file layouts, and the processing time taken by the ETL for OLAP in this case. What really raises contention is the concurrent workload, single vs. multiple instances, the overall box (CPU, memory, and external storage), maintenance routines, data-mart load, instance and DB settings, failover, and so on.

Posted in DB2 LUW, Oracle, SQL Server | Leave a Comment »