Systems Engineering and RDBMS

Archive for June, 2007

Different Kind of an acquisition

Posted by decipherinfosys on June 30, 2007

Bangalore-based Indian software company Infosys is all set to acquire the French IT consulting company Cap Gemini. You can read more about this potential takeover in the article on MarketWatch. This would be the first time ever that an Indian software company acquires a company with such deep roots in the USA.

Posted in News | Leave a Comment »

Different High Availability Options for SQL Server

Posted by decipherinfosys on June 29, 2007

High Availability means different things to different people in the IT industry. IMHO, high availability refers to keeping the downtime as low as possible during the hours of operation. In a utopian world where nothing goes wrong and everything works as planned, our SQL Server installation would be available 100% of the time that the end users need it. That uptime could be 24/7 for 365 days a year, 24/7 for 5 days of the week, or 9-6 on weekdays. As long as those business requirements for uptime are met, the availability solution is a high availability solution. Not every shop requires a 24/7 solution for 365 days or even 5 days of the week. Because of hardware failures, human errors and upgrades, there will be outages, so the achievable uptime might be 99.999%, 90% or any other number. The actual uptime requirement needs to be ascertained by the business users and the management.
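To put those percentages in perspective, here is the downtime budget that a few common uptime targets translate into for a 24x7x365 operation (straightforward arithmetic over the 525,600 minutes in a year):

99.999% uptime : about 5.3 minutes of downtime per year
99.9% uptime   : about 8.8 hours of downtime per year
99% uptime     : about 3.7 days of downtime per year
90% uptime     : about 36.5 days of downtime per year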

Only after that decision has been made should one start looking into which HA (High Availability) and DR (Disaster Recovery) solution to opt for. If that brainstorming is not done, you might end up spending too much when you could easily have opted for a low-cost solution; likewise, you might end up choosing a low-cost option when you actually needed better uptime for your database servers.

The failure types that bring uptime down can be categorized as:
• Natural causes – floods, earthquakes, fires, etc.
• Hardware failures – CPU, memory, network card, system board, etc.
• Network infrastructure
• Electrical power
• Upgrades of the OS or SQL Server
• Human errors and scheduled maintenance

Different options available for HA:

1) Failover Clustering:

This is an expensive option but also one that provides very good availability by keeping the downtime low in case one of the servers in the cluster fails. The failover to the passive node is pretty fast (anywhere between 1 and 5 minutes depending upon the state of the cluster). It does not protect the data that resides on the SAN – it provides server redundancy only.

This is very well documented and has been available for quite some time as the failover solution for MS SQL Server. Since it operates at the instance level, all the resources are available after the failover happens. There are reams of documentation on this and we have provided those links in the References section towards the end of the document.

The reason this is an expensive option is that, unlike Oracle RAC (Real Application Clusters), clustering for MS SQL Server follows a shared-nothing architecture, which means that at any given point in time only one node can own the resources. Your passive server (which will have the same hardware configuration) will therefore sit idle most of the time until disaster strikes and a failover happens from the primary server to the passive one. One way people offset this cost is by having an Active-Active cluster, which is essentially two Active-Passive cluster combinations. It does drive the processor and memory requirements a bit higher on each server, since in the event of a failure the surviving server will house two instances; however, this option works out well for a lot of shops that have more than one critical application with SQL Server as the back-end.
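As a side note, you can quickly check from T-SQL whether an instance is clustered and which physical node currently owns it (a minimal sketch using the SERVERPROPERTY() function):

SELECT SERVERPROPERTY('IsClustered') AS is_clustered,                  -- 1 = clustered instance, 0 = stand-alone
       SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS current_node; -- physical node that currently owns the instance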

Read more about it in our previous blog posts – post1, post2, post3 and post4.

2) Database Mirroring:

This is a new feature of SQL Server 2005 that became officially supported with SP1. It works at the database level and allows you to maintain a duplicate of the production database (in real time, if you choose synchronous mode) on another server. If the primary server goes down for some reason, the system can fail over to the mirrored database instance.
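To give a flavor of the configuration involved (a rough sketch only; it assumes the mirror copy has already been restored WITH NORECOVERY and the mirroring endpoints exist, and the database name, server names and port below are placeholders):

-- Run on the mirror server first: point it at the principal
ALTER DATABASE Sales SET PARTNER = 'TCP://principal_server.mydomain.com:5022';

-- Then run on the principal server: point it at the mirror
ALTER DATABASE Sales SET PARTNER = 'TCP://mirror_server.mydomain.com:5022';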

This is less costly than the failover clustering option and can also be combined with failover clustering to provide a very robust solution. When combined with failover clustering, the local failover is done by the clustering solution and the site failover is automated using database mirroring.

The risk is that this is a pretty new feature and is still not 100% proven in the field.

3) Database Snapshots:

This is also a new feature in SQL Server 2005 and it too operates at the database level. It can be used to create snapshots, which are static, read-only, point-in-time copies of your production database. By using this option, one can off-load reporting queries from the live production tables to a snapshot (or, when combined with database mirroring, to a snapshot created on the mirror server), which in turn helps both the performance and the availability of your production server.
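Creating a snapshot is a single DDL statement; a minimal sketch (the database, logical file and path names are placeholders):

CREATE DATABASE Sales_Snapshot_20070629
ON (NAME = Sales_Data,   -- logical name of the source database's data file
    FILENAME = 'D:\SQLServer\Snapshots\Sales_Data_20070629.ss')
AS SNAPSHOT OF Sales;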

4) Replication:

Even though some shops use this as their HA mechanism, we do not recommend it. Replication is a time-tested option, but not for HA – it involves a lot of manual steps in order to provide a good HA solution.

5) Log Shipping:

This is a very low cost option but also one that can cause data loss. However, some shops are OK with that since the synchronization interval is configurable. Database mirroring is a much better option than log shipping and, as it becomes more robust, we believe it will be used more and more in place of log shipping.

Log shipping can be used in conjunction with failover clustering to provide good site-failure redundancy at a low cost. There will be manual steps for recovery, though, when a site failure occurs.
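Conceptually, log shipping is just a scheduled backup, copy and restore of the transaction log; a bare-bones sketch of the two ends (the database name and paths are placeholders, and the standby database is assumed to have been initialized from a full backup WITH NORECOVERY or WITH STANDBY):

-- On the primary server, on a schedule:
BACKUP LOG Sales TO DISK = '\\standby_server\logship\Sales_2300.trn';

-- On the standby server, after the file has been copied over:
RESTORE LOG Sales
FROM DISK = '\\standby_server\logship\Sales_2300.trn'
WITH STANDBY = 'D:\SQLServer\Data\Sales_undo.dat';   -- keeps the standby readable between restores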

6) Back-ups and Restore:

The old backup and restore technique. This is not really an HA solution. It is necessary to have a good set of backup and restore policies and procedures in place to ensure that your data is safe, but shops that are OK with some data loss and want the cheapest option available for their non-critical systems do sometimes opt for just this.
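For completeness, a minimal sketch of what this looks like (the database name and paths are placeholders):

-- Taken regularly on the production server
BACKUP DATABASE Sales TO DISK = 'E:\Backups\Sales_full.bak' WITH INIT;
BACKUP LOG Sales TO DISK = 'E:\Backups\Sales_log.trn';

-- After a disaster, on the replacement server
RESTORE DATABASE Sales FROM DISK = 'E:\Backups\Sales_full.bak' WITH NORECOVERY;
RESTORE LOG Sales FROM DISK = 'E:\Backups\Sales_log.trn' WITH RECOVERY;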

The bottom line is that you need to evaluate the criticality of the application for which you are putting an HA plan in place and then choose from the options mentioned above.

Posted in SQL Server | Leave a Comment »

Moving Tempdb files

Posted by decipherinfosys on June 28, 2007

Tempdb is one of the system databases in MS SQL Server. It is used by all the users connected to an instance to hold temporary objects such as global and local temporary tables and table variables, as well as interim data for sorting and grouping operations. In SQL Server 2005, it also holds the row versions of data affected by newly introduced features such as online index operations.

It gets re-created every time the SQL Server instance is started. Normally, a temporary table or any other object exists in tempdb only for the duration of its scope, i.e. once a procedure finishes executing, any temporary tables or table variables it created are dropped from tempdb. Global temporary tables, on the other hand, are removed from tempdb when the last session accessing the table is closed. To see how temporary tables work, you can read our previous blog post on this topic. Another post covers replacing temp tables with table variables and also the scope differences for table variables.
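As a quick illustration of those scoping rules (a minimal sketch; the table names are arbitrary):

-- Local temporary table: visible only to the session/scope that created it
CREATE TABLE #order_staging (order_id INT, order_total DECIMAL(12,2));

-- Global temporary table: visible to all sessions and dropped when the last
-- session referencing it closes
CREATE TABLE ##order_staging (order_id INT, order_total DECIMAL(12,2));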

In short, when an instance is shut down there are no active connections and hence there is nothing of value in tempdb, so there is no need to back it up. In fact, backup and restore operations are not allowed on tempdb.

With this introduction to tempdb, let us see how we can move tempdb from its default location. Sometimes it is necessary to resize tempdb and move it to a separate disk for performance improvements. In one of our previous blog posts, we showed how to move a datafile of the temporary tablespace to a different location in Oracle. In this article, we will show how to move the tempdb database in MS SQL Server.

Connect to an instance either using management studio or query analyzer. Let us first check physical location and logical name of the existing tempdb files.

-- For SQL Server 2005
USE MASTER
GO
SELECT name, physical_name, size, max_size
FROM sys.master_files
WHERE database_id = DB_ID(N'tempdb');
GO

OR

-- For SQL Server 2000. This query can be used in SQL Server 2005 as well.
USE TEMPDB
GO
SELECT name, filename, size, maxsize
FROM sysfiles
GO

The following is the output from the first query (the result set has been trimmed to fit the page).

name physical_name size max_size
——- ———————— —– ——-
tempdev C:\MSSQL\DATA\tempdb.mdf 1024 -1
templog C:\MSSQL\DATA\templog.ldf 64 -1

In the above result set:
• name is the logical name of the file.
• physical_name is the current location of the data or log file (shortened here for display).
• size is the file size in 8KB pages.
• max_size indicates the maximum size of the file; -1 means the file is allowed to grow until the disk is full.

Now issue the following commands to change the size of the data and log files and to specify the new path for them. If you do not want to change the file size, the SIZE parameter can be omitted. Make sure that you change the FILENAME parameter value appropriately before executing the commands.

USE MASTER
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev,
SIZE = 200MB,
FILENAME = 'D:\SQLServer\Data\tempdb.mdf')
GO

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog,
SIZE = 500MB,
FILENAME = 'D:\SQLServer\Data\templog.ldf')
GO

Even though the command executes successfully, the change is not immediate. We need to stop and restart the SQL Server service; the next time the instance starts, tempdb will be created using the files in the new location.

Once the instance is up and running, we can delete the old tempdb files from their original location. Let us verify that the change has taken place by issuing the following command. The new values for location and size will be displayed along with other related values.

USE tempdb
GO
sp_helpfile
GO

Appropriate sizing and placement of tempdb will definitely help in resolving tempdb performance bottlenecks. In our future posts, we will discuss how to avoid performance issues related to tempdb – we will look at hardware optimization and placement of tempdb, number of files created for tempdb, proper file sizes and code optimizations.

Posted in SQL Server | 1 Comment »

Re-Sizing or Switching UNDO tablespace

Posted by decipherinfosys on June 27, 2007

If you have a large database, then during purge operations or large batch transactions that involve a lot of undo work, the undo tablespace will grow rapidly and occupy a large amount of space on the file system. To avoid the ORA-1651 error (unable to extend undo segment by <n> in tablespace <undo_tablespace>) that this growth can trigger, it is a very common practice to add a set of data files to the undo tablespace with AUTOEXTEND ON MAXSIZE UNLIMITED. If you later try to re-size such a data file downwards, you may encounter the ORA-3297 error: file contains used data beyond the requested RESIZE value.

The solution in this situation is to create a new, smaller UNDO tablespace and switch over from your current UNDO tablespace to the new one.

Example:

/* Create new undo tablespace with smaller size */
SQL> create undo tablespace undotbs2 datafile '/u01/oradata/decipher/undotbs2_01.dbf' size 1024m autoextend on next 256m maxsize 10240m;

/* Set new tablespace as undo_tablespace */
SQL> alter system set undo_tablespace= UNDOTBS2 scope=both;

If the parameter value for undo_tablespace is set to '' (two single quotes), the current undo tablespace is switched out without switching in any other undo tablespace. This is normally done to un-assign an undo tablespace when you want to revert to manual undo management mode.

/* Drop the old tablespace */
SQL> drop tablespace UNDOTBS1 including contents;

Dropping the old undo tablespace may give the ORA-30013 error: undo tablespace '%s' is currently in use. This error indicates that you need to wait for the undo tablespace to become OFFLINE. Initially, on switching to the new UNDO tablespace, the old UNDO tablespace goes into PENDING OFFLINE status until the active, incomplete transactions have finished.
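A simple way to see when it is safe to issue the DROP is to keep checking the status of the undo segments in the old tablespace (a minimal sketch using the DBA_ROLLBACK_SEGS dictionary view; once all segments for the old tablespace report OFFLINE, the drop should go through):

SQL> select segment_name, status from dba_rollback_segs where tablespace_name = 'UNDOTBS1';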

A few other important points:

•    The value of undo_retention also plays a role in the growth of the undo tablespace. If there is no free undo space for a new transaction, Oracle will re-use space by first overwriting EXPIRED undo and then, ignoring undo_retention, overwriting UNEXPIRED (but INACTIVE) undo, provided the undo tablespace was not created with the RETENTION GUARANTEE option. However, if the data files of the undo tablespace are auto-extensible, Oracle will extend them rather than re-use the space; in that scenario a new transaction allocates fresh space and your undo tablespace keeps growing (see the V$UNDOSTAT sketch after this list for a way to gauge actual undo usage).
•    Is a large file size really bad? The overhead of larger files/tablespaces can impact the database and the OS, and there are other costs as well, e.g. backups will take longer if you are not using RMAN. As far as undo management goes, there is no performance impact just because the file/tablespace is bigger.
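To gauge how much undo your workload actually generates (and hence how big the undo tablespace and undo_retention really need to be), you can sample the V$UNDOSTAT view, which records undo usage in ten-minute buckets; a minimal sketch:

SQL> show parameter undo_retention
SQL> select begin_time, end_time, undoblks, txncount, maxquerylen from v$undostat where rownum <= 10;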

Posted in Oracle | 3 Comments »

Best Places to work in IT?

Posted by decipherinfosys on June 27, 2007

ComputerWorld had an excellent article on the best places to work in the IT arena. The key thing to look for is a company that shows up in several different categories, and here is the link for the overall rankings of the companies. Check whether your company’s name is in the list and, if not, think about what you can do to get there in the following years – after all, it is the people, the work culture and the management that are the key assets of an IT company.

Posted in News | Leave a Comment »

Switching temp tablespace and ORA-xxxxx errors

Posted by decipherinfosys on June 26, 2007

You may need to move the datafile of your default temporary tablespace to a dedicated disk to improve I/O as part of a performance tuning exercise, but it is important to note that you can neither take the default temporary tablespace of the database offline nor drop it.

SQL> alter tablespace temp offline;
alter tablespace temp offline
*
ERROR at line 1:
ORA-12905: default temporary tablespace cannot be brought OFFLINE

SQL> drop tablespace temp including contents and datafiles;
drop tablespace temp including contents and datafiles
*
ERROR at line 1:
ORA-12906: cannot drop default temporary tablespace

You need to follow the steps as mentioned below to achieve this:

SQL> create tablespace TEMP2
2  datafile '/export/home1/ora900/oradata/V900/temp2_01.dbf'
3  size 100k
4  TEMPORARY;

Tablespace created.

SQL> alter database default temporary tablespace TEMP2;
Database altered.

SQL> drop tablespace temp including contents and datafiles;
Tablespace dropped.

Also, if you drop the temporary tablespace of a user and that tablespace is not the DEFAULT TEMPORARY TABLESPACE for the database, the user's temporary tablespace will not automatically switch to the DEFAULT TEMPORARY TABLESPACE. You will run into "ORA-00959: tablespace <tablespace_name> does not exist" when doing an operation that requires a disk sort, for example a CREATE INDEX statement.

Let's follow that up with an example. Say you have a database with TEMP as the default temporary tablespace and TEMP2 as a non-default temporary tablespace that is assigned to user SCOTT.

SQL> select TEMPORARY_TABLESPACE from dba_users where username='SCOTT';

TEMPORARY_TABLESPACE
——————————
TEMP2

SQL> drop tablespace TEMP2 including contents and datafiles;
Tablespace dropped.

SQL> select TEMPORARY_TABLESPACE from dba_users where username='SCOTT';

TEMPORARY_TABLESPACE
——————————
TEMP2

You can fix this by explicitly assigning new temporary tablespace to the user SCOTT.

SQL> alter user scott temporary tablespace TEMP;
User altered.

SQL> select TEMPORARY_TABLESPACE from dba_users where username='SCOTT';
TEMPORARY_TABLESPACE
——————————
TEMP

Posted in Oracle | Leave a Comment »

DBMS_FILE_TRANSFER

Posted by decipherinfosys on June 25, 2007

Traditionally, DBAs depend upon operating system commands to copy an Oracle database file within a database or to transfer a file between databases. With the introduction of DBMS_FILE_TRANSFER, they no longer need to depend upon the operating system for transferring Oracle database files within or across databases, and this can prove very useful when using the transportable tablespace feature or an Automatic Storage Management (ASM) disk group as the source or destination of a file transfer.

We will demonstrate here how to use DBMS_FILE_TRANSFER to copy an Oracle database file within a database by moving a tablespace's datafiles.

Prerequisites:

You need to create directory objects pointing to the source and destination directories.

SQL> CREATE DIRECTORY source_dir AS '/u01/oradata/decipher/';

SQL> CREATE DIRECTORY dest_dir AS '/u02/oradata/decipher/';

Grant the read/write privileges to the user who will run the COPY_FILE procedure.

SQL> GRANT EXECUTE ON DBMS_FILE_TRANSFER TO decipher;

SQL> GRANT READ ON DIRECTORY source_dir TO decipher;

SQL> GRANT WRITE ON DIRECTORY dest_dir TO decipher;

SQL>conn decipher/decipher@decipher

Step 1: Take the tablespace offline

SQL>ALTER TABLESPACE decipher_data OFFLINE NORMAL;

Note: the database must be running in archivelog mode.

Step 2: Move the datafiles using DBMS_FILE_TRANSFER

BEGIN
DBMS_FILE_TRANSFER.COPY_FILE(
source_directory_object => 'SOURCE_DIR',
source_file_name => 'decipher_data_001.ora',
destination_directory_object => 'DEST_DIR',
destination_file_name => 'decipher_data_001.ora');
END;
/

Few important points to keep in mind:

  • On UNIX systems, the owner of a file created by the DBMS_FILE_TRANSFER package is the owner of the shadow process running the instance, normally the oracle account. A file created using DBMS_FILE_TRANSFER is readable and writable by all database processes, but non-privileged OS users who need to read or write such a file directly may need access granted by a system administrator. You need to ensure that such privileges are in place if those users will work with the file.
  • Do not use the DBMS_FILE_TRANSFER package to copy or transfer a file that is being modified by a database because doing so can result in an inconsistent file.
  • DBMS_FILE_TRANSFER can also be invoked as a remote procedure call. A remote procedure call lets you copy a file within a database even when you are connected to a different database. For example, you can make a copy of a file on database decipher, even if you are connected to database healthcare, by executing the following remote procedure call:

connect healthcare/healthcare@healthcare

BEGIN
DBMS_FILE_TRANSFER.COPY_FILE@decipher(
source_directory_object => 'SOURCE_DIR',
source_file_name => 'decipher_data_001.ora',
destination_directory_object => 'DEST_DIR',
destination_file_name => 'decipher_data_001.ora');
END;
/

  • DBMS_FILE_TRANSFER along with remote procedure calls enables you to copy a file between two databases, even if you are not connected to either database. For example, you can connect to the database healthcare and then transfer a file from database decipher to database finance or vice versa using either DBMS_FILE_TRANSFER.PUT_FILE or DBMS_FILE_TRANSFER.GET_FILE (a sketch of GET_FILE follows below).
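For example, a rough sketch of GET_FILE (it assumes a database link named decipher exists from healthcare to decipher and that the directory objects already exist on the respective databases; file and object names are just for illustration):

connect healthcare/healthcare@healthcare

BEGIN
DBMS_FILE_TRANSFER.GET_FILE(
source_directory_object => 'SOURCE_DIR',       -- directory object on the remote (source) database
source_file_name => 'decipher_data_001.ora',
source_database => 'decipher',                 -- database link to the source database
destination_directory_object => 'DEST_DIR',    -- directory object on the local database
destination_file_name => 'decipher_data_001.ora');
END;
/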

Step 3: Rename the filenames within the database

SQL>ALTER TABLESPACE decipher_data
RENAME DATAFILE '/u01/oradata/decipher/decipher_data_001.ora',
'/u01/oradata/decipher/decipher_data_002.ora'
TO '/u02/oradata/decipher/decipher_data_001.ora',
'/u02/oradata/decipher/decipher_data_002.ora';

Step 4: Bring the tablespace back online

SQL>ALTER TABLESPACE decipher_data ONLINE;

Posted in Oracle | Leave a Comment »

DB2 Viper 2

Posted by decipherinfosys on June 24, 2007

IBM has released a public beta for its next release of DB2 9 for LUW. You can read more about this release on their site:

https://www14.software.ibm.com/iwm/web/cc/earlyprograms/data/viper2_open/index.shtml

DB2 expert Chris Eaton has also blogged about the upcoming features in this release, and here is the IBM link for the new features:

https://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp

Look under “New Features and Functionality”.

Posted in DB2 LUW | Leave a Comment »

Cumulative Update Package 2 for SQL Server 2005 SP2

Posted by decipherinfosys on June 24, 2007

The cumulative update package 2 for SQL Server 2005 SP2 has been released by MSFT. You can obtain more information on it from this KB article:

http://support.microsoft.com/kb/936305

This will update your install to build 3175 and it includes all the updates made since the release of SP2.

Posted in SQL Server | Leave a Comment »

Fill factor value for Indexes

Posted by decipherinfosys on June 23, 2007

Whenever you create a new index in SQL Server or rebuild an existing index, you can specify several optional parameters – one of them is the fill factor. This parameter controls how full the leaf-level pages of the index are when the index is created or rebuilt. A fill factor value of 100 (or 0) means that each leaf-level page is filled 100%, while a fill factor of 90 means that each leaf-level page is filled 90%, leaving 10% of each page empty to accommodate future index expansion as DML statements occur. The default is 0, which is the same as 100.
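For reference, the fill factor is specified per index at create or rebuild time; a minimal sketch (the table, column and index names below are just placeholders):

-- Leave 10% free space in each leaf-level page
CREATE NONCLUSTERED INDEX idx_orders_customer_id
ON dbo.orders (customer_id)
WITH (FILLFACTOR = 90);

-- Or change it when rebuilding an existing index (SQL Server 2005 syntax)
ALTER INDEX idx_orders_customer_id ON dbo.orders
REBUILD WITH (FILLFACTOR = 90);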

If, in your environment, you have tables with a lot of deletes, inserts and updates against them, you should consider choosing a non-default value for the fill factor. The reason is that with the default value, when a new row needs to be added to a full page, the engine has to do a page split and move half of the rows from that page to a new page in order to make room for the record. Page splits are a resource-intensive operation and can cause performance issues in a highly transactional system as well as in data marts. They also cause increased fragmentation and thus more I/O.

Suppose you add an index on a table with the default fill factor. When the index gets created, the engine places it on contiguous physical pages, allowing for optimal I/O since the data can be read sequentially rather than randomly. As DML operations start happening on the table, page splits force the engine to allocate new pages elsewhere on the disk; these are not contiguous, so random I/O is used instead of sequential I/O, which is a costlier operation.
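One way to see whether page splits have actually fragmented an index is the sys.dm_db_index_physical_stats DMV that ships with SQL Server 2005; a minimal sketch (the table name is a placeholder):

-- Fragmentation and page fullness for all indexes on the table
SELECT index_id, avg_fragmentation_in_percent, avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.orders'), NULL, NULL, 'DETAILED');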

You can ascertain the right fill factor value for your environment by benchmarking and playing with different values. However, you can use this as a rule of thumb for choosing the fill factor value in your environment:

1) For low DML operation tables – tables that are mostly used for reads only: Keep the default fill factor of 100% or 0%.

2) For tables that have a lot of DML operations against them (many more writes than reads), go with a value of 80%.

3) For tables that fall in between or are seasonal tables (i.e. they get heavily utilized only in certain times of the year), go with a value of 90%.

Choose carefully and judiciously: if you choose too low a fill factor value, page splits will be reduced, but the number of pages that SQL Server has to read for queries will increase, and that will have an impact on performance. It also affects the data buffer: as those pages move from disk to the buffer cache, the empty space in them occupies the buffer as well, which means fewer pages fit in the buffer at the same time, and that too can hurt performance.

Posted in SQL Server | Leave a Comment »