Systems Engineering and RDBMS

Archive for June, 2009

Changing the database to be Read/Write

Posted by decipherinfosys on June 29, 2009

A few days back, we had written a post on how to change a database to read only mode and the scenarios in which you would do that.  You can read more on that here.  A reader asked how to change it back to a read/write DB.  It's pretty simple: use the same set of commands as we had shown in that post and just change READ_ONLY to READ_WRITE.  Here is an example:

USE MASTER
GO
/*Mark it as Single User*/
ALTER DATABASE DECIPHERTEST SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
/*Mark the database as Read Write*/
ALTER DATABASE DECIPHERTEST SET READ_WRITE
GO
/*Mark it back to Multi User now*/
ALTER DATABASE DECIPHERTEST SET MULTI_USER
GO
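To confirm the change took effect, you can check the database's updateability property (DECIPHERTEST being the sample database name used above):

```sql
/* Should return READ_WRITE once the ALTER above has been applied */
SELECT DATABASEPROPERTYEX('DECIPHERTEST', 'Updateability') AS Updateability
GO
```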

Posted in SQL Server | Leave a Comment »

Exporting Windows Event Viewer data for compliance

Posted by decipherinfosys on June 26, 2009

Companies that are subject to regulatory compliance are often required to store and archive the logs from various parts of their infrastructure, such as applications, firewalls, VPNs and servers. Most network devices support syslog, and if you have a syslog server in your environment you should be able to view, collect and archive the syslog data. Kiwi Syslog Server is one of the best tools available in the market.

Windows servers do not have a syslog client by default, and all the system related warnings, alerts and information are stored and displayed in the Windows Event Viewer. Event Viewer allows exporting its data locally in different formats for review. However, in an enterprise environment, no built-in tool exists to automate the collection of event viewer data from a centralized location.

One great solution for this is a piece of software called ‘winlogd’. Winlogd converts the Windows event viewer logs into syslog messages and sends them to the syslog server. It installs itself as a Windows service and requires a registry edit to specify the syslog server IP.
It can be easily pushed to all the servers in an enterprise environment using a .reg file.
Once the syslog server can receive the data from servers, it can be viewed and archived for compliance purposes.
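For illustration only, and with made-up key path and value names (winlogd's actual registry settings should be taken from its documentation at the link below), such a .reg file might look something like this:

```reg
Windows Registry Editor Version 5.00

; Hypothetical key path and value names -- verify against the winlogd docs
[HKEY_LOCAL_MACHINE\SOFTWARE\winlogd]
"SyslogServer"="10.1.1.50"
"SyslogPort"=dword:00000202
```

(dword 0x202 is 514 decimal, the standard syslog UDP port.)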

One limitation of winlogd is that it does not allow filtering of the Windows event viewer logs. So, all the data going to the Windows Event Viewer (including ‘information’ events) will be sent to the syslog server. If you have many chatty servers that generate a lot of informational event logs, this can produce tons of syslog data and network traffic. I’m hoping that the winlogd community will address this in a future release. Nevertheless, winlogd is a great tool!

More information on ‘winlogd’ can be found here:  http://edoceo.com/creo/winlogd

Posted in Windows | Leave a Comment »

Guide to SQL Server Consolidation

Posted by decipherinfosys on June 25, 2009

One of the most common questions in our consulting engagements concerns consolidation of SQL Server environments.  There are a couple of ways to go about it, and the right choice varies from client to client depending upon the type of applications that SQL Server is being used for, the versions and operating systems involved, the existing and proposed hardware infrastructure, whether the proper IT team is in place to support it, etc.

Here is an excellent whitepaper from Microsoft MVPs that you can download in order to learn more about the different consolidation techniques – here.

Posted in SQL Server | Leave a Comment »

Combining multiple trace files into a single file using trcsess

Posted by decipherinfosys on June 24, 2009

We are all aware that tkprof and the trace facility are two of the most basic and important tools for performance tuning in an Oracle database. We can generate a trace for the entire database or for a specific session. We have covered them in a previous blog post – here. We have also covered event level tracing here.

A problem occurs when tracing spans multiple trace files: how can we collect the data from all of them? Oracle provides a utility called ‘trcsess’ which can combine multiple trace files into a single trace file. Once all the files are combined into one, we can run tkprof or Trace Analyzer on that file to collect more meaningful data.

The trcsess utility provides different criteria on which we can combine information from the trace files into a single trace file. Following is its basic syntax.

trcsess [output=output_file_name]
        [session=session_id]
        [clientid=client_id]
        [service=service_name]
        [action=action_name]
        [module=module_name]
        [trace_files]

output specifies the output file name; the rest are the criteria by which the trace files are combined. We can combine multiple trace files based on a session, a client id, a service name, a specific action or a module name, and all of this information is available in the trace files themselves. Finally, we specify the trace files we would like to combine. Here is an example:

trcsess output=ora_jun23_lvl12.trc service=ORCL orcl_ora_1024490_TEST.trc orcl_ora_1024492_TEST.trc

In our example, we are combining two trace files based on the service, and the combined output is created in the ora_jun23_lvl12.trc file. As mentioned earlier, we can then run tkprof on this single file to get all the information for the traced session.
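As a follow-up sketch (the output file name and the sort options are just illustrative choices, not the only sensible ones), formatting the combined file with tkprof might look like this:

```shell
# sys=no excludes recursive SYS statements; the sort options list the
# statements with the highest parse, execute and fetch elapsed times first
tkprof ora_jun23_lvl12.trc ora_jun23_lvl12.prf sys=no sort=prsela,exeela,fchela
```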

Resources:

  • Oracle 10g Performance Tuning guide – here

Posted in Oracle | 1 Comment »

Mutating table/trigger error and how to resolve it

Posted by decipherinfosys on June 22, 2009

Most of us who have worked with Oracle have encountered ORA-04091 (table xxx is mutating, trigger/function may not see it) at some time or the other during development.  In this blog post, we will cover why this error occurs and how we can resolve it using different methodologies.

A mutating table error normally occurs when we perform a DML operation and, from a trigger fired by that operation, try to select the affected records: we are selecting from the very table that owns the trigger. This creates inconsistency and Oracle throws the mutating table error. Let us take a simple scenario in which we want to know the total number of invalid objects after any object’s status is updated to ‘INVALID’. First let us create a table and then the trigger.

SQL> CREATE TABLE TEST
2  AS SELECT * FROM USER_OBJECTS;

Table created.

CREATE OR REPLACE TRIGGER TUA_TEST
AFTER UPDATE OF STATUS ON TEST
FOR EACH ROW
DECLARE
  v_count NUMBER;
BEGIN
  SELECT COUNT(*)
    INTO v_count
    FROM TEST
   WHERE status = 'INVALID';

  dbms_output.put_line('Total Invalid Objects are ' || v_count);
END;
/

Now if we try to change the status of any object to ‘INVALID’, we will run into the mutating table error, since we are updating a record while the trigger is trying to count the records in ‘INVALID’ status from the same table.

SQL> update test
2  set status = 'INVALID'
3  where object_name = 'TEST1';
update test
*
ERROR at line 1:
ORA-04091: table SCOTT.TEST is mutating, trigger/function may not see it

Having said that, there are different ways we can handle mutating table errors. Let us take them one scenario at a time.

The first is to create a statement level trigger instead of a row level one. If we omit the ‘FOR EACH ROW’ clause from the above trigger, it becomes a statement level trigger. Let us create a new statement level trigger.

CREATE OR REPLACE TRIGGER TUA_TEST
AFTER UPDATE OF STATUS ON TEST
DECLARE
  v_count NUMBER;
BEGIN
  SELECT COUNT(*)
    INTO v_count
    FROM TEST
   WHERE status = 'INVALID';

  dbms_output.put_line('Total Invalid Objects are ' || v_count);
END;
/

Now let us fire the same update statement again.

SQL> UPDATE TEST
2     SET status = 'INVALID'
3   WHERE object_name = 'TEST1';

Total Invalid Objects are 6

1 row updated.

With the statement level trigger defined, the update went through fine and displayed the total number of invalid objects.

Why is this a problem when we use the ‘FOR EACH ROW’ clause? As per the Oracle documentation, the session that issues a triggering statement on the table cannot query the same table, so that the trigger cannot see inconsistent data. This restriction applies to all row level triggers, and hence we run into the mutating table error.

The second way of dealing with the mutating table issue is to declare the row level trigger as an autonomous transaction, so that it does not run in the same scope as the session issuing the DML statement. Following is the row level trigger defined with PRAGMA AUTONOMOUS_TRANSACTION.

CREATE OR REPLACE TRIGGER TUA_TEST
AFTER UPDATE OF STATUS ON TEST
FOR EACH ROW
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
  v_count NUMBER;
BEGIN
  SELECT COUNT(*)
    INTO v_count
    FROM TEST
   WHERE status = 'INVALID';

  dbms_output.put_line('Total Invalid Objects are ' || v_count);
END;
/

Now let us issue the update statement again and observe the results.

SQL> UPDATE TEST
2     SET status = 'INVALID'
3   WHERE object_name = 'TEST1';

Total Invalid Objects are 5

1 row updated.

If you look closely at the output, you will see only 5 objects shown in invalid status, while the statement level trigger showed 6. Let us try to update multiple objects at the same time.

SQL> UPDATE TEST
2     SET status = 'INVALID'
3   WHERE object_name IN ('T1','T2');

Total Invalid Objects are 6
Total Invalid Objects are 6

2 rows updated.

By defining the row level trigger as an autonomous transaction, we got rid of the mutating table error, but the result is not correct: the autonomous transaction runs independently of the triggering session, so it cannot see the uncommitted update, and the latest changes are not reflected in our result set, as opposed to the statement level trigger. One has to be very careful when using this approach.

In version 11g, Oracle made this much easier with the introduction of compound triggers. We have covered compound triggers in a previous blog post. Let us see how a compound trigger can resolve the mutating table error in this case. Let’s create a compound trigger first:

CREATE OR REPLACE TRIGGER TEST_TRIG_COMPOUND
FOR UPDATE ON TEST
COMPOUND TRIGGER

  /* Declaration section */
  v_count NUMBER;

  AFTER EACH ROW IS
  BEGIN
    dbms_output.put_line('Update is done');
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    SELECT COUNT(*)
      INTO v_count
      FROM TEST
     WHERE status = 'INVALID';

    dbms_output.put_line('Total Invalid Objects are ' || v_count);
  END AFTER STATEMENT;

END TEST_TRIG_COMPOUND;
/

Now let us check how many objects are invalid in the test table.

SQL> select count(*) from test where status = 'INVALID';

COUNT(*)
----------
6

Here is the update statement followed by an output.

SQL> UPDATE TEST
2     SET status = 'INVALID'
3   WHERE object_name = 'T2';

Update is done
Total Invalid Objects are 7

1 row updated.

Here we get the correct result without the mutating table error, which is one very good advantage of compound triggers. There are other ways to resolve the mutating table error as well, for example by recording the changed rows in a temporary holding structure, but we have discussed the common ones in this blog post.
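As a rough sketch of the pre-11g state-holding approach alluded to above (all names here are illustrative; a compound trigger packages up the same idea), the row level trigger only records state in a package variable and an after-statement trigger does the query:

```sql
-- Illustrative pre-11g workaround: package state + two triggers
CREATE OR REPLACE PACKAGE trg_state AS
  g_rows_updated NUMBER := 0;
END trg_state;
/

CREATE OR REPLACE TRIGGER tua_test_row
AFTER UPDATE OF status ON test
FOR EACH ROW
BEGIN
  -- Only record state here; querying TEST at row level would mutate
  trg_state.g_rows_updated := trg_state.g_rows_updated + 1;
END;
/

CREATE OR REPLACE TRIGGER tua_test_stmt
AFTER UPDATE OF status ON test
DECLARE
  v_count NUMBER;
BEGIN
  -- Statement level: safe to query TEST
  SELECT COUNT(*) INTO v_count FROM test WHERE status = 'INVALID';
  dbms_output.put_line('Total Invalid Objects are ' || v_count
                       || ' (' || trg_state.g_rows_updated || ' rows updated)');
  trg_state.g_rows_updated := 0;  -- reset for the next statement
END;
/
```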

Posted in Oracle | 28 Comments »

Google Fusion Tables

Posted by decipherinfosys on June 21, 2009

Google introduced their Fusion Tables approach towards data management in the cloud.  You can read more on this from their Research Blog here:

http://googleresearch.blogspot.com/2009/06/google-fusion-tables.html

And here is the infoworld article on the same topic:

http://www.infoworld.com/d/data-management/google-tests-revolutionary-cloud-based-database-290?source=rss_infoworld_news

Posted in News, Technology | Leave a Comment »

Making a database Read Only

Posted by decipherinfosys on June 20, 2009

We were in the process of migrating the data from a legacy system to a newly architected system for a client of ours.  For the duration of the migration, they needed the data set to be available for reporting purposes, but of course no data should be created in the legacy system during that time.  So, one of the obvious choices was to make the database read only.  In SQL Server, there is an option in the “ALTER DATABASE” command to achieve that: one first has to put the database into single user mode, then mark it as read only, and then change it back to multi-user mode.

Here is a sample script:

USE MASTER
GO
/*Mark it as Single User*/
ALTER DATABASE DECIPHERTEST SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
/*Mark the database as Read Only*/
ALTER DATABASE DECIPHERTEST SET READ_ONLY
GO
/*Mark it back to Multi User now*/
ALTER DATABASE DECIPHERTEST SET MULTI_USER
GO

References:

  • BOL entry for the ALTER DATABASE command – here.

Posted in SQL Server | 1 Comment »

Why do the IP address and subnet mask matter?

Posted by decipherinfosys on June 19, 2009

We had an interesting issue that relates to networking and SQL heartbeat with one of our clients that we would like to share with our readers.

One of our clients has their servers distributed across three data centers in the US; let us call them A, B and C. The sites are connected to each other by VPN tunnels using Cisco ASA firewalls. Their IT staff access the entire infrastructure through one data center (A), which has remote access VPN enabled on its Cisco ASA. They recently reported that the servers of a SQL cluster located in data center B could not be accessed across the remote access VPN, even though those servers were accessible from data centers A and C and from within B itself, and other servers in data center B were reachable through the remote access VPN.

As their remote VPN connection terminates at the Cisco ASA in data center A, various teams were involved in the troubleshooting. The following were checked to identify the cause of the issue:

1. It was ensured that the remote access VPN subnet (10.1.1.x) was added to the crypto map of the site to site VPN configuration.
2. We made sure all the servers were connected to the same switch and resided on the same VLAN.
3. There was no access list (ACL), firewall, IPS or F5 configuration blocking traffic to the database servers from the remote access VPN subnet (10.1.1.x).
4. We ensured all the servers had the same default gateway configured.

During packet tracing, we found that the traffic reached the ASA at data center A and also reached data center B, but it was not going back from the servers to the remote access VPN subnet. Suspecting something wrong on the SQL cluster servers, we started looking in depth at their network configuration. We identified that the SQL Server heartbeat NIC was configured with an IP address of 10.0.0.x and a subnet mask of 255.0.0.0 (/8), which allows 16,777,214 hosts where only two IP addresses are needed for the heartbeat. As a result, all the incoming traffic from the remote access VPN was forwarded to the heartbeat NIC on the SQL servers and never went back to the remote access VPN ASA.

A subnet mask of 255.255.255.252 (/30) on the SQL servers' heartbeat network would have allowed only the two IP addresses that are needed for the heartbeat on the SQL cluster. As it is a production SQL network, we did not want to change the heartbeat network's IP address or subnet mask. As an alternative workaround, we used a persistent route in Windows 2003 to direct the remote access VPN traffic to the correct NIC and gateway. A helpful article on Windows 2003 routing can be found here. Once we added the persistent route, the remote access VPN users were able to access the SQL servers.
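As a hedged illustration of such a persistent route (the destination matches the remote access VPN subnet from this post, but the gateway address below is made up, not the client's actual configuration), on Windows 2003 the -p flag makes the route survive reboots:

```bat
rem Send replies destined for the remote access VPN subnet (10.1.1.x) via the
rem production NIC's gateway (10.0.5.1 is a hypothetical address)
route -p add 10.1.1.0 mask 255.255.255.0 10.0.5.1
```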

Lessons learned:

1. Make sure you are aware of all the network addresses and subnets involved at all the locations.
2. Assign a subnet mask sized for the required number of host addresses.

Posted in Networking, Protocols, SQL Server, Technology | Leave a Comment »

Installing SP3 on SQL Server 2005 cluster

Posted by decipherinfosys on June 19, 2009

A good post by Sander Stad on SQLServerCentral on preparing SQL Server 2005 in a clustered environment for SP3 – here.  We are scheduled to do that in a couple of weeks for a client and were researching any known issues faced by other folks – this article would be very helpful when we start testing it in our lab next week.

Posted in SQL Server | Leave a Comment »

Some more free training materials from MSFT

Posted by decipherinfosys on June 19, 2009

Windows 7 Introduction – here.

This is an older one but a very good one for those looking for an introduction to SSIS 2008 – here.

Windows 2008 Fundamentals – here.

Silverlight Fundamentals – here.

Windows Server 2008 R2, new features – here.

Posted in .Net Development, SQL Server, Technology, Windows | Leave a Comment »

 