Systems Engineering and RDBMS

Archive for March, 2008

Dimensional Modeling

Posted by decipherinfosys on March 31, 2008

Anyone who has designed and implemented a business intelligence system knows that one of the most important steps in dimensional modeling is declaring the grain of the fact table.  The grain defines exactly what a single record in the fact table represents.  If this is not done properly, you will not be able to roll up much of your information and your analysis will suffer from the bad design.  Let’s take an example to illustrate this:

We will pick an example from the medical industry since that was one of our most recent projects.  When implementing a BI solution for medical practices and small hospitals, one of the items handled by the billing/coding departments was the line item on the bill for the services rendered by the practice.  These were the potential dimensions for such a line item:

  1. Provider (the particular doctor or nurse practitioner who did the work),
  2. Location (since the practice can have multiple locations),
  3. Date (of examination),
  4. Patient,
  5. Diagnosis,
  6. Procedure,
  7. Billing Organization (the practice),
  8. Responsible Party (either parent/guardian or the patients themselves),
  9. Payer (though this can be further divided into Primary Payer and Secondary Payer)

One key thing to note is that the grain for this fact table, for which we listed the potential dimensions above, is a line item on the bill received from the medical practice.  The grain should always be very atomic – the smaller and more atomic the measures in the fact table, the more you can infer from the design, since it gives rise to many dimensions across which you can do your analysis.  Now suppose that instead of choosing the grain to be the line item on the bill, we choose a coarser grain, i.e. an aggregation of the atomic data.  An example would be a grain of “Total monetary amount charged for a given procedure, based on a certain diagnosis, per month”.  If we choose such a grain, we end up with far fewer dimensions.  In the example above, we would potentially have only these dimensions:

  1. Provider (the doctor or the nurse practitioner who did the work),
  2. Procedure,
  3. Diagnosis,
  4. Month

We have taken an extreme case to make the point of choosing the grain judiciously.  As you can see from the above, the number of dimensions has been reduced and a lot of atomic information (and thus those dimensions) is lost.  It would have made no sense to include all nine of the dimensions listed before: even if we had, there would have been very little to aggregate, since the fact table’s grain is not atomic enough to be meaningful for the dimensions that represent the atomic data.

Understand that aggregations shrink the dimensions since they carry less dimensional detail.  As long as one designs the fact tables with a grain that is truly atomic (cannot be further divided into smaller sections), it is always possible to roll up and present the aggregated information with much more control over how to slice and dice the data to surface other interesting facts.  In the example above, one such measured fact would be the amount billed for one particular procedural diagnosis.  One can then analyze this data by rolling it up by patient, by procedural diagnosis, by provider, or by combinations like amount billed per month by procedure by provider by location.  As long as one has the atomic data, none of the information gets lost.
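As a quick sketch of why the atomic grain matters, here is a small Python illustration (the field names, providers and amounts below are made up for the example): storing line-item facts lets us aggregate along any dimension combination, while a pre-aggregated grain could never be drilled back down.

```python
from collections import defaultdict

# Hypothetical atomic fact rows: one record per bill line item.
fact_rows = [
    {"provider": "Dr. A", "procedure": "X-Ray", "location": "Main", "month": "2008-01", "amount": 120.0},
    {"provider": "Dr. A", "procedure": "X-Ray", "location": "East", "month": "2008-01", "amount": 130.0},
    {"provider": "Dr. B", "procedure": "Exam",  "location": "Main", "month": "2008-02", "amount": 80.0},
]

def roll_up(rows, dims):
    """Aggregate the billed amount across any combination of dimensions."""
    totals = defaultdict(float)
    for row in rows:
        key = tuple(row[d] for d in dims)
        totals[key] += row["amount"]
    return dict(totals)

# The atomic grain supports rollups along any dimension set...
by_provider = roll_up(fact_rows, ["provider"])
by_proc_month = roll_up(fact_rows, ["procedure", "month"])
# ...but had we stored only by_proc_month, the by-provider or
# by-location breakdowns could not be recovered from the coarser grain.
```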

Posted in Business Intelligence, Data Model | 1 Comment »

Running Parameterized SQL Commands using the Execute SQL Task – II

Posted by decipherinfosys on March 29, 2008

This post is a continuation of the SSIS series of posts that we had done before: DTS vs. SSIS, the Execute SQL Task, and running parameterized SQL commands using that task. In today’s post, we will continue from the previous post and cover parameterized queries in the Execute SQL Task, this time returning the result set using an ADO.NET connection. That last part is what makes this post different from the previous one. We would advise reading that post before this one, since many items are common and we have not reproduced them here again. We are only going to highlight what is different when using the ADO.NET connection.

Defining the parameters and the Result Set in the Execute Sql Task:

This remains the same as the previous post.

Executing parameterized query using Execute SQL Task returning single result set using ADO.NET Connection:

The configuration of the Execute SQL Task to use the OLEDB connection was explained in detail in the earlier article. There is a minor change in the configuration here to use the ADO.NET connection. In the earlier post, since we did not return any result set, we set the Result Set property in the General page to None. Since we want to return a single value in this case, let us set the Result Set property to Single row. The image below shows the configuration of the Result Set property in the General page of the editor.

s_1.jpg

Let us consider that we are running the following parameterized query in the Execute SQL Task. We configured the SQLSourceType property to direct input. Copy the following statement and paste it into the SqlStatement editor. As described earlier, when using the ADO.NET connection, parameters are represented as @parametername.

Select Title from HumanResources.Employee where ManagerId =@MgrId And EmployeeId = @EmpId

Add two variables named ‘Manager’ and ‘Employee’ of type integer to the SSIS variables collection to pass the ManagerId and the EmployeeId as the input parameters. Add another variable named ‘Title’ to hold the value returned by the SQL statement. In the parameter page of the task, we should define these two parameters as Input parameters, named @MgrId and @EmpId respectively. @MgrId maps to the ManagerId we are passing in and @EmpId maps to the EmployeeId. When using the ADO.NET connection, the parameter names in the query and the parameter placeholders in the Parameter Mapping page must match. The image below shows the parameter page for the above query.

s_2.jpg

In the previous post, we discussed the design of the SSIS package for executing the parameterized query using the OLEDB connection. Follow the instructions discussed there to configure the Execute SQL Task to return the single result set and display it using the Script Task. After finishing coding the Script Task, click OK to return to the Control Flow design pane. Click on the Execute SQL Task. This will show a green arrow, which is the precedence constraint. Drag the green arrow and drop it on the Script Task. This tells the SSIS engine that the Script Task should execute upon the successful execution of the previous step, the Execute SQL Task.

This finishes the development of the SSIS package, which displays the value returned by a parameterized query. We can change the parameters passed into the query by giving different values to the variables Manager and Employee. To execute the package, press the F5 key. The image below shows the successful execution of the package. We can go back and query the database to verify that the value returned was correct.

s_3.jpg

Executing parameterized query using Execute SQL Task returning Full Result Set using ADO.NET:

Configuring the Execute SQL Task to execute a parameterized query returning a Full Result Set is very similar to the example described above, with a few differences. The first difference is that since we are returning a full result set, set the Result Set property in the General page to Full result set rather than Single row. The image below shows the Execute SQL Task configured to return the full result set.

s_4.jpg

The next difference is in defining the Result Set. Since we are returning a full result set from the SQL statement, we require a variable of type Object to hold it. So, for our example, let us define a variable ResultSet of type Object to hold the full result set returned from the SQL statement. Paste the SQL statement below into the SQL Statement editor of the Execute SQL Task.

select EmployeeId,Title,BirthDate,Gender,VacationHours from HumanResources.Employee where ManagerId = @MgrId

Let us define one more variable called ‘Manager’ to pass the ManagerId into the SQL statement. Since we are returning 5 columns, for display purposes, let us define 5 more variables named EmployeeId, Title, BirthDate, Gender, VacationHours with the respective types. The image below shows the variables editor with the variables we described.

s_5.jpg

In the previous post, we discussed the design of the SSIS package for executing the parameterized query using the OLEDB connection. Follow the instructions discussed there to configure the Execute SQL Task to return the Full Result Set and display the Result Set using the Script Task.

This finishes the development of the SSIS package, which displays the result set returned by a parameterized query. We can change the parameter passed into the query by giving a different value to the variable Manager. To execute the package, press the F5 key. The following screen shot shows the partially successful execution of the package. We can go back and query the database to verify that the values returned were correct.

s_6.jpg

In this post, we discussed parameterized queries and how to execute them using the Execute SQL Task with an ADO.NET connection. We also discussed the various kinds of result sets returned by the Execute SQL Task and how to configure the task to return a single result set and a full result set, and we referred back to the configuration of the For Each Loop Container from the earlier post. Experimenting with the XML result set is left as an exercise; it is similar to returning the single row except that we specify explicitly that the statement returns XML data.

Reference: http://msdn2.microsoft.com/en-us/library/ms141003.aspx

Posted in SQL Server | Leave a Comment »

VSTS “How Do I” Videos

Posted by decipherinfosys on March 28, 2008

If you have been playing with Visual Studio Team System 2008 (VSTS), here is a link to the MSDN site for the “How Do I” videos, which are one of the best ways to learn. If you do not have Silverlight installed, it will ask you to install it before launching the videos. Schema comparison, data comparison, build notifications, data-bound web tests, etc. are all demonstrated in easy-to-follow videos.

http://msdn2.microsoft.com/en-us/vsts2008/db/bb507749.aspx?wt.slv=topsectionsee

Posted in Visual Studio | Leave a Comment »

SQL Server – Suspect Database

Posted by decipherinfosys on March 28, 2008

Here is an excellent post by Tibor Karaszi on the steps/actions for a corrupt or a suspect SQL Server database:

http://www.karaszi.com/SQLServer/info_corrupt_suspect_db.asp

It is rare to encounter this in an actual production environment, but it can happen nevertheless, and the list above gives you an idea of what to do in such situations. Also, here is an older post by Brian Knight on the same topic – even though it mentions SQL Server 7 and 2000, the steps are still valid.

Posted in SQL Server | 1 Comment »

Index Browning

Posted by decipherinfosys on March 28, 2008

Index Browning is a term sometimes used in the Oracle world to refer to fragmentation of indexes. It occurs when B-tree leaf nodes are deleted, leaving holes in the index structure. The name comes from the analogy of a tree, where a leaf turns brown when it dies. Excessive index browning can cause performance issues, since range scans then need to traverse a lot more data to find what they are looking for. Such indexes should be rebuilt. Here is a script to help identify them:

Set Doc Off
/*****************************************************************************************
*
* Purpose:  To generate statistics for indexes and get the "browning" report.
*
* NOTE:     Please plan on some downtime during this activity, as this operation
*           could result in locking the indexes.
*
* Usage:
*
* 1. Get the list of the indexes for a given table for which you need to run
*    the browning report, e.g.:
*
*       SELECT INDEX_NAME FROM USER_INDEXES WHERE TABLE_NAME = '<table>';
*
* 2. Run ANALYZE with the VALIDATE STRUCTURE option on these indexes, using the
*    following steps. First, create a temp table to store the index statistics:
*
*       CREATE TABLE TEMP_IND_STATS AS
*       SELECT * FROM INDEX_STATS WHERE 1=2;
*
*    Then, for each index:
*
*    i.   ANALYZE INDEX <ind_1> VALIDATE STRUCTURE;
*    ii.  INSERT INTO TEMP_IND_STATS SELECT * FROM INDEX_STATS;
*    iii. COMMIT;
*
*    Repeat steps i, ii and iii for all the indexes you are interested in and
*    then run the following SQL to generate the browning report.
*
* Explanation of the result set:
*    Index Name:          Name of the index
*    Deleted Bytes in MB: Total length of all deleted rows in the index, in megabytes
*    Filled Bytes in MB:  Total length of all current rows in the index, in megabytes
*    Percent Browned:     Deleted bytes as a percentage of all leaf row bytes
*    Height, Blocks, # Of Keys, Most Repeated Key, Used Space in MB, Rows Per Key:
*                         As reported by the INDEX_STATS view
*
*****************************************************************************************/
Set Pages 100 Feedback Off Verify Off Lines 120
COLUMN name FORMAT a30 HEADING 'Index|Name'
COLUMN deleted_bytes FORMAT 9,999.99 HEADING 'Deleted|Bytes|in MB'
COLUMN filled_bytes FORMAT 9,999.99 HEADING 'Filled|Bytes|in MB'
COLUMN browning FORMAT 999.99 HEADING 'Percent|Browned'
COLUMN height FORMAT 999,999 HEADING 'Height'
COLUMN blocks FORMAT 999,999 HEADING 'Blocks'
COLUMN distinct_keys FORMAT 999,999,999 HEADING '# Of|Keys'
COLUMN most_repeated_key FORMAT 999999999 HEADING 'Most|Repeated|Key'
COLUMN used_space FORMAT 9,999.99 HEADING 'Used|Space|in MB'
COLUMN rows_per_key FORMAT 999999 HEADING 'Rows|Per|Key'
TTITLE “Index Statistics Report”
SELECT Name,
Del_Lf_Rows_Len/1048576 As deleted_bytes,
Lf_Rows_Len/1048576 As filled_bytes,
(Del_Lf_Rows_Len/Decode((Lf_Rows_Len+Del_Lf_Rows_Len),0,1,Lf_Rows_Len+Del_Lf_Rows_Len))*100 Browning,
Height,
Blocks,
Distinct_Keys,
Most_Repeated_Key,
Used_Space/1048576 As Used_Space,
Rows_Per_Key
FROM Temp_Ind_Stats
WHERE Rows_Per_Key > 0
/
TTITLE OFF
CLEAR COLUMNS
Set Feedback On Verify On
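The browning percentage computed by the report above is simply the deleted leaf-row bytes divided by the total leaf-row bytes. As a sanity check, here is the same formula in Python (the sample byte counts are hypothetical, not taken from a real INDEX_STATS run):

```python
def browning_pct(lf_rows_len, del_lf_rows_len):
    """Percent of leaf-row bytes that belong to deleted entries,
    mirroring the DECODE(...) guard against division by zero in the SQL."""
    total = lf_rows_len + del_lf_rows_len
    if total == 0:
        total = 1
    return (del_lf_rows_len / total) * 100

# An index with 75 MB of live leaf entries and 25 MB of deleted
# leaf entries is 25% "browned" and is a rebuild candidate.
pct = browning_pct(75 * 1048576, 25 * 1048576)
```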

Posted in Oracle | Leave a Comment »

List of common TNS errors related to connectivity to Oracle

Posted by decipherinfosys on March 28, 2008

While helping one of our clients, we were asked to compile a list of the common TNS errors related to connectivity issues with Oracle – this was done to have a list of errors to look-up and have reconnect logic (as well as log those errors) in the application. After going through more than 150 pages of errors, we narrowed it down to a list of 50+ errors that were common. Here is that list:

list_common_tns_errors.doc

It might help if you have a similar requirement.

Posted in Oracle | Leave a Comment »

Calculating Rows per Block in Oracle

Posted by decipherinfosys on March 27, 2008

Here is a simple spreadsheet showing how to calculate the total number of rows per block in Oracle. The example takes a segment and does the calculation for it. The reason for doing this calculation at a client site was to show that the high ITL (Interested Transaction List) setting was not needed and that the values should be re-configured: not only was it a waste of space, it would also lead to performance issues, since data that could fit in fewer blocks was now spread out over more of them.

itl.jpg

As you can see above, every ITL slot occupies 24 bytes, and since this client was using 40 for INITRANS and 40 for PCTFREE, considering the average row length for the segment, only 29 rows could fit in each data block – which meant that 11 of the created ITL slots would never get used. One should carefully evaluate the waits in the system and see how to minimize them. So, if ITL waits are high in the system, you can adjust the INITRANS and PCTFREE parameters (MAXTRANS is not even a concern beginning with Oracle 10g), but don’t go overboard, and do not apply the same yardstick to every table in the system. Carefully evaluate which tables fall into the heavy-transaction category, adjust the settings, observe the waits, and then fine-tune again. Do remember that these settings apply to new blocks only – for the existing ones, you need to re-load the data.
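A back-of-the-envelope version of that spreadsheet calculation can be sketched as follows. The block size, fixed header overhead and average row length below are assumed sample values (the client's actual numbers are not reproduced here), so the row counts are illustrative, not the 29 from the spreadsheet:

```python
BLOCK_SIZE   = 8192   # typical Oracle block size in bytes (assumption)
FIXED_HEADER = 100    # approximate fixed block overhead in bytes (assumption)
ITL_SLOT     = 24     # bytes per ITL slot, as noted above

def rows_per_block(initrans, pctfree, avg_row_len):
    """Rough rows-per-block estimate: subtract the header (including the
    pre-allocated ITL slots), reserve PCTFREE, divide by average row length."""
    header = FIXED_HEADER + initrans * ITL_SLOT
    usable = (BLOCK_SIZE - header) * (1 - pctfree / 100.0)
    return int(usable // avg_row_len)

# With INITRANS 40, 960 bytes of every block go to ITL slots alone,
# before PCTFREE carves out another 40% of what is left.
rows_high = rows_per_block(initrans=40, pctfree=40, avg_row_len=150)
rows_low  = rows_per_block(initrans=2,  pctfree=10, avg_row_len=150)
```

The point of the comparison is simply that the over-sized INITRANS/PCTFREE combination spreads the same rows over many more blocks.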

If you want to read up more on ITL, here are some excellent articles written by Arup Nanda on this topic:

http://www.rampant-books.com/art_nanda_interested_tarnsaction_list_itl.htm

http://www.proligence.com/itl_waits_demystified.html

An incorrect setting of INITRANS can also cause deadlocks, which are represented by the X <–> S lock modes in the deadlock graph. Here is an excellent explanation by Tom Kyte of such a scenario.

Posted in Oracle | Leave a Comment »

Running Parameterized SQL Commands Using the Execute SQL Task – I

Posted by decipherinfosys on March 26, 2008

In one of our previous blog posts, we saw the configuration of one of the many control flow tasks which Integration Services offers us: the Execute SQL Task. We also saw how to configure this task to use various kinds of sources for the SQL statements: direct input, input from a variable, and input from a SQL file. In this article we will go over using parameterized queries in the Execute SQL Task and returning result sets using OLE DB. In OLTP systems, we often need to run a particular SQL statement many times with a change in the parameter values supplied. The idea is to parse and compile once and execute many times. Such queries make use of bind variables (Oracle lingo) or parameters, and are called parameterized queries. You can search for “parameterized query” or “bind variables” on our site to get more information on those. In this post, we are going to concentrate on using the Execute SQL Task to run parameterized queries. A lot of very good information on the Execute SQL Task can be found at this URL at MSDN:

http://msdn2.microsoft.com/en-us/library/ms141003.aspx

Defining the parameters in the Execute Sql Task:

SQL statements and stored procedures frequently use input parameters, output parameters, and return codes. The Execute SQL Task supports the Input, Output, and ReturnValue parameter types. The image below shows the parameter page of the Execute SQL Task. For demonstration purposes, a few variables have been added and defined as parameters. We are going to see how to do this with an example later in the post.

ss_1.jpg

We use the Input type for input parameters, Output for output parameters, and ReturnValue for return codes from the stored procedure. Return codes from the stored procedures indicate if the stored procedure was executed successfully or not. Parameters in SQL commands, including queries and stored procedures, are mapped to user-defined variables that are created within the scope of the Execute SQL task, a parent container or within the scope of the package. The values of variables can be set at design time or populated dynamically at run time. You can also map parameters to system variables.

Defining the Result Set in the Execute Sql Task:

As we had mentioned in the previous post, we can also return the result set from the sql statement we execute from the Execute Sql Task against the source database. Depending upon the type of SQL command, a result set may or may not be returned to the Execute SQL task. Example: A SELECT statement typically returns a result set, but an INSERT or an UPDATE statement does not return a result set.

Writing Parameterized Queries in the Execute Sql Task Editor:

Depending on the connection type that the Execute SQL Task uses, the syntax of the SQL command uses different parameter markers. Parameter markers are the placeholders for the parameters. The following table (refer to the MSDN article above – we are using it as an example for building up our blog post sample) summarizes the syntax used by the different connection types for specifying the parameters.

 
Connection Type    Parameter Marker    Parameter Name      Example
OLE DB & Excel     ?                   0, 1, 2, 3          select * from HumanResources.Employee where ManagerId = ?
ODBC               ?                   1, 2, 3, 4          select * from HumanResources.Employee where ManagerId = ?
ADO                ?                   par1, par2, par3    select * from HumanResources.Employee where ManagerId = ?
ADO.NET            @<parametername>    @<parametername>    select * from HumanResources.Employee where ManagerId = @ManagerId

An explanation of these is given in the MSDN link above. Using that as a reference and applying it to our example: in OLE DB, Excel, ODBC and ADO connections, the parameters are represented by a “?” in the query, while in ADO.NET they are represented by @parametername. The names that you can use as parameter names in the mappings between variables and parameters also vary by connection manager type. In the Parameter page of the Execute SQL Task, for the OLE DB and Excel connection types, the parameter name is the 0-based ordinal position of the parameter in the query: parameter name 0 represents the first “?” in the query, parameter name 1 represents the second “?”, and so on. The ODBC connection manager type requires a 1-based ordinal as the parameter name, i.e. the first “?” in the query is named 1, the second “?” is named 2, and so on. The ADO.NET connection type uses a user-defined name with an @ prefix; the parameter name in the query and the parameter name in the parameter page should match. Finally, the ADO connection type uses the user-defined parameter name from the parameter page of the Execute SQL Task. Of all the connection types, OLE DB and ADO.NET are the most widely used.
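The positional-versus-named distinction above can be seen in miniature outside SSIS with Python's sqlite3 module, which supports both styles. sqlite3 stands in purely for illustration here: it uses "?" for positional markers just like OLE DB/ODBC, and ":name" for named markers where ADO.NET uses "@name". The table and data are made up to echo the AdventureWorks query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employee (EmployeeId INTEGER, ManagerId INTEGER, Title TEXT)")
conn.execute("INSERT INTO Employee VALUES (10, 3, 'Engineer')")

# Positional markers: parameters are matched by ordinal, as with OLE DB/ODBC.
row1 = conn.execute(
    "SELECT Title FROM Employee WHERE ManagerId = ? AND EmployeeId = ?",
    (3, 10)).fetchone()

# Named markers: parameters are matched by name, as with ADO.NET's @name.
row2 = conn.execute(
    "SELECT Title FROM Employee WHERE ManagerId = :MgrId AND EmployeeId = :EmpId",
    {"MgrId": 3, "EmpId": 10}).fetchone()
```

Both queries return the same row; only the way the parameters are bound differs.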

Executing parameterized query using Execute SQL Task returning single result set:

The configuration of the Execute SQL Task was explained in detail in the earlier post. There is a minor change in the configuration here. In the earlier post, since we did not return any result set, we set the Result Set property in the General page to None. Since we want to return a single value in this case, let us set the Result Set property to Single row. The image below shows the configuration of the Result Set property in the General page of the editor.

ss_2.jpg

Let us consider that we are running the following parameterized query in the Execute Sql Task. We configured the SQLSourceType property to direct input. Copy the following statement and paste it in the SqlStatement editor.

select Title from HumanResources.Employee where ManagerId =? And EmployeeId = ?

Add two variables named ‘Manager’ and ‘Employee’ of type integer to the SSIS variables collection to pass the ManagerId and the EmployeeId as the input parameters. Add another variable named ‘Title’ to hold the value returned by the SQL statement. In the parameter page of the task, we should define these two parameters as Input parameters, named 0 and 1 respectively. The name 0 maps to the ManagerId we are passing in and the name 1 maps to the EmployeeId. The image below shows the parameter page for the above query.

ss_3.jpg

The next step is to define the variable to hold the Result Set. The image shown below shows the Result Set page of the Execute Sql Task.

ss_4.jpg
Select the drop-down for the Variable Name column and select the variable named ‘Title’ which we defined earlier as part of the variable collection. Since we are returning only one column, the Result Name is left at 0. If we were returning more than one column, the corresponding result names would be incrementing integers starting from 0. This completes the configuration of the Execute SQL Task to execute the parameterized query. For our demo purposes, we have to display the value returned by the Execute SQL Task. To do that, drag and drop a Script Task from the control flow tasks and rename it Display Title. The Script Task is also one of the most important of the SSIS control flow tasks. The image below shows the configuration of the Script Task.

ss_5.jpg

In the script page of the script task, enter the variable name we wish to display which is the ‘Title’. Click on the Design Script button on this page and paste the following code in it:

MsgBox("The Retrieved Value is: " & _
Dts.Variables("User::Title").Value.ToString)

After finishing coding the Script Task, click OK to return to the Control Flow design pane. Click on the Execute SQL Task. This will show a green arrow, which is the precedence constraint. Drag the green arrow and drop it on the Script Task. This tells the SSIS engine that the Script Task should execute upon the successful execution of the previous step, the Execute SQL Task.

This finishes the development of the SSIS package, which displays the value returned by a parameterized query. We can change the parameters passed into the query by giving different values to the variables Manager and Employee. To execute the package, press the F5 key. The following screen shot shows the successful execution of the package. We can go back and query the database to verify that the value returned was correct.
ss_6.jpg

Configuring the Execute SQL Task to execute a parameterized query returning a Full Result Set is very similar to the example we just described, with a few differences.

The first difference is that since we are returning a full result set, set the Result Set property in the General Page to Full result set rather than Single Row. The image shown below shows the Execute Sql Task configured to return the full result set.

ss_7.jpg
The next difference is in defining the Result Set. Since we are returning a full result set from the SQL statement, we require a variable of type Object to hold it. So, for our example, let us define a variable ResultSet of type Object to hold the full result set returned from the SQL statement. Paste the SQL statement below into the SQL Statement editor of the Execute SQL Task.

select EmployeeId,Title,BirthDate,Gender,VacationHours from HumanResources.Employee where ManagerId = ?

Let us define one more variable called ‘Manager’ to pass the ManagerId into the SQL statement. Since we are returning 5 columns, for display purposes, let us define 5 more variables named EmployeeId, Title, BirthDate, Gender, VacationHours with the respective types. The image below shows the variables editor with the variables we described.

ss_8.jpg
In the Result Set page of the Execute Sql Task, we set the Variable which holds the result set returned by the sql statement. As described earlier this variable is of type Object. In our example, the variable name is ResultSet which holds the result set. The following screen shot shows the configuration of the Result Set Page of the Execute Sql Task.

ss_9.jpg

For display purposes, we need a mechanism that loops through each record in the result set and displays it. This is where the For Each Loop Container comes in. This article gives a brief introduction to the For Each Loop Container; it will be dealt with in detail in upcoming posts.

The For Each Loop Container is used in situations where we need to loop through a particular directory until we process all the files in that directory, or when we need to loop through each record in a record set for processing. The following screen shot shows the For Each Loop Container’s Collection page. The General page of this task just asks us to enter a meaningful name for it.

ss_10.jpg

The most important configuration property here is the Enumerator property. The Enumerator tells the For Each Loop Container what kind of looping it performs. In our case, since we have to loop through a record set, we should select the Foreach ADO Enumerator, which loops through each record in the record set. We should map each column in the record set to a variable in order to display/process it. This assignment is done in the Variable Mappings page. Earlier, we defined the variables to hold the columns returned by the SQL statement. The image below shows the variable mapping to the columns returned from the SQL statement.

ss_11.jpg

The Index column represents the ordinal position of the column in the SQL statement. We are mapping the first column returned to the variable EmployeeId, the second column to the variable Title, and so on. The Index value starts from 0 and increments by 1 with each additional variable. This finishes the configuration of the For Each Loop Container. Inside the For Each Loop Container, drag and drop the Script Task and configure it as we did in the earlier example. The image below shows the Script page of the Script Task.
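The enumerator's ordinal-to-variable mapping can be sketched in plain Python (the rows and variable names below are hypothetical stand-ins for the ADO record set and SSIS variables):

```python
# Each fetched row is a tuple; column ordinal 0 feeds the first mapped
# variable, ordinal 1 the second, and so on - just like the Index column
# on the Variable Mappings page.
rows = [(10, "Engineer"), (11, "Analyst")]      # hypothetical result set
variable_names = ["EmployeeId", "Title"]        # Index 0 -> EmployeeId, 1 -> Title

displayed = []
for row in rows:
    variables = {name: row[i] for i, name in enumerate(variable_names)}
    # The Script Task inside the loop would display these values per iteration.
    displayed.append(f"The EmployeeId is: {variables['EmployeeId']}")
```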

ss_12.jpg

Enter all the variables EmployeeId, Title, BirthDate, Gender, VacationHours in the ReadOnly variables Property of the script task. Click on the Design Script button on the Script page of the task and paste the following code in the Script Task Editor:

MsgBox("The EmployeeId is: " & _
Dts.Variables("User::EmployeeId").Value.ToString)

MsgBox("The Title is: " & _
Dts.Variables("User::Title").Value.ToString)

MsgBox("The BirthDate is: " & _
Dts.Variables("User::BirthDate").Value.ToString)

MsgBox("The Gender is: " & _
Dts.Variables("User::Gender").Value.ToString)

MsgBox("The VacationHours is: " & _
Dts.Variables("User::VacationHours").Value.ToString)

The image below shows the scripting editor of the script task:

ss_13.jpg
This finishes the development of the SSIS package, which displays the result set returned by a parameterized query. We can change the parameter passed into the query by giving a different value to the variable Manager. To execute the package, press the F5 key. The following screen shot shows the partially successful execution of the package. We can go back and query the database to verify that the values returned were correct.

ss_14.jpg

In this post, we discussed parameterized queries and how to execute them using the Execute SQL Task. We also discussed the various kinds of result sets returned by the Execute SQL Task and how to configure the task to return a single result set and a full result set. We also went a little into the configuration of the For Each Loop Container. Experimenting with the XML result set is left as an exercise for the reader; it is similar to returning the single row except that we specify explicitly that the statement returns XML data.

Posted in SQL Server | 1 Comment »

Compound Triggers in Oracle 11g

Posted by decipherinfosys on March 24, 2008

In one of our previous blog posts, we covered the basic concepts of triggers in Oracle and briefly touched upon the topic of compound triggers. In 11g, Oracle introduced compound triggers, which let us combine more than one trigger into a single unit.

Compound triggers allow us to combine triggers with different timing points into a single compound trigger. This is their biggest advantage, as it allows for modular programming with less cumbersome code. A compound trigger contains a declaration section and a section for each timing point (i.e. before each row, after each row, after statement, etc.).

In our example, we will create two tables: TEST and AUD_TEST (to record the auditing of inserted records). We are also creating a sequence to generate the ID numbers automatically.

CREATE TABLE TEST
(
TEST_ID NUMBER(9),
TEST_NAME VARCHAR(30)
)
/

CREATE TABLE AUD_TEST
(
AUD_ACTION VARCHAR(6),
AUD_DATE DATE,
TEST_ID NUMBER(9),
TEST_NAME VARCHAR(30)
)
/

CREATE SEQUENCE TEST_SEQ
START WITH 100
INCREMENT BY 1
/

Now we will create a compound trigger with three timing-point sections:
• BEFORE EACH ROW: to generate the new ID.
• AFTER EACH ROW: to collect the inserted records into a collection.
• AFTER STATEMENT: to populate the audit table from the newly inserted TEST records. This way we don’t pay a per-row performance penalty by writing each audit record as the row is inserted.

Let’s create the compound trigger now.

CREATE OR REPLACE TRIGGER TEST_TRIG_COMPOUND
FOR INSERT
ON TEST
COMPOUND TRIGGER

/* Declaration Section*/

TYPE ga_Test_ID IS TABLE OF TEST.TEST_ID%TYPE index by pls_integer;
TYPE ga_Test_NAME IS TABLE OF TEST.TEST_NAME%TYPE index by pls_integer;

va_Test_NAME ga_Test_NAME;
va_Test_ID ga_Test_ID;
v_cnt PLS_INTEGER := 0;

BEFORE EACH ROW IS
BEGIN

:NEW.TEST_ID := TEST_SEQ.NEXTVAL;

END BEFORE EACH ROW;
AFTER EACH ROW IS
BEGIN

v_cnt := v_cnt + 1;
va_Test_ID(v_cnt) := :NEW.TEST_ID;
va_Test_Name(v_cnt) := :NEW.TEST_NAME;

END AFTER EACH ROW;
AFTER STATEMENT IS
BEGIN

FOR i IN 1..VA_TEST_ID.COUNT
LOOP
INSERT INTO AUD_TEST(AUD_ACTION,AUD_DATE,TEST_ID,TEST_NAME)
VALUES ('INSERT', sysdate, va_Test_ID(i), va_Test_Name(i));
END LOOP;

END AFTER STATEMENT;

END TEST_TRIG_COMPOUND;
/

Let us insert records into the TEST table now.

SQL> INSERT INTO TEST(TEST_NAME)
2 SELECT object_name
3 FROM user_objects
4 WHERE rownum < 10;

9 rows created.

The output above indicates that we successfully created 9 records in the TEST table, and the trigger created 9 matching records in the AUD_TEST table. You can verify the result by querying both tables.

In spite of some obvious benefits, compound triggers also have certain limitations. The major ones are:

• It can be defined only as a DML trigger.
• Exceptions cannot be handled across sections; an exception must be handled in the same section in which it occurs.
• :OLD and :NEW cannot be used in the declaration section or in the BEFORE STATEMENT and AFTER STATEMENT sections.
• :NEW values can be changed only in the BEFORE EACH ROW section.

For a complete list of restrictions, please refer to the PL/SQL Language Reference manual. In one of our upcoming blog posts, we will see how compound triggers can be used to avoid mutating-table errors.

Posted in Oracle | 2 Comments »

Disable/Enable Foreign Keys and Triggers in Oracle

Posted by decipherinfosys on March 22, 2008

In one of our previous posts, we had posted a set of scripts for disabling/enabling foreign key constraints in SQL Server. In response to that post, one of the readers asked us to post a similar set of scripts for Oracle. This post addresses that question; the scripts are available below:

disable_enable_constraint_triggers_oracle.txt

Posted in Oracle | 2 Comments »