Systems Engineering and RDBMS

Archive for June, 2007

Lock Detection Scripts in Oracle (Non-RAC environment)

Posted by decipherinfosys on June 22, 2007

Here are some lock detection scripts for Oracle in a non-RAC environment.  Next time, we will post the ones that apply to a RAC environment along with the series of steps that help detect those issues using the gv$ views.  The following SQL scripts assume that you already have select privileges on the objects in the FROM clause.  If you do not, grant those select privileges to the Oracle user that you will use to run these scripts.  If you want to look for a specific Oracle user, replace the “username is not null” clause with the equality operator and supply that particular user name.  Also, the RULE hint is used because we ran into a performance issue in 9iR2 where accessing the system views was very slow under the ALL_ROWS or FIRST_ROWS_N optimizer modes.  The RULE mode is deprecated in 10g, so you can either leave the hint in place if you want a generic script for all environments (when used that way, the hint is ignored) or just yank it out for your 10g installations.

Log into the database using SQL*Plus and run the following two SQL statements to find the locking information:
SQL 1: Shows the SID and SERIAL# of all the locks currently held in the database along with the type of each lock
and related information.
SQL 2: Shows the SID and SERIAL# of all the locks along with the SQL currently being run.

set lines 120 pages 1000 feed on verify off echo off
column sid          heading 'Sid'         format 99999
column serial_num   heading 'Serial#'     format 999999
column db_username  heading 'DB|Username' format a15 wrap
column process      heading 'Process'     format a10 truncate
column host_pid     heading 'Host|PID'    format a8  wrap
column machine      heading 'Host'        format a12 truncate
column program      heading 'Program'     format a25 truncate
column object_name  heading 'Object|Name' format a20 wrap
column lock_type    heading 'Lock|Type'   format a5 truncate
column mode_held    heading 'Mode|Held'   format a10 truncate
column mode_req     heading 'Mode|Requested' format a15 truncate
column ctime        heading 'Time Hrs.|Since|LMode|Held' format a8 truncate
column is_blocking  heading 'Blocking?'   format a12 wrap
break on sid on serial_num on process on db_username on host_pid on machine
select /*+rule */
ses.sid as sid,
ses.serial# as serial_num,
ses.process  as process,
ses.username as db_username,
pro.spid     as host_pid,
ses.machine  as machine,
substr(ses.program,1,30) as program,
substr(obj.object_name,1,20) as object_name,
loc.lock_type as lock_type,
loc.mode_held as mode_held,
loc.mode_requested as mode_req,
to_char(trunc(sysdate) + loc.last_convert/(24*3600), 'HH24:MI:SS') as ctime,
loc.blocking_others as is_blocking
from v$session ses,
v$process pro,
dba_lock loc,
dba_objects obj
where ses.sid      = loc.session_id
and ses.paddr    = pro.addr
and loc.lock_id1 = obj.object_id
and ses.username is not null
order by ses.sid, ses.serial#, ses.process, ses.username;
clear columns
clear breaks

col sid        heading 'Session|Id'  format 99999
col serial_num heading 'Serial#'     format 999999
column db_username  heading 'DB|Username' format a15 wrap
column lock_id1 format a8 noprint
column ctime   heading 'Time Hrs.|Since|LMode|Held' format a8 truncate
col sql        heading 'SQL|Text'    format a64 wrap
break on sid on serial_num on process on db_username
select /*+rule */
a.sid as sid,
a.serial#  as serial_num,
a.process  as process,
a.username as db_username,
c.lock_id1 as lock_id1,
to_char(trunc(sysdate) + c.last_convert/(24*3600), 'HH24:MI:SS') as ctime,
b.sql_text sql
from   v$session a,
v$sqltext b,
dba_lock c
where  a.sid            = c.session_id
and    a.username is not null
and   ((a.sql_address    = b.address
and  a.sql_hash_value = b.hash_value)
or    (a.prev_sql_addr  = b.address
and  a.prev_hash_value = b.hash_value))
order by a.sid, a.serial#, a.process, a.username, c.lock_id1, b.piece;
clear breaks
clear columns

If your Oracle user has the “ALTER SYSTEM” privilege, you can use the SID and SERIAL# information from above to kill the sessions (if at all needed) that have been holding blocking locks for a long duration of time.
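As a quick illustration, assuming the scripts above reported a blocking session with SID 123 and SERIAL# 4567 (placeholder values – substitute the actual ones returned by the queries):

```sql
-- Terminate the blocking session identified by 'SID,SERIAL#'
ALTER SYSTEM KILL SESSION '123,4567';
```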

Posted in Oracle | Leave a Comment »

More acquisitions by Google

Posted by decipherinfosys on June 21, 2007

Google has acquired FeedBurner – the Chicago-based company that provides media distribution services for blogs and RSS feeds. Our RSS feeds also go through FeedBurner. Here is the announcement post on their site.

They also acquired another company called Zenter – this company provides software for creating online presentations. If you recall, Google had previously bought Tonic Systems, which also works in the same area but more on the backend technology. Google should now have a complete suite of online applications – documents, spreadsheets, presentations, mail and calendar/events. In order to be a real threat to MS-Office though, Google will need to figure out a way to do seamless integration between these different applications. The usability and the interface of these applications are another cause of concern – they still have a long way to go. Google is not the only one looking at the online office application market – other players in this space are Zoho, Zimbra, MS Office Live and ThinkFree. Read/Write Web had an interesting post comparing these different application suites.

Posted in News | Leave a Comment »

More on Sharepoint Resources

Posted by decipherinfosys on June 21, 2007

Here are some more Sharepoint resources that you can use for learning about WSS 3.0 or MOSS 2007:

  1. The 40 sample applications provided by MSFT – you can create an account and play with those to get a feel of what they offer.
  2. MSFT Sharepoint web-casts.
  3. A great resource for Web Seminars on the topic.
  4. The Sharepoint Solutions Blog – an excellent resource.
  5. The “How do I…” series from MSFT on Sharepoint topics.
  6. The MSFT guide to MOSS 2007 features.
  7. Codeplex has a lot of good articles on this topic.

You can also look into the Sharepoint category to see other URLs and information that we have posted about WSS 3.0 and MOSS 2007. In future posts, we will walk through a generic Inventory Management application that can be built using MOSS 2007 with an external database as the data store.

Posted in Sharepoint | Leave a Comment »

Start up parameters in SQL Server

Posted by decipherinfosys on June 20, 2007

Very rarely does one need to add to the start up parameter list in SQL Server but when you do need to (when working with MSFT PSS or trying to add trace flags), you should know what those parameters do and where to add them. In this blog post, we will go over the start up parameters that exist in SQL Server by default for a given installation and the additional ones that can be added to the list. First, let’s take a look at where and how you can define these start up parameters. In SQL Server 2000, you can access them through the Enterprise Manager by right clicking the server name (instance) and selecting properties. When the screen that is shown below appears, you have the tab at the bottom that states “Start up Parameters”.


Once you click on that tab, you will get the following window – addition or removal of the start-up parameters can be done here.


In SQL Server 2005, you will make use of the SQL Server configuration manager. Once you open it up, right click on the instance for which you want to configure the start-up parameters and select properties:



You can add/remove your start-up configuration parameters here. A semi-colon is used to separate each start-up parameter. Another (non-GUI) way to add/remove these parameters is to pass them to sqlservr.exe directly on the command line.

Now that we have seen where and how to add/remove the start-up parameters, let’s go over these parameters:

By default, you will see three start-up parameters which can be used to change the location of the master database data and log file and the error logs location:

-dC:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\master.mdf;

-eC:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\ERRORLOG;

-lC:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\mastlog.ldf

Some other start-up parameters that you might find useful (Standard Disclaimer: Please involve MSFT PSS if you are unsure of these parameters, and play around with them only in a non-critical environment):

-f: This can be used to start up SQL Server with minimal configuration. This is useful in scenarios where a bad configuration setting has prevented your instance from starting.

-T trace#: This is one of the most useful options – this can be used to specify the trace flags that SQL Server should use when starting up like the 1204, 1211 trace flags etc.

-x: This option disables the CPU time and cache hit ratio statistics.

-m: This option starts up SQL Server in single-user mode. This is typically helpful when you need to repair the system databases.

-g memory_to_reserve: This option can be used to specify a number in MB that the engine will leave available within the SQL Server process but outside of the memory pool – this memory is used for automation objects used in T-SQL, distributed queries using linked servers, and extended stored procedure .dll files. The default is 256MB and you can increase it if you need to by using this parameter.

There are nine start-up options overall, but the ones mentioned above are the ones that we have used on different occasions. You can read more about the remaining options in the SQL Server BOL under the topic: “Using the SQL Server Service Startup Options”. One thing to note is that if you are using the “net start” command to start the SQL Server service, you need to use slashes (/) instead of hyphens (-) for these start-up parameters. While some of these options are typically used only at the time of troubleshooting/repairing (like the -m or -f options), you may want to use certain options like -g or -T all the time, and you can configure those using the steps given above. When troubleshooting, we would recommend using these with sqlservr.exe from the command line; for options that you always want in place, use the GUI – that will also make the changes to the registry keys so that they are taken into account every time.
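As a quick sanity check after a restart, DBCC TRACESTATUS can be used to confirm that the trace flags passed via -T actually took effect (the flag numbers reported will of course depend on your setup):

```sql
-- Lists all trace flags that are enabled globally on the instance
DBCC TRACESTATUS(-1);
```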

Posted in SQL Server | 1 Comment »

Bulk Collect and NO_DATA_FOUND exception

Posted by decipherinfosys on June 19, 2007

The BULK COLLECT clause is used to return multiple values into collection variables. When the data involved is very large, we can use the BULK COLLECT clause to fetch the data into local PL/SQL variables faster, without looping through one record at a time.  We can store the result set either in individual collection variables, if we are fetching a certain number of columns, or in a collection of records, if we are fetching all the columns of the table.

When we are bulk collecting records and a query does not fetch any records, it does not throw the NO_DATA_FOUND exception.  We need to check whether the collection variable has any elements or not. In simple words, each record fetched is considered as one element. Let us see this with an example. Connect to the database using SQL*Plus with proper authentication and execute the following SQL to create a table.

CREATE TABLE Test
(
Test_ID       NUMBER(9),
Test_Desc     VARCHAR(50)
);

Now run the following PL/SQL block in which we are collecting data into scalar variables.


SET SERVEROUTPUT ON

DECLARE
V_Test_ID   NUMBER(9);
V_Test_Desc VARCHAR(50);
BEGIN
SELECT Test_ID, Test_Desc
INTO V_Test_ID, V_Test_Desc
FROM Test
WHERE Test_ID = 1;
EXCEPTION
WHEN NO_DATA_FOUND THEN
dbms_output.put_line('Exception: No Records in the Test Table');
END;
/

When you execute the PL/SQL block shown above, since we don’t have any data, the NO_DATA_FOUND exception will be caught and the following result will be displayed.

Exception: No Records in the Test Table

PL/SQL procedure successfully completed.

Now, let’s change the above PL/SQL block to use the bulk collect syntax and execute it.  As we mentioned earlier, normally the BULK COLLECT clause is used when we want to operate on a larger data set, but in our example, we are fetching just a single record to demonstrate the issue and the resolution.



DECLARE
TYPE ga_Test_ID   IS TABLE OF Test.Test_ID%TYPE   INDEX BY BINARY_INTEGER;
TYPE ga_Test_Desc IS TABLE OF Test.Test_Desc%TYPE INDEX BY BINARY_INTEGER;

va_Test_ID    ga_Test_ID;
va_Test_Desc  ga_Test_Desc;
BEGIN
SELECT Test_ID, Test_Desc
BULK COLLECT INTO va_Test_ID, va_Test_Desc
FROM Test
WHERE Test_ID = 1;

IF va_Test_ID.COUNT = 0 THEN
dbms_output.put_line('Bulk Collect: No Records in the Test Table');
END IF;
EXCEPTION
WHEN NO_DATA_FOUND THEN
dbms_output.put_line('Exception: No Records in the Test Table');
END;
/

Here is the result of the execution.

Bulk Collect: No Records in the Test Table

PL/SQL procedure successfully completed.

If you look at the message above, you will notice that it did not generate the NO_DATA_FOUND exception, so the text in the exception handler is not displayed. Instead, we checked the count of elements in the collection and, since the count is 0, a different text is displayed.

So, the point to keep in mind is that whenever the BULK COLLECT clause is used in stored procedures/functions, checking for the NO_DATA_FOUND exception can give wrong results if the query does not return any records – check the collection’s COUNT instead.
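On a related note, when BULK COLLECT is used against a genuinely large result set, the LIMIT clause lets you fetch the data in batches instead of pulling everything into memory at once. Here is a minimal sketch against the same Test table (the batch size of 100 is an arbitrary value chosen for illustration):

```sql
DECLARE
TYPE t_test_ids IS TABLE OF Test.Test_ID%TYPE;
va_ids t_test_ids;
CURSOR c_test IS SELECT Test_ID FROM Test;
BEGIN
OPEN c_test;
LOOP
FETCH c_test BULK COLLECT INTO va_ids LIMIT 100;
EXIT WHEN va_ids.COUNT = 0;  -- note: the same COUNT check, not NO_DATA_FOUND
-- process the batch of up to 100 records here
END LOOP;
CLOSE c_test;
END;
/
```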

Posted in Oracle | 3 Comments »

TABLESAMPLE clause in SQL Server 2005

Posted by decipherinfosys on June 18, 2007

SQL Server 2005 introduced a new clause – TABLESAMPLE. This clause restricts the number of rows returned from the “FROM” clause in a query to a sample number or a percentage of rows. Let’s look at an example first to understand this clause:

set nocount on
create table demo_ts (id_val int identity(1,1) , col1 int)
declare @i int
set @i = 1
while (@i <= 10000)
begin
insert into demo_ts (col1) values (@i)
set @i = @i + 1
end

We have populated the table with sample data. Now, let’s use the tablesample clause and retrieve 500 rows and then we will use the percent clause to get a certain percentage back:

select *
from demo_ts
tablesample (500 rows);

select *
from demo_ts
tablesample (10 percent);

If you keep running these queries over and over again, you will see the non-deterministic nature of this clause – it will keep giving back different results based on the sampling that it does on the data pages of that table. The sampling is done at the page level and not at the row level – once a page is selected by the sampling, all of the rows from that page are returned (so it is either all the rows from a sampled page or none at all). Since the pages can hold different numbers of rows, the count of records returned can differ from one execution to the next as well.
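You can see this behavior by simply counting the rows returned – running the statement below several times will typically give a different count on each execution:

```sql
select count(*)
from demo_ts
tablesample (10 percent);
```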

So, how will this behave if one or more of the tables in the FROM clause have this clause applied to them? For example:

select *
from demo_ts a
tablesample (500 rows)
inner join demo_ts_2 b
on a.id_val = b.id_val

Assume that the demo_ts_2 table has only a sub-set of the rows in the demo_ts table. In this scenario, some rows from alias “b” might not have a matching record in the sampled set from alias “a”, so each execution can return different records. If you always want the same data set to be retrieved, you can use the REPEATABLE option with the same repeat_seed value – this will force SQL Server to return the same sub-set of rows each time, for example:

select *
from demo_ts
tablesample (10 percent)
repeatable (200);

So, what is a practical use of this clause? In some statistical or data-mining applications where one requires a sampling mechanism over a sub-set of the data, one can make use of this – however, if you are looking for a true random sampling, then read the post on random number generation and random sorts on our blog.

Posted in SQL Server | Leave a Comment »

Extracting the User Name from the export *.dmp file

Posted by decipherinfosys on June 17, 2007

If you get an export dump file that you need to import but you do not know the userID that was used to do the export, there are a couple of ways to extract that information out of the *.dmp file. You will then use this userID when doing the import via the imp command. Here is how you can get that information using some of the unix commands:

The unix commands ‘head’ and ‘tail’ display the first few lines from the top and the bottom of a file respectively. Since the user information is usually at the start of the file, we can issue the following command to find the username inside the file.

$head -30 exp.dmp

The above command displays the first 30 lines of the exp.dmp file. Another such command is the ‘strings’ command.

$strings exp.dmp | pg
$strings exp.dmp | more

Piped through a pager this way, the output is displayed one page at a time. If you don’t find the username on the first page, hit <Enter> to display the next page. Once you know the username, you can get out by hitting <Ctrl-C>.

In Oracle 10g, there are better ways of doing export/import by using the data pump feature – we will cover that in one of the upcoming posts.

Posted in Oracle | Leave a Comment »

Oracle 11g launch in July 2007

Posted by decipherinfosys on June 17, 2007

Oracle is all set to release the next version of its RDBMS on July 11. You can read more on this in Infoworld’s article as well as on Oracle’s site. In Feb., we had posted on the upcoming features of this release and will now be posting specific articles pertaining to its features.

Posted in News, Oracle | Leave a Comment »

Safari for Windows

Posted by decipherinfosys on June 17, 2007

Apple recently introduced Safari 3, their web browser for Windows and Mac. It is supposed to be faster than IE 7 on Windows as per the press release by Apple 🙂 So, what do other people think of this release? Here is the blog post from John Lilly (COO of Mozilla) – link. And regarding the reasons on why Apple might have thought about releasing Safari for Windows, here is an interesting post by Om Malik on his blog.

Posted in News, Technology | Leave a Comment »

Recycling large listener log files

Posted by decipherinfosys on June 16, 2007

When the listener.log file grows beyond a certain size, you can start noticing intermittent performance and connectivity issues. Oracle support had mentioned recycling the log file when the size reaches 30MB, but this can very well vary per application. You can search metalink on this topic and you will find a couple of notes about it. In this post, we are posting a korn shell script (ksh) which can be scheduled using crontab to check the listener.log file size. The script copies the file to one with a .bak extension and creates a new, empty listener.log file when it exceeds the max_size parameter mentioned in the script.



#!/bin/ksh
# NOTE: the variable values below are examples - adjust them for your environment.
env_file=$HOME/.profile        # script that sets up the Oracle environment (ORACLE_HOME etc.)

if [ ! -f ${env_file} ]
then
/usr/bin/echo "${env_file} file does not exist."
exit 2
fi

. ${env_file}

file_path=$ORACLE_HOME/network/log    # directory that holds listener.log
file=listener.log
max_size=31457280                     # 30MB in bytes

size=`ls -l $file_path/$file | awk '{ print $5;}'`

if [ $size -gt $max_size ]
then
cp $file_path/$file $file_path/$file.bak
cat /dev/null > $file_path/$file
fi
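The script can then be scheduled through crontab as mentioned earlier. An entry along the following lines would run it nightly at 2 AM (the script path is just an example – use the location where you saved the file):

```shell
# minute hour day-of-month month day-of-week command
0 2 * * * /home/oracle/scripts/recycle_listener.ksh
```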

Posted in Oracle | Leave a Comment »