Systems Engineering and RDBMS

Archive for October, 2008

Data Compression in SQL Server 2008 – II

Posted by decipherinfosys on October 31, 2008

Yesterday, we had given a brief summary of the data compression feature available in SQL Server 2008. Today, we will cover the row level and page level data compression and also the wizard to set it up for an object.

Row Level Data Compression:

As you know, the page size in SQL Server is fixed (Oracle, by contrast, lets you choose from a range of block sizes), so larger rows mean fewer records per page, and a query that needs to do a range search then has to read a larger number of pages, which means more I/O. A classic example used to explain this is the cookie jar: with large cookies you might fit, say, 10 in the jar, whereas with smaller cookies you could fit 100 in the same jar. So what does row level data compression do? It stores fixed length data types as variable length ones. An INT column, which normally takes 4 bytes of storage, will use only 1 byte when we store a small value like 1, and a CHAR(10) column that normally takes 10 bytes will take only 3 bytes if we are storing only 3 characters, since the blank characters are not stored. It can also avoid storing NULL values, and it reduces the amount of meta-data required to store a row. All of this allows us to fit more rows per page.
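As a quick illustration, row compression can be enabled on an existing table with a rebuild (the table name below is just a hypothetical example, not one from this post):

-- Hypothetical table name; rebuild the table with row compression
ALTER TABLE dbo.order_detail
REBUILD WITH (DATA_COMPRESSION = ROW);

The same WITH (DATA_COMPRESSION = ...) clause can also be specified at CREATE TABLE time.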

Page Level Data Compression:

Page level compression does everything that row level data compression does, plus dictionary compression and prefix compression. Row level compression is applied first, followed by prefix compression and then dictionary compression. Prefix compression removes repeating patterns at the start of the values in a column and substitutes an abbreviated reference; the referenced values are stored in a CI (Compression Information) structure which sits right after the page header in the page. Once the prefix compression is done, the dictionary compression kicks in: it also searches for repeated values, but not just within a single column's values, it searches for them anywhere on the page and stores them in the CI the same way. So if the data is more random, page level compression won't help much; if the data is repetitive, page level compression is more useful.

So, enough theory; let's see it in action now. We will use SSMS to set up the compression; it can also be done via T-SQL. Let's create a table:

use dis_test
go
create table dbo.big_table (col1 int, col2 char(100));
create clustered index big_table_ind_1 on dbo.big_table (col1);

Now, let’s populate it with some sample data:

set nocount on
go
declare @i int
set @i = 1
while (@i <= 100000)
begin
insert into dbo.big_table values (@i, 'value:' + cast(@i as varchar(10)))
set @i = @i + 1
end

In the SSMS Object Explorer, right click on the table, select Storage and then Manage Compression:

It will bring up the data compression wizard as shown below:

You can then click on next and choose to use the same compression type (row/page or none) for all the partitions:

It also allows us to see how much space we will save under different compression schemes. So let's try that out first: select row level compression and hit Calculate to see what we will be able to save:
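Incidentally, the same kind of estimate can be obtained in T-SQL with the sp_estimate_data_compression_savings system procedure (this sketch uses the dbo.big_table we created above; passing NULL for the index and partition covers all of them):

EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'big_table',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'ROW';

It returns the current size and the estimated compressed size for each index/partition of the object.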

As you can see, the size dropped from 11.070 MB to 2.422 MB. And now, let's try it with page level compression:

It has now dropped down to 1.25 MB. Let’s go with this one and click next. We will get a screen that looks like the one below:


We have the option of running the compression right away, saving it as a script, or scheduling it for a later time. If you choose to run it immediately, it will present the summary, and clicking Finish will compress the object.

Now, one thing to remember, as we mentioned yesterday, is that compression is set at the object level. A table and its clustered index are treated as one and the same thing, but if you have a non clustered index on big_table, you would need to compress it separately. That might seem a bit cumbersome at first, but it gives us a lot of flexibility, especially if you benefit more by compressing only certain objects in your environment and leaving the rest as is, either because there is not much benefit to it or because the CPU load is not acceptable.
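For example, compressing a non clustered index on big_table would be a separate statement (the index name below is hypothetical, since we only created the clustered index in this post):

-- Hypothetical non clustered index; compress it independently of the table
ALTER INDEX big_table_ind_2 ON dbo.big_table
REBUILD WITH (DATA_COMPRESSION = PAGE);

The table itself (i.e. the heap or the clustered index) is handled the same way via ALTER TABLE ... REBUILD.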

Posted in SQL Server | 3 Comments »

Data Compression in SQL Server 2008 – I

Posted by decipherinfosys on October 30, 2008

In one of our previous blog posts, we had talked about the backup compression feature in SQL Server 2008. In today's post we are going to cover data compression in SQL Server 2008.  We can use the data compression feature for these objects:

  1. Tables without any indexes.
  2. Tables with clustered and/or non-clustered indexes (includes the ones with partitioning).
  3. Non clustered indexes.
  4. Indexed views.

And the modes of compression are either row level or page level compression.  The compression is handled completely on the database engine side, so there are no changes to the application code.  After enabling compression for an object, any time the data passes to and from the storage engine, it has to be compressed and decompressed respectively.  There is therefore an extra CPU overhead involved, but the amount of disk I/O saved by compression makes up for the CPU cost.  So, let's walk through an update statement and see what happens behind the scenes (assume that data compression has been done at the page level for the object):

update tableA set col1 = 2 where colx = 1;  (colx is the Primary Key column)

In any update statement, a read is done before the write in order to identify the record that is being updated.  So the relational engine makes a request to the storage engine to retrieve that data record, and the storage engine gets the record from disk (assuming it is not in the buffer cache already) and puts it into the buffer cache, still in a compressed format.  The decompression happens at the handover from the storage engine to the relational engine.  Having gotten the row that needs to be updated, the update is fired off and the row is passed back to the storage engine, which compresses it again and keeps it in the buffer cache until the row is flushed to disk in the compressed format.

The obvious reasons for doing data compression are:

a) Disk space utilization improves.
b) Less disk I/O for read and write operations.
c) More data gets stored in the buffer cache since even in the buffer cache, it is compressed data.

The disadvantage is, of course, the higher CPU load.  We still have to do a real benchmark to see what the impact is.

In tomorrow’s post, we will cover the row level and page level data compressions and will also go through the data compression wizard.

Posted in SQL Server | 2 Comments »

SQL Server 2008: Deprecated Feature – Notification Services

Posted by decipherinfosys on October 28, 2008

We had covered the different SQL Server services in one of our blog posts before. You can access that post over here. We had also covered in one of our previous posts that Notification Services is deprecated in SQL Server 2008; adoption was very poor because it was not easy or flexible enough to use. Being a deprecated feature means that even though it is still available in SQL Server 2008, it is slated to be removed in the next release. So, in case you are using it, what would be your migration options?

There is no single option as far as we have seen to date. We can however use a two-step approach:

1) Collect the information: You can do so by using the Policy Management feature (scheduled or on demand), or the wonderful auditing features of SQL Server 2008 including CDC (Change Data Capture). If you were using Notification Services for performance monitoring, you can make use of the Data Collection elements and extended events.

2) Notify: Once you have collected all the information that you wanted to track, you can schedule a SQL Agent job to send that information to you at regular intervals, configure DDL triggers to alert you, or use Database Mail configured for the alerts.
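As a rough sketch of the notify step (the profile name, recipient and summary table below are placeholders, not objects from this post), a SQL Agent job step could send the collected information via Database Mail:

-- Placeholder profile, recipient and source table
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DBA_Mail_Profile',
    @recipients   = 'dba_team@yourcompany.com',
    @subject      = 'Nightly audit summary',
    @query        = 'SELECT * FROM dbo.audit_summary',
    @attach_query_result_as_file = 1;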

If you want to continue using Notification Services against the SQL Server 2008 Database Engine, you can: get the SQL Server 2005 Notification Services Components Package RC1 from here.  It states:

This release of the SQL Server 2005 Notification Services components provides interoperability for running SQL Server 2005 Notification Services against either an instance of the SQL Server 2005 Database Engine or SQL Server 2008 Database Engine.

We would still advise removing that dependency on NS since it will not be available in the future.

Posted in SQL Server | 1 Comment »

SQL Server 2005 SP3

Posted by decipherinfosys on October 28, 2008

SQL Server 2005 SP3 is slated to be released by the end of this year as per the MSDN post over here. Now, would you rather wait for SP1 for SQL Server 2008 and move to it, or would you stick with SQL Server 2005 for some time?  At our client sites, almost all of the clients are using SQL Server 2008 in their test labs right now, regression testing and benchmarking their applications, making sure that everything works fine, and waiting for SP1 to be released before making the switch.  So, where do you stand on this topic?

Posted in SQL Server | Leave a Comment »

The Google G-1 Phone

Posted by decipherinfosys on October 27, 2008

Finally, I got my Google G-1 phone. Being a T-Mobile customer, this was the best phone from the wide range of choices that were available to me. I did consider moving to AT&T and getting an iPhone, but then I would have had to cancel my plan, pay the penalty and then move. I decided against it, stuck with T-Mobile and went with the G-1. So, how is it?

Prior to purchasing the phone, I went over the reviews done by several folks. Here are two of them which were very good and very detailed:

a) Review of G-1 by Walter S. Mossberg
b) Hands on introduction to G-1

The phone is very easy to use, has a very simple interface, and the e-mail and other device configurations were done in less than half an hour without referring to any manuals.  So, what are the pluses and the minuses?

Plus Points:

1) The trackball makes it very easy to navigate through the items.  Having a keyboard is better for a person like me who finds working with the iPhone interface cumbersome at times (though one does get used to it after some time).
2) Google applications and Google Search are very tightly integrated, which is very good.
3) IM options: Google Talk, AOL, Yahoo Messenger and Windows Live.
4) Music options: MP3, AMR, WMA, MIDI, WAV and M4A formats are supported.
5) The width of the screen is pretty good; smaller than the iPhone but still very nice, and you get 3 home screens.
6) The Android OS is very robust and much more powerful than Windows Mobile.  I haven't seen much lag time.
7) A good set of applications and games, and I am sure this will grow as time passes by.

Minus Points:

1) The same battery life issue as the iPhone: use WiFi, 3G, Bluetooth etc. and you will be charging it every 4 hours.
2) Google Maps on the G-1 does not seem to be as good as Google Maps on the iPhone or even Microsoft Maps on Windows Mobile.  I still need to play with it a bit more.
3) It comes with only 1GB MicroSD memory storage.
4) It is a bit bulky compared to the iPhone 3G, and it does not come with a case that has a belt clip.

Posted in News, Technology | Leave a Comment »

Returning a record set from a Function in Oracle, SQL Server and DB2

Posted by decipherinfosys on October 27, 2008

In one of our previous posts, we had covered how to return record sets from a stored procedure in Oracle, SQL Server and DB2 LUW. One of the readers recently asked how to do the same with a user defined function. We will demonstrate that in this blog post.

In Oracle, a function that returns a record set requires an object type with the desired attributes to be created first. Once the object type is created, we create a named table type of that object type, and the function can then use the named table type as its return type. We will show how to do it in the code snippets below, but first let us create a table and populate it with some data. If you already have a table with the same name, change all occurrences of the table name to some other name.

CREATE TABLE Test
(
Test_ID      NUMBER(9) NOT NULL,
Test_Desc    VARCHAR(30),
Test_DATE    DATE,
CONSTRAINT PK_TEST PRIMARY KEY(Test_ID)
)
-- TABLESPACE Clause
/

INSERT INTO Test(Test_Id, Test_Desc, Test_Date)
SELECT rownum, table_name, sysdate
FROM user_tables
WHERE rownum <= 5
/

Here is the data in the table.

SQL> select * from test;

TEST_ID TEST_DESC                      TEST_DATE
---------- ------------------------------ ---------
1 DEPT                           20-FEB-07
2 EMP                            20-FEB-07
3 BONUS                          20-FEB-07
4 SALGRADE                       20-FEB-07
5 TEST                           20-FEB-07

Now let us assume we want to return test_id and test_desc to the calling application using a function. First of all, we will create an object type. Make sure that the proper privileges are assigned to the user.

CREATE OR REPLACE TYPE TEST_OBJ_TYPE IS OBJECT
(
TEST_ID   NUMBER(9),
TEST_DESC VARCHAR(30)
)
/

Once the object type is created, we will create a named table type of the above object type.

CREATE OR REPLACE TYPE TEST_TABTYPE AS TABLE OF TEST_OBJ_TYPE
/

Now let us write the function code to return the result set.

CREATE OR REPLACE FUNCTION FN_GET_ROWS
RETURN TEST_TABTYPE
AS
V_Test_Tabtype Test_TabType;
BEGIN
SELECT TEST_OBJ_TYPE(A.Test_Id, A.Test_Desc)
BULK COLLECT INTO V_Test_TabType
FROM
(SELECT Test_Id, Test_Desc
FROM Test
) A;
RETURN V_Test_TabType;

EXCEPTION
WHEN OTHERS THEN
v_Test_TabType.DELETE;
RETURN v_Test_TabType;
END;
/

Create the above function and then run the following SQL to execute it. The SQL is followed by the result.

SQL> SELECT * FROM TABLE(FN_GET_ROWS);

TEST_ID TEST_DESC
---------- ------------------------------
1 DEPT
2 EMP
3 BONUS
4 SALGRADE
5 TEST

Here we have shown one example of returning a record set from a function. There are other ways of doing this as well, for example by initializing the named table type and then allocating space using .EXTEND as required.
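As a sketch of that alternative approach (reusing the TEST_TABTYPE and TEST_OBJ_TYPE created above; the function name here is made up), the collection can be grown row by row with .EXTEND inside a cursor loop:

CREATE OR REPLACE FUNCTION FN_GET_ROWS_EXT
RETURN TEST_TABTYPE
AS
-- Initialize the collection empty, then extend it one slot per row
V_Test_TabType TEST_TABTYPE := TEST_TABTYPE();
BEGIN
FOR rec IN (SELECT Test_Id, Test_Desc FROM Test)
LOOP
V_Test_TabType.EXTEND;
V_Test_TabType(V_Test_TabType.COUNT) := TEST_OBJ_TYPE(rec.Test_Id, rec.Test_Desc);
END LOOP;
RETURN V_Test_TabType;
END;
/

BULK COLLECT is usually the cleaner choice; the loop form is handy when each row needs some per-row processing before it is added to the collection.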

SQL Server supports scalar functions, inline table-valued functions and multi-statement table-valued functions. Here we will see how we can use a multi-statement table-valued function to return a record set. First let us create a table and populate it with sample data.

CREATE TABLE Test
(
Test_ID      INT IDENTITY(1,1) NOT NULL,
Test_Desc    VARCHAR(30),
Test_DATE    DATETIME,
CONSTRAINT PK_TEST PRIMARY KEY(Test_ID)
)
-- FILEGROUP Clause
GO

INSERT INTO Test(Test_Desc, Test_Date)
SELECT TOP 5 table_name, getDate()
FROM information_Schema.tables
GO

Following is the data in the table. You may have a different set of data.

Test_Id              Test_Desc        Test_Date
-------              ---------        ---------
1                    syssegments      2007-02-21 23:19:11.717
2                    sysconstraints   2007-02-21 23:19:11.717
3                    Invoice          2007-02-21 23:19:11.717
4                    TEMP_Invoice     2007-02-21 23:19:11.717
5                    Test             2007-02-21 23:19:11.717

Let us create the function. You can run the following code snippet from Query Analyzer.

CREATE FUNCTION DBO.FN_GET_ROWS()
RETURNS @TABLE TABLE
(
TEST_ID INT,
TEST_DESC VARCHAR(30)
)
AS
BEGIN
INSERT @TABLE
SELECT TEST_ID, TEST_DESC
FROM DBO.TEST
RETURN
END
GO

Once the function is created successfully, execute it using the following SQL. It is followed by the data returned by the function.

SELECT * FROM DBO.FN_GET_ROWS()
GO

Test_Id              Test_Desc
-------              ---------
1             syssegments
2             sysconstraints
3             Invoice
4             TEMP_Invoice
5             Test

Table-valued functions can also be used in join conditions.
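For example (a sketch against the dbo.Test table and dbo.FN_GET_ROWS function created above), the function's output can be joined back to a table like any other rowset:

-- Join the function's rowset back to the base table on the ID
SELECT t.Test_Id, t.Test_Desc, t.Test_Date
FROM dbo.Test t
INNER JOIN dbo.FN_GET_ROWS() f ON f.TEST_ID = t.Test_Id
GO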

DB2 does not support multi-statement table-valued functions; it supports only inline table-valued functions, so we do not have as much flexibility in DB2. But we will see how we can get data using inline table-valued functions in DB2.

First we will create an empty table. Please make sure that the statement terminator is defined as @.

CREATE TABLE Test
(
Test_ID      INT NOT NULL GENERATED BY DEFAULT AS
IDENTITY (START WITH +1, INCREMENT BY +1, CACHE 1000),
Test_Desc    VARCHAR(30),
Test_DATE    TIMESTAMP,
CONSTRAINT PK_TEST PRIMARY KEY(Test_ID)
)@

Let us create some data now.

INSERT INTO Test(Test_Desc, Test_Date)
SELECT table_name, current timestamp
FROM syscat.tables
FETCH FIRST 5 ROWS ONLY
@

Following is the data in the table. You may have a different set of data.

Test_Id       Test_Desc                   Test_Date
-------       ---------                   ----------
1             TEST                        2007-02-22 21:53:13.237
2             ATTRIBUTES                  2007-02-22 21:53:13.237
3             BUFFERPOOLDBPARTITIONS      2007-02-22 21:53:13.237
4             BUFFERPOOLNODES             2007-02-22 21:53:13.237
5             BUFFERPOOLS                 2007-02-22 21:53:13.237

Let us create the function. You can run the following code snippet from the Command Center or from the command line editor.

CREATE FUNCTION FN_GET_ROWS()
RETURNS TABLE
(
TEST_ID INT,
TEST_DESC VARCHAR(30)
)
LANGUAGE SQL
READS SQL DATA
CALLED ON NULL INPUT
NO EXTERNAL ACTION
BEGIN ATOMIC
RETURN
SELECT TEST_ID, TEST_DESC
FROM TEST;
END@

Once the function is created in the database, execute it using the following SQL. Again, the SQL is followed by the record set it returns.

-- This is how we call a table function in DB2. T is an alias.
SELECT * FROM TABLE(FN_GET_ROWS()) AS T@

Test_Id       Test_Desc
-------       ---------
1             TEST
2             ATTRIBUTES
3             BUFFERPOOLDBPARTITIONS
4             BUFFERPOOLNODES
5             BUFFERPOOLS

Posted in DB2 LUW, Oracle, SQL Server | 11 Comments »

Oracle Schema Audit Script

Posted by decipherinfosys on October 23, 2008

Yesterday, we had covered a system audit script for Oracle. Today, we are going to provide an audit script for the Oracle Schema/User:

rem
rem user_audit.sql
rem
rem This script will collect audit information for the Schema of interest
rem
rem NOTE:
rem Run this script as the schema owner

ALTER SESSION SET NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS'
/

PROMPT Generating Oracle_Schema audit report …..
SET PAGES 5000 LINES 120 VERIFY OFF ECHO OFF TRIMSPOOL ON TRIMOUT ON FEED OFF TERMOUT OFF
SET MARKUP HTML ON SPOOL ON HEAD '<TITLE> User Audit Report </TITLE>'
SPOOL user_audit.html
PROMPT
PROMPT
SELECT 'Schema AUDIT PERFORMED BY DB USER '||USER ||' AT ' || SYSTIMESTAMP AS "Schema AUDIT INFORMATION"
FROM dual
/

PROMPT
PROMPT Collecting DBA Views Select Privileges…
PROMPT
PROMPT Describing v$pending_xatrans$
DESC v$pending_xatrans$
PROMPT Describing v$xatrans$
DESC v$xatrans$

PROMPT
PROMPT Collecting Schema/User SYS Privileges...
PROMPT
COLUMN username HEADING 'DB|Username' FORMAT a30
COLUMN privilege HEADING 'Privilege|Name' FORMAT a40
COLUMN admin_option HEADING 'Admin|Option' FORMAT a6
SELECT username,
privilege,
admin_option
FROM user_sys_privs
/
CLEAR COLUMNS

PROMPT
PROMPT Collecting Schema/User Role Privileges...
PROMPT
COLUMN username HEADING 'DB|Username' FORMAT a30
COLUMN granted_role HEADING 'Role|Granted' FORMAT a30
COLUMN admin_option HEADING 'Admin|Option' FORMAT a6
COLUMN default_role HEADING 'Default?' FORMAT a8
SELECT username,
granted_role,
admin_option,
default_role
FROM user_role_privs
/
CLEAR COLUMNS

PROMPT
PROMPT Collecting Explicit Privileges Received by User for other Owners Tables…
PROMPT
COLUMN grantor HEADING 'Grantor' FORMAT a15 WRAP
COLUMN owner HEADING 'Table|Owner' FORMAT a20 WRAP
COLUMN table_name HEADING 'Table|Name' FORMAT a30
COLUMN privilege HEADING 'Privilege' FORMAT a30 WRAP
COLUMN grantable HEADING 'Grantable?' FORMAT a10
SELECT grantor,
owner,
table_name,
privilege,
grantable
FROM user_tab_privs_recd
/
CLEAR COLUMNS

PROMPT
PROMPT num_rows: Number of rows in the table
PROMPT blocks: Number of used data blocks in the table
PROMPT empty_blocks: Number of empty (never used) data blocks in the table
PROMPT avg_space: Average amount of free space, in kilobytes, in a data block allocated to the table
PROMPT chain_cnt: Number of rows in the table that are chained from one data block to another, or which have migrated to a new block, requiring a link to preserve the old ROWID
PROMPT avg_row_len: Average length of a row in the table in bytes
PROMPT num_freelist_blocks: The number of blocks on the freelist
PROMPT sample_size: Sample size used in analyzing this table
PROMPT user_stats: Were the statistics entered directly by the user?
PROMPT
PROMPT blevel: B*-Tree level: depth of the index from its root block to its leaf blocks. A depth of 0 indicates that the root block and leaf block are the same.
PROMPT leaf_blocks: Number of leaf blocks in the index
PROMPT distinct_keys: Number of distinct indexed values. For indexes that enforce UNIQUE and PRIMARY KEY constraints, this value is the same as the number of rows in the table
PROMPT
PROMPT Collecting Table Information…
PROMPT
COLUMN table_name HEADING 'Table|Name' FORMAT a30
COLUMN tablespace_name HEADING 'Tablespace|Name' FORMAT a15
COLUMN logging HEADING 'Log?' FORMAT a4
COLUMN pct_free HEADING 'Pct|Free' FORMAT 9999 JUST RIGHT
COLUMN pct_used HEADING 'Pct|Used' FORMAT 9999 JUST RIGHT
COLUMN ini_trans HEADING 'Ini|Tran' FORMAT 9999 JUST RIGHT
COLUMN freelists HEADING 'Free|lists' FORMAT 99999 JUST RIGHT
COLUMN num_freelist_blocks HEADING 'Blocks|on|Freelists' FORMAT 999999 JUST RIGHT
COLUMN table_cached HEADING 'Cache?' FORMAT a6
COLUMN buffer_pool HEADING 'Buffer|Pool' FORMAT a7
COLUMN row_movement HEADING 'Row|Move?' FORMAT a5
COLUMN monitoring HEADING 'Monitor?' FORMAT a8
SELECT table_name,
tablespace_name,
logging,
pct_free,
pct_used,
ini_trans,
freelists,
num_freelist_blocks,
cache AS table_cached,
buffer_pool,
(CASE WHEN row_movement = 'ENABLED' THEN 'YES' ELSE 'NO' END) AS row_movement,
monitoring
FROM user_tables
WHERE table_name NOT LIKE 'BIN$%'
ORDER BY table_name ASC
/
CLEAR COLUMNS

PROMPT
PROMPT Collecting Table Analyze Information…
PROMPT
COLUMN table_name HEADING 'Table|Name' FORMAT a30
COLUMN analyzed_date HEADING 'Analyzed|Date' FORMAT a11
COLUMN sample_size HEADING 'Sample|Size' FORMAT 99999999999
COLUMN num_rows HEADING 'Total|Rows' FORMAT 99999999999
COLUMN avg_row_len HEADING 'Avg|Row|Len' FORMAT 99999
COLUMN blocks HEADING 'Total|Blks' FORMAT 9999999
COLUMN empty_blocks HEADING 'Empty|Blks' FORMAT 99999
COLUMN avg_space HEADING 'Avg|Space' FORMAT 999999D99
COLUMN chain_cnt HEADING 'Chain|Count' FORMAT 9999999
COLUMN chain_pct HEADING 'Chain|PCT' FORMAT 999D99
SELECT table_name,
TO_CHAR(last_analyzed, 'DD-MON-YYYY') AS analyzed_date,
sample_size,
num_rows,
avg_row_len,
blocks,
empty_blocks,
(avg_space/1024) AS avg_space,
chain_cnt,
(chain_cnt/(CASE WHEN num_rows = 0 THEN 1 ELSE num_rows END))*100 AS chain_pct
FROM user_tables
WHERE temporary='N'
AND table_name NOT LIKE 'BIN$%'
ORDER BY TRUNC(last_analyzed) DESC, ((chain_cnt/(CASE WHEN num_rows = 0 THEN 1 ELSE num_rows END))*100) DESC
/
CLEAR COLUMNS

PROMPT
PROMPT Collecting Index Information…
PROMPT
COLUMN index_name HEADING 'Index|Name' FORMAT a30
COLUMN index_type HEADING 'Index|Type' FORMAT a9
COLUMN tablespace_name HEADING 'Tablespace|Name' FORMAT a15
COLUMN uniqueness HEADING 'Unq?' FORMAT a4
COLUMN logging HEADING 'Log?' FORMAT a4
COLUMN pct_free HEADING 'Pct|Free' FORMAT 9999 JUST RIGHT
COLUMN ini_trans HEADING 'Ini|Tran' FORMAT 9999 JUST RIGHT
COLUMN freelists HEADING 'Free|lists' FORMAT 99999 JUST RIGHT
COLUMN buffer_pool HEADING 'Buffer|Pool' FORMAT a7
COLUMN user_stats HEADING 'User|Stats' FORMAT a5
COLUMN status HEADING 'Status' FORMAT a8
SELECT index_name,
(CASE WHEN index_type = 'FUNCTION-BASED NORMAL' THEN 'FB NORMAL'
WHEN index_type = 'FUNCTION-BASED BITMAP' THEN 'FB BITMAP'
ELSE index_type END) AS index_type,
tablespace_name,
(CASE WHEN uniqueness = 'UNIQUE' THEN 'YES' ELSE 'NO' END) AS uniqueness,
logging,
pct_free,
ini_trans,
freelists,
buffer_pool,
user_stats,
status
FROM user_indexes
WHERE index_name NOT LIKE 'BIN$%'
ORDER BY table_name, index_name
/
CLEAR COLUMNS

PROMPT
PROMPT Collecting Index Analyze Information…
PROMPT
COLUMN index_name HEADING 'Index|Name' FORMAT a30
COLUMN analyzed_date HEADING 'Analyzed|Date' FORMAT a11
COLUMN sample_size HEADING 'Sample|Size' FORMAT 99999999999
COLUMN num_rows HEADING 'Total|Rows' FORMAT 99999999999
COLUMN blevel HEADING 'BLvl' FORMAT 9999
COLUMN leaf_blocks HEADING 'Lf|Blks' FORMAT 9999
COLUMN distinct_keys HEADING 'Distinct|Keys' FORMAT 99999999999
COLUMN clustering_factor HEADING 'Cluster|Factor' FORMAT 99999999999
SELECT index_name,
TO_CHAR(last_analyzed, 'DD-MON-YYYY') AS analyzed_date,
sample_size,
num_rows,
blevel,
leaf_blocks,
distinct_keys,
clustering_factor
FROM user_indexes
WHERE index_type='NORMAL'
AND temporary='N'
AND index_name NOT LIKE 'BIN$%'
ORDER BY last_analyzed DESC
/
CLEAR COLUMNS

PROMPT
PROMPT Cache information on Sequences
PROMPT
COLUMN sequence_name HEADING 'Sequence|Name' FORMAT a30
COLUMN cache_size HEADING 'Cache|Size' FORMAT 999,999,999
SELECT /*+rule*/
sequence_name,
cache_size
FROM user_sequences
/
CLEAR COLUMNS

PROMPT
PROMPT Collecting SKU Component Index Information on ITEM_MASTER and PIX_SYNC tables…
PROMPT
COLUMN table_name HEADING 'Table Name' form a30
COLUMN index_name HEADING 'Index Name' form a30
COLUMN column_name HEADING 'Column|Name' form a30
SELECT /*+rule*/
a.table_name,
a.index_name,
b.column_name,
(CASE WHEN a.uniqueness = 'UNIQUE' THEN 'YES' ELSE 'NO' END) AS uniqueness
FROM user_indexes a inner join user_ind_columns b on b.index_name = a.index_name
WHERE a.table_name in ('ITEM_MASTER', 'PIX_SYNC')
AND a.index_name NOT LIKE 'BIN$%'
ORDER BY a.table_name, a.index_name, b.column_position ASC
/
CLEAR COLUMNS

PROMPT
PROMPT Collecting Locn_hdr Component Index Information On LOCN_HDR…
PROMPT
COLUMN table_name HEADING 'Table Name' form a30
COLUMN index_name HEADING 'Index Name' form a30
COLUMN column_name HEADING 'Column|Name' form a30
SELECT /*+rule*/
a.table_name,
a.index_name,
b.column_name,
(CASE WHEN a.uniqueness = 'UNIQUE' THEN 'YES' ELSE 'NO' END) AS uniqueness
FROM user_indexes a inner join user_ind_columns b on b.index_name = a.index_name
WHERE a.table_name = 'LOCN_HDR'
AND a.index_name NOT LIKE 'BIN$%'
ORDER BY a.table_name, a.index_name, b.column_position ASC
/
CLEAR COLUMNS

PROMPT
PROMPT DB Connection Info
PROMPT
COLUMN program HEADING 'Program' FORMAT a80
COLUMN CNT HEADING 'Count' FORMAT 999,999,999,999
SELECT program, COUNT(*) AS cnt
FROM v$session
WHERE username NOT IN('SYS', 'SYSTEM')
GROUP BY program
HAVING COUNT(*) > 1
ORDER BY cnt DESC
/
CLEAR COLUMNS

PROMPT
PROMPT DB Objects with parallel degree set
PROMPT
COLUMN owner HEADING 'Owner' FORMAT a30
COLUMN table_name HEADING 'Table Name' FORMAT a30
COLUMN index_name HEADING 'Index Name' FORMAT a30
COLUMN degree HEADING 'Degree' FORMAT a10
COLUMN instances HEADING 'Instances' FORMAT a10
COLUMN name HEADING 'Parallel Operations' FORMAT a20
SELECT owner, table_name, DEGREE, INSTANCES
FROM dba_tables
WHERE (TRIM (DEGREE) NOT IN ('1','0') OR TRIM (INSTANCES) NOT IN ('1','0'))
AND owner NOT IN ('SYSTEM','SYS','OUTLN','DBSNMP','OPS$ORACLE','EXPORT_USER','IMPORT_USER')
/

SELECT owner, index_name, degree, instances
FROM dba_indexes
WHERE (TRIM (DEGREE) NOT IN ('1','0') OR TRIM (INSTANCES) NOT IN ('1','0'))
AND owner NOT IN ('SYSTEM','SYS','OUTLN','DBSNMP','OPS$ORACLE','EXPORT_USER','IMPORT_USER')
/

SELECT name,value FROM v$sysstat WHERE NAME LIKE 'Parallel%'
/
CLEAR COLUMNS

PROMPT
PROMPT Segment Statistics
PROMPT
COLUMN statistic_name HEADING 'Statistic|Name' FORMAT a30
COLUMN object_name HEADING 'Object|Name' FORMAT a30
COLUMN object_type HEADING 'Object|Type' FORMAT a25
COLUMN value HEADING 'Value' FORMAT 999,999,999,999
SELECT statistic_name, object_name, object_type, value
FROM v$segment_statistics
WHERE value > 100
AND owner = USER
AND LOWER(statistic_name) IN ('itl waits', 'buffer busy waits')
ORDER BY statistic_name, value DESC
/
CLEAR COLUMNS

SPOOL OFF
EXIT

Posted in Oracle | Leave a Comment »

System Audit Script for Oracle

Posted by decipherinfosys on October 22, 2008

Here is a system audit script for Oracle environments. You can modify the values and settings as per your needs. The idea is to make sure that all the environments are audited on a regular basis to ensure that there are no discrepancies. There are similar scripts at the user (Schema) level as well, and a similar set of audit scripts for SQL Server, DB2 LUW and MySQL environments. We will post those in the days to come.

The script takes the data set that it collects and prepares an HTML report.

rem
rem sys_audit.sql
rem
rem NOTE:
rem Run this script as a user with SYS/SYSTEM privileges
rem Script supports up to Oracle 10g.

ALTER SESSION SET NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS'
/

PROMPT Generating sys audit report …..
SET PAGES 2000 LINES 120 VERIFY OFF ECHO OFF TRIMSPOOL ON TRIMOUT ON FEED OFF TERMOUT OFF
SET MARKUP HTML ON SPOOL ON HEAD '<TITLE> Database Audit Report </TITLE>'
SPOOL sys_audit.html
PROMPT
PROMPT
SELECT 'SYSTEM AUDIT PERFORMED BY DB USER '||USER ||' AT ' || SYSTIMESTAMP AS "SYS AUDIT INFORMATION"
FROM dual
/
PROMPT Database Init Parameters Information
PROMPT ====================================
COLUMN name HEADING 'Parameter|Name' FORMAT a35
COLUMN current_value HEADING 'Current|Value' FORMAT a35 wrap
COLUMN recommend_value HEADING 'Recommended|Value' FORMAT a35 wrap
SELECT name,
current_value,
recommend_value
FROM
(SELECT name,
value current_value,
(CASE
/* General*/
WHEN LOWER(name) = 'open_cursors' AND TO_NUMBER(value) <> 512 THEN
'Recommended setting is 512'
WHEN LOWER(name) = 'cursor_sharing' AND LOWER(value) <> 'exact' THEN
'Not OK. Set the value to EXACT'
WHEN LOWER(name) = 'session_cached_cursors' AND TO_NUMBER(value) < 400 THEN
'Not OK. Set the parameter to 400'
WHEN LOWER(name) = 'db_block_checksum' AND LOWER(value) <> 'true' THEN
'Not OK. Set the parameter to TRUE'
WHEN LOWER(name) = 'db_files' AND TO_NUMBER(value) < 500 THEN
'Set the parameter to 500'
WHEN LOWER(name) = 'query_rewrite_enabled' AND LOWER(value) <> 'true' THEN
'Not OK. Set the parameter to TRUE'
WHEN LOWER(name) = 'query_rewrite_integrity' AND LOWER(value) <> 'trusted' THEN
'Not OK. Set the parameter to TRUSTED'
/* Optimizer */
WHEN LOWER(name) = 'optimizer_mode' AND LOWER(value) <> 'first_rows' THEN
'Not OK. Set the parameter to FIRST_ROWS'
WHEN LOWER(name) = 'optimizer_index_caching' AND TO_NUMBER(value) <> 90 THEN
'Recommended setting is 90'
WHEN LOWER(name) = 'optimizer_index_cost_adj' AND TO_NUMBER(value) <> 25 THEN
'Recommended setting is 25'
WHEN LOWER(name) = '_sort_elimination_cost_ratio' AND TO_NUMBER(value) <> 4 THEN
'Recommended setting is 4'
WHEN LOWER(name) = '_b_tree_bitmap_plans' AND LOWER(value) <> 'false' THEN
'Not OK. Set the parameter to FALSE'
WHEN LOWER(name) = 'optimizer_dynamic_sampling' AND TO_NUMBER(value) <= 1 THEN
'Set it to at least 2. This helps when accessing data from global temporary tables, as up-to-date statistics may not be available for them'
WHEN LOWER(name) = 'db_file_multiblock_read_count' AND TO_NUMBER(value) > 8 THEN
'Recommended setting is 8 for an OLTP system'
/* Auto Management */
WHEN LOWER(name) = 'workarea_size_policy' AND LOWER(value) <> 'auto' THEN
'Not OK. Set the parameter to AUTO'
WHEN LOWER(name) = 'pga_aggregate_target' AND TO_NUMBER(value) < 1073741824 THEN
'Not OK. Recommended setting is 1073741824 (1g)'
WHEN LOWER(name) = 'sort_area_size' AND TO_NUMBER(value) < 1048576 THEN
'Increase to 1048576. (OK if pga_aggregate_target/workarea_size_policy are used)'
WHEN LOWER(name) = 'sort_area_retained_size' AND TO_NUMBER(value) < 16384 THEN
'Increase to 16384. (OK if pga_aggregate_target/workarea_size_policy are used)'
/* Auto UNDO Management */
WHEN LOWER(name) = 'undo_management' AND LOWER(value) <> 'auto' THEN
'Not OK. Set the parameter to AUTO'
WHEN LOWER(name) = 'undo_retention' AND TO_NUMBER(value) <> 900 THEN
'Not OK. Recommended setting is 900'
/* Memory Management */
WHEN LOWER(name) = 'db_cache_size' AND TO_NUMBER(value) < 1610612736 THEN
'Not OK. Increase to 1610612736 (1.5g) minimum. Recommended is 2147483648 (2g)'
WHEN LOWER(name) = 'db_cache_advice' AND LOWER(value) <> 'off' THEN
'Not OK. Set the parameter to OFF'
WHEN LOWER(name) = 'db_block_size' AND TO_NUMBER(value) <> 8192 THEN
'Not OK. Set the parameter to 8192 (8 kb)'
WHEN LOWER(name) = 'db_block_buffers' AND TO_NUMBER(value) <> 0 THEN
'Not OK. Set the parameter to 0 (zero)'
WHEN LOWER(name) = 'sga_max_size' AND TO_NUMBER(value) < 3221225472 THEN
'Not OK. Increase to 3221225472 (3g) (recommended)'
WHEN LOWER(name) = 'sga_target' AND TO_NUMBER(value) < 3221225472 THEN
'Not OK. Increase to 3221225472 (3g) (recommended)'
WHEN LOWER(name) = 'log_checkpoint_interval' AND TO_NUMBER(value) <> 0 THEN
'Not OK. Set the parameter to 0 (zero)'
WHEN LOWER(name) = 'shared_pool_size' AND TO_NUMBER(value) < 419430400 THEN
'Not OK. Increase to 419430400 (400m) minimum (recommended)'
WHEN LOWER(name) = 'shared_pool_reserved_size' AND TO_NUMBER(value) < 16777216 THEN
'Not OK. Increase to 16777216 (16m) minimum (recommended)'
WHEN LOWER(name) = 'log_buffer' AND TO_NUMBER(value) < 1048576 THEN
'Not OK. Increase to 1048576 (1m)'
WHEN LOWER(name) = 'processes' AND TO_NUMBER(value) <> 500 THEN
'Recommended setting is 500'
/* Recovery */
WHEN LOWER(name) = 'fast_start_mttr_target' AND TO_NUMBER(value) <> 900 THEN
'Recommended setting is 900'
WHEN LOWER(name) = 'log_archive_start' AND LOWER(value) <> 'true' THEN
'Not archiving? Set the parameter to TRUE. Production databases should run in ARCHIVELOG mode'
/* Shared Server */
WHEN LOWER(name) = 'shared_servers' AND TO_NUMBER(value) <> 0 THEN
'Not OK. Set the parameter to 0 (zero)'
WHEN LOWER(name) = 'dispatchers' AND UPPER(value) NOT LIKE '(PROTOCOL=TCP)%' THEN
'Not OK. Set the parameter to default'
/* Other */
WHEN LOWER(name) = 'parallel_automatic_tuning' AND LOWER(value) <> 'false' THEN
'Not OK. Set the parameter to FALSE'
WHEN LOWER(name) = 'parallel_adaptive_multi_user' AND LOWER(value) <> 'false' THEN
'Not OK. Set the parameter to FALSE'
WHEN LOWER(name) = 'parallel_max_servers' AND TO_NUMBER(value) > 5 THEN
'Not OK. Set the parameter value to 5 (default)'
WHEN LOWER(name) = 'skip_unusable_indexes' AND LOWER(value) <> 'true' THEN
'Not OK. Set the parameter to TRUE'
WHEN LOWER(name) = 'plsql_optimize_level' AND TO_NUMBER(value) <> 1 THEN
'Not OK. Set the parameter value to 1'
WHEN LOWER(name) = 'audit_trail' AND LOWER(value) NOT IN ('none', 'false') THEN
'Not OK. Auditing is not recommended due to its performance overhead'
/* Information */
WHEN LOWER(name) = 'cluster_database' AND LOWER(value) = 'true' THEN
'OK. Database setup with RAC'
ELSE
'OK'
END) recommend_value
FROM v$parameter
UNION
SELECT '_b_tree_bitmap_plans', 'Not Set', 'Add this parameter to your init.ora and set it to FALSE'
FROM DUAL
WHERE NOT EXISTS (SELECT 1
FROM v$parameter
WHERE name = '_b_tree_bitmap_plans')
UNION
SELECT '_sort_elimination_cost_ratio', 'Not Set', 'Add this parameter to your init.ora and set it to 4'
FROM DUAL
WHERE NOT EXISTS (SELECT 1
FROM v$parameter
WHERE name = '_sort_elimination_cost_ratio'))

/
CLEAR COLUMNS
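The heart of the query above is the CASE expression that maps each parameter to either 'OK' or a recommendation string. As an aside, the same check-and-recommend pattern can be sketched in a few lines of Python; the rules and sample values below are hypothetical, not output from v$parameter:

```python
# Minimal sketch of the CASE-based recommendation pattern used in the
# v$parameter query above. Each rule pairs a predicate with the advice
# to emit when the predicate fires; the two rules shown are examples.
checks = {
    "cursor_sharing": (lambda v: v.lower() != "exact",
                       "Not OK. Set the value to EXACT"),
    "open_cursors":   (lambda v: int(v) != 512,
                       "Recommended setting is 512"),
}

def recommend(name, value):
    """Return the advice for one parameter, or 'OK' if it passes."""
    predicate, advice = checks.get(name, (lambda v: False, None))
    return advice if predicate(value) else "OK"

print(recommend("cursor_sharing", "FORCE"))  # a failing value
print(recommend("open_cursors", "512"))      # a passing value
```

The SQL keeps everything in one statement so that it can run with nothing but a SELECT privilege on v$parameter; the Python version is only meant to make the rule structure explicit.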

PROMPT
PROMPT
PROMPT NLS Parameter Settings
PROMPT ======================
COLUMN parameter HEADING 'Parameter|Name' FORMAT a35
COLUMN current_value HEADING 'Current|Value' FORMAT a35 WRAP
COLUMN recommend_value HEADING 'Recommended|Value' FORMAT a35 WRAP
SELECT parameter,
current_value,
recommend_value
FROM
(SELECT parameter,
value current_value,
(CASE
/* General */
WHEN LOWER(parameter) = 'nls_length_semantics' AND LOWER(value) <> 'char' THEN
'Change the semantics to CHAR'
WHEN LOWER(parameter) = 'nls_characterset' AND LOWER(value) NOT IN ('utf8', 'al32utf8') THEN
'Database character set should be at least UTF8 (Unicode)'
ELSE
NULL
END) recommend_value
FROM v$nls_parameters)
/
CLEAR COLUMNS

PROMPT
PROMPT
PROMPT Database Information
PROMPT ====================
COLUMN name HEADING 'Database|Name' FORMAT a9
COLUMN created HEADING 'Created|Date' FORMAT a17
COLUMN resetlogs_time HEADING 'ResetLogs|Time' FORMAT a17
COLUMN log_mode HEADING 'Log|Mode' FORMAT a12
COLUMN open_mode HEADING 'Open|Mode' FORMAT a10
COLUMN database_role HEADING 'Database|Role' FORMAT a16
COLUMN protection_mode HEADING 'Protection|Mode' FORMAT a20
SELECT name,
TO_CHAR(created, 'DD-MON-YYYY HH24:MI') AS created,
TO_CHAR(resetlogs_time, 'DD-MON-YYYY HH24:MI') AS resetlogs_time,
log_mode,
open_mode,
database_role,
protection_mode
FROM v$database
/
CLEAR COLUMNS

PROMPT
PROMPT
PROMPT Database Version Information
PROMPT ============================
COLUMN banner HEADING 'Version|Information' FORMAT a70
SELECT *
FROM v$version
/
CLEAR COLUMNS
PROMPT
PROMPT
PROMPT SGA Size
PROMPT ========
PROMPT
SHOW SGA

PROMPT
PROMPT
PROMPT Database Archival
PROMPT =================
ARCHIVE LOG LIST

PROMPT
PROMPT
PROMPT User and Tablespace Information
PROMPT ===============================
COLUMN username HEADING 'User|Name' FORMAT a30
COLUMN default_tablespace HEADING 'Default|Tablespace' FORMAT a30
COLUMN temporary_tablespace HEADING 'Temp|Tablespace' FORMAT a30
SELECT username,
default_tablespace,
temporary_tablespace
FROM dba_users
ORDER BY username
/
CLEAR COLUMNS

PROMPT
PROMPT
PROMPT Database Instance Information
PROMPT =============================
COLUMN instance_name HEADING 'Instance|Name' FORMAT a16
COLUMN host_name HEADING 'Host|Name' FORMAT a20
COLUMN startup_time HEADING 'Startup|Time' FORMAT a17
COLUMN version HEADING 'Version' FORMAT a12
COLUMN status HEADING 'Status' FORMAT a12
SELECT instance_name,
host_name,
TO_CHAR(startup_time, 'DD-MON-YYYY HH24:MI') startup_time,
version,
status
FROM v$instance
/
CLEAR COLUMNS
PROMPT
PROMPT
PROMPT Database Log Information
PROMPT ========================
SELECT *
FROM v$log
/
CLEAR COLUMNS

PROMPT
PROMPT
PROMPT Database Log Files Information
PROMPT ==============================
COLUMN member HEADING 'Log|Filename' FORMAT a60
SELECT *
FROM v$logfile
/
CLEAR COLUMNS

PROMPT
PROMPT
PROMPT Database Control Files Information
PROMPT ==================================
COLUMN status HEADING 'Status'
COLUMN name HEADING 'ControlFile|Name' FORMAT a60
SELECT status,
name
FROM v$controlfile
/
CLEAR COLUMNS

PROMPT
PROMPT
PROMPT Shared Pool Usage in (MB)
PROMPT =========================
COLUMN shared_pool_used HEADING 'Shared Pool|Used' FORMAT 999,999.99
COLUMN shared_pool_size HEADING 'Shared Pool|Size' FORMAT 999,999.99
COLUMN shared_pool_avail HEADING 'Shared Pool|Available' FORMAT 999,999.99
COLUMN shared_pool_pct HEADING 'Shared Pool|Percentage' FORMAT 999.99
SELECT SUM(a.bytes)/1048576 AS shared_pool_used,
MAX(b.value)/1048576 AS shared_pool_size,
(MAX(b.value)/1048576) - (SUM(a.bytes)/1048576) AS shared_pool_avail,
(SUM(a.bytes)/MAX(b.value))*100 AS shared_pool_pct
FROM v$sgastat a,
v$parameter b
WHERE a.pool = 'shared pool'
AND a.name != 'free memory'
AND b.name = 'shared_pool_size'
/
CLEAR COLUMNS
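The shared pool query above is just arithmetic on two numbers: the bytes in use (everything in v$sgastat for the shared pool except "free memory") and the configured shared_pool_size. The same math, sketched in Python with made-up byte counts:

```python
MB = 1048576  # bytes per megabyte, the divisor used in the query above

def shared_pool_usage(used_bytes, pool_bytes):
    """Mirror the four columns of the shared pool query: used, size,
    available and percentage used, with the sizes reported in MB."""
    return {
        "used_mb":  used_bytes / MB,
        "size_mb":  pool_bytes / MB,
        "avail_mb": (pool_bytes - used_bytes) / MB,
        "pct_used": used_bytes / pool_bytes * 100,
    }

# Hypothetical figures: 300 MB in use out of a 400 MB shared pool.
print(shared_pool_usage(300 * MB, 400 * MB))  # pct_used -> 75.0
```

Note that the SQL reads the configured size from v$parameter, so on instances managed via sga_target the reported "size" is the floor set by shared_pool_size, not necessarily the current allocation.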

PROMPT
PROMPT
PROMPT Database Tablespace Information
PROMPT ===============================
COLUMN tablespace_name HEADING 'Tablespace|Name' FORMAT a30
COLUMN status HEADING 'Status' FORMAT a7
COLUMN block_size HEADING 'Block|Size' FORMAT 99999 JUSTIFY RIGHT
COLUMN contents HEADING 'Contents' FORMAT a10 WRAP
COLUMN extent_management HEADING 'Extent|Management' FORMAT a10 WRAP
COLUMN allocation_type HEADING 'Allocation|Type' FORMAT a10 WRAP
COLUMN segment_space_management HEADING 'Segment|Space|Management' FORMAT a10
SELECT tablespace_name,
status,
block_size,
contents,
extent_management,
allocation_type,
segment_space_management
FROM dba_tablespaces
/

PROMPT
PROMPT
PROMPT Datafile Information
PROMPT ====================
COLUMN tablespace_name HEADING 'Tablespace|Name' FORMAT a30
COLUMN file_name HEADING 'File|Name' FORMAT a60
SELECT tablespace_name,
file_name
FROM dba_data_files
UNION
SELECT tablespace_name,
file_name
FROM dba_temp_files
ORDER BY tablespace_name
/
CLEAR COLUMNS

PROMPT
PROMPT
PROMPT Objects that need to be pinned in the Shared Pool
PROMPT =================================================
COLUMN object_name HEADING 'Object|Name' FORMAT a50
COLUMN object_type HEADING 'Object|Type' FORMAT a25
COLUMN pin_command HEADING 'Statement|to Pin' FORMAT a80
SELECT owner||'.'||name AS object_name,
type AS object_type,
'EXEC DBMS_SHARED_POOL.Keep('||CHR(39)||owner||'.'||name||CHR(39)||','||
CHR(39)||DECODE(type, 'PACKAGE', 'P', 'PROCEDURE', 'P', 'FUNCTION', 'P', 'SEQUENCE', 'Q', 'TRIGGER', 'R')||CHR(39)||')' AS pin_command
FROM v$db_object_cache
WHERE type NOT IN ('NOT LOADED', 'NON-EXISTENT', 'VIEW', 'TABLE', 'INVALID TYPE', 'CURSOR', 'PACKAGE BODY')
AND executions > loads
AND executions > 1
AND kept = 'NO'
AND owner <> 'SYS'
ORDER BY owner, namespace, type, executions DESC
/
CLEAR COLUMNS
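The pin query builds its third column by string concatenation: CHR(39) is a single quote, and the DECODE maps each object type to the flag that DBMS_SHARED_POOL.KEEP expects. The same string assembly, sketched in Python (the owner and object names are made up for illustration):

```python
# Mirror the SQL concatenation that builds the KEEP call. The type-to-flag
# mapping follows the DECODE in the query above: packages, procedures and
# functions are 'P', sequences 'Q', triggers 'R'.
TYPE_FLAG = {"PACKAGE": "P", "PROCEDURE": "P", "FUNCTION": "P",
             "SEQUENCE": "Q", "TRIGGER": "R"}

def pin_command(owner, name, obj_type):
    """Build the SQL*Plus statement that pins one object."""
    flag = TYPE_FLAG[obj_type]
    return f"EXEC DBMS_SHARED_POOL.Keep('{owner}.{name}','{flag}')"

print(pin_command("APP", "PKG_ORDERS", "PACKAGE"))
# -> EXEC DBMS_SHARED_POOL.Keep('APP.PKG_ORDERS','P')
```

Generating the statements rather than executing them lets a DBA review the candidate list before pinning anything.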

PROMPT
PROMPT
PROMPT DBA Jobs Information
PROMPT ====================
COLUMN job HEADING 'Job#' FORMAT 99999
COLUMN log_user HEADING 'Job|Owner' FORMAT a20 WRAP
COLUMN schema_user HEADING 'Under|Schema' FORMAT a20 WRAP
COLUMN last_date HEADING 'Last|Run|Date' FORMAT a17
COLUMN next_date HEADING 'Next|Run|Date' FORMAT a17
COLUMN total_time_min HEADING 'Total|Time|(min)' FORMAT 99999
COLUMN failures HEADING 'Times|Failed' FORMAT 99999
SELECT job,
log_user,
schema_user,
TO_CHAR(last_date, 'DD-MON-YYYY HH24:MI') AS last_date,
TO_CHAR(next_date, 'DD-MON-YYYY HH24:MI') AS next_date,
(total_time/60) AS total_time_min,
failures
FROM dba_jobs
/
CLEAR COLUMNS

PROMPT
PROMPT
PROMPT DBA Job Description
PROMPT ===================
COLUMN job HEADING 'Job#' FORMAT 99999
COLUMN what HEADING 'Description' FORMAT a100 WRAP
SELECT job,
what
FROM dba_jobs
/
CLEAR COLUMNS

PROMPT
PROMPT
PROMPT DBA SYS Privileges
PROMPT ==================
COLUMN grantee HEADING 'Role|Name' FORMAT a30
COLUMN privilege HEADING 'Privilege|Name' FORMAT a40
SELECT grantee,
privilege
FROM dba_sys_privs
WHERE grantee NOT IN (SELECT username FROM dba_users)
ORDER BY grantee
/
CLEAR COLUMNS

PROMPT
PROMPT
PROMPT DBA Roles
PROMPT =========
COLUMN role HEADING 'Role Name'
SELECT role
FROM dba_roles
ORDER BY role
/
CLEAR COLUMNS

PROMPT
PROMPT
PROMPT DBA Role Privileges
PROMPT ===================
COLUMN grantee HEADING 'Role|Name' FORMAT a30
COLUMN granted_role HEADING 'Role|Granted' FORMAT a30
SELECT grantee,
granted_role
FROM dba_role_privs
WHERE grantee NOT IN (SELECT username FROM dba_users)
ORDER BY grantee
/
CLEAR COLUMNS

SPOOL OFF
EXIT

Posted in Oracle | 1 Comment »

SQL Server 2008 and SQL Server 2005 (Post SP2) Builds

Posted by decipherinfosys on October 21, 2008

MSFT has released two KB articles that track the builds released after the SQL Server 2008 RTM version and after the SQL Server 2005 SP2 release. You can access them here:

SQL Server 2008: Builds post the RTM release.
SQL Server 2005: Builds post the SP2 release.

Posted in SQL Server | Leave a Comment »

Perfmon Tools

Posted by decipherinfosys on October 17, 2008

We have covered perfmon before in a couple of articles and blog posts. In today's post, we will look at some of the free tools that MSFT offers to make working with perfmon even easier. These tools provide a wizard, extract the data in different formats, and allow us to manage and schedule the performance counter collections. There are three such tools:

1) Performance Monitor Wizard: You can download this tool from here. It provides a wizard that simplifies the creation and management of the performance monitor logs. It is a fairly simple tool and has a limited set of counter options (OS counters, Exchange counters, etc.), with three collection methods to choose from: Standard, High CPU, and Advanced Configuration.

It does not include any SQL Server counters, though. You can use the wizard and then add your own counters to the collection once the perfmon log set is in place.

2) Relog and 3) Logman are two other very handy utilities. We have blogged about them before, and you can read more on them over here.

Posted in Windows | Leave a Comment »