Systems Engineering and RDBMS

Archive for February 14th, 2007

How to move data files in ASM using Mount Mode

Posted by decipherinfosys on February 14, 2007

Here is an example of how a datafile can be moved to another diskgroup while the database is in MOUNT mode.  This procedure is only for seasoned DBAs who have full responsibility for the database they are working on.

1.    Check the archive mode:

Log into the db as sysdba

SQL> archive log list;
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination /u02/app/oracle/oradata/V1020/archive
Oldest online log sequence 439
Current log sequence 442

2.    Create a tablespace with a datafile in diskgroup DG1:

SQL> create tablespace ts1 datafile '+DG1' size 10M;
Tablespace created.

3.    Check the file name:

SQL> select file_name from dba_data_files where tablespace_name='TS1';
FILE_NAME
--------------------------------------------------------------------------------
+DG1/v1020/datafile/ts1.257.588859119

4.    Shut down the database and start it in mount mode:

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.

Total System Global Area 419430400 bytes
Fixed Size 1261188 bytes
Variable Size 121635196 bytes
Database Buffers 293601280 bytes
Redo Buffers 2932736 bytes
Database mounted.

Exit from sqlplus

5.    Copy the file to the new diskgroup (DG2) using RMAN. Note the output file name reported by RMAN; it is needed for the rename in the next step.

At the command prompt, start rman.
$ rman

Recovery Manager: Release 10.2.0.2.0 - Production on Thu Apr 27 12:01:09 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.

RMAN> connect target /
connected to target database: V1020 (DBID=132646007, not open)

RMAN> copy datafile '+DG1/v1020/datafile/ts1.257.588859119' to '+DG2';
Starting backup at 27-APR-06
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=155 devtype=DISK
channel ORA_DISK_1: starting datafile copy
input datafile fno=00002 name=+DG1/v1020/datafile/ts1.257.588859119
output filename=+DG2/v1020/datafile/ts1.256.588859303 tag=TAG20060427T120142 recid=2 stamp=588859309
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:08
Finished backup at 27-APR-06

RMAN> exit
Recovery Manager complete.

6.    Update the data dictionary with the new file name:

Log into sqlplus as sysdba.

SQL> alter database rename file '+DG1/v1020/datafile/ts1.257.588859119' to '+DG2/v1020/datafile/ts1.256.588859303';
Database altered.

7.    Open the database:

SQL> alter database open;
Database altered.

8.    Check the new file location:

SQL> select file_name from dba_data_files where tablespace_name='TS1';
FILE_NAME
--------------------------------------------------------------------------------
+DG2/v1020/datafile/ts1.256.588859303
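Note that the original file still exists in DG1 after the rename; the data dictionary simply points at the new copy in DG2. If the old file is no longer needed, one way to clean it up is through RMAN, as a sketch (the file name below is from this example; adjust for your environment):

```sql
-- The original file in DG1 is now just an unused datafile copy;
-- catalog it so RMAN knows about it, then delete it:
RMAN> catalog datafilecopy '+DG1/v1020/datafile/ts1.257.588859119';
RMAN> delete datafilecopy '+DG1/v1020/datafile/ts1.257.588859119';
```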

Posted in Oracle | Leave a Comment »

Collecting nightly statistics in SQL Server

Posted by decipherinfosys on February 14, 2007

As promised in one of the earlier blog posts, here is a stored procedure that you can use to collect stats for the tables and their indexes in SQL Server.  You pass in a number as the percentage of the data that should be sampled for collection of the stats.  Ideally, if you do have a window of opportunity, you should collect full 100% stats, though in large systems this will not be possible because of the amount of data that resides in those databases.  In addition, you should have the “auto create statistics” and “auto update statistics” options on for the database.  If you have a true OLTP system with a lot of transactional load, you might consider turning “auto update statistics” off, since you do not want the optimizer to kick in (between the transactional operations) to start collecting the stats.  That can also cause query recompilations.  The optimizer does that based on a formula that is detailed in one of the Microsoft KB articles; loosely, for larger tables it is triggered once roughly 500 modifications plus 20% of the table’s rows have changed.
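As a quick illustration, the two options mentioned above can be checked and turned on like this (the database name is a hypothetical example):

```sql
-- Check the current settings (SQL Server 2005 and up):
SELECT name, is_auto_create_stats_on, is_auto_update_stats_on
FROM sys.databases
WHERE name = 'MyDB';  -- hypothetical database name

-- Turn the options on:
ALTER DATABASE MyDB SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE MyDB SET AUTO_UPDATE_STATISTICS ON;
```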

Also, in SQL Server 2005, there is another option: AUTO_UPDATE_STATISTICS_ASYNC.  When this option is turned on, queries do not have to wait for the stats to be updated before compiling…the statistics are updated asynchronously by a worker thread in a background process, and the queries simply use the current stats that are available.  This is good and bad depending upon how you look at it.  It is good because there is no query delay because of automatic stats updates (btw, this option can be used only when “auto update statistics” is also on).  It is bad because out-of-date stats can cause formation of a suboptimal execution plan and can also lead to query re-compilations (this is a big topic in itself and depends upon a variety of factors; we will cover it in one of our whitepapers).
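For example (again with a hypothetical database name), the asynchronous option is enabled on top of the regular auto-update setting:

```sql
-- AUTO_UPDATE_STATISTICS must already be ON for the
-- asynchronous behavior to take effect:
ALTER DATABASE MyDB SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE MyDB SET AUTO_UPDATE_STATISTICS_ASYNC ON;
```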

Here is the script:

CREATE PROC USP_UPD_STATS
(
@PCNT    VARCHAR(3)
)
AS
BEGIN
SET NOCOUNT ON
DECLARE @TABLE_CATALOG VARCHAR(50), @TABLE_SCHEMA VARCHAR(20), @TABLE_NAME VARCHAR(50), @COMMAND NVARCHAR(400)
-- Loop over all base tables and update statistics for each one
DECLARE SQLSTMT CURSOR FOR SELECT TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
ORDER BY TABLE_NAME
OPEN SQLSTMT
FETCH NEXT FROM SQLSTMT INTO @TABLE_CATALOG, @TABLE_SCHEMA, @TABLE_NAME
WHILE (@@FETCH_STATUS = 0)
BEGIN
    -- QUOTENAME() protects against names that need delimiting
    SELECT @COMMAND = 'UPDATE STATISTICS ' + QUOTENAME(@TABLE_CATALOG) + '.' + QUOTENAME(@TABLE_SCHEMA) + '.' + QUOTENAME(@TABLE_NAME)
        + ' WITH SAMPLE ' + @PCNT + ' PERCENT, ALL'
    EXEC SP_EXECUTESQL @COMMAND
    FETCH NEXT FROM SQLSTMT INTO @TABLE_CATALOG, @TABLE_SCHEMA, @TABLE_NAME
END
CLOSE SQLSTMT
DEALLOCATE SQLSTMT
SET NOCOUNT OFF
END
GO

Sample Execution:

EXEC USP_UPD_STATS '100'

GO

will collect stats with a sample size of 100% (i.e., a full scan of every table).
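Since the goal is a nightly collection, the procedure can be wired into a SQL Server Agent job. The sketch below uses the msdb job procedures; the job name, target database, sample percentage, and schedule time are all assumptions to adapt to your environment:

```sql
USE msdb
GO
-- Nightly job that runs the stats procedure at 2:00 AM (names are examples)
EXEC dbo.sp_add_job @job_name = N'Nightly Stats Collection'
EXEC dbo.sp_add_jobstep @job_name = N'Nightly Stats Collection',
    @step_name = N'Update statistics',
    @subsystem = N'TSQL',
    @database_name = N'MyDB',                -- hypothetical database
    @command = N'EXEC USP_UPD_STATS ''20'''  -- 20 percent sample
EXEC dbo.sp_add_jobschedule @job_name = N'Nightly Stats Collection',
    @name = N'Nightly at 2 AM',
    @freq_type = 4,               -- daily
    @freq_interval = 1,           -- every 1 day
    @active_start_time = 020000   -- HHMMSS
EXEC dbo.sp_add_jobserver @job_name = N'Nightly Stats Collection'
GO
```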

Posted in SQL Server | Leave a Comment »