Systems Engineering and RDBMS

Archive for the ‘SQL Server’ Category

Installing/Enabling .Net 3.5 SP1 on Windows 2008 R2 for SQL Server 2008 R2/SQL Server 2012 RC0 Install

Posted by decipherinfosys on February 24, 2012

A junior team member ran into this issue, so we thought it was worth a post.  While installing SQL Server 2012 RC0 on a new Windows Server 2008 R2 VM, the installer complained about .Net Framework 3.5 SP1 not being installed or enabled, which is a pre-requisite for the SQL Server install.  Here are the steps to validate whether it is installed and, if it is not, how to go about enabling or installing it.

  1. Under Administrative Tools, select Server Manager and click on Features.
  2. All the installed features are displayed in the right-side pane, where you can validate whether .Net Framework 3.5 SP1 (3.5.1) is installed or not.
  3. If it is not installed, then click on “Add Features” and expand “.Net Framework 3.5.1 Features”.
  4. Check the check box for “.Net Framework 3.5.1 Features” and click on Next and then Install.

  5. Once the installation is complete, click on Close and you are done.

Now, you can move on with your SQL Server 2012 RC0 install on Windows Server 2008 R2.  Have fun.

Posted in .Net Development, SQL Server, Windows | Leave a Comment »

Reading from an XML Data Type Column in SQL Server

Posted by decipherinfosys on November 23, 2011

We have covered XML capabilities in SQL Server in several posts before – you can read those here.  Yesterday, while helping a client, the developer needed to read from an XML data type column.  The code as written iterated over the records, fetched one record at a time, assigned it to a variable and then used the nodes() method to extract the data and its hierarchies into a flat row/column structure as a data set.  That approach does not scale well on large data sets and performance becomes an issue.

One can use the nodes() method with CROSS APPLY/OUTER APPLY to do this in a set based fashion using a single SQL statement.  We had demonstrated this before in some of our Report Server queries as well.  Here is an example of how to do that:

We will first create a demo table:
IF EXISTS (SELECT 1 FROM sys.objects WHERE type = 'U' AND name = 'AP_DEMO')
BEGIN
    DROP TABLE dbo.AP_DEMO
END
GO
CREATE TABLE dbo.AP_DEMO
(
    TRANSACTION_ID INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Xml_Data XML NOT NULL
)
GO

And now insert two records into this table – note that the XML being inserted has a three-layer hierarchy – one Transaction can have 1-N Services and one Service can have 1-N Responses.  Also note that we are using the row constructor (multi-row VALUES) feature of SQL Server 2008 to do both inserts in a single statement.

INSERT INTO AP_DEMO (XML_DATA)
VALUES
(
'<?xml version="1.0" ?>
<Root>
<Transaction>
<Tattrib1>Test1</Tattrib1>
<Tattrib2>100</Tattrib2>
<Service>
<Id>1</Id>
<Name>Service1</Name>
<Response>
<Key>KeyTest1</Key>
<Value>Value1</Value>
<Point>10</Point>
</Response>
</Service>
</Transaction>
<Transaction>
<Tattrib1>Test2</Tattrib1>
<Tattrib2>53</Tattrib2>
<Service>
<Id>100</Id>
<Name>Service2</Name>
<Response>
<Key>KeyTest2</Key>
<Value>Value2</Value>
<Point>90</Point>
</Response>
</Service>
</Transaction>
</Root>')
,
('<?xml version="1.0" ?>
<Root>
<Transaction>
<Tattrib1>Test3</Tattrib1>
<Tattrib2>1000</Tattrib2>
<Service>
<Id>9</Id>
<Name>Service20</Name>
<Response>
<Key>KeyTest3</Key>
<Value>Value3</Value>
<Point>99</Point>
</Response>
</Service>
</Transaction>
<Transaction>
<Tattrib1>Test4</Tattrib1>
<Tattrib2>999</Tattrib2>
<Service>
<Id>87</Id>
<Name>Service30</Name>
<Response>
<Key>KeyTest4</Key>
<Value>Value4</Value>
<Point>97</Point>
</Response>
</Service>
</Transaction>
</Root>')

And now, we can extract this data using the nodes() method and CROSS APPLY to flatten out each level of the hierarchy:

SELECT
txn.value('(Tattrib1/text())[1]', 'varchar(100)') as Txn_Attribute_1
, txn.value('(Tattrib2/text())[1]', 'varchar(100)') as Txn_Attribute_2
, svc.value('(Id/text())[1]', 'int') as Service_Id
, svc.value('(Name/text())[1]', 'nvarchar(100)') as [Service_Name]
, rsp.value('(Key/text())[1]', 'nvarchar(100)') as Response_Key
, rsp.value('(Value/text())[1]', 'nvarchar(100)') as TSR_Value
, rsp.value('(Point/text())[1]', 'int') as TSR_Point
FROM
dbo.AP_DEMO
CROSS APPLY Xml_Data.nodes('/Root/Transaction') AS T(txn) --one row per Transaction
CROSS APPLY txn.nodes('Service') AS S(svc) --one row per Service under that Transaction
CROSS APPLY svc.nodes('Response') AS R(rsp) --one row per Response under that Service
GO

This will give the following output:

Txn_Attribute_1 Txn_Attribute_2 Service_Id Service_Name Response_Key TSR_Value TSR_Point
Test1           100             1          Service1     KeyTest1     Value1    10
Test2           53              100        Service2     KeyTest2     Value2    90
Test3           1000            9          Service20    KeyTest3     Value3    99
Test4           999             87         Service30    KeyTest4     Value4    97

As you can see from above, one can easily extract the data from an XML data type column in a table by using a single SQL statement.  Of course, one should ensure that there are proper filter conditions in the WHERE clause on properly indexed columns, and that XML indexes are being used, in order to ensure a good execution plan for the SQL.
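To illustrate that last point, here is a minimal sketch of adding XML indexes to our demo table – the index names are ours, and whether the secondary PATH index pays off depends on your query patterns, so treat this as a starting point rather than a prescription:

-- A primary XML index materializes the node table for the XML column;
-- it requires a clustered primary key on the base table (AP_DEMO has one).
CREATE PRIMARY XML INDEX IX_AP_DEMO_XML ON dbo.AP_DEMO (Xml_Data)
GO
-- An optional secondary index that helps path based predicates like the ones above.
CREATE XML INDEX IX_AP_DEMO_XML_PATH ON dbo.AP_DEMO (Xml_Data)
USING XML INDEX IX_AP_DEMO_XML FOR PATH
GO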

Hope this small code snippet helps in your work.  Wish all of our readers in the US a very Happy Thanksgiving.

Posted in SQL Server | 1 Comment »

Cannot Access Newly Created Instance

Posted by decipherinfosys on July 6, 2011

Got a question from a reader yesterday:

“I just joined a new company and was given a VM on which my domain account was part of the Administrators group but the SQL Server 2008 R2 install was done by the IT team and even by using my Windows account, I cannot login to the instance.  I tried using the “sa” account thinking that they might have left the password as blank but keep getting errors.  How can I resolve it?  My understanding was/is that if my Windows account is part of the Administrators group on the local machine/VM, I should be able to log in since the BUILTIN\Administrators group is always by default assigned sysadmin rights on a newly created instance.  Is that no longer the case in SQL Server 2008 R2?”

Yes – as part of the security changes introduced in SQL Server 2008, the local Windows group "BUILTIN\Administrators" is no longer made part of the sysadmin fixed server role by default.  You have to add it explicitly if you want that behavior.  This is well documented in BOL and MSDN, and we would recommend that you go through this article to familiarize yourself with the security changes in SQL Server 2008 R2:

http://msdn.microsoft.com/en-us/library/cc280562.aspx

While helping the user above resolve the issue, it turned out that the IT admin had a) installed in Windows Authentication mode, and b) made only a domain group that just the IT team members belonged to a member of the sysadmin role.  The user had then tried to get in by switching the authentication to mixed mode via the registry, changing the value of this setting to 2:

HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL.1\MSSQLServer\LoginMode

Or in the case of a named instance:

HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL.n\MSSQLServer\LoginMode (where n is the instance number of the named instance)

After re-starting the service, the authentication mode did change to mixed mode; however, due to the unsuccessful login attempts and the password policy in place, the "sa" account was locked out/disabled.  One can easily enable it, but first one needs to be able to get in with sysadmin privileges to make the changes.  The best option was to have one of the IT team members log in and then add the additional logins, enable the "sa" account, set its password, and make whatever other changes were needed for the logins/their access levels etc.  The user did that and was on his way to managing the newly created instance.  One can enable a disabled/locked SQL Server account through the GUI or by using the "ALTER LOGIN" statement as we had shown in a previous post here.
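For reference, once one is connected with sysadmin privileges, statements along these lines take care of that clean-up – the login name and password below are just placeholders:

-- Re-enable and unlock the "sa" account (pick a strong password of your own):
ALTER LOGIN sa ENABLE;
ALTER LOGIN sa WITH PASSWORD = 'UseAStrongPasswordHere!1' UNLOCK;
GO
-- Give a Windows account sysadmin rights (SQL Server 2008 R2 syntax):
CREATE LOGIN [YourDomain\YourAccount] FROM WINDOWS;
EXEC sp_addsrvrolemember 'YourDomain\YourAccount', 'sysadmin';
GO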

Posted in SQL Server | Leave a Comment »

Back to the Basics: Restoring an Analysis Services Database

Posted by decipherinfosys on February 24, 2011

Not every DBA or Database Developer has had experience with Analysis Services, so it did not come as a surprise when a DBA at a client site approached us for quick help in restoring an Analysis Services database onto the development environment.  In our back to the basics series of posts, we will be covering this topic today.

Let's use SSMS to connect to the Analysis Services instance first.  In case you have never done that before: when you connect, select "Analysis Services" from the Server Type drop-down, enter the proper Server Name (your instance) and the authentication, and you will be connected to the Analysis Services instance into which we will restore the back-up of the Analysis Services database:

Once connected, right click on “Databases” and select “Restore”:

You will be presented with a very simple "Restore Database" window – most of the items are self-explanatory – browse to select the location of your backup file (we selected the production backup file in the example below):

Once that is done, you need to give a name for the restored database (in our example, we are calling it PRODCOPY), select a storage location using the browse button, allow the database overwrite if it already exists on the instance, choose whether to overwrite the security information and, if the backup was encrypted, provide the password so that you can move ahead with the restore process:

Once that is done, you would be able to restore the database on that instance and then make configuration changes, data source changes etc.

Another way of doing the restore: in case you are not a GUI kind of person and like to use scripts, you can also use XMLA.  You can read up on the command here.  You can invoke XMLA by right clicking on the Analysis Services instance and choosing "New Query" and then XMLA.  The command parameters are the same as you saw in the GUI option above, so fill in the values appropriately and then execute it in order to complete the restore process.
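As a rough sketch, the XMLA command for our PRODCOPY example would look something like the following – the backup file path is a placeholder, and the optional elements mirror the GUI choices described above:

<Restore xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <File>D:\Backups\PROD.abf</File>
  <DatabaseName>PRODCOPY</DatabaseName>
  <AllowOverwrite>true</AllowOverwrite>
  <Security>CopyAll</Security>
  <!-- Add a <Password> element here if the backup was encrypted -->
</Restore>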

Resources:

  • XMLA – here.
  • Backup and Restore Analysis Services Databases – article on SQLServerPerformance.com by Ashish Kumar Mehta – here.
  • MSDN post on backing up and restoring Analysis Services Database – here.
  • Database Journal Article on restoring Analysis Services Database using Windows Powershell and SQL Server 2008 AMO – here.

Posted in Business Intelligence, SQL Server | Leave a Comment »

Next Release of SQL Server – Denali (SQL Server 2011)

Posted by decipherinfosys on February 23, 2011

MSFT had announced the first CTP of the next version of SQL Server at PASS last November.  Since then, there has been quite a buzz about the new features of the next version.  We were at a client site last week that had just finished migrating to SQL Server 2008 and will be moving to SQL Server 2008 R2 shortly.  So, it was an interesting discussion with the CIO of the company, who was putting together a presentation for the board of directors showing the technology road-map for the company.  It becomes difficult to justify another migration in two years' time (assuming they move to Denali in late 2012) unless there are key benefits to the business.

Here are some of the URLs to help you get up-to-speed on Denali:

  • You can download the CTP from here.
  • BOL for Denali – here.
  • What’s new in Denali – here.
  • MVP Michael Otey’s post on the new features in Denali – here.
  • Future SQL Server Editions page on MSFT site – here.
  • Analysis Services Roadmap for Denali – post on blogs.technet.com – here.

Posted in SQL Server | Leave a Comment »

OVER clause – much needed enhancements

Posted by decipherinfosys on February 23, 2011

For those of us who work in both Oracle and SQL Server, it is always a natural reaction to look for something equivalent in the other RDBMS in our day to day work.  One of the areas in which Oracle has been far ahead of SQL Server is its rich support for analytics, and the OVER clause capabilities are just one of those things.  MVP Itzik Ben-Gan has posted several suggestions/requests on Connect for such enhancements – you can see them here.  They also have links from other readers which point to the rich functionality of Oracle and DB2 in that regard.  You should also read this whitepaper from Itzik and Sujata on this topic.

As per this update on Connect, these did not make it into SQL Server 2008 and might be included in the next release – Denali.

This came up again when a client who releases their product on both Oracle and SQL Server needed to simulate some of that rich analytic function capability in SQL Server.  There are always workarounds available and we have covered some of them in our previous blog posts (for example, the first_value()/last_value() workaround), but it is time that SQL Server had that support natively.
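To give a flavor of what such a workaround looks like, here is a minimal sketch (the table and columns are made up purely for illustration) that simulates first_value() over a partition using ROW_NUMBER() and a join back to the ranked set:

declare @emp table (emp_id int, dept_id int, salary decimal(12,2))
insert into @emp values (1, 10, 50000), (2, 10, 65000), (3, 20, 48000), (4, 20, 71000)
;with ranked as
(
select dept_id, salary,
row_number() over (partition by dept_id order by salary desc) as rn
from @emp
)
select e.emp_id, e.dept_id, e.salary, r.salary as top_salary_in_dept
from @emp as e
inner join ranked as r on r.dept_id = e.dept_id and r.rn = 1

With native first_value() support, the CTE and the extra join would simply go away.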

Posted in DB2 LUW, Oracle, SQL Server | Leave a Comment »

Ghost Cleanup task in SQL Server

Posted by decipherinfosys on February 23, 2011

The topic of the "Ghost Cleanup" background process came up while trying to resolve an issue at a client site recently.  In case you have never heard of ghost cleanup in SQL Server, here is an excellent post by MVP Paul Randal on the topic, and another post by him on the performance benefits of using a trace flag to turn it off on systems that have heavy delete workloads.

Posted in SQL Server | Leave a Comment »

High Availability and Disaster Recovery

Posted by decipherinfosys on January 28, 2011

These two terms are often used interchangeably.

High Availability typically refers to solutions that use fault tolerance and/or load balancing in order to provide high availability of the applications.  HA is essentially the ability to continue operations when a component fails – it could be a CPU or memory failure, a disk failure, a complete failure of a server etc.  With HA, there is usually no loss of service.  Clustering servers together is a form of HA; so are redundant power supplies, redundant network controllers, proper RAID arrays and proper load balancing.  The primary goal of an HA environment is uptime/providing continuous service without disruption.

Disaster Recovery also increases availability, but it is the process/ability to restore operations that are critical for the business to run after a disaster (human induced or from natural causes – for example, a power failure at the production site, floods, an earthquake, a hurricane etc.).  The key difference between DR and HA is that the recovery time in the case of DR is typically longer than with HA.  With DR, there is a small loss of service while the DR systems are activated and take over the load at the DR site.

Here are some posts/articles which delve into these differences in more detail:

http://www.channeldb2.com/profiles/blogs/disaster-recovery-high

http://nexus.realtimepublishers.com/sgudb.php

http://www.drj.com/2010-articles/online-exclusive/understanding-high-availability-and-disaster-recovery-in-your-overall-recovery-strategy.html

http://gestaltit.com/all/tech/storage/bas/define-high-availability-disaster-recovery/

Posted in DB2 LUW, Oracle, SQL Server, Technology, Unix | 1 Comment »

Insert data in one table with data from other table(s)

Posted by decipherinfosys on January 27, 2011

This post is in response to this question from a reader:

“Your post on updating data in one table with data from another table was very helpful.  I need to also do an insert into a table (a summary table) with data from a bunch of other tables which are in the same database and also in database(s) on other instances.  Right now, I have a very convoluted process of scheduled jobs, row by row scripts to help me do that.  There is some transformation (mostly lookups) involved as well.  Is there an easier/simpler way to do this?  Appreciate your thoughts on this”

The post that the question above referred to was this one – here.  In that post, we had covered how you can update the data in a table with the data from another table.  Doing an insert into a table with data from other tables (within the same schema or separate schemas or separate databases or separate databases on separate instances) is also fairly simple.  There are many ways of doing it – we will cover the “INSERT INTO …. SELECT FROM … ” approach in this blog post.

Let’s consider this hypothetical scenario to illustrate this (using SQL Server lingo though the same approach is available in Oracle and DB2 as well) – Say the summary table “tblEmpSummary” that this reader wanted to insert the data into resides in a database called EXECUTIVE in a schema called ANALYSIS.  It contains summary information on the employees – their name, their department, their performance review rating, their expense reports per week and the weekly benefit expenses for those employees.

And the source of the data are these:

a) Other table called EMP_PERF_REVIEW in the same schema in the same database (contains EMP_PERF_RATING column that we want and can join on EMP_MASTER_ID column),

b) Other table(s) called EMP_MASTER, EMP_DEPT_XREF and DEPT_MASTER from the schema EMPLOYEE on the same database (contains FIRST_NAME, LAST_NAME, DEPT_NAME information and we can join on EMP_MASTER_ID and DEPT_MASTER_ID),

c) Other table called EMP_EXPENSE_RPT from the EMP Schema in the ACCOUNTING database in the same SQL Server instance (contains WEEK_ENDING_DATE and TOTAL_AMOUNT fields that we are interested in and can join in using EMP_MASTER_ID column), and

d) A view called VW_EMP_PTO from a schema called BENEFIT from the HR database that resides on another instance (Contains PTO_ACCRUED and TOTAL_AMOUNT field for the benefits that we are interested in and can join in using EMP_MASTER_ID).

So, we have data coming from:

a) table in the same schema/same database,
b) tables in the same database but a different schema,
c) tables in a separate database on the same SQL Server instance, and
d) a view from a database in a separate SQL Server instance all together.

If all of our data were coming from (a) alone, all we would need to do is this:

INSERT INTO Analysis.tblEmpSummary (<list of column names here>)
SELECT <list of the column names from one source>
FROM Analysis.EMP_PERF_REVIEW
WHERE <filter conditions>
<any other conditions like GROUP BY, ORDER BY etc. that we need>

Please do note that the WHERE clause above can contain a NOT EXISTS check on tblEmpSummary to ensure that no dupes are being inserted – needless to say, this INSERT code would be wrapped in a TRY…CATCH block with proper error handling.
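As a concrete sketch of that dupe check for the scenario (a) statement (the column names below are made up for illustration):

INSERT INTO Analysis.tblEmpSummary (EMP_MASTER_ID, EMP_PERF_RATING)
SELECT EPR.EMP_MASTER_ID, EPR.EMP_PERF_RATING
FROM Analysis.EMP_PERF_REVIEW AS EPR
WHERE NOT EXISTS (SELECT 1
                  FROM Analysis.tblEmpSummary AS ES
                  WHERE ES.EMP_MASTER_ID = EPR.EMP_MASTER_ID)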

If we now extend it to include the data from (b) as well, the above SQL would change to:

INSERT INTO Analysis.tblEmpSummary (<list of column names here>)
SELECT <list of the column names from the two sources – use the proper aliases>
FROM Analysis.EMP_PERF_REVIEW as EPR
INNER JOIN Employee.EMP_MASTER as EM ON EPR.EMP_MASTER_ID = EM.EMP_MASTER_ID
INNER JOIN Employee.EMP_DEPT_XREF as EDX ON EM.EMP_MASTER_ID = EDX.EMP_MASTER_ID
INNER JOIN Employee.DEPT_MASTER as DM ON EDX.DEPT_MASTER_ID = DM.DEPT_MASTER_ID
WHERE <filter conditions>
<any other conditions like GROUP BY, ORDER BY etc. that we need>

As can be seen from above, we have assumed that the select privileges on the EMPLOYEE schema objects are present so that the joins work without any permission issues.

Now, if we have to include scenario (c) as well, and assuming that we already have the select privileges in place on that object, we can either create views/synonyms for those objects in our schema or simply refer to them using a three part naming convention:

<database_name>.<schema_name>.<object_name>

So, the above SQL then gets modified to:

INSERT INTO Analysis.tblEmpSummary (<list of column names here>)
SELECT <list of the column names from the three sources – use the proper aliases>
FROM Analysis.EMP_PERF_REVIEW as EPR
INNER JOIN Employee.EMP_MASTER as EM ON EPR.EMP_MASTER_ID = EM.EMP_MASTER_ID
INNER JOIN Employee.EMP_DEPT_XREF as EDX ON EM.EMP_MASTER_ID = EDX.EMP_MASTER_ID
INNER JOIN Employee.DEPT_MASTER as DM ON EDX.DEPT_MASTER_ID = DM.DEPT_MASTER_ID
INNER JOIN Accounting.Emp.EMP_EXPENSE_RPT as EER ON EM.EMP_MASTER_ID = EER.EMP_MASTER_ID

WHERE <filter conditions>
<any other conditions like GROUP BY, ORDER BY etc. that we need>

Now, that leaves us with the last part – (d).  Since it resides in a database on a separate instance, we can create a linked server to it and then either reference it in one of the ways shown in that blog post on linked servers, or simply create a synonym for it and use that.  Assuming we use the four part naming convention to reference it, the above SQL now becomes:

INSERT INTO Analysis.tblEmpSummary (<list of column names here>)
SELECT <list of the column names from the four sources – use the proper aliases>
FROM Analysis.EMP_PERF_REVIEW as EPR
INNER JOIN Employee.EMP_MASTER as EM ON EPR.EMP_MASTER_ID = EM.EMP_MASTER_ID
INNER JOIN Employee.EMP_DEPT_XREF as EDX ON EM.EMP_MASTER_ID = EDX.EMP_MASTER_ID
INNER JOIN Employee.DEPT_MASTER as DM ON EDX.DEPT_MASTER_ID = DM.DEPT_MASTER_ID
INNER JOIN Accounting.Emp.EMP_EXPENSE_RPT as EER ON EM.EMP_MASTER_ID = EER.EMP_MASTER_ID
INNER JOIN LinkedSrvHR.HR.Benefit.VW_EMP_PTO as VWP ON EM.EMP_MASTER_ID = VWP.EMP_MASTER_ID

WHERE <filter conditions>
<any other conditions like GROUP BY, ORDER BY etc. that we need>
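As an aside, if you go the synonym route mentioned above instead, a one-time set-up along these lines (the synonym name is ours) keeps the four part name out of the query itself:

CREATE SYNONYM Benefit_VW_EMP_PTO FOR LinkedSrvHR.HR.Benefit.VW_EMP_PTO
GO
--The last join above then becomes:
--INNER JOIN Benefit_VW_EMP_PTO as VWP ON EM.EMP_MASTER_ID = VWP.EMP_MASTER_ID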

This should cover all the scenarios that you are likely to run into.  You can also try this with the MERGE command.  Any transformations (like look-ups of values) that you can do in T-SQL, you can do over the record set that you get using the above SQL.  It is essentially a data-set for you that is coming from 4 different sources.  What you do with that data-set is entirely up to you – use a CASE expression over it, do aggregation over it, dump it into a temporary table/table variable for further processing etc. – you can do all the normal T-SQL operations, including inserting it into the summary table.
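For completeness, here is a minimal MERGE sketch for the scenario (a) case – the column names are again made up for illustration.  MERGE lets you insert the new summary rows and update the existing ones in a single statement:

MERGE Analysis.tblEmpSummary AS tgt
USING (SELECT EMP_MASTER_ID, EMP_PERF_RATING
       FROM Analysis.EMP_PERF_REVIEW) AS src
   ON tgt.EMP_MASTER_ID = src.EMP_MASTER_ID
WHEN MATCHED THEN
     UPDATE SET tgt.EMP_PERF_RATING = src.EMP_PERF_RATING
WHEN NOT MATCHED THEN
     INSERT (EMP_MASTER_ID, EMP_PERF_RATING)
     VALUES (src.EMP_MASTER_ID, src.EMP_PERF_RATING);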

In addition to the technique above, if this is something that you need to do as a quick and dirty analysis and you do not intend to keep the new table around afterwards, you can also do:

select … into <new table> from <old table or set of tables from one or multiple sources> where <the where condition over here>

The "select into" is the equivalent of CTAS (CREATE TABLE AS SELECT) in Oracle.  You can read more on that here.
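A quick sketch of that, re-using the hypothetical tables from above:

SELECT EM.EMP_MASTER_ID, EM.FIRST_NAME, EM.LAST_NAME, EPR.EMP_PERF_RATING
INTO dbo.tmpEmpAnalysis
FROM Analysis.EMP_PERF_REVIEW AS EPR
INNER JOIN Employee.EMP_MASTER AS EM ON EPR.EMP_MASTER_ID = EM.EMP_MASTER_ID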

Posted in DB2 LUW, Oracle, SQL Server | Leave a Comment »

Summation over a bit data type column

Posted by decipherinfosys on January 12, 2011

One of the readers asked this question: "I have inherited a legacy application in which one of the columns in a table is declared as a bit data type.  I need to find the total count of the records which have a value of 1 vs 0 vs NULL and need to get it summed up across different months.  I was trying to use the SUM() aggregate function over that field and got an error: Operand data type bit is invalid for sum operator.  What is an alternate/right way of getting this information using SQL?"

There are a couple of ways to do this.   Here is a simple script to illustrate it:

/*
Declare a table variable with a datetime column and a bit column and populate it with values
*/
declare @test table (create_date_time datetime, col1 bit)
insert into @test values ('12/31/2010', 1), ('12/19/2010', 0), ('01/10/2011', 1), ('01/11/2011', 1), ('01/12/2011', 0), ('01/12/2011', null)
/*
Display the records
*/
select * from @test
/*
This statement will run into the 8117 error message
*/
select SUM(col1), AVG(col1) from @test where col1 = 1


/*
Get the counts using the case statement and also show how to do the SUM() over a bit data type
column by casting it first
*/
select
YEAR(create_date_time) as YEAR_CDT
, MONTH(create_date_time) as MONTH_CDT
, COUNT(case when col1 = 1 then 1 else null end) as CNT_1
, COUNT(case when col1 = 0 then 1 else null end) as CNT_0
, SUM(CAST(COL1 AS TINYINT)) as CNT_Second_Mthd_1
, COUNT(*) as TOTAL_CNT
from @test
group by YEAR(create_date_time), MONTH(create_date_time)

And here is the output:

/*All the records*/

create_date_time          col1
2010-12-31 00:00:00.000   1
2010-12-19 00:00:00.000   0
2011-01-10 00:00:00.000   1
2011-01-11 00:00:00.000   1
2011-01-12 00:00:00.000   0
2011-01-12 00:00:00.000   NULL

/*Data from the SQL Statement*/

YEAR_CDT   MONTH_CDT   CNT_1   CNT_0   CNT_Second_Mthd_1   TOTAL_CNT
2011       1           2       1       2                   4
2010       12          1       1       1                   2
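Since the reader also wanted the NULL bucket, one more COUNT(case…) expression in the same style takes care of it – run this in the same batch as the script above, since @test is a table variable:

select
YEAR(create_date_time) as YEAR_CDT
, MONTH(create_date_time) as MONTH_CDT
, COUNT(case when col1 is null then 1 else null end) as CNT_NULL
from @test
group by YEAR(create_date_time), MONTH(create_date_time)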

Posted in SQL Server | Leave a Comment »