Systems Engineering and RDBMS


Archive for February 7th, 2007

Event Classes in SQL Server 2005

Posted by decipherinfosys on February 7, 2007

This is very convenient. In SQL Server 2000, one had to maintain a static list of the event classes and reference it for look-ups when running diagnostic SQL statements against the data captured in SQL Server Profiler traces. SQL Server 2005 introduces a new catalog view that provides this information: sys.trace_events. You can cross-reference it with the trace data that you store in a table. For example (trace_take_two is the table in which we saved our trace data):

select v1.trace_event_id, v1.name as event_name, v2.name as category_name, x.cnt as nbr_of_occurrences
from sys.trace_events as v1
inner join (select eventclass, count(1) as cnt from trace_take_two group by eventclass) as x
on x.eventclass = v1.trace_event_id
inner join sys.trace_categories as v2
on v1.category_id = v2.category_id
order by v1.trace_event_id asc

There are other trace-related catalog views as well that you can use to build a quick set of troubleshooting scripts.
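For instance, a sketch of a query against sys.traces, which lists the traces currently defined on the server (see SQL Server 2005 Books Online for the full column list):

select id, status, path, is_default, event_count
from sys.traces

A status of 1 indicates a running trace; the path column shows where a file-based trace is writing its data.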

Posted in SQL Server | Leave a Comment »

Bad Transaction Designs

Posted by decipherinfosys on February 7, 2007

Here is a compilation of the bad transaction designs that we have seen in our experience in the field. Poorly written transactions are the biggest cause of blocking as well as data-corruption issues.

  1. Not choosing the right transaction isolation level. This differs from application to application and also varies between different RDBMS. Oracle, for example, supports two transaction isolation levels (read committed and serializable), whereas SQL Server and DB2 LUW support the whole set of ANSI isolation levels (and SQL Server 2005, in fact, adds snapshot-based ones to simulate the multi-version concurrency behavior of Oracle).
  2. Transaction designs that ask for user input in the middle of a transaction. Suppose a transaction takes a lock on a resource and then asks the end user for input, and the end user goes on a break: that lock will be held until the user enters a value or his session is killed.
  3. If you have an OLTP system, the transactions should contain queries that are optimal and are preferably set-based rather than cursor-based.
  4. Not handling failed or cancelled queries appropriately within a transaction. Suppose that a SQL statement within a transaction fails because of, say, a lock-timeout error. The application then needs to issue a rollback or commit statement. Most developers assume that if a statement within a transaction fails, the engine will automatically roll back the transaction. That is not the case. If you have an ORM (Object Relational Mapper) layer, you can code it such that it issues a rollback in such scenarios.
  5. Not having proper error handling and error logging.
  6. In the case of SQL Server, when you have nested transactions, the transaction is either committed or rolled back based on the action taken at the end of the outermost transaction. If the outer transaction is committed, the inner nested transactions are also committed. If the outer transaction is rolled back, then all inner transactions are also rolled back, regardless of whether the inner transactions were individually committed; the outermost rollback rolls back all the transactions, whereas the outermost commit is the only one that commits them all. So, do not assume that the inner transaction results are saved if the outermost transaction fails.
  7. In the case of SQL Server or DB2, use locking/query hints carefully. Bad locking hints can cause excessive blocking, and lock-escalation issues can arise.
  8. This is more of a good SQL coding practice than a good transaction-design practice: use bind variables (also known as parameterized queries).
  9. Use client-side code to check for and eliminate bad-data issues. This is important because bad input can cause the application to form a resource-intensive query: for example, if a user enters a wild-card value for the first_name column, the query formed is sub-optimal.
  10. Use savepoints judiciously: use them in scenarios where the chances of encountering an error are low.
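Points 4, 5, and 8 above can be sketched in SQL Server 2005 T-SQL as follows (the orders table, its columns, and the logging step are hypothetical, used purely for illustration):

begin try
    begin transaction;
    -- bind variables (point 8): parameterize the statement via sp_executesql
    exec sp_executesql
        N'update orders set status = @status where order_id = @order_id',
        N'@status varchar(10), @order_id int',
        @status = 'SHIPPED', @order_id = 1001;
    commit transaction;
end try
begin catch
    -- do not assume the engine rolled back for us (point 4)
    if @@trancount > 0
        rollback transaction;
    -- log the error before re-raising it to the caller (point 5);
    -- the error_log table would be application-specific
    declare @msg nvarchar(2048);
    set @msg = error_message();
    raiserror(@msg, 16, 1);
end catch

The try/catch construct is new in SQL Server 2005; in SQL 2000 one would check @@error after each statement instead.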

Posted in DB2 LUW, Oracle, SQL Server | Leave a Comment »