Systems Engineering and RDBMS


Fetching data across commits

Posted by decipherinfosys on January 7, 2008

We saw this usage at one of the client sites and it is one of the common mistakes made by beginners.  A short snippet of the pseudo code before we delve into this further:

for x in ( select * from tbl where <some condition> )
loop
  /* some appl. logic over here */
  update tbl set <some column values> where pk_col = x.pk_col;
  if ( we covered 100 rows ) then
    commit;
  end if;
end loop;
commit;

Let’s examine what this code is doing and then let’s talk about why it is bad and the kind of issues that you can run into if you do this.  As you can see from the pseudo code itself, the code is fetching the data across commit points.  We have seen this as a common practice in many places: the developer/DBA keeps fetching from the implicit cursor across commit points.  This is not a good way of doing things and here are the reasons why:

1) Usually, all the records that are supposed to be updated are part of the same logical transaction.  If you run into an issue after, say, 4 commit points (out of an overall 10 commit points in the above code, assuming 10,000 records), then you have absolutely no way of undoing the data that you have already updated.  The concept of the transaction is lost altogether, and you cannot cleanly re-start such a transaction.

2) This kind of code can easily lead to the ORA-01555 error.  Yes, the “snapshot too old” error.  Why?  You get ORA-01555 when the UNDO that you need for a query no longer exists.  When we issue a commit, we tell Oracle that that UNDO can be re-used.  If we are reading and modifying the same table and committing across fetches, we are increasing the chance of causing ORA-01555 for ourselves.  The long running query in the implicit cursor above might need the UNDO that the code just released for re-use by issuing the commit.  The key thing to understand is that ORA-01555 is not returned for the modification (update) – it is returned for reads: a select, or the read component of an update (since the read happens prior to the update), might run into it.  In the above piece of code, suppose that we read the table “tbl” using an index operation, and say we read the blocks in this order…

block B10
block B11
block B25

block B30
block B11
block B25

…and so on

the code would essentially come back to the same block (B11, B25) more than once.  Every time that it comes back, it needs the version of that block that existed “when the query began”, but it will not be able to get that version anymore: the UNDO that was generated before the first commit is gone, since we signalled to Oracle to re-use it.

So, what should be done in this case?  Well, preserve transaction integrity: commit only at the transactional boundaries and do not commit across fetches.
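As a minimal sketch of that corrected pattern (re-using the placeholder table “tbl” and primary key “pk_col” from the pseudo code above – these are not a real schema), the commit moves outside the fetch loop, or better yet the loop is replaced by a single set-based statement:

```sql
-- Option 1 (preferred): a single set-based statement – one transaction,
-- one commit, no fetching across commits at all.
UPDATE tbl
   SET <some column values>
 WHERE <some condition>;
COMMIT;

-- Option 2: if row-by-row application logic is unavoidable, still commit
-- only once, after the whole logical transaction is complete.
BEGIN
  FOR x IN ( SELECT * FROM tbl WHERE <some condition> )
  LOOP
    /* some appl. logic over here */
    UPDATE tbl
       SET <some column values>
     WHERE pk_col = x.pk_col;
  END LOOP;
  COMMIT;  -- single commit at the transactional boundary
END;
/
```

Either way, the cursor is never fetched across a commit, so the query cannot be starved of the UNDO it needs and the whole change remains one restartable unit of work.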

Posted in Oracle