Systems Engineering and RDBMS

Referential Integrity Logic

Posted by decipherinfosys on August 26, 2007

Ran into another one of those scenarios at a client site.  In one of their applications, the developers had put the foreign key logic in the application layer: rather than declaring foreign keys in the schema itself, the application code was doing that work.  Needless to say, there were a lot of issues with data corruption and lost data integrity.  The primary purpose of using any RDBMS is to maintain data integrity – data should be pristine, with all the relationships and constraints enforced at the DB tier.
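As a minimal sketch of what "enforced at the DB tier" means, here is a hypothetical parent/child pair of tables (the table and column names are made up for illustration) declared with a foreign key in SQLite, driven from Python's standard-library `sqlite3` module.  With the constraint in the schema, the database itself rejects an orphan row – no application code required:

```python
import sqlite3

# In-memory database; table and column names here are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when this is on

conn.execute("CREATE TABLE parent (parent_id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE child (
        child_id  INTEGER PRIMARY KEY,
        parent_id INTEGER NOT NULL REFERENCES parent(parent_id)
    )""")

conn.execute("INSERT INTO parent (parent_id) VALUES (1)")
conn.execute("INSERT INTO child (child_id, parent_id) VALUES (10, 1)")

# An insert pointing at a non-existent parent is rejected by the database itself.
try:
    conn.execute("INSERT INTO child (child_id, parent_id) VALUES (11, 999)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

Note that SQLite is used here only because it is convenient to run; server RDBMSs (Oracle, SQL Server, PostgreSQL, etc.) declare the same `REFERENCES` constraint in their DDL and enforce it without any per-connection pragma.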

Here is an example of how easy it is to run into data integrity issues:

When trying to delete a row from the parent table, the application was running a “select count(*) from child_table where fk_col = :pk_col_value;” against each child table, and if the count was 0 for all of them, it went on its merry way and deleted the parent record.  If, in the meantime – i.e. while this wonderful count logic was running – another session inserted a child record for that same parent, the result would very easily be orphaned child table records.  Had an FK been used, the delete against the parent table would take locks in the child table to enforce data integrity.  A good index on the FK column(s) would also ensure good performance.
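The same point can be sketched with the hypothetical SQLite schema from above: with the FK declared, the database refuses to delete a parent that still has children, so even if an application-level count-then-delete sequence raced with a concurrent insert, no orphan could result.  (SQLite serializes writers, so it cannot demonstrate the row-level locking a server RDBMS would use; the constraint behavior is what this shows.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

conn.execute("CREATE TABLE parent (parent_id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE child (
        child_id  INTEGER PRIMARY KEY,
        parent_id INTEGER NOT NULL REFERENCES parent(parent_id)
    )""")
# An index on the FK column keeps the child-side existence check cheap.
conn.execute("CREATE INDEX idx_child_parent ON child(parent_id)")

conn.execute("INSERT INTO parent (parent_id) VALUES (1)")
conn.execute("INSERT INTO child (child_id, parent_id) VALUES (10, 1)")

# Even if an application-level COUNT(*) had raced and reported zero children,
# the declared constraint still blocks the delete here:
try:
    conn.execute("DELETE FROM parent WHERE parent_id = 1")
except sqlite3.IntegrityError as e:
    print("delete blocked:", e)
```

The key design point: the existence check and the delete happen atomically inside the database, instead of as two separate statements with a window between them.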

Not only that, doing constraint validation in the application layer makes the application slower and harder to maintain as well.  There is a reason RDBMSs implement RI natively.  Implementing your own RI code in the application layer is not only unnecessary, it also performs poorly.  The same is true of the approach some DBAs take of enforcing FKs via triggers.  That, too, is extremely inefficient and causes massive serialization issues (use of full table locks to ensure that no two sessions are working on the same data set).
