Updating millions of records in a table
Posted by decipherinfosys on July 8, 2007
Got a call last night from one of our clients. In one of their production systems (running Oracle as the RDBMS), a staff member had fired off an update statement against a table with 7 million records. He had taken the approach of committing N records at a time, in the hope that it would finish faster and at the same time not create any issues with the UNDO (rollback segments). The table was also non-partitioned. And like the Energizer bunny, the update was running and running and…
A better option (since this was done during the downtime window of 4 hours) would have been:
- Using CTAS (Create Table As Select), create a new table from the old one with a "where 1=0" predicate, so that only the table definition is created and no rows are copied.
- Alter the new table to NOLOGGING.
- Use a direct-path insert (the APPEND hint) with parallel query options to make it a fast insert from the source (old) table into the destination (new) table.
- Create the indexes on the new table.
- Add the constraints to the new table.
- Execute the grants on the new table.
- Drop the old table.
- Rename the new table to the old table's name.
- Collect statistics.
- Re-compile the dependent objects, or let them re-compile automatically the next time they are invoked.
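The steps above can be sketched in Oracle SQL roughly as follows. All object names (big_table, col1/col2, app_user) and the parallel degree are illustrative, not from the client's system, and the SELECT must carry whatever transformation the original UPDATE was meant to apply:

```sql
-- 1. CTAS with "where 1=0": creates the definition only, no rows
CREATE TABLE big_table_new AS
SELECT * FROM big_table WHERE 1 = 0;

-- 2. Minimize redo generation for the bulk load
ALTER TABLE big_table_new NOLOGGING;

-- 3. Direct-path, parallel insert; fold the "update" logic into the SELECT
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ APPEND PARALLEL(big_table_new, 4) */ INTO big_table_new
SELECT col1,
       UPPER(col2) AS col2,   -- hypothetical stand-in for the update logic
       col3
FROM   big_table;
COMMIT;

-- 4-6. Indexes, constraints, and grants on the new table
CREATE INDEX big_table_new_ix1 ON big_table_new (col1) NOLOGGING PARALLEL 4;
ALTER TABLE big_table_new ADD CONSTRAINT big_table_pk PRIMARY KEY (col1);
GRANT SELECT ON big_table_new TO app_user;

-- 7-8. Swap the tables
DROP TABLE big_table;
ALTER TABLE big_table_new RENAME TO big_table;

-- 9. Collect statistics
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'BIG_TABLE', cascade => TRUE);
```

Once the load is done, remember to set the table (and any NOLOGGING indexes) back to LOGGING and take a backup, since direct-path NOLOGGING operations are not recoverable from the redo stream.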
If you have a partitioned table, you can instead swap the partition holding the original data with a staging table that holds the corrected data, and then rebuild the indexes in parallel. When this approach was compared and tested on the test system (same data set and same hardware as the production environment), it finished in less than 30 minutes vs. over 6 hours for the commit-every-N-rows approach.
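A rough sketch of the partition-exchange variant, again with hypothetical partition, table, and index names:

```sql
-- Build the corrected data for one partition into a staging table,
-- applying the "update" logic in the SELECT (UPPER here is a stand-in)
CREATE TABLE stage_p1 NOLOGGING AS
SELECT col1, UPPER(col2) AS col2, col3
FROM   big_table PARTITION (p1);

-- Swap the staging table in for the partition: a data-dictionary-only
-- operation, so it is nearly instantaneous regardless of row count
ALTER TABLE big_table
  EXCHANGE PARTITION p1 WITH TABLE stage_p1
  WITHOUT VALIDATION;

-- Local index partitions are left UNUSABLE by the exchange; rebuild in parallel
ALTER INDEX big_table_ix1 REBUILD PARTITION p1 PARALLEL 4;
```

The exchange itself moves no data, which is why this scales so much better than row-by-row updates: the expensive work is the bulk CTAS, which runs as a direct-path operation.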