Sunday 15 April 2012

plsql - Deleting a large number of records using PL/SQL


I want to delete a large number of records using PL/SQL. The records are identified by a DATE column that holds the last time each record was modified. I do not want to consume a lot of resources, so I thought I should limit the number of records deleted in each pass, and the ROWNUM pseudocolumn seemed to serve that purpose. I then check the number of rows affected by the delete and repeat until no rows are affected.

I am looking for the recommended best practice for doing this. I am also concerned about a warning that I am getting:

"A loop with a DML statement should be re-applied to use bulk collection and fern."

But when I think it through, I do not see how that applies to what I am trying to do. Or does it?

Your comments and recommendations are welcome.

  CREATE OR REPLACE PACKAGE BODY MY_PURGE AS
    PROCEDURE PURGE_MY_TABLE (v_Cut_Off_Date DATE,
                              C_MAX_DELETE   NUMBER,
                              DEL_COUNT      OUT NUMBER) IS
      v_RECORDS_DELETED NUMBER  := 0;
      V_DONE            BOOLEAN := FALSE;
    BEGIN
      DEL_COUNT := 0;
      WHILE NOT V_DONE LOOP
        -- delete at most C_MAX_DELETE rows per pass
        DELETE FROM MY_TABLE
         WHERE UPDT_TIMESTMP < v_Cut_Off_Date
           AND ROWNUM <= C_MAX_DELETE;
        v_RECORDS_DELETED := SQL%ROWCOUNT;
        DEL_COUNT := DEL_COUNT + v_RECORDS_DELETED;
        -- stop once a pass deletes no rows
        IF v_RECORDS_DELETED = 0 THEN
          V_DONE := TRUE;
        END IF;
        COMMIT;
      END LOOP;
    END PURGE_MY_TABLE;
  END MY_PURGE;
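For reference, assuming a matching package specification that declares PURGE_MY_TABLE, a call might look like the sketch below; the 90-day cut-off and 10,000-row batch size are illustrative values only:

  DECLARE
    v_deleted NUMBER;
  BEGIN
    MY_PURGE.PURGE_MY_TABLE(v_Cut_Off_Date => SYSDATE - 90,
                            C_MAX_DELETE   => 10000,
                            DEL_COUNT      => v_deleted);
    DBMS_OUTPUT.PUT_LINE('Rows purged: ' || v_deleted);
  END;
  /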

thanks

What resources are you concerned about consuming? A single DELETE statement is going to be the most efficient approach. Assuming this is something that needs to be done regularly, the database really ought to be sized appropriately, in terms of the UNDO tablespace, so that you can do a single DELETE.
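In other words, assuming the table from the question is named MY_TABLE, the whole purge could be a single statement:

  DELETE FROM MY_TABLE
   WHERE UPDT_TIMESTMP < :cut_off_date;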

Actually, taking a step back, the most efficient approach would be to partition the table by UPDT_TIMESTMP and drop the old partitions. But partitioning is an extra-cost option on top of your Enterprise Edition license, and partitioning the table may have other effects on the system.
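As a sketch only, with invented partition names and boundaries, a range-partitioned layout might look like this; dropping a partition is then a near-instant metadata operation rather than row-by-row DML:

  CREATE TABLE MY_TABLE (
    ID            NUMBER,
    UPDT_TIMESTMP DATE
  )
  PARTITION BY RANGE (UPDT_TIMESTMP) (
    PARTITION p_2012_q1 VALUES LESS THAN (DATE '2012-04-01'),
    PARTITION p_2012_q2 VALUES LESS THAN (DATE '2012-07-01')
  );

  -- purge a quarter of old data without generating undo for each row
  ALTER TABLE MY_TABLE DROP PARTITION p_2012_q1;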

If you really do want to delete rows in batches with interim commits, then yours would appear to be a perfectly reasonable implementation. I would only consider it, though, if a single DELETE statement took a substantial fraction of my nightly processing window and I was worried that the DELETE might fail after several hours, forcing a rollback and a restart of the whole process. Deleting in batches will be slower than a single DELETE, but it is easier to restart.

BULK COLLECT and FORALL do not make sense in this particular case. That warning is aimed at the more general situation where someone selects data from one or more source tables, does some processing in PL/SQL, and then writes the data to a destination table: rather than slow row-by-row processing, it is more efficient to do that with bulk operations. But even that would be more efficient still done as a single INSERT ... SELECT.
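To make the distinction concrete, here is a sketch using hypothetical SRC_TABLE and DEST_TABLE names. The BULK COLLECT / FORALL version batches the DML; the single INSERT ... SELECT at the end pushes all the work into SQL and is simpler and faster when no PL/SQL processing is needed:

  DECLARE
    TYPE t_rows IS TABLE OF SRC_TABLE%ROWTYPE;
    v_rows t_rows;
    CURSOR c_src IS SELECT * FROM SRC_TABLE;
  BEGIN
    OPEN c_src;
    LOOP
      -- fetch in batches of 1000 rows instead of one row at a time
      FETCH c_src BULK COLLECT INTO v_rows LIMIT 1000;
      EXIT WHEN v_rows.COUNT = 0;
      -- one bulk DML call per batch instead of one per row
      FORALL i IN 1 .. v_rows.COUNT
        INSERT INTO DEST_TABLE VALUES v_rows(i);
      COMMIT;
    END LOOP;
    CLOSE c_src;
  END;
  /

  -- simpler and faster still, when no PL/SQL processing is required:
  INSERT INTO DEST_TABLE SELECT * FROM SRC_TABLE;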
