For large data inserts, I can suggest a few things:
- Watch out for triggers (PL/SQL)
- Use the APPEND hint
- Remove indexes on the tables
First, triggers. If you disable a table's triggers for the load, the data can end up logically corrupt, because the logic the triggers implement never runs. If you leave them enabled, Oracle falls back to a conventional-path insert, which is a time-consuming process and won't help us.
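If you do decide to disable triggers for the duration of the load, a minimal sketch looks like this (the table name orders is hypothetical):

```sql
-- Hypothetical table name; disable all triggers for the bulk load.
-- Note: enabled row triggers also make an APPEND insert silently
-- fall back to a conventional-path insert.
ALTER TABLE orders DISABLE ALL TRIGGERS;

-- ... bulk insert runs here ...

ALTER TABLE orders ENABLE ALL TRIGGERS;
-- Whatever the triggers would have done to the loaded rows must now be
-- applied manually, or the data ends up logically inconsistent.
```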
Second, using the APPEND hint will help us to an extent. The APPEND hint tells Oracle to perform a direct-path insert into the table, which improves performance.
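A minimal sketch, assuming hypothetical tables big_table and staging_table:

```sql
-- APPEND applies to INSERT ... SELECT; rows are written directly
-- above the table's high-water mark instead of into existing free space.
INSERT /*+ APPEND */ INTO big_table
SELECT * FROM staging_table;

-- The session cannot read or modify big_table again until it commits
-- (ORA-12838), so commit right after the load.
COMMIT;
```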
There is also a way to push this further, by minimizing redo generation.
What redo does is ensure the recoverability of the database: it records every change in the redo log, and in ARCHIVELOG mode those logs are archived.
Let's take a scenario: if the database is running in NOARCHIVELOG mode, using the APPEND hint will reduce redo generation, i.e. the direct-path insert writes only minimal redo, and that increases the speed.
But then the loaded data cannot be recovered to any point in time if something goes wrong.
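You can check which mode your database is in (a small sketch; it requires access to the v$database view):

```sql
-- Returns ARCHIVELOG or NOARCHIVELOG.
SELECT log_mode FROM v$database;
```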
Furthermore, in ARCHIVELOG mode the APPEND hint alone won't help and will not reduce redo generation unless your table is in NOLOGGING mode.
But in NOLOGGING mode our data is no longer protected by redo. In fact, in OLTP systems it is very rare to see a table in NOLOGGING mode, for obvious reasons.
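For a one-off bulk load you can combine the two and switch logging back on afterwards; a sketch with the same hypothetical names:

```sql
-- Minimal redo even in ARCHIVELOG mode: NOLOGGING table + APPEND hint.
ALTER TABLE big_table NOLOGGING;

INSERT /*+ APPEND */ INTO big_table
SELECT * FROM staging_table;
COMMIT;

ALTER TABLE big_table LOGGING;
-- Take a backup now: the loaded blocks are not in the redo stream, so
-- they cannot be restored from earlier backups plus archived logs.
```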
Lastly, yes, we can drop (or disable) the indexes and constraints when bulk-inserting into tables, because maintaining the index for every inserted row degrades the performance of our DB; rebuild them once after the load instead.
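Instead of dropping an index outright, you can mark it unusable and rebuild it afterwards; a sketch with a hypothetical index big_table_ix:

```sql
-- Stop maintaining the index during the load.
ALTER INDEX big_table_ix UNUSABLE;
-- Let DML ignore unusable indexes (the default since Oracle 10g).
-- This does not work for indexes that enforce unique constraints.
ALTER SESSION SET skip_unusable_indexes = TRUE;

-- ... bulk insert runs here ...

-- One bulk rebuild is far cheaper than row-by-row index maintenance.
ALTER INDEX big_table_ix REBUILD;
```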
So, yeah! These are the methods for optimizing an Oracle database for large data inserts.
I hope this helps!
Cheers! 👍