Now we all know that a DB with a large number of records (over 1 million rows should do it) can become sluggish over time if the content of the key tables changes often, especially with insertions. This is despite your best efforts to tune your carefully crafted SQL using the Explain Plan/Execution Plan tools and eliminate unnecessary table scans.
Well children, you are missing a simple trick – regularly recalculating stats. This forces the DB to re-estimate the size of the tables and their associated indices. Without going into the nitty-gritty of why, the basic fact is that any modern relational database engine relies on an optimiser (via the execution plan) to work out how best to grab your data, and to do that it leans heavily on the table stats and index stats.
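If you want to watch the optimiser at work, the explain facility mentioned above shows you the plan it picked. A minimal sketch, using PostgreSQL syntax and a hypothetical orders table:

```sql
-- Hypothetical table and query, purely for illustration.
EXPLAIN ANALYZE
SELECT *
FROM   orders
WHERE  customer_id = 42;
-- When the stats are stale the planner misjudges row counts and may
-- pick a sequential scan even though a perfectly good index exists.
```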
By recalculating stats on a frequent basis – most enterprise systems do it overnight – you keep the DB response times for the same queries consistent, irrespective of the data volume.
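The exact incantation depends on your engine. The statements below are a sketch for three common ones; the orders table and the SALES schema are made-up names:

```sql
-- PostgreSQL: refresh planner statistics for a single table.
ANALYZE orders;

-- SQL Server:
-- UPDATE STATISTICS dbo.orders;

-- Oracle (via SQL*Plus / PL/SQL):
-- EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SALES', tabname => 'ORDERS');
```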
A note to the observant: the more indexes you add, the more rapidly DB performance rolls off. This looks like voodoo to people who have spent a large amount of time tuning queries and indexes against smaller data loads, only to find that their query response times fall off a cliff once a few thousand more rows are added to the DB.
The less often you do this necessary bit of housekeeping, the slower the recalculation itself becomes. Remember our old friend, the Law of Diminishing Returns.
So look up in your DBA handbook how to housekeep your key table statistics. Regularly.
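For the nightly sweep itself, most engines have a one-liner that covers the whole database or schema – again a sketch with made-up names:

```sql
-- PostgreSQL: ANALYZE with no arguments covers every table in the
-- current database (autovacuum can also be left to do this for you).
ANALYZE;

-- SQL Server: sweep statistics for every table in the current database.
-- EXEC sp_updatestats;

-- Oracle: gather stats for a whole schema overnight (PL/SQL).
-- EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SALES');
```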