Information Lifecycle Management and Oracle databases – part 3

Archiving and purging old data

In the end, the only way to seriously reduce the effective size of a database (after using all the innovations available at the infrastructure level) is to move data out of the database and on to something else. This goes a bit against Oracle’s preferred approach, as they propose to keep as much of the application data in the database for as long as possible (I wonder why…).

We could separate all archiving methods into two categories:

  • Methods that don’t change the RDBMS representation and just move tables or records to a different location in the same or a different database (a simple sketch of this approach follows below);
  • Methods that convert database records into something else and remove them from the database layer completely.
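
As a minimal sketch of the first category, assuming a hypothetical ORDERS table with a matching ORDERS_ARCHIVE table (the table names and the 7-year cutoff are made up for illustration), the move boils down to an insert-then-delete in plain SQL:

    -- Copy old records to the archive table; for a different database the
    -- target could be reached over a database link instead
    -- (e.g. orders_archive@archive_db).
    INSERT INTO orders_archive
      SELECT * FROM orders
       WHERE order_date < ADD_MONTHS(SYSDATE, -84);   -- older than 7 years

    -- Purge the same records from the source table
    DELETE FROM orders
     WHERE order_date < ADD_MONTHS(SYSDATE, -84);

    COMMIT;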


Information Lifecycle Management and Oracle databases – part 2

Database compression

Another technique that Oracle has improved as of version 11g is compression. In versions up to 10g you could only compress an entire table, compression was only applied during direct-path (bulk) loads, and the performance of random, conventional DML on a compressed table was poor. It worked well for data warehouses, where it reduces the I/O bandwidth needed (compressed data can be read from disk quicker than uncompressed data), but only in specific cases.
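
As a sketch of this pre-11g style, with made-up table and column names, compression is declared for the whole table and only really takes effect on direct-path (bulk) loads:

    CREATE TABLE sales_history (
      sale_id    NUMBER,
      sale_date  DATE,
      amount     NUMBER(12,2)
    ) COMPRESS;                    -- basic, whole-table compression

    -- Rows are only stored compressed when loaded via direct path,
    -- e.g. with an APPEND hint; conventional inserts remain uncompressed.
    INSERT /*+ APPEND */ INTO sales_history
      SELECT sale_id, sale_date, amount FROM sales_staging;
    COMMIT;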

In 11g Oracle introduced “advanced” compression. I will not go into details, but it allows compression on a much more granular, record-by-record basis, so that OLTP applications can benefit as well. Oracle claims this reduces the total database size (no-brainer :) ) and therefore also the backup size (thereby ignoring the effects of the tape compression that most customers use, so your mileage may vary). Data can only be compressed once, so with tape compression enabled, the size on tape of a normal database and that of an already compressed one will probably not differ much.
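
As a sketch of the 11g variant (again with made-up table names; the exact clause differs slightly between 11g releases), advanced compression is switched on per table with the OLTP clause, after which conventional inserts and updates are compressed block by block as blocks fill up:

    CREATE TABLE orders_oltp (
      order_id   NUMBER,
      order_date DATE,
      status     VARCHAR2(20)
    ) COMPRESS FOR OLTP;           -- 11g Advanced Compression option

    -- An existing table can be rebuilt in compressed form as well
    ALTER TABLE orders_oltp MOVE COMPRESS FOR OLTP;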
