Time for a change: Migrating Exadata to a modern All-Flash array

With Oracle’s uncertain SPARC future, the rise of very fast and capable All-Flash arrays, and existing Exadata customers looking to refresh their hardware, I increasingly get questions on what platform we can offer as an alternative to Exadata or SuperCluster. A big challenge can be breaking away from the lock-in effect of HCC (Hybrid Columnar Compression, although I’d like to call it Hotel California Compression), as it seems hard to estimate how much storage capacity is needed when migrating to other storage or other platforms. Note that any storage could theoretically use HCC, but Oracle deliberately disabled it on anything other than Exadata, the ZFS appliance or Pillar (huh?) storage.

As far as I know, there is no direct query to figure out how large an HCC-compressed table would be after decompression. HCC tables can get very large and the compression ratio can be fairly high, which makes sizing a new environment a challenge.

So, in order to provide a reasonable guesstimate, I created a typical scenario on my VMware homelab to estimate generic storage requirements for HCC-compressed environments.
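For a single table, one crude way to get a ballpark figure is to copy a small random sample into an uncompressed table and extrapolate from the two segment sizes. Below is a hypothetical sketch only (credentials and the SALES table are placeholders, and this is not necessarily the method used in the full post):

# Hypothetical sketch: estimate the uncompressed size of an HCC table by
# copying a 1% random sample into a NOCOMPRESS table and extrapolating.
# Credentials and table names are placeholders.
sqlplus -s sales_owner/secret <<'EOF'
-- 1% sample of the HCC-compressed SALES table, stored without compression
CREATE TABLE sales_nohcc NOCOMPRESS AS SELECT * FROM sales SAMPLE (1);

-- Compare allocated space; roughly 100x the sample segment gives a ballpark
-- figure for the decompressed table (accuracy depends on how representative
-- the sample is)
SELECT segment_name, ROUND(bytes/1024/1024) AS mb
  FROM user_segments
 WHERE segment_name IN ('SALES', 'SALES_NOHCC');
EOF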

Continue reading

The Quick and Dirty Dedupe Analyzer – Part 1 – Hands on

As announced in my last blog post, qdda is a tool that analyzes potential storage savings by scanning data and giving a deduplication, compression and thin provisioning estimate. The results are an indication of whether a modern All-Flash Array (AFA) like Dell EMC XtremIO would be worth considering.

In this (lengthy) post I will go over the basics of qdda and run a few synthetic test scenarios to show what’s possible. The next posts will cover more advanced scenarios, such as running against Oracle database data, scanning multiple nodes, and more exotic setups like ZFS storage pools.
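To give an idea of what a basic run looks like before diving in (a minimal sketch: /dev/sdb is just a placeholder device, and I’m assuming qdda accepts block devices as arguments and data on stdin):

# Scan a block device (read-only) and report dedupe/compression/thin estimates
qdda /dev/sdb

# Or feed the data through a pipe instead (handy later for scanning remote hosts)
cat /dev/sdb | qdda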

[ Warning: Lengthy technical content, Rated T, parental advisory required ]

Continue reading