We have moved!

Dirty Cache has moved to a new home: https://dirty-cache.com. Great new content is coming up!

If you are a subscriber to my blog and you want to keep getting notices about new blog posts:

  • You’re an email subscriber: no action needed, I have moved the email subscriptions to the new site
  • You’re a WordPress.com subscriber: re-register on the new site via the SUBSCRIBE button with your email address; you will then receive notifications from the new site
  • You want to unsubscribe (email subscribers): unsubscribe via the SUBSCRIBE section on the new front page

Welcome to our new site!

QDDA update – 2.2

Quick post to announce that QDDA version 2.2 has been published on GitHub and in the Outrun-Extras YUM repository.

Reminder: the Quick and Dirty Dedupe Analyzer is an open-source Linux tool that scans disks or files block by block to detect duplicate blocks and estimate compression ratios, so that it can report – in detail – the expected data reduction rate on a storage array that offers these capabilities. It can be downloaded as a standalone executable (QDDA download), as an RPM package via YUM, or compiled from source (QDDA Install).
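For illustration, a scan can be as simple as the following (a sketch – I'm assuming a scratch device at /dev/sdb and that your version accepts input via a pipe; check qdda -h for the exact options):

    # scan a block device and report expected dedupe/compression ratios
    sudo qdda /dev/sdb

    # or feed data through a pipe, e.g. to scan only the first 4 GiB
    sudo dd if=/dev/sdb bs=1M count=4096 | qdda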

QDDA 2.2 adds:

  • Dell EMC VMAX and PowerMax support (using the DEFLATE compression algorithm)
  • bash completion (press Tab on the command line; RPM version only)
  • Improved options for custom storage definitions
  • Internal C++ code improvements (not relevant to the normal user)

Note to other storage vendors: If you’d like your array to be included in the tool, drop me a note with dedupe/compression algorithm details and I’ll see what is possible.


Webcast: How to save on Oracle licensing fees by replatforming on Dell EMC

On Jan 24th, I will host a webcast on Oracle cost optimization via database replatforming.

Database license fees drive over 80% of total system cost. Many organizations virtualize their applications, but Oracle is often an exception, for a variety of reasons.
You will learn why re-platforming Oracle databases on better hardware can drive down TCO significantly. I will also cover technical challenges and benefits, myths and facts about licensing Oracle on VMware, and how to deal with Oracle license audits while staying compliant.

The presentation will be a mix of technical and IT-management-level content, and will discuss how to use Dell EMC platforms, with or without virtualization, to optimize database license cost, including common gotchas and workarounds for licensing issues.
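To give a feel for the numbers, a simplified, hypothetical illustration (core counts are invented; I assume Oracle's published x86 core factor of 0.5 and the customary ~22% annual support fee):

    Before: 4 nodes × 24 cores = 96 cores × 0.5 core factor = 48 processor licenses
    After:  2 nodes × 16 faster cores = 32 cores × 0.5 core factor = 16 licenses
    Result: two-thirds fewer licenses, and annual support (~22% of license fees) shrinks accordingly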

Keith Dobbs, our guest speaker from Madora Consulting, will share more insights into Oracle licensing, audits and negotiations.

We already have nearly 200 registrations but there is always room for more, so click on the picture below to register:

[Webcast registration banner – click to register]

See you there on Tuesday!

Update: The presentation is available via the Presentations page


Looking forward: 2016

We’re already more than a week into 2016 and I realize I haven’t done much blogging lately.

One of the things that kept me busy was development on Outrun, as well as the joint Oracle / EMC Solution Center (OSC), both of which I intend to write more about going forward.

Something I did about a year ago (without mentioning it much) was upgrade my WordPress.com account to Professional. Not that I really need the extra add-ons, but I don’t want my readers to be disturbed by ads. Sure, there are ad blockers, but not everyone uses them, and on some platforms (iOS) you simply can’t. Dirty Cash well spent (and no, my employer doesn’t reimburse me, in case you were wondering – this blog is mine, mine only, and independent).

Given that the number of page views on Dirty Cache passed a quarter million last year (thanks to all my readers), can you imagine the savings in bandwidth and lost productivity from not showing ads? ;-)

So what else can you expect from me this year?

Of course, more about running Oracle on EMC and why I think that’s a pretty good idea. As the competition with Oracle is heating up, I intend to write more on comparing the two companies’ solutions, debunking some marketing and competitive claims, and more. I also hope to find time to maintain the wiki on the Outrun site; in addition to Outrun documentation, it might be a good place for Oracle / EMC related how-tos, best practices, FAQs and more.

You also might be wondering what’s going to happen around Oracle / EMC solutions during the Dell / EMC acquisition… Me too. But we can’t (and are not allowed to) comment on it until the merger is final. Until then, business as usual. When the time is right I’ll comment on new Dell / EMC / Oracle developments where possible.


Featherweight Linux VNC services

This article describes how to set up a very lightweight VNC service under CentOS/Red Hat.


Intro

In Red Hat Enterprise Linux (and derivatives – I use CentOS) you can run a VNC service to allow graphical connections to a Linux system. I was looking for a very lightweight VNC service: no fancy desktop with all the bells and whistles, just something that lets me do the occasional task that requires an X session and an Xterm – such as installing Oracle or running Swingbench – without using another host with an X client. In other words, a typical service for virtual machines that run as servers (database servers, web servers, and so on).

CentOS standard method

I tried the standard documented way to do this in CentOS (CentOS Virtual Network Computing, using the standard tigervnc-server method), but found a few issues with the way it is set up:

  • For every user requiring VNC services, you need to customize the configuration
  • If a user deletes or corrupts his VNC password file, the whole service stops working (fixable via a normal SSH login, but that requires a skilled user)
  • If a user messes up his xstartup file, he is locked out (same fix, same caveat)
  • Users need two passwords: one for their (own) VNC service and the usual one for Linux
  • Their X window and VNC processes are always running, eating resources even when not used
  • If their X session hangs (i.e. the window manager is killed, or they simply log out), it is hard or even impossible to clean up and restart without resetting the whole VNC service (see section 4, “Recovery from a logout”, in the article mentioned above)
  • Every user requires a separate, unique TCP port

All in all, nice and easy for a small test server with a few users, but not good for larger environments. On the plus side, the desktops are persistent: you can disconnect and reconnect later, and the VNC session will be as you left it. And you can install lighter desktop environments (twm or openmotif) instead of the huge and heavy Gnome desktop.
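For reference, the standard per-user setup on a systemd-based release (CentOS 7; on CentOS 6 the equivalent lives in /etc/sysconfig/vncservers) looks roughly like this – a sketch, assuming a user alice on display :1, which listens on TCP port 5901:

    # one systemd unit per user/display; display :N listens on TCP port 5900+N
    cp /usr/lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:1.service
    sed -i 's/<USER>/alice/g' /etc/systemd/system/vncserver@:1.service
    su - alice -c vncpasswd        # every user maintains a separate VNC password
    systemctl daemon-reload
    systemctl enable vncserver@:1.service
    systemctl start vncserver@:1.service

And a minimal ~/.vnc/xstartup if you prefer twm over a full Gnome desktop:

    #!/bin/sh
    xsetroot -solid grey
    xterm &       # a terminal to work in
    exec twm      # featherweight window manager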

But I was looking for something better.


Putting an end to the password jungle

With my blog audience all being experts in the IT industry (I presume), I think we are all too familiar with the problems of classic password security mechanisms.

Humans are just not good at remembering long, meaningless strings of tokens, especially when they need to be changed every so many months and you have to keep track of many of them at the same time.
Some security experts blame humans. They say you should create strong passwords, not use a single password for different purposes, and not write them down on paper – or worse – store them unencrypted somewhere on your computer.
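To put a number on “strong” (a quick back-of-the-envelope; the Diceware figure assumes the standard 7,776-word list):

    password entropy ≈ length × log2(alphabet size)
    8 random chars from [a-zA-Z0-9]: 8 × log2(62) ≈ 48 bits (weak against offline cracking)
    5 random Diceware words:        5 × log2(7776) ≈ 65 bits (stronger, and easier to remember)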

I disagree. I think the fundamental problem lies within information technology itself. We invented computers to make life easier for ourselves – well, actually, that’s not true; ironically, we invented them primarily to break military encryption codes. But the widespread adoption of computing happened because of the promise of making our lives easier.

I myself use a password manager (KeePass) to make my life a bit easier. There are many password manager tools available, and they solve part of the problem: keeping track of which password was used for what purpose. I now only need to remember one (hopefully strong enough) password to access the password database, and from there the tool logs me in to websites, corporate networks and other services (let’s refer to all of those as “cloud servers”).
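If you ever need to mint such a random password outside a GUI tool, something like this works on most Linux systems (15 random bytes encode to 20 base64 characters, roughly 120 bits of entropy):

    # generate a random 20-character password
    openssl rand -base64 15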

The many problems with passwords

The fundamental problem remains – even when using a password manager: passwords are no good for protecting our sensitive data or identity.


Looking back and forward

I have been enjoying a short holiday in which I decided to totally disconnect from work for a while and recharge my battery. So while many bloggers and authors in our industry were making predictions for 2013, I was doing other stuff, and blogging was not part of that ;-)

Now that we have survived the end of times, let’s look back and forward a bit. I don’t want to burn myself making crazy predictions about this year, but I’d still like to present some thoughts for the longer term. Stay tuned…


The Zero Dataloss Myth

In previous posts I have focused on the technical side of running business applications (except my last post about the Joint Escalation Center). So let’s teleport to another level and have a look at business drivers.

What happens if you are an IT architect for an organization, and you ask your business people (your internal customers) how much data loss they can tolerate in case of a disaster? I bet the answer is always the same:

“zero!”

This relates to what is known in the industry as Recovery Point Objective (RPO).

Ask them how much downtime they can tolerate in case something bad happens. Again, the consistent answer:

“none!”

This is equivalent to Recovery Time Objective (RTO).

Now if you are in “Jukebox mode” (the business asks, you provide, no questions asked), you try to give them what they ask for (RPO = zero, RTO = zero). This makes many IT vendors and communication service providers happy, because it means you have to run expensive clustering software and synchronous data mirroring to a D/R site over pricey data connections.

If you are in “Consultative mode”, you try to figure out what the business really needs, not just what they ask for. And you wonder whether their request is feasible at all, and if so, what it costs to achieve these service levels.
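A back-of-the-envelope sketch of why RPO = zero gets expensive with distance (the 100 km and the local write time are invented for illustration):

    light in fiber travels ≈ 5 µs per km → round trip over 100 km ≈ 1 ms
    synchronous mirroring waits for the remote acknowledgement on every write: ≥ 1 ms added
    a log write that took 0.5 ms locally now takes ~1.5 ms – commits run 3× slower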


Thank you, Larry Ellison

My colleague Vince Westin published this great post on his blog:

During his opening keynote at Oracle OpenWorld 2012, Larry Ellison launched the new Exadata X3.
The new version appears to have some nice new capabilities, including caching writes to EFD, which are likely to improve the usability of Exadata for OLTP workloads. And he was nice enough to include the EMC Symmetrix VMAX 40K in detail on 30% of his slides as he announced the new Exadata. And for that, I give thanks. I am sure Salesforce.com was similarly thankful when Larry focused so much of his time on their product in his keynote last year.

Read the rest of his post here.

The post provides a bunch of good reasons why EMC VMAX might be a better choice for customers that run high-performance mission-critical environments. A highly recommended read!

POC: Piece Of Cake or Point Of Contradiction?

Every now and then I get involved in customer Proofs of Concept. A Proof of Concept (POC) is, according to Wikipedia, something like a demonstration of feasibility of a certain idea, concept or theory.

[Image: concept performance aircraft]
