Featherweight Linux VNC services

This article describes how to set up a very lightweight VNC service under CentOS/Red Hat.

Intro

In Red Hat Enterprise Linux (and derivatives – I use CentOS) you can run a VNC service to allow graphical connections to a Linux system. I was looking for a very lightweight VNC service: no fancy desktop with all the bells and whistles, just something that gives me an X session and an xterm for the occasional task that needs a GUI – such as installing Oracle or running Swingbench – without having to use another host with an X client. In other words, a typical service for virtual machines that run as servers (database servers, web servers, and so on).

CentOS standard method

I first tried the standard documented way to do this in CentOS (CentOS Virtual Network Computing), using the standard tigervnc-server method, but I found a few issues with the way it is set up:

  • For every user requiring VNC services, you need to customize the configuration
  • If one user deletes or corrupts his VNC password file, the whole service stops working (the fix is a normal SSH login, but that requires a skilled user)
  • If a user messes up his xstartup file, he is locked out (again fixable via a normal SSH login, but that requires a skilled user)
  • Users need two passwords: one for their (own) VNC service and the usual one for Linux
  • Each user's X and VNC processes are always running, and thus eating resources, even when not in use
  • If a user's X session hangs (e.g. the window manager gets killed, or the user simply logs out) it is hard or even impossible to clean up and restart (see section 4, "Recovery from a logout", in the mentioned article) without restarting the whole VNC service
  • Every user requires a separate, unique TCP port

All in all, nice and easy for a small test server with a few users, but no good for larger environments. The good thing is that the desktops are persistent: you may disconnect and reconnect later and the VNC session will be just as you left it. And you can install lighter desktop environments (twm or openmotif) instead of the huge and heavy GNOME desktop.
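
To illustrate what the list above refers to: the classic per-user setup from the CentOS howto looks roughly like the sketch below (tigervnc-server on CentOS 5/6; the usernames, display numbers and geometry are just examples).

    # /etc/sysconfig/vncservers -- one entry (and one TCP port, 5900+display) per user
    VNCSERVERS="1:alice 2:bob"
    VNCSERVERARGS[1]="-geometry 1024x768"
    VNCSERVERARGS[2]="-geometry 1024x768"

    # every user needs his own VNC password (and gets his own ~/.vnc/xstartup)
    su - alice -c vncpasswd
    su - bob -c vncpasswd

    # one service instance for all users; alice listens on 5901, bob on 5902
    chkconfig vncserver on
    service vncserver start

Every added user means editing the configuration, picking a free display (and thus TCP port) and setting yet another password, which is exactly where the issues above come from.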

But I was looking for something better.

Read more of this post

Putting an end to the password jungle

With my blog audience all being experts in the IT industry (I presume), I think we are all too familiar with the problems of classic password security mechanisms.

Humans are just not good at remembering long, meaningless strings of tokens, especially if those need to be changed every so many months and there are many of them to keep track of at the same time.
Some security experts blame humans. They say you should create strong passwords, not use a single password for different purposes, and not write them down on paper or, worse, store them unencrypted somewhere on your computer.

I disagree. I think the fundamental problem is within information technology itself. We invented computers to make life easier for ourselves – well, actually, that’s not true, ironically we invented them primarily to break military encryption codes. But the widespread adoption of computing happened because of the promise of making our lives easier.

I myself use a password manager (KeePass) to make my life a bit easier. There are many password manager tools available, and they solve part of the problem: keeping track of what password was used for what purpose. I now only need to remember one (hopefully, strong enough) password to access the password database and from there I just use the tool to log me in to websites, corporate networks and other services (let’s refer to all of those as “cloud servers”).
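
For illustration: this is the kind of password a password manager will happily generate and remember for you, and that no human should be expected to memorize (the OpenSSL command is just one of many ways to generate one):

    # 18 random bytes, base64-encoded: strong, unique... and unmemorable
    openssl rand -base64 18

Multiply something like that by a few dozen accounts, changed every few months, and it is clear why we reach for tools.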

The many problems with passwords

The fundamental problem remains – even when using a password manager: passwords are no good for protecting our sensitive data or identity.

Read more of this post

Looking back and forward

I have been enjoying a short holiday in which I decided to totally disconnect from work for a while and recharge my batteries. So while many bloggers and authors in our industry were making predictions for 2013, I was doing some other stuff, and blogging was not part of that ;-)

Now that we have survived the end times, let's look back and forward a bit. I don't want to burn my fingers on crazy predictions for this year, but I would still like to present some thoughts for the longer term. Stay tuned…

Read more of this post

The Zero Dataloss Myth

In previous posts I have focused on the technical side of running business applications (except my last post about the Joint Escalation Center). So let’s teleport to another level and have a look at business drivers.

What happens if you are an IT architect for an organization, and you ask your business people (your internal customers) how much data loss they can tolerate in case of a disaster? I bet the answer is always the same:

“zero!”

This relates to what is known in the industry as the Recovery Point Objective (RPO): the maximum amount of data loss, measured in time, that is considered acceptable.

Ask them how much downtime they can tolerate in case something bad happens. Again, the consistent answer:

“none!”

This is the equivalent for the Recovery Time Objective (RTO): the maximum amount of downtime that is considered acceptable.

Now if you are in "Jukebox mode" (business asks, you provide, no questions asked) then you try to give them what they ask for (RPO = zero, RTO = zero), which makes many IT vendors and communication service providers happy, because it means you have to run expensive clustering software and synchronous data mirroring to a D/R site over pricey data connections.
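
To get a feel for why RPO = zero quickly gets expensive: synchronous mirroring means every single write has to be acknowledged by the D/R site before the application continues, so the replication link needs enough bandwidth (and low enough latency) to carry the full write rate in real time. A back-of-the-envelope sketch, with a purely hypothetical write rate:

    # assumed peak write rate of 20 MB/s -- substitute your own measured number
    awk 'BEGIN { mb_per_s = 20; print mb_per_s * 8 " Mbit/s of sustained replication bandwidth, plus protocol overhead" }'

And that is before you add the cost of clustering software and a second site that can actually take over fast enough to approach an RTO of zero.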

If you are in “Consultative” mode, you try to figure out what the business really wants, not just what they ask for. And you wonder if their request is feasible at all, and if so, what the cost is of achieving these service levels.

Read more of this post

Thank you, Larry Ellison

My colleague Vince Westin published this great post on his blog:

During his opening keynote at Oracle OpenWorld 2012, Larry Ellison launched the new Exadata X3.
The new version appears to have some nice new capabilities, including caching writes to EFD, which are likely to improve the usability of Exadata for OLTP workloads. And he was nice enough to include the EMC Symmetrix VMAX 40K in detail on 30% of his slides as he announced the new Exadata. And for that, I give thanks. I am sure that Salesforce.com were similarly thankful when Larry focused so much of his time on their product in his keynote last year.

Read the rest of his post here.

The post provides a bunch of good reasons why EMC VMAX might be a better choice for customers that run high-performance mission-critical environments. A highly recommended read!

POC: Piece Of Cake or Point Of Contradiction?

Every now and then I get involved in customer Proof of Concepts. A Proof of Concept (POC) is, according to Wikipedia, something like a demonstration of the feasibility of a certain idea, concept or theory.

Concept Aircraft

Read more of this post

The Dutch Diginotar Hack

Slightly off-topic here considering my normal focus on business applications (or actually, maybe not, decide for yourself).

False passports

On the Dutch ICT news sites it is currently a big topic, and the impact on the internet as a whole is probably still underestimated. What happened? On August 29, 2011, I read a news post on webwereld.nl (a Dutch ICT news site) reporting that Iran (actually it appeared to be Iranian attackers, but even that is still not certain) could tap internet traffic to Gmail. This was possible because they used a fraudulent SSL certificate that was signed by the Dutch Certificate Authority Diginotar. Diginotar is a Dutch company providing PKI (Public Key Infrastructure) certificates for secure connections, both for regular commercial customers and for the Dutch government. Ouch!
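
You can see for yourself which CA vouches for a site, because the certificate chain is presented during the SSL handshake. A quick check from the command line (mail.google.com is just the example host here, any HTTPS site works):

    # show who issued the certificate that the server presents
    echo | openssl s_client -connect mail.google.com:443 2>/dev/null | openssl x509 -noout -subject -issuer

If the issuer line points to a CA that has been compromised, the "secure" padlock in your browser is worth nothing, and that is exactly what made this hack so serious.
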
Read more of this post

Monkey Business

Maybe you have heard the story of the Monkey Experiment. It is about an experiment with a bunch of monkeys in a cage, a ladder, and a banana. At a certain point one of the monkeys spots the banana hanging up high and starts climbing the ladder, and the researcher sprays all the monkeys with cold water. The climbing monkey tumbles down before even reaching the banana, looks puzzled, and waits until he is dry again and his ego is back on its feet. He tries again; same result, all the monkeys get sprayed. Some of the others try it a few times as well, until they all learn: don't climb for the banana or you will get wet and cold.

The second part of the experiment is where it gets interesting. The researcher removes one of the monkeys and replaces him with a fresh, dry monkey with an unharmed ego. After a while the newcomer spots the banana, wonders to himself why the other monkeys are so stupid not to go for it, and gives it a try. But when he reaches the ladder, the other monkeys kick his ass and make it very clear that he is not supposed to do that. Once the new monkey is conditioned not to go for the banana, the researcher replaces the "old" monkeys, one by one, with new ones. Every new monkey goes for the banana until he learns not to.

Eventually the cage is full of monkeys who know that they are not allowed to climb the ladder to get the banana. None of them knows why – it’s just the way it is and always has been…
Read more of this post

Greenplum links and resources

This post has been converted to a resource page; please follow this link:
Greenplum links and resources

Thin Provisioning

Some customers ask us – not surprisingly – how they can reduce the total cost of ownership of their information infrastructure even further. In response, I sometimes ask them what the utilization of their storage systems is.

Their answer is often something like 70%: you of course need some spare capacity for sudden application growth, so running close to 100% is probably not a good idea.

Overallocating storage

If you actually measure the utilization, you often find different numbers. And I don't mean the overhead of RAID, replication, spare drives, backup copies and so on, because I consider those required technology: invisible to the applications, but needed for protection. So the question is: of each net gigabyte of storage, how much is actually used by all the applications?
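
Measuring it is not hard, by the way. From the host side, a rough sketch is simply comparing used versus allocated filesystem space across all mounts (pseudo filesystems add a little noise, so treat the result as an indication only):

    # percentage of allocated filesystem space that is actually in use
    df -Pk | awk 'NR>1 && $2>0 { used+=$3; size+=$2 } END { printf "%.0f%% in use\n", used/size*100 }'

The storage array view (how much of each provisioned LUN the hosts have ever written to) usually paints an even less flattering picture, and that is exactly the gap thin provisioning tries to close.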

Read more of this post
