Featherweight Linux VNC services

This article describes how to set up a very lightweight VNC service under CentOS/Red Hat.


Intro

In Red Hat Enterprise Linux (and derivatives; I use CentOS) you can run a VNC service to allow graphical connections to a Linux system. I was looking for a very lightweight VNC service (no fancy desktop with all the bells and whistles, just something that lets me do the few things that require an X session and run an xterm – such as installing Oracle or running Swingbench – without needing another host running an X server). In other words, a typical service for virtual machines that run as servers (database servers, web servers, and so on).

CentOS standard method

I tried the standard documented way to do this in CentOS (the CentOS wiki article “Virtual Network Computing”, using the standard tigervnc-server method), but found a few issues with the way it is set up:

  • For every user requiring VNC services, you need to customize the configuration (see the sketch after this list)
  • If one user deletes or corrupts his VNC password file, the whole service stops working (fixable via a normal SSH login, but that requires a skilled user)
  • If a user messes up his xstartup file, he is locked out (again fixable via SSH, but only by a skilled user)
  • Users need two passwords: one for their own VNC session and their usual Linux password
  • Each user’s X and VNC processes are always running, consuming resources even when not in use
  • If a user’s X session hangs (for example, the window manager gets killed, or the user simply logs out), it is hard or even impossible to clean up and restart it (see section 4, “Recovery from a logout”, in the article mentioned above) without restarting the whole VNC service
  • Every user requires a separate, unique TCP port
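
For illustration, this is roughly what the stock per-user setup looks like on CentOS 6 with tigervnc-server. This is only a sketch; the user names, display numbers and geometries below are placeholders:

    # /etc/sysconfig/vncservers -- one entry per user, each on its own display
    # (display N listens on TCP port 5900+N)
    VNCSERVERS="2:alice 3:bob"
    VNCSERVERARGS[2]="-geometry 1024x768"
    VNCSERVERARGS[3]="-geometry 1280x1024"

    # on top of that, every user needs his own VNC password and xstartup file
    su - alice -c vncpasswd        # writes ~alice/.vnc/passwd
    vi ~alice/.vnc/xstartup        # choose a window manager, e.g. "exec twm"
    service vncserver restart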

All in all, nice and easy for a small test server with a few users, but not good for larger environments. The good thing is that the desktops are persistent: you can disconnect and reconnect later and the VNC session will be just as you left it. And you can install a lighter window manager (twm, or openmotif’s mwm) instead of the huge and heavy GNOME desktop.

But I was looking for something better.

Read more of this post

Fun with Linux UDEV and ASM: Using UDEV to create ASM disk volumes

Because of the many discussions and confusion around the topic of partitioning, disk alignment, and its brother issue, ASM disk management, here is an explanation of how to use UDEV, and, as an extra, I present a tool that manages some of this stuff for you.

The questions could be summarized as follows:

  • When do we have issues with disk alignment and why?
  • What methods are available to set alignment correctly and to verify?
  • Should we use ASMLib, or are there alternatives? If so, which ones, and how do we manage them?

I’ve written two blogposts on the matter of alignment, so I am not going to repeat the details here. The only thing you need to remember is that classic “MS-DOS” disk partitioning, by default, starts the first partition on the disk at the wrong offset (wrong in terms of optimal performance). The old partitioning scheme was invented when physical spinning rust was formatted with 63 sectors of 512 bytes per disk track. Because you need some space for the boot block and the partition table, the smart guys back then thought it was a good idea to start the first data partition on track 1 (instead of track 0), i.e. at sector 63. These days we have completely different physical disk geometries (and sometimes even different sector sizes, another interesting topic), but we still carry the legacy of those days.

If you’re not using an Intel x86_64 based operating system, then chances are you have no alignment issues at all (the only exception I know of is Solaris if you use “fdisk”, which has a similar problem). If you use the newer GPT partitioning method the issue is gone (but many BIOSes, boot methods and other tools cannot handle GPT). As MS-DOS partitioning is limited to 2 TiB (32-bit sector numbers times 512-byte sectors; see http://en.wikipedia.org/wiki/Master_boot_record), it will probably be a thing of the past in a few years, but for now we have to deal with it.

Wrong alignment causes some reads and writes to be split into two physical I/Os, causing extra IOPS. I don’t have hard numbers, but a long time ago I was told the overhead could be as much as 20%. So we need to get rid of it.
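
As a quick sketch of checking and fixing this (assuming /dev/sdb is the disk in question and any data on it is expendable), the following creates a partition starting at 1 MiB, which is aligned for practically any sector or stripe size:

    # show partition start in sectors; a start at sector 63 is the classic misalignment
    fdisk -lu /dev/sdb

    # or let parted do the checking
    parted /dev/sdb unit s print
    parted /dev/sdb align-check optimal 1

    # recreate the partition table with the first partition starting at sector 2048 (1 MiB)
    parted -s /dev/sdb mklabel msdos
    parted -s /dev/sdb mkpart primary 2048s 100%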

ASM storage configuration

ASM does not use OS file systems or volume managers but has its own way of managing volumes and files. It “eats” block devices, and these block devices need to be readable and writable by the user/group that runs the ASM instance, as well as the user/group that runs the Oracle database processes (an open secret is that ASM is out-of-band and databases write directly to the ASM data chunks). ASM does not care what the name or device numbers of a block device are, nor does it care whether it is a full disk, a partition, or some other type of device, as long as it behaves as a block device under Linux (and probably other UNIX flavors). It does not need partition tables at all but writes its own disk signatures to the volumes it gets.
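
The full post goes into detail; as a minimal sketch, this is what such a udev rule can look like on EL6 (the WWID below is a placeholder, and grid:asmadmin is an assumption about how Grid Infrastructure was installed, so adjust both to your environment):

    # /etc/udev/rules.d/99-oracle-asmdevices.rules
    # Match the LUN by its SCSI WWID and hand it to ASM under a friendly name.
    # The whole rule goes on one line; the RESULT value is a placeholder WWID.
    KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="36000c29000000000000000000000001", SYMLINK+="oracleasm/data01", OWNER="grid", GROUP="asmadmin", MODE="0660"

    # activate the rule and verify the result
    udevadm control --reload-rules
    udevadm trigger
    ls -l /dev/oracleasm/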

[ Warning: Lengthy technical content, Rated T, parental advisory required ]

Read more of this post

Getting the Best Oracle performance on XtremIO

(Blog repost from Virtual Storage Zone – Thanks to @cincystorage)

UPDATE: I’ll say it again because there seems to be some confusion: THIS IS A REPOST!

Original content is from the Virtual Storage Zone blog (not mine). Just reposted here because it’s interesting and related to Oracle, performance and EMC storage. Enjoy…

XtremIO is EMC’s all-flash, scale-out storage array designed to deliver the full performance of flash. The array is designed for 4k random I/O, low latency, inline data reduction, and even distribution of data blocks. This even distribution of data blocks leads to maximum performance and minimal flash wear. You can find all sorts of information on the architecture of the array, but I haven’t seen much about achieving maximum performance from an Oracle database on XtremIO.

The nature of XtremIO ensures that any Oracle workload (OLTP, DSS, or hybrid) will get high performance and low latency; however, we can maximize performance further with some configuration options. Most of what I’ll be talking about concerns RAC and ASM on Red Hat Linux 6.x in a Fibre Channel storage area network.

Read the full blogpost here.

 

Oracle, VMware and sub-server partitioning

Last week (during EMC World) a discussion came up on Twitter around Oracle licensing, and whether Oracle these days supports CPU affinity as a way to license only a subset of a physical server.

Unfortunately, the answer is NO (that is, if you run any other hypervisor than Oracle’s own Oracle VM). Enough has been said on this being anti-competitive and obviously another way for Oracle to lock in customers to their own stack. But keeping my promise, here’s the blogpost ;-)

A good writeup on that can be found here: Oracle’s reaction on the licensing discussion
And see Oracle’s own statement on this: Oracle Partitioning Policy

So let’s accept the situation and see if we can find smarter ways to run Oracle on a smaller license footprint – without having to use an inferior hypervisor from a vendor who isn’t likely to help you use it to reduce license costs…

The vast majority of enterprise customers run Oracle under CPU-based licensing (actually, licensing is based on how many processor cores run Oracle or have Oracle installed).
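
As a purely hypothetical illustration, assuming a two-socket server with 16 cores per socket and the 0.5 core factor Oracle publishes for common Intel x86 processors:

    # processor licenses needed = physical cores running Oracle x core factor
    sockets=2; cores_per_socket=16; core_factor=0.5
    echo "$sockets * $cores_per_socket * $core_factor" | bc   # -> 16.0 licenses for this one host
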
Read more of this post

TheCube interview EMC World 2014

I was interviewed yesterday at EMC World 2014 by Wikibon and SiliconAngle.

Enjoy!

Bart @ The Cube

Update: You can find a report of the interview here: Best Practices for Putting Your Oracle Database into the Cloud

Putting an end to the password jungle

With my blog audience all being experts in the IT industry (I presume), I think we are all too familiar with the problems of classic password security mechanisms.

Humans are just not good at remembering long, meaningless strings of characters, especially when they need to be changed every few months and you have to keep track of many of them at the same time.
Some security experts blame the humans. They say you should create strong passwords, not use a single password for different purposes, and not write them down on paper or – worse – store them in unencrypted form somewhere on your computer.

I disagree. I think the fundamental problem is within information technology itself. We invented computers to make life easier for ourselves – well, actually, that’s not true, ironically we invented them primarily to break military encryption codes. But the widespread adoption of computing happened because of the promise of making our lives easier.

I myself use a password manager (KeePass) to make my life a bit easier. There are many password manager tools available, and they solve part of the problem: keeping track of what password was used for what purpose. I now only need to remember one (hopefully, strong enough) password to access the password database and from there I just use the tool to log me in to websites, corporate networks and other services (let’s refer to all of those as “cloud servers”).

The many problems with passwords

The fundamental problem remains – even when using a password manager: passwords are no good for protecting our sensitive data or identity.

Read more of this post

Debunking Oracle certification myths

Another question I get asked a lot:

Is Oracle certified on VMware?

There are plenty of articles discussing this very topic; here are a few examples:

  • Oracle blog – Is Oracle certified on VMware?
  • VMware – Understanding Oracle certification, support and licensing environments
  • virtualization.info – Oracle Linux fully supported on VMware ESXi and Hyper-V
  • longwhiteclouds – Fight the FUD: Oracle licensing and support on VMware vSphere
  • oraclestorageguy – What the Oracle VMware support statement really means and why
  • Everything Oracle @ EMC – VMware’s official support statement regarding Oracle certification and licensing

…and yet it still seems to bother many of the people I talk to when I show them the clear and present benefits of going all-virtual.

It seems there is a lot of confusion between the meaning of “certified”, “supported”, and even the term “validated” comes up every now and then. To make things worse, the context in which those words are used makes a big difference.
Read more of this post

The public transport company needs new buses

A public transport company in a city called Galactic City needs to replace its aging city buses with new ones. It asks three bus vendors what they have to offer and whether they can do a live test to see if their claims about performance and efficiency hold up.

The transport company uses the city buses to move people between different locations in the city. The average trip distance is about 2 km. The vendors all prepare their buses for the test. The buses are the latest and greatest, with the most efficient and powerful engines and state of the art technology.

Read more of this post

Getting the most out of your server resources


As an advocate of database virtualization, I often challenge customers to consider whether they are using their resources in an optimal way.

And so I usually claim, often in front of a skeptical audience, that physically deployed servers hardly ever reach an average utilization of more than 20 percent (thereby wasting over 80 percent of the expensive database licenses, maintenance and options).

Magic is really only the utilization of the entire spectrum of the senses. Humans have cut themselves off from their senses. Now they see only a tiny portion of the visible spectrum, hear only the loudest of sounds, their sense of smell is shockingly poor and they can only distinguish the sweetest and sourest of tastes.

– Michael Scott, The Alchemyst

About one in three times, someone in the audience objects and says that they achieve much better utilization than my stake-in-the-ground 20 percent, and uses that as a reason (valid or not) for not having to virtualize their databases with, for example, VMware.
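
If you’d rather measure than take my word for it, the sysstat history gives a quick sanity check (a sketch, assuming sysstat is installed and collecting; the day-of-month in the file name is just an example):

    # live sample: one-minute CPU averages for the next ten minutes
    sar -u 60 10

    # or look at what sysstat already collected for a given day (here: the 15th)
    sar -u -f /var/log/sa/sa15

    # an average of 80% in the %idle column means 20% utilization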

Read more of this post

Announcing my OpenWorld 2013 presentation material

Last Tuesday I had the privilege of presenting at Oracle OpenWorld 2013 together with Sam Marraccini (the guy with the big smile in the picture) from EMC’s Flash products division. Sam introduced the various EMC Flash offerings we have, and I discussed some experiences and best practices from the field. We got lots of interaction with the audience and many questions (at one point I was looking at about five hands raised simultaneously), which caused me to run out of time before finishing some of the best practices I had planned to discuss at the end. But interaction is always better than just us talking, so I got the feeling the session was successful – although I’d like to hear what the people in the audience thought (feel free to comment!).

When people started taking pictures of the slides with their iPhones, we promised the audience we would make the slides available ASAP. So here they are. They will probably also become available via Oracle’s OOW pages in due time.

Read more of this post
