The Dutch Diginotar Hack
September 7, 2011
Slightly off-topic here considering my normal focus on business applications (or actually, maybe not, decide for yourself).
On the Dutch ICT news sites it is currently a big topic, and the impact for the whole internet is probably still underestimated. What happened? On August 29, 2011, I read a news post on webwereld.nl (a Dutch ICT news site) reporting that Iran (or rather, presumably Iranian attackers, though this is still not certain) could tap internet traffic to Gmail. This was possible because they used an SSL certificate signed by the Dutch Certificate Authority Diginotar. Diginotar is a Dutch company providing PKI (Public Key Infrastructure) “certificates” for secure connections, both for regular commercial customers and for the Dutch government. Ouch!
Quickly, more news dropped in. The problem turned out to be far worse. To make a long story short, it became clear that Diginotar had already detected the signing of 128 rogue certificates on July 19, 2011. These were immediately revoked. More rogue certificates were detected and revoked a day later. However, the incident was kept secret until an Iranian Gmail user discovered that a false certificate for *.google.com had been issued. This certificate had not yet been revoked. The security breach probably led to many Iranian users having their email tapped (it is not certain by whom, but you get the idea). The full (interim) report based on an investigation by Fox-IT can be downloaded here and is the source for some of the statements in this post.
It turned out that the hackers had been able to create false SSL certificates for just about every website in the world without being immediately detected (unless a user inspects the certificate manually, which only experts do). Combine this with a Domain Name Service “hack” (or not even a real hack, if you control the DNS server) and you can redirect users to a falsified website (e.g. an online banking application, email, social media, and just about everything else where you want to spy on people and steal information).
I am not a security specialist but interested in security aspects of computing, and also the implications of security and privacy issues for normal people (who do not understand IT technology) and companies running IT for business. The impact of this hack and the aftermath is only beginning to become clear to us.
For those not familiar with how SSL certificates work (and I must admit, I don’t yet fully understand the whole mechanism either), a Certificate Authority (CA) is much like a government office that hands out a passport, which you can use to prove to others who you really are. This also means the person you show your passport to must recognize and trust your government as a valid issuer of passports. If you’re British, this is the British government. If you’re Italian, it’s the Italian government. An Italian should not be able to get an Italian passport from the British government (this sounds obvious, but it isn’t like this in the digital world, as we will see). In addition, the government must verify that you are who you say you are, and take measures so that nobody but the government itself can create passports. With paper documents this is done, for example, by using watermarks and other hard-to-imitate markings, and by adding a photo of your face so the passport cannot be used by anybody else. With digital certificates, it is done by digitally signing someone’s certificate (a file holding information about an identity: a person, a web service, or a company) with a secret encryption key. Needless to say, if the authority’s secret key gets stolen, the thief can create certificates at will, much like he could create passports if he stole the special printing press used to make them. The authority would then have to tell the world that all its passports are unreliable until it replaces the stolen press with a new one using different watermarks, and gives every passport owner a new passport. Stealing a passport press is not so easy, though (these machines are big, fortunately).
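The signing idea can be sketched in a few lines of Python. One big caveat: real CAs use asymmetric signatures (RSA or ECDSA), where anyone can verify with the public key but only the CA can sign with the private key. The symmetric HMAC below is purely a toy stand-in to illustrate the point of the passport analogy: whoever holds the secret key can “sign” anything, including forgeries. All names and values here are hypothetical.

```python
import hashlib
import hmac

# Toy stand-in for the CA's private signing key (hypothetical value).
# If this leaks, the thief owns the "printing press".
CA_SECRET_KEY = b"diginotar-private-key"

def sign_certificate(cert_data: bytes, key: bytes) -> bytes:
    """'Sign' certificate data: only the key holder can produce this value."""
    return hmac.new(key, cert_data, hashlib.sha256).digest()

def verify_certificate(cert_data: bytes, signature: bytes, key: bytes) -> bool:
    """Check that the signature matches, i.e. the 'watermark' is genuine."""
    return hmac.compare_digest(sign_certificate(cert_data, key), signature)

cert = b"CN=*.google.com, O=Google Inc"
sig = sign_certificate(cert, CA_SECRET_KEY)

assert verify_certificate(cert, sig, CA_SECRET_KEY)            # genuine cert
assert not verify_certificate(b"CN=evil.example", sig, CA_SECRET_KEY)  # tampered

# The whole Diginotar problem in one line: with the stolen key,
# an attacker can mint a perfectly "valid" forgery.
forged_cert = b"CN=*.google.com, O=Attacker"
forged_sig = sign_certificate(forged_cert, CA_SECRET_KEY)
assert verify_certificate(forged_cert, forged_sig, CA_SECRET_KEY)
```

The forged certificate at the end verifies just as cleanly as the genuine one, which is exactly why a stolen CA key is so catastrophic: verification cannot distinguish the CA from the thief.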
Digital certificates are used on the internet to guarantee users that they are communicating with whom they think they are talking to. If I log in to my bank to perform transactions or view my account details, I want to be sure it is the real bank I am connected to, and not a fake website that looks exactly like the real one, used to steal my passwords, information, and money. The bank’s certificate proves this and also prevents unauthorized people from eavesdropping (because the certificate is used to establish encryption keys between the bank and the end user, making it impossible for a so-called “man-in-the-middle” to decrypt the communications).
For people not familiar with basic encryption methods, I’ll try to use an analogy with the real passport world.
- Certificate Authority (CA) – an organization that issues certificates. Like the government: an organization that issues passports.
- CA’s private key – the secret that others recognize as authentic. Like the watermark on the passport.
- Certificate server – can be used to create certificates for people (or organizations) on request. Like the passport printing press.
- Certificate – a digital file holding some personalized information, such as the name and location of the owner, or the URL of a website. Like the passport in the physical world.
- Web browser – holds a list of valid authorities. Much like I, as a person, know which countries issue what kind of passports and what their watermarks look like. If I did not trust a country’s passports (which rarely happens for an individual), I would not accept those passports as valid ID. Likewise, if I don’t trust a CA, I can remove it from my browser (which people typically never do).
I could also add authorities to the browser myself, such as my employer, so that I trust my employer’s “passports” for services on the corporate intranet, such as the internal email system. Much like I trust employee badges from my colleagues, but not directly those from other companies.
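You can actually peek at this list of “trusted governments” yourself. On most systems, Python’s standard ssl module loads the platform’s default trust store; the exact count varies per operating system (and may even be zero on a minimal install), so treat the numbers below as illustrative only.

```python
import ssl

# Load the platform's default trust store: the software equivalent of the
# list of governments whose passports we are willing to accept.
ctx = ssl.create_default_context()

# One dict per trusted root certificate that was loaded from the system.
ca_certs = ctx.get_ca_certs()
print(f"{len(ca_certs)} trusted root authorities loaded")

# Show the subject (the "name on the passport") of a few of them.
for cert in ca_certs[:3]:
    print(dict(field[0] for field in cert["subject"]))
```

Every single entry in that list can, technically, vouch for any website in the world, which is the core structural weakness discussed below.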
Digesting the news on the incident, I was deeply shocked by a few facts that came out of the news reports, the formal investigations and the things I learned about SSL security in general:
- The company that was hacked had taken virtually NO security measures at all. It did not even use a virus scanner; the servers used for generating certificates, the web servers, and the users’ workstations were all in the same network and Windows domain; and the public web server was running outdated and unpatched software.
- They also decided to keep the security breach quiet, and would probably have kept it quiet had they not been exposed by the Iranian user discovering the fake Gmail certificate.
- After the initial news came out, both the company and Dutch government officials stated that government IT infrastructure was not affected, as it was hosted on a separate PKI environment. It later turned out that this domain had also been compromised. In other words, the officials had been lying, or, best case, hoping and assuming the government domain was still safe.
- To avoid problems with certain government websites using Diginotar certificates, the government initially recommended that users ignore the security warnings from their browsers and just continue with the potentially unsafe connection. They also put pressure on the browser makers not to revoke the root certificate, or at least to keep parts of it active for a while (even though it could no longer be trusted).
- Diginotar had to undergo frequent security audits. They passed the audit every time.
How can an auditor pass a security audit of a company that does not even use a virus scanner or isolated networks? (Actually, I know the answer to this question, as I was involved in similar security audits by a similar company a long time ago, before joining EMC. Frankly, to put it mildly: the auditor did not do a very deep investigation, but produced a big, fat, nice-looking, expensive report anyway. I could punch many holes in their findings, and found a big security problem (simple, short, hard-coded database passwords in application code) a day later, causing a severe project delay. And I am not even a true security specialist!)
So much for the severe failures on the Dutch side. What about the worldwide internet security mechanism as a whole?
- Major web browsers could not revoke the root CA without issuing a patch for the whole browser. Mozilla, for example, came out with a Firefox update just to revoke Diginotar as a root CA. Users who don’t download the update remain exposed. On my iPad I could not even find the list of certificate authorities, let alone manipulate it, so the iPad probably stays exposed until you download new firmware.
- There is a mechanism to revoke issued certificates (OCSP), but it can only revoke individual certificates, not the root authorities. Since the hackers can create as many new certificates as they like, the OCSP mechanism fails completely here.
- The number of certificate authorities in a browser like Mozilla Firefox is quite large (a few hundred? Check for yourself…)
- Any of these is technically capable of generating certificates for any website in the world. It is as if Kazakhstan could theoretically create passports for American citizens. There is no way to fence a CA off to a certain top-level or sub-level domain, region, or anything else.
- CAs do not seem to publish a complete list of the certificates they have issued that are still valid, which would allow end-user software (e.g. the web browser) to check a presented certificate against the official list. Any false certificate would be spotted quickly and could then be blacklisted.
- There is a hierarchy of CAs where you would expect a simple flat list.
A top-level CA can create a root certificate for a sub-CA, which can create a root certificate for a sub-sub-CA, and so on. The depth of the tree can technically be limited by the root (top-level) CA, but in many cases this isn’t done. There is a saying: “If I trust my friend, and my friend trusts the president, does that mean I trust the president?” Well, exactly that seems to be the case in the PKI world. And if I trust my friend, he can make me trust his sister, she can make me trust her hairdresser, the hairdresser can make me trust his dentist, and the dentist’s little brother, and in turn the little brother’s girlfriend from school. All without notifying me. All of them can indirectly issue certificates on my behalf.
- What would have happened if nobody had discovered the fake certificates and Diginotar had kept quiet (since going public about being hacked would probably bankrupt them)? (In fact, another hack had taken place a few months earlier, and the hacker of Diginotar claims to have similar access to a few other authorities as well…)
- Of the hundreds of other authorities, how many have been compromised, too? How many of the compromised ones know they are compromised? How much information has already been leaked this way over the years, to organized crime and dictatorial regimes?
- Which other CAs undergo similar “audits” that pass every time as long as the auditor gets paid, without any deep investigation that would disqualify an infrastructure that simply isn’t safe?
- How easy is it for a country (let’s say, a dictatorship somewhere) to enroll as a CA themselves, like Honest Achmed (tip!), so they can issue similar certificates for all the websites they want? Or to have a spy inside a CA organization helping them create a few fake certificates every now and then?
- What happened to the Iranians whose security was compromised? Will they be identified as opponents of the Iranian government?
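The chain-of-trust concern from the bullets above can be modeled in a few lines: trust is transitive, so a browser accepts any certificate whose issuer chain terminates at a trusted root, no matter how many intermediaries (the friend, the sister, the hairdresser…) sit in between. All names in this sketch are hypothetical.

```python
# Minimal model of the PKI chain of trust: a browser trusts a set of root
# CAs, and accepts any certificate whose issuer chain leads back to one.
# Maps each certificate name to the name of its issuer (all hypothetical).
issued_by = {
    "sub-ca.example":     "root-ca.example",
    "sub-sub-ca.example": "sub-ca.example",
    "dentist-ca.example": "sub-sub-ca.example",
    "*.google.com":       "dentist-ca.example",  # the leaf certificate
}
trusted_roots = {"root-ca.example"}

def chain_is_trusted(name: str) -> bool:
    """Walk the issuer links upward until we hit a root (or run out)."""
    while name in issued_by:
        name = issued_by[name]
    return name in trusted_roots

# The browser user never explicitly approved any of the intermediate CAs,
# yet the leaf is accepted because the chain terminates at a trusted root.
assert chain_is_trusted("*.google.com")
assert not chain_is_trusted("unknown.example")
```

The uncomfortable part is that the user only ever chose to trust `root-ca.example`; every intermediate in the walk was added to the chain without notifying anyone.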
I have many more questions, and much more has been written on this. Google for “Diginotar hack” (tip: exclude .nl sites in Google using “-site:.nl” if you don’t speak Dutch) and you will find dozens of news articles, comments, explanations, etc.
But I really think we cannot go on like this in the ICT world. And it’s not just the PKI certificate mechanism; it’s the whole way we deal (or don’t deal) with security in the ICT business. The information we store in the cloud is too important and too valuable to be protected so poorly.
We, as ICT professionals and companies, have to make sure that normal people, who don’t understand IT like we do, are protected against crime, dictators, privacy violations, and other infringements of basic human rights. We can no longer assume it will not happen to our systems, our customers, our information. We cannot just leave it to the security guys to bolt on a few firewalls, encryption appliances, and intrusion detection systems. Security cannot simply be added to a cloud service, just as you cannot add a flight-safety thingy to a crappy airliner. You have to design your systems and applications from the ground up with security in mind, just like the Boeing and Airbus guys.
They don’t assume some problems will never happen; they design their aircraft to handle the biggest forces, the most severe engine failures, the most unexpected communications problems, and so on, and the 400-ton machine should still keep flying instead of crashing into the ground.
We must change our thinking.
From “how do we put some security thingy on this service to keep the auditors happy before we can take it live?” to “what if everything fails, are we still safe then?”
We have a long way to go…