Desktop security: the application/data distinction got blurred
April 1, 2011
In the old days, when I started messing around with computers for fun as a young geek guy, computer security was pretty simple.
In those times we were using 8- or 16-bit PCs with MS-DOS (for the poor guys) or, for the wealthy like myself, a Commodore Amiga or comparable computer with real magic inside. Who else around 1988 had 4-channel 8-bit stereo sound, 4096 colors, coprocessors for audio and graphics, true multitasking, and a mouse-driven GUI handling multiple screens and windows, capable of running a word processor, graphics editor, sound tracker and some other stuff, all at the same time in 512 KB RAM?
Those systems had no memory management capabilities, so especially when multitasking, a single program that messed up memory allocation could hang the whole system. Nowadays such a program would just cause the application to crash (resulting in a core dump or general protection failure) and that's it (if you have a halfway decent operating system). Still, because these programs were lean and mean (the full GUI word processor with all bells and whistles was no more than a few hundred kilobytes in size), my Amiga computer was remarkably stable.
The spartan MS-DOS PCs did not do multitasking, and the Windows version of those days was sort of experimental, and a far cry from what we could do with our Amigas.
Still, I must say the DOS software was also pretty stable and low profile (well, developers had to fit everything – application and some data – in 640K, or what was left after loading DOS and drivers).
What about security? Well, we had no internet. Initially we were running standalone (can you imagine running your computer for weeks without connecting it to the rest of the world?) and only got new software by copying floppy disks from friends owning the same platform.
Computer viruses did exist, but you would run a virus scan every now and then to wipe them off your system – in particular, you would check floppies after they had been exposed to someone else's system.
I had a few virus infections back then, but they were easy to remove and mostly harmless to my data.
In the early nineties we started using Bulletin Board Systems (BBSes) with modems (at 2400 bits/sec, which is about 1/3000th of what I have available today on my DSL line – and I only have a basic subscription).
You could download a simple picture in a minute, a simple tool in a few minutes, and a whole game on floppy disk in just under an hour or so. A mail (not internet mail, but a predecessor called Fidonet) to someone in another country took anywhere between a few hours and a few days because the BBS called up an “uplink” BBS (by modem) and so on until the mail was delivered.
Viruses became a bit more prominent now that we had direct modem connections to central BBS systems. Still, we were not too concerned about security.
First of all, even if we got infected by a virus, it could not spread fast: it had no means of spreading other than waiting for a floppy to be inserted into the system, and we did not upload that much to the BBS either. And worst case it could delete some data on your standalone system, of which (ahem) of course we had backups anyway (sure).
Besides, the data on the system was not that valuable to me. I never lost a document to viruses; the most severe data loss I had was because a college friend tossed one of my 5.25 inch floppies – with a report document containing hours of work – through the classroom like a boomerang, and it ended up in a dirty sink with coffee spots all over it. No longer readable. It took a very delicate process to get the round, flexible magnetic disc out of its envelope, clean it carefully with clean warm water, and let it dry for a few hours before I could read my document again (with a few read errors, but eventually all data was still there ;-)
Back to my initial point on security and what went wrong along the way. Fast forward to today, and we are all desperately trying to protect ourselves against viruses, trojans, spam, botnets, rootkits, keyloggers, spyware, identity theft and other malware (do I need to go on?)
And as an IT expert, try explaining to your parents, for example, what they need to do to make sure their PC does not get infected with all that crap. How do you explain that opening the wrong document, clicking the wrong web link, or answering "OK" to a question their own trusted computer asks them (in a popup) can compromise security?
I quote my father: “I have nothing to hide so I don’t care if someone steals information from my PC”….
And how can you justify that they need a password of at least 8 characters, with at least two capitals, one numerical character and a special character like "!" or "_", to make sure the password is secure enough? And to make things worse, that they are not allowed to write it down, have to memorize a different password for each website, and need to change all of them at least every three months? I don't even try to explain.
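Just to show how arbitrary such a rule set looks once written down, here is a minimal sketch of a checker for the policy above. The function name and the exact set of "special" characters are my own assumptions (the text only gives "!" and "_" as examples):

```python
import re

def meets_policy(password: str) -> bool:
    """Check a password against the (hypothetical) rules quoted above:
    at least 8 characters, at least two capitals, one digit, and one
    special character. The special set {'!', '_'} is assumed from the
    two examples in the text."""
    return (
        len(password) >= 8
        and len(re.findall(r"[A-Z]", password)) >= 2   # two capitals
        and re.search(r"[0-9]", password) is not None  # one digit
        and re.search(r"[!_]", password) is not None   # one special char
    )

print(meets_policy("Tr_usted9A"))  # True: 10 chars, 2 capitals, digit, '_'
print(meets_policy("password"))    # False: no capitals, digit or special
```

Of course, a password that passes this check is not necessarily secure, and one that fails it is not necessarily weak – which is exactly the point.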
In my opinion, people (as in "users") are not the weakest link in computer security. The system complexity and lack of transparency we have introduced over the last 15 years or so is the weak link – we as an IT industry will never get security right if we don't make it easy and simple for non-geek users. But that is the start of a very long discussion, and I'll leave it for another time.
Data or Programs?
We are missing another point here. Back in the old days you could split all files into two basic types. You had programs (.EXE, .COM in DOS) and you had data (such as a picture, text document, spreadsheet).
Data could not "do" anything unless processed by an application. You could never infect a computer with a virus by loading a picture or copying just a text document from PC A to PC B. Viruses hid in applications, so if you never executed untrusted applications, your system could never be infected. Even in the first World Wide Web pages with HTML v1, a web page could only contain static content, and dynamic content had to be generated by the server (such as with CGI scripts). No algorithms of any kind could be hidden in HTML markup.
But then we wanted to make neat, nifty, complex applications that could handle pieces of software inside data objects. It started with macros inside word processing and spreadsheet documents, because that way you could add functionality not natively present in the office application. At first this was not so bad, as a macro could only operate on its own document and never touch data outside of it (unless it asked you to save something). But macro languages grew ever more powerful, and eventually just opening a document could infect your system – and scripting spread to web pages, e-mails and document readers as well.
Even worse, some applications today are no more than web content. And in the app stores of various vendors, there is often no clear distinction between software and data at all: everything is treated as "content", no matter its purpose.
I can’t explain to my dad anymore that even websites from more or less trusted companies can be infected – he thinks he can’t be infected by just reading a web site, right? Or opening an e-mail? Ehm….. [long silence]
Security vs. complexity
In order to make computer systems more secure, we also increase complexity. We now have trusted computing in OSes, Data Execution Prevention in the CPU, secure browsing in browsers, client-based firewalls, desktop policies, and weekly security updates – not just for the OS but also for the document readers, office software and everything else (no day passes without one of our installed goodies popping up to ask if we want to update). And we have virus scanners that slow down the system and eat up to 50% of its performance, because they run in paranoia mode all the time and have to scan each and every file that is read or written, every mail that is sent or received, every HTML transfer and much more – and they still miss some 10% of all recent malware.
It will only get worse. I was focusing on desktop security as an example, but similar problems are going on with servers, databases, storage, networking, tablets, etc (Did I hear someone say “cloud computing” ?).
Unless we go back to basics and make a couple of architectural changes in the way we handle software and data, I guess we will always be one step behind.
Computer security is a complex knowledge area. It is certainly not as simple as changing just one thing. But designing computers that treat algorithms (software programs) differently from data might be a good start. Then, not allowing a computer to run any program that has not been validated by the user in some way, and certainly not allowing code hidden in data objects to run, might solve a significant part of our problems. Maybe a central repository that "signs" every executable, library, Java object etc. could help here (modern Linux distributions already do this, but only for "installed" software from the Linux repositories, not for the Java junk inside a browser's cache). Or an OS that destroys all non-OS or non-installed app components when the app closes or the OS reboots. Virtual machines and sandboxes could help here – isolating every tiny app component in its own secure container, unable to access anything other than its own data. I'm no expert in that field – comments welcome.
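The "validated software only" idea above can be sketched as a default-deny whitelist: before anything executes, its digest is compared against a list of programs the user (or a distribution repository) has explicitly approved. This is only an illustration of the principle – the repository contents and the `may_run` gate are my own invented names, not any real OS mechanism:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Content digest used as the program's identity."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical "central repository": digests of executables that the
# user or a trusted distribution has explicitly validated.
trusted = {sha256_of(b"known-good program bytes")}

def may_run(program_bytes: bytes) -> bool:
    """Default-deny: only code whose digest is on the whitelist runs.
    Code smuggled inside a data object is simply never on the list."""
    return sha256_of(program_bytes) in trusted

print(may_run(b"known-good program bytes"))   # True
print(may_run(b"macro hidden inside a doc"))  # False
```

Real signing schemes use public-key signatures rather than a bare hash list, so the repository can vouch for software it has never seen on your machine, but the default-deny principle is the same.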
Can it be done? I believe so. Is it easy? No – we are tempted every time to violate the rule, because it makes computers more usable for their human users (and therefore also for malware). Look at the most user-friendly, most productive devices out there, such as smartphones and iPads. As good as they are at offering simplicity and productivity, they are just as good at not offering the owner protection against data theft. It is much easier for the developer of those devices to allow 3rd-party applications access to everything – because this speeds up development, lowers cost and makes the application more user-friendly.
But we tend to forget about the downside.
“No worries, the next patchset will fix another few security holes. In 10 years, all holes will be closed.”