If you run an e-commerce site and handle card payments yourself (rather than using a third party such as PayPal, SagePay etc), the phrase “PCI DSS compliance” is likely to make your heart sink. This annual ritual is essentially a security audit forced on you by the Payment Card Industry (basically a group of banks), who want to ensure that you are taking adequate measures to keep their customers’ card details safe. DSS stands for Data Security Standard, a set of requirements for anyone who processes cards. If you want to be able to process cards issued by any of the major brands (Visa, AMEX, MasterCard etc), you need to be certified as DSS compliant.
So how do you become PCI DSS compliant? The task is handled by a QSA (Qualified Security Assessor) – a third-party security assessor, approved by the PCI, who performs a security scan of your server and gives you a pass or fail. You can then wave this certificate around and say “look, I’m PCI DSS compliant, let me process credit cards”.
Things Aren’t So Simple In Practice …
The problem with PCI DSS is that the security benefits are debatable, and the whole thing seems to be as much about generating money for the card industry – and the QSAs who feed off it – as about protecting cardholders. Everyone profits except you, the website owner. Let’s look at these points in more detail.
First, the security scan. If you think that a QSA is a team of dedicated ethical hackers who thrive on the buzz of finding ingenious ways to penetrate your security, you’ve got it all wrong; they are businesses who are in it for the money. That impressive-looking 100-page report that you received from the QSA is actually just a Nessus report which they’ve tarted up by adding their logo to the bottom of each page (for those of you who don’t know, Nessus is a very good, freely available security scanner). I’ve seen clients quoted thousands of pounds for such a scan, which involves little more than entering an IP address and clicking ‘go’.
Nessus is a great tool, but there is only so much it can reliably tell from a remote scan. For instance, Nessus will grab the version number from the Apache banner and compare it against its extensive list of known exploits. If it finds a match, it reports it; for instance, “You appear to be running Apache x.x. This version of Apache has a vulnerability in mod_env which allows a remote attacker to gain a remote shell on your server”. Great, except banner strings aren’t reliable. If you use a distro such as RHEL, CentOS, Fedora, Debian or Ubuntu (and these distros make up the vast majority of the server market) that implements security back-porting, the exploit may well have been fixed, but the Apache version number will not have been changed. QSAs don’t know what back-porting is though, and will report that you *have* got a vulnerability. The onus is then on you to prove you haven’t (for instance, by quoting the RPM changelog). Should it not be the other way around? If you think I have an exploit which allows you to gain a remote shell on my box, prove it; don’t force me to waste time disproving it.
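When a scan flags a banner-version “vulnerability” on a back-porting distro, the package changelog is usually the quickest rebuttal. A sketch, using an illustrative CVE number (substitute whichever one the QSA flagged) and assuming a RHEL-family or Debian-family system with standard package names:

```shell
# RHEL/CentOS/Fedora: search the package changelog for the back-ported fix
rpm -q --changelog httpd | grep -i 'CVE-2011-3192'

# Debian/Ubuntu equivalent (changelog path may vary by release):
zgrep -i 'CVE-2011-3192' /usr/share/doc/apache2/changelog.Debian.gz
```

A matching changelog entry is normally enough evidence to get the finding dismissed, even though the banner version never changed.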
Or you can just turn your Apache banner off, and then the scan doesn’t report any problems with your Apache version, even if you are running an old, exploit-ridden version of Apache. If the QSAs were in any way professional, they would notice you were trying to trick them by turning the banner off, and would manually intervene: attempt to establish the Apache version in other ways (for instance, by looking at the default Apache version that ships with your distro), check whether that has any known exploits, and then explain this to you. But they don’t, because they are stupid and lazy, and only interested in your money.
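Hiding the banner comes down to Apache’s `ServerTokens` and `ServerSignature` directives, and it takes seconds to confirm what a remote scanner actually sees. A sketch, against a hypothetical host:

```shell
# See exactly what the Server header leaks to a remote scanner
curl -sI http://example.com/ | grep -i '^Server:'
# With "ServerTokens Prod" (and "ServerSignature Off") in httpd.conf this shows
# just "Server: Apache"; with the default "ServerTokens Full" it reveals the
# version number and module details that Nessus matches against.
```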
A couple of years ago, one of my clients failed a PCI DSS scan because of (amongst other things) an exploit in Apache’s mod_proxy_ftp. Mod_proxy_ftp isn’t really used that much, and it would be rare to find it enabled on a web server (though yes, I appreciate that many security issues come about because things have been accidentally turned on). Needless to say, my client didn’t have it enabled. Convincing the QSA that we weren’t vulnerable BECAUSE WE DIDN’T HAVE THE BLOODY THING LOADED proved to be tedious. In the end we had to send them a copy of httpd.conf (even though we could have been loading the module in one of the other Apache config files). Again, shouldn’t the onus be on the QSA to prove that we are vulnerable, not on us to prove that we aren’t?
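For what it’s worth, there is a far quicker way to settle the “is the module loaded?” argument than mailing out httpd.conf: ask Apache itself, which accounts for every included config file. A sketch (the control binary’s name varies by distro):

```shell
# List the modules Apache has actually loaded, wherever they were configured
apachectl -M 2>/dev/null | grep -i proxy_ftp    # Debian-family: apache2ctl -M
# No output means mod_proxy_ftp simply isn't loaded, whatever the scan claims.
```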
So, the QSA has performed a scan of your server, and they’ve sent you a pretty report showing the areas you failed on (because you will fail). You then have the choice of asking the QSA to remedy the problems (so let’s get this straight: they do a half-arsed scan of your server, find dozens of non-existent security holes, then charge you even more money to prove that the alleged security holes don’t actually exist), or hiring someone like me. I have half a dozen clients who come to me each year for help with their DSS scan, and I always feel bad that they are even having to pay me. Over the years I’ve only ever seen one genuine issue (it’s one that comes up a lot): use of weak SSL ciphers in Apache and SSL-aware services such as pops and imaps; the rest of the time it’s just a big pile of false positives, and I have to spend an hour or two trawling through RPM changelogs to prove that the alleged vulnerabilities have been fixed.
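The weak-cipher finding, at least, is real, and easy to both verify and fix yourself. A sketch, probing a hypothetical host for old export-grade ciphers (assuming your local OpenSSL build still offers them), with the usual Apache remedy shown in comments:

```shell
# Try to negotiate an export-grade cipher; a handshake failure is what you want
openssl s_client -connect example.com:443 -cipher EXPORT < /dev/null

# The usual fix in the Apache SSL vhost (pops and imaps daemons such as
# dovecot have equivalent cipher-list settings):
#   SSLProtocol all -SSLv2 -SSLv3
#   SSLCipherSuite HIGH:!aNULL:!MD5:!EXPORT:!LOW
```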
Some of the alleged vulnerabilities are just plain silly. SSL exploits which the change logs show were fixed over a decade ago, bugs that are only present on x86 architecture when the target server is actually x64, exploits which don’t affect the distro in question etc. Then there are web exploits. This is one area where I’ve always found Nessus to be weak, perhaps because there is so much variety in how people set up their websites. For instance, the scan might check for the presence of /cgi-bin/oldbuggyscript.pl. If it doesn’t receive a 404, it assumes the script exists. If you’re doing something funky with your .htaccess, such as redirecting back to your homepage or sending a custom 404 page (but forgetting to actually send a 404 header), you’ll fail.
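That particular web false positive is easy to check for, and to fix, yourself. A sketch, using a made-up script path:

```shell
# What status code does a request for a non-existent script really return?
curl -s -o /dev/null -w '%{http_code}\n' http://example.com/cgi-bin/no-such-thing.pl
# Anything other than 404 here (a 200 from a "friendly" error page, a 302
# back to the homepage) makes the scanner assume the script exists.

# In .htaccess, a local path keeps the 404 status while serving a custom page;
# a full URL would turn it into a 302 redirect and trip the scan:
#   ErrorDocument 404 /errors/notfound.html
```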
Again, the QSA could easily eliminate many of these false positives by manually reviewing the report and using a bit of common sense. “Is it likely that the client is running oldbuggyscript.pl, a popular script back in the 90s which has been known to be vulnerable for over 15 years? No, so let’s investigate a bit more and see if this really is the case”. But they don’t; instead they hand the client a report full of scare-mongering, and with any luck the client – who may not be particularly technically minded – will say “thank God you found all these problems for me. I won’t be able to sleep at night until I know I’m safe from hackers. Here’s a big pile of money, please fix these problems”.
Now, at this point you may be thinking, “isn’t it better to be safe than sorry? If there’s even a slight chance that an exploit exists, it should be investigated”. That’s perfectly true, but it’s also not fair on webmasters who have to pay someone (be it someone like me, or the QSA), or waste their own time, to refute a bunch of obvious false positives.
There’s also the question of just how thorough a DSS scan is, and whether it fosters a false sense of security. At first glance it seems pretty thorough – we’ve seen how even quite unlikely bugs will be flagged. But I’ve never seen a PCI DSS scan that does any form of basic password brute-forcing: for instance, pick the names of a dozen or so common system accounts (admin, root, test, webmaster etc) and try to access them over SSH using a list of 1,000 or so common passwords. Your root password could be ‘letmein’ but a PCI DSS scan won’t spot it.
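You don’t need a scan to close that particular door, either; a glance at sshd_config tells you whether remote password guessing is even possible. A sketch, assuming stock OpenSSH paths:

```shell
# Is password authentication reachable at all?
grep -Ei '^(PermitRootLogin|PasswordAuthentication)' /etc/ssh/sshd_config
# "PermitRootLogin no" plus "PasswordAuthentication no" (key-based logins only)
# makes the strength of the root password irrelevant to a remote brute-forcer.
```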
For a proper audit, there needs to be some degree of manual interaction. Sure, tools like Nessus are great for quickly probing for thousands of known bugs, but they still need to be operated by someone with half a brain. Nessus won’t tell you if there’s a link in 34pt font on your home page saying “click here to download all customer details from the database”. Nessus won’t tell you if, when MySQL reaches max connections (perhaps because in your pen testing, you’ve flooded Apache with requests), the PHP code spits out an error which reveals the username and password it is trying to connect to MySQL with. Nessus won’t tell you if the form data on your checkout page can be tampered with to allow a customer to place an order without paying. Stuff like that needs a human to find.
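The credential-leaking error message, at least, has a one-line mitigation, though only a human will tell you whether the code paths around it are safe. A sketch of the relevant php.ini settings:

```shell
# php.ini: keep errors out of customers' browsers and in the logs instead
#   display_errors = Off
#   log_errors     = On
# Handle the max-connections case in code too: catch the failed database
# connection and show a generic error page rather than the raw error.
```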
Similarly, there is only so much you can learn from a remote scan. One of my clients had an e-commerce site that emailed him when a customer placed an order; the (unencrypted) email contained the customer’s card details. That’s a whopping big security hole, but not one a PCI DSS scan would spot.
So is PCI DSS creating a false sense of security? If, after you finally pass a DSS scan, you think “right, that’s me safe from hackers for another year”, then definitely. It’s easy to be PCI DSS compliant but still wide open to exploitation. If you view PCI DSS as just one part of the security jigsaw, you’re thinking more prudently. Sadly, the majority of webmasters seem to think the former.
Is there a better solution? Not really. Webmasters are always going to be lax about security because it takes time and money – two resources which are always in short supply. In an ideal world e-commerce webmasters would be concerned enough about security that they would initiate regular security auditing themselves, but that isn’t going to happen, so perhaps some kind of enforced auditing is necessary; but it should be a hell of a lot better than the present system of greedy banks and bottom-feeding QSAs.
Actually, perhaps there is a solution. If an e-commerce site fails to keep your data adequately secure, the law is on your side. If you suffer monetary loss (for instance, if a fraudster gets hold of your card details), you can launch a Small Claim in the County Court – though whether the site owner was indeed negligent in failing to keep his server secure comes down to the discretion of the judge, and that is going to vary a lot from one judge to another. In the UK we also have the ICO (Information Commissioner’s Office), a government body which enforces data protection law. If a webmaster is slap-happy with a customer’s personal details (storing them for longer than necessary, storing private information that isn’t required, failing to adequately secure them), the ICO have the power to heavily fine the webmaster. Not that the ICO really seem to use these powers. I’ve had dealings with the ICO in the past, such as the time Barclaycard blatantly lied to me about holding my account statements from over 6 years ago (I was reclaiming credit card penalty charges; and yes, I won: they settled out of court and paid up with 29% interest). Even though the ICO had received dozens of similar complaints about Barclaycard, they did little more than give them a nudge (I suspect the conversation was something along the lines of “Now then old chap, we don’t want any trouble, so if you could give Mr Smith his personal data it would be jolly much appreciated”).
I digress. If the ICO had cojones and webmasters were aware of the legal implications of failing to stay secure, perhaps PCI DSS wouldn’t be needed. The banks wouldn’t be quite so rich, e-commerce webmasters wouldn’t be quite so poor, and the whole industry of charlatan QSAs that PCI DSS has spawned would be gone. We can only dream …