Consider me a career-long computer security curmudgeon. When a vendor guarantees its latest and greatest will defend the world against all computer maliciousness, I yawn. Been there; it didn't pan out.
All computer security vendors want us to think that signing on the dotted line and sending them a check will mean our worries are over. Rarely do they deliver. And although a little marketing hype never really hurts -- we're all used to taking it with a grain of salt -- some vendors can be caught outright lying, expecting us to buy what amounts to security snake oil.
If you're a hardened IT security pro, you've probably had these tactics run by you over and over. It's never only one vendor touting unbelievable claims but many. It's like a pathology of the computer security industry, this all-too-frequent underhanded quackery used in the hopes of duping an IT organization into buying dubious claims or overhyped wares.
Following are seven computer security claims or technologies that, when mentioned in the sales pitch, should get your snake-oil radar up and primed for false promises.
Security snake oil No. 1: Unbreakable software
Believe it or not, vendors and developers alike have claimed their software is without vulnerability. In fact, "Unbreakable" was the name of one famous vendor's public relations campaign. The formula for this snake oil is simple: The vendor claims that its competitors are weak and don't know how to make invulnerable code the way it does. Buy the vendor's software and live in a world forever without exploits.
The last vendor to claim this had its software exploited so badly, so quickly that it should serve as notice to every computer security organization never to make such a claim again. Amazingly, even as exploit after exploit was discovered in the vendor's software (the vendor is best known for database software), the "Unbreakable" ad campaign continued for another year. We security professionals wondered how many CEOs might have fallen for the PR pitch, not realizing that the vendor's support queues were full of calls demanding quick patches. To this day, dozens of exploits are found every year in that vendor's software.
Of course, this vendor isn't alone in its illusions of invulnerability. Browser vendors used to kick Microsoft for making an overly vulnerable browser in Internet Explorer. But then they would release their "invulnerable" browsers, only to learn they had more publicly disclosed vulnerabilities than the browser they claimed was overly vulnerable. You don't hear browser vendors bragging about making perfectly secure browsers anymore.
And then there's the infamous University of Illinois at Chicago professor who consistently lambasts software vendors for making software full of security holes. He chides and belittles them and says they should be subject to legal prosecution for making imperfect software. He even made his own software programs and challenged people to find even one security bug, backing this challenge with a reward. Not surprisingly, people found bugs. Initially he tried to claim that the first found vulnerability wasn't an exploitable bug "within the parameters of the guarantee." Most people disagreed. Then someone found a second bug, in another of his programs, and he paid the reward. Turns out making invulnerable software is pretty difficult.
I don't mean to negate that professor's contributions to computer security. He's one of the best computer security experts in the world -- truly a hero to the cause. But you won't hear him claim anymore that perfect software can be made.
Remember these high-profile lessons in humility the next time you hear a vendor claim that its software is invulnerable.
Security snake oil No. 2: 1,000,000-bit crypto
Every year a vendor or coder no one has heard of claims to have made unbreakable crypto. And, with few exceptions, they fail miserably. Although the claim sounds similar to unbreakable software, a closer technical look reveals a very different flavor of snake oil at work here.
Good crypto is hard to make; even the best in the world don't have the guts (or sanity) to claim theirs can't be broken. In fact, you'll be lucky to get them to concede that their encryption is anything but "nontrivial" to compromise. I trust the encryption expert who doesn't trust himself. Anything else means trusting a snake-oil salesman trying to sell you flawed crypto.
Case in point: A few years ago a vendor came on the scene claiming he had unbreakable crypto. What made his encryption so incredible was that he used a huge key and distributed part (or parts) of the secret key in the cloud. Because the key was never in one place, it would be impossible to compromise. And the encryption algorithm and routine were secure because they were a secret, too.
Most knowledgeable security pros recognize that a good cipher should always have a known encryption algorithm that stands up to public review. Not this vendor.
But the best (and most hilarious) part was the vendor's claim that his superior cipher was backed by a million-bit key. Never mind that strong encryption today is backed by key sizes of 256-bit (symmetric) or 2,048-bit (asymmetric). This company was promising an encryption key that was orders of magnitude bigger.
Cryptologists chuckled at this for two reasons. First, when you have a good encryption routine, the involved key size can be small because no one can brute-force all the possible values of even relatively small encryption keys -- think, more than the "number of atoms in the known universe" type of stuff. Instead, to break ciphers today, cryptologists find flaws in the cipher's mathematics, which allow them to rule out huge swaths of the possible key space. In a nutshell, found cryptographic weaknesses allow attackers to develop shortcuts for faster guessing of the valid possible keys.
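That brute-force arithmetic is easy to sketch. The attacker's guess rate below is a hypothetical assumption (a trillion keys per second, far beyond any known capability), chosen only to show how hopeless exhaustive search is even at modest key sizes:

```python
# Back-of-envelope brute-force estimate for modern symmetric key sizes.
# GUESSES_PER_SECOND is a deliberately generous hypothetical attacker.
GUESSES_PER_SECOND = 10 ** 12
SECONDS_PER_YEAR = 31_557_600  # Julian year

for bits in (128, 256):
    keyspace = 2 ** bits
    # On average an attacker searches half the key space before succeeding.
    avg_years = keyspace // 2 // GUESSES_PER_SECOND // SECONDS_PER_YEAR
    print(f"{bits}-bit key: ~10^{len(str(avg_years)) - 1} years on average")
```

Even the 128-bit case works out to billions of times the age of the universe, which is why real attacks target the cipher's math, not the key space.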
All things being equal, a proven cipher with a smaller key size is considered more secure. A prime example is ECC (elliptic curve cryptography) versus RSA. Today, an RSA-protected key must be 2,048 bits or larger to be considered relatively secure. With ECC, 384 bits is considered sufficient. RSA (the original algorithm) is probably nearing the end of its usefulness, and ECC is just starting to become a primary player.
So saying you have a million-bit key is akin to saying your invented cipher is so sucky it takes a million bits of obscurity (versus 384 bits) to keep the protected data secure. Five thousand bits would be overkill for any good cipher, because no one is known to be able to come close to breaking even 3,000-bit keys from a really good cipher. When you make a million-bit key, you're effectively admitting you don't trust your cipher to be good at smaller key sizes. This paradox is perhaps best appreciated by cipher enthusiasts, but, believe me, you'd slay the audience at any crypto convention by repeating this story.
Second, if you were required to use a million-bit key, you would somehow have to communicate that huge mother from sender to receiver, adding roughly 125KB to every exchange for the key material alone. Suppose you encrypted an email containing a single character. The resulting encrypted blob would be more than a hundred kilobytes. That's pretty wasteful.
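The size arithmetic is simple enough to check for yourself; the comparison values are the standard key sizes mentioned earlier:

```python
# Raw size of the "million-bit" key against real-world key sizes.
print(1_000_000 // 8)  # 125000 bytes (~122KB) just for the key material
print(2_048 // 8)      # 256 bytes for a 2,048-bit RSA key
print(256 // 8)        # 32 bytes for a 256-bit symmetric key
```

A key hundreds of times larger than an RSA modulus, and thousands of times larger than a symmetric key, buys nothing if the underlying cipher is weak.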
A "secret" cipher with a million-bit key split across the cloud was enough to do that crypto in. No one took it seriously, and at least one impressive encryption expert, Bruce Schneier, publicly mocked it.
The worst part was that the vendor claimed to have proof that it sold $5 million of its crypto to the military. I hope the vendor was lying; otherwise, the military purchaser has a lot of explaining to do.
Security snake oil No. 3: 100 percent accurate antivirus software
Also akin to the claim of unbreakable software is the claim from multiple vendors that their anti-malware detection is 100 percent accurate. And they almost all say this detection rate has been "verified independently in test after test."
Ever wonder why these buy-once-and-never-worry-again solutions don't take over the world? It's because they're a lie. No anti-malware software is, or can be, 100 percent accurate. Antivirus software wasn't 100 percent accurate when we only had a few viruses to contend with, and today's world has tens of millions of mutating malware programs. In fact, today's malware is pretty good at changing its form. Many malicious programs use "mutation engines" coupled with the very same good encryption mentioned above. Good encryption introduces realistic randomness, and malware uses the same properties to hide itself. Plus, most malware creators run their latest creations against every available anti-malware program before they begin to propagate, and then they self-update every day. It's a neverending battle, and the bad guys are winning.
Some vendors, using general behavior-detection techniques known as heuristics and change-detecting emulation environments, have valiantly tried to up their accuracy. What they've discovered is that as you enter the upper ranges of detection, you run into the problem of false positives. As it turns out, programs that detect malware at extremely accurate rates are bad at not detecting legitimate programs as malicious. Show me a 100 percent accurate anti-malware program, and I'll show you a program that flags nearly everything as malicious.
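The false-positive trade-off falls out of simple base-rate arithmetic. The detection rate, false-positive rate, and file counts below are hypothetical illustrations, not measurements of any product:

```python
# Hypothetical scan: a scanner with 99.9% detection and a 1% false-positive
# rate, run against a disk where 1 in 1,000 files is actually malicious.
files = 1_000_000
malicious = files // 1000                  # 1,000 truly malicious files
benign = files - malicious                 # 999,000 benign files

true_positives = malicious * 999 // 1000   # 99.9% detection -> 999 flagged
false_positives = benign // 100            # 1% false positives -> 9,990 flagged

flagged = true_positives + false_positives
precision = true_positives / flagged
print(f"{flagged} files flagged; only {precision:.0%} actually malicious")
```

Because malicious files are rare, even a tiny false-positive rate swamps the real detections: roughly nine out of ten alerts here are noise, and pushing detection toward 100 percent only makes the flood worse.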
Even worse, as accuracy increases, performance decreases. Some antivirus programs make their host systems so slow that they're unusable. I know users who would rather knowingly compute with active malware than run antivirus software. With tens of millions of malware programs that must be checked against hundreds of thousands of files contained on a typical computer, doing a perfectly accurate comparison would simply take too long. Anti-malware vendors are acutely aware of these sad paradoxes, and, in the end, they all make the decision to be less accurate.
Counterintuitively, being less accurate actually helps security vendors sell more of their products. I don't mean that lowered accuracy allows malware to propagate, thereby ensuring security vendors can sell more software. It's that the trade-offs of extremely accurate anti-malware detection are unacceptable to those shopping for security software.
And if you do find yourself buying the claim of 100 percent accuracy, just don't ask your vendor to put it in writing or ask for a refund when something slips by. They won't back the claim.
Security snake oil No. 4: Network intrusion detection
IDSes (intrusion detection systems) have been around even longer than antivirus software. My first experience was with Ross Greenberg's Flu-Shot program back in the mid-1980s. Although often described, even by the author, as an early antivirus program, it was more of a behavioral detection/prevention program. Early versions didn't have "signatures" with which to detect specific malware, and the program was quickly defeated by malware.
During the past two decades, more sophisticated IDSes were invented and released. Popular ones are in use in nearly every company in America. Commercial, professional versions can easily cost in the hundreds of thousands of dollars for only a few sensors. I know many companies that won't put up a network without first deploying an NIDS (network-based IDS).
Unfortunately, IDSes have worse accuracy and performance issues than antivirus programs. Most NIDSes work by intercepting network packets. The average computer gets hundreds of packets per second, if not more. An NIDS has to compare known signatures against all those network packets, and if it did so thoroughly, even somewhat accurately, it would slow down network traffic so much that the computer's network communications, and the applications involved, would become unbearably sluggish.
So what NIDSes do is compare network traffic against a few dozen or a few hundred signatures. I've never seen an NIDS with even two hundred signatures activated -- paltry in comparison to the tens of millions of malware programs and thousands of network attack signatures they should be checking to be truly accurate. Instead, we've become accustomed to the fact that NIDSes can't be configured to be meaningfully accurate, so we "fine-tune" them to be somewhat accurate against things antivirus software is less accurate at detecting.
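The tuning trade-off comes down to raw throughput. Every figure below is an illustrative assumption about a naive, per-packet signature scan, not a benchmark of any real product:

```python
# Work an inline NIDS must do per second under naive signature matching.
packets_per_second = 50_000       # an assumed moderately busy link
tuned_signatures = 200            # a typical "fine-tuned" rule set
accurate_signatures = 10_000_000  # rough coverage true accuracy would need

print(packets_per_second * tuned_signatures)     # 10,000,000 checks/sec
print(packets_per_second * accurate_signatures)  # 500,000,000,000 checks/sec
```

The jump from millions to hundreds of billions of comparisons per second is why nobody ships an NIDS with full signature coverage enabled inline.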
Security snake oil No. 5: Firewalls
I spend part of my professional career telling people to make sure they use firewalls. If you don't have one, I'll probably write up an audit finding. But the truth is that firewalls (traditional or advanced) rarely protect us against anything.
Firewalls block unauthorized traffic from reaching vulnerable, exploitable listening services. Today, we don't have that many vulnerable services or truly remote attacks. Vulnerable services still turn up, such as the recent OpenSSL Heartbleed vulnerability, but even most of those attacks would not have been stopped by a firewall.
The websites using OpenSSL already opened the ports that OpenSSL needed to function. The vulnerable version of OpenSSL was available for any knowledgeable attacker to compromise. Today, most attacks (and I mean 99.99 percent) are application-layer attacks that require user involvement to succeed. Once the user is tricked into running something, the malicious program executes in the user's computer's memory, and the firewall can't help. The badness scoots past the firewall on allowed ports and executes on the user's desktop.
Firewalls can help only if they prevent attacks against blocked ports. But everyone allows port 80 and 443 into their networks, and those are the two ports that most successful attacks will target. You can't block them because it would bring business to a halt.
Don't believe me? When is the last time you thought, "Wow, if I had just had a firewall enabled, I wouldn't have been successfully attacked"? I'll give you full credit if you can even remember the year.
A lot of firewall vendors already know my personal feelings, and they will often tell me that the problem is only with "traditional" firewalls and that their "advanced" firewall solves the problem. Their advanced firewall is always an application proxy or filter that includes an antivirus scanner or IDS capabilities. See above. If advanced firewalls worked, we'd all be running them, and our hacker problems would be over.
Security snake oil No. 6: Redundancy
The oft-forgotten third word of the information-security acronym CIA is availability (the other two are confidentiality and integrity). As a concept, availability makes for great sales pitches. The reality, however, is that availability is more snake oil than we might like to admit.
Availability, and its sibling redundancy, drives a significant amount of hardware sales. These days, we have redundant power supplies, redundant hard drives, even redundant motherboards and CPUs. Before redundancy became a thing, I never needed the second unit. It's almost as if vendors give us components they know will fail.
I have a computer that's been running on the same hard drive, motherboard, and power supply for more than 20 years. Never had a problem. I don't even clean out all the dust. But I rarely buy a $100K server or appliance with redundant everything that I don't end up having problems with.
My first fully redundant server system ended up being a hard-earned lesson about the promise of redundancy. The system included a secondary clone of everything, with the backup unit ready to pick up where the failed unit quit, without a millisecond of downtime. I convinced my CEO to spend the extra $100K so we would never have an outage again. That promise lasted two days, until our first crash with the resplendent redundant system. We experienced unexpected data corruption, and the failover was flawless: the corruption was dutifully cloned, impeccably, from the primary server to the backup unit. My upset CEO didn't want to listen to my explanations of server system backups and RAID levels. He just knew I'd wasted his money on false promises.
Security snake oil No. 7: Smartcards
Almost every company I know that doesn't have smartcards wants to have smartcards. Smartcards are two-factor authentication, which, as everyone knows, is better than one-factor authentication. But most companies think that enabling smartcards in their environments will significantly reduce the risk of hacker attack -- or stop all attacks outright. Or at least that's how it's sold to them.
Every company I know that's implemented smartcards is just as thoroughly hacked as the companies that haven't. Smartcards do give you added security, but it's only a small amount and not in the places you really need it. Want to stop hackers? Improve your patch management processes and practices, and help your users refrain from installing stuff they shouldn't. Those two solutions will work hundreds of times better than smartcards.
Making the best of a compromising situation
Today's computer security world is a crazy, paradoxical one. Computer security companies are collecting billions of dollars from customers who are still routinely hacked.
Firewalls, IDSes, and antivirus programs don't work. How do I know? Because most companies have all these security technologies in place, and they are still compromised by hackers, almost at will. Even our good, reliable, secure encryption is mostly meaningless. Either hackers go around the crypto (by directly attacking the target in its unencrypted state on the endpoint), or the cryptography is poorly implemented (the OpenSSL Heartbleed bug is an example).
As a result, we security professionals are knowingly accepting that our computer security defenses are partial at best, while our vendors tout their solutions as incredibly accurate and impenetrable. It ain't so. We're being sold snake oil and being told it's sound, scientifically researched medicine.
What's a defender to do?
Well, push for real solutions. Take a look at how your environment and systems are being compromised on a daily basis, and push for solutions that fix those real problems. Don't get lost in the myriad promises of computer security products.
Me, I trust the vendor who tells me the truth, warts and all. I understand his product won't solve all my ills, and I know his product can't be 100 percent accurate. Avoid vendors who claim otherwise.