At the moment I'm a bit of a security grouch. I keep seeing product after product that has significant vulnerabilities. And this isn't just happening with the things I deal with at work. Even Election Day had me grousing about the state of our software security.
At work, over the past several weeks I've been forced to deal with one serious vulnerability after another. First, it was Heartbleed. Next came Shellshock. That was quickly followed by the Poodle vulnerability.
Fortunately, fixing the Poodle vulnerability is not rocket science: we simply have to configure our Web servers to stop supporting SSL Version 3.0. While we're at it, I'm also upgrading our certificates to a longer encryption key length and a stronger hashing algorithm.
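As a sketch of what that configuration change looks like, here is the relevant directive for an Apache httpd server with mod_ssl; other servers use different syntax (nginx, for example, uses an `ssl_protocols` line listing only the TLS versions to allow):

```apache
# Disable SSLv2 and SSLv3 so clients can no longer be downgraded to them
# (the downgrade is what the POODLE attack exploits). Only TLS remains.
SSLProtocol all -SSLv2 -SSLv3
```

The certificate upgrade is separate from this directive: it happens when the certificate is reissued, for example with a 2048-bit RSA key and a SHA-256 signature in place of SHA-1.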
I have never understood why companies don't bake better security into their products during development. Sure, the latest vulnerabilities have been more complicated than the simple buffer overflows and privilege escalation that have been common over the last several years, but I nonetheless think they could have been avoided with better programming practices. It amazes me how common it is to find software with code that doesn't follow even the most basic best practices, like input validation and bounds checking to prevent the overflows that lead to vulnerabilities. Why do we let companies get away with patching problems as they're found rather than avoiding them up front? It seems as if they start thinking about adding security only after their products have been hacked -- and then, only when it affects their market share.
The Nest thermostat is another example. At a security conference last spring, researchers demonstrated how a Nest thermostat could be compromised. Even though I don't have any Nest thermostats in my office, it still makes me wonder why its designers didn't build security into the product from day one.
Other devices may have similar security flaws just waiting to be exploited, forcing us to rush to update them. A recent study by Hewlett-Packard found that 70% of common consumer devices had significant security vulnerabilities. And more and more, we are surrounding ourselves with these devices. You have to worry when even our cars -- incredibly complex machines that we stake our lives on -- become increasingly dependent on software systems.
Electronic voting machines are another technology I have long been worried about. And for good reason. Voting machines made by Premier Election Solutions (formerly Diebold) and Election Systems & Software have had security flaws over the years, and as far as we know, those flaws are still around. We can't know for sure, since the manufacturers have not been forthcoming with information about their code or their security measures. Researchers have demonstrated that votes can be changed by inserting devices into the machine's hardware, as well as by modifying the software directly through physical access to the machines.
Because of my mistrust of such machines, I vote with an absentee (paper) ballot. The only way to keep systems secure is to constantly apply updates that the manufacturers release in response to testing that uncovers vulnerabilities in the underlying platforms. The problem is that the evidence of such testing and updating is largely absent in our electronic voting systems. That is shocking neglect of one of our nation's most important functions. So I use paper, in an effort to safeguard the integrity of my vote. The problem, of course, is that as citizens, we need to be concerned with the integrity of all votes. Without it, the system falls apart.
In fact, of all the systems that are in dire need of better security, our voting systems get my vote as the No. 1 priority.
This week's journal is written by a real security manager, "J.F. Rice," whose name and employer have been disguised for obvious reasons. Contact him at firstname.lastname@example.org.