We don't take the Internet seriously. If we did, we'd be treating it a lot differently than we do. But, as witnessed by our own actions, the Net clearly isn't a serious thing -- at least it's not an infrastructure to us. Maybe one day it will be, but that day hasn't come yet.
Seriously, we don't treat it like an infrastructure at all. So why should we be surprised when we see egregious security failures repeating themselves ad infinitum?
Don't believe me? Well, let's imagine for a moment that the Net were a for-real infrastructure like, say, commercial aviation. Now, let's further pretend that a commercial banking site has just experienced a major security compromise in which customer data was lost and money was stolen out of customers' retirement accounts in huge volumes.
If that happened in my fictional scenario, no doubt the "National Web Safety Board" (NWSB) would be called in to investigate. It would take the site down and conduct a painstaking review of every aspect of the compromise. No needle would be lost in that haystack, to be sure.
At the end of the investigation, let's imagine that the NWSB determined that the underlying cause of the attack was a mutable SQL call -- a query string built dynamically from user input -- in some middleware software that enabled the attackers to execute a successful SQL injection attack. (Yes, we all know about SQL injection already, and it holds the No. 1 position on the 2010 OWASP Top 10 list for good reason.)
In its report, the NWSB would most certainly note how SQL injection attacks can be thwarted by using immutable SQL APIs, such as parameterized queries (e.g., PreparedStatement and stored procedures), and it would recommend that every Web application be upgraded accordingly in order to be safe.
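The difference the NWSB report would be pointing at -- a query string assembled from user input versus an immutable statement with placeholders -- can be sketched in a few lines. This is a minimal Python illustration using the standard library's sqlite3 module; the accounts table and column names are invented for the example:

```python
import sqlite3

# In-memory database with a sample accounts table (illustrative schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (username TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100.0)")

def get_balance_unsafe(username):
    # VULNERABLE: the statement is built by string concatenation, so input
    # like "' OR '1'='1" rewrites the structure of the SQL itself.
    query = "SELECT balance FROM accounts WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def get_balance_safe(username):
    # SAFE: the ? placeholder keeps the statement immutable; the driver
    # passes the input strictly as data, never as SQL.
    return conn.execute(
        "SELECT balance FROM accounts WHERE username = ?", (username,)
    ).fetchall()

payload = "' OR '1'='1"
print(get_balance_unsafe(payload))  # every row comes back: injection succeeded
print(get_balance_safe(payload))    # no rows: the payload is treated as data
```

The same principle holds for Java's PreparedStatement or any other parameterized API: the statement's structure is fixed before any user data touches it.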
Next, every single site on the Net -- at least every one considered to be in the public interest -- would be grounded until its SQL interfaces had been converted to parameterized queries. The upgrade process would include certifying and testing that every such vulnerability had been removed.
It could then quite safely be assumed that the problem had been vanquished -- and we could move on to new problems instead of making the same mistakes repeatedly.
Now, I can almost hear the screaming as I type this column. Rest assured, I'm not naive enough to think that the scenario I just painted would work in today's world.
But then again, why shouldn't it? After all, that's how the commercial aviation world works today. When critical mechanical failures are discovered in aircraft, they get corrected systematically across the entire public commercial fleet; until the failures are verifiably fixed, the affected aircraft are grounded. The aviation world has dozens of examples of this sort of thing.
We have a long way to go to get there. But the fictional scenario speaks to one of my biggest pet peeves with our industry: We repeatedly fail to learn from our own mistakes. Human failures happen; we all accept that. But rather than repeating them in sometimes breathtakingly spectacular ways, let's at least strive toward finding new mistakes to make.
To that end, one of the best things software developers can do to help prevent repetition is to publish safe code examples for common mechanisms. We're starting to see some industry movement in that direction, but clearly not enough of it. In the meantime, organizations that write software can help themselves out substantially by internally publishing thoroughly annotated design and code patterns that help developers use safe code.
Some mechanisms that can and should be included here are authenticators (with strong mutual authentication), encryption of sensitive data in transit and at rest, access control, server interfaces (including SQL servers) and so on.
I often encounter organizations with security policies on these sorts of things. Policy statements such as "Only safe SQL mechanisms should be used" are relatively common. Well, consider taking those a step further and actually pushing out code examples -- perhaps as standard libraries -- and then requiring that those mechanisms be used.
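What such an internal standard library might look like is worth sketching. The wrapper below is hypothetical -- the SafeDB name, the quote-rejection heuristic, and the sqlite3 backend are all my own assumptions, not a prescription -- but it shows the shape of the idea: one sanctioned entry point to the database, with the policy enforced mechanically rather than stated in a document:

```python
import sqlite3

class SafeDB:
    """Hypothetical internal wrapper: the only sanctioned way to reach the
    database. Code reviewers then check that SafeDB is used everywhere,
    rather than hunting for every string-built query."""

    def __init__(self, path=":memory:"):
        self._conn = sqlite3.connect(path)

    def query(self, sql, params=()):
        # Crude mechanical policy check: statements may not embed quoted
        # literals; all values must arrive through ? placeholders.
        if "'" in sql or '"' in sql:
            raise ValueError("literals must be passed as parameters")
        return self._conn.execute(sql, params).fetchall()

db = SafeDB()
db.query("CREATE TABLE users (name TEXT)")
db.query("INSERT INTO users VALUES (?)", ("alice",))
print(db.query("SELECT name FROM users WHERE name = ?", ("alice",)))
```

A real version would need more nuance (double quotes are legitimate SQL identifiers, for one), but even a crude gate like this turns "only safe SQL mechanisms should be used" from a policy sentence into something the code itself enforces.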
There's a somewhat hidden benefit to doing it this way: Code reviews become less onerous. The reason for this is that the code base can now be reviewed for positive compliance with coding standards, and not just scoured for possible mistakes.
Of course, it's not all as simple as I've described it here. But it's a good first step in the right direction. There'll be hurdles aplenty to clear. For one thing, putting code into policies would require a level of collaboration between security and development organizations that very few companies can realize today.
But at the very least, let's stop fooling ourselves. Today's Net isn't an infrastructure. At times, we seem to expect it to be, but we don't treat it the same way we do real infrastructures. Until we're ready to take on the burdens that a true infrastructure demands, the Net will be little more than a fun hobby for the world's technocrats.
Still don't agree? Well then, just consider a self-policing aviation world that isn't required to fix mechanical design failures as they're discovered. Would you strap yourself to a seat in an airliner? I sure wouldn't. I'll wait for Version 1.1, thank you very much.
With more than 20 years in the information security field, Kenneth van Wyk has worked at Carnegie Mellon University's CERT/CC, the U.S. Department of Defense, Para-Protect and others. He has published two books on information security and is working on a third. He is the president and principal consultant at KRvW Associates LLC in Alexandria, Va.