There's been a lot of fuss in the press recently about Web 2.0 security. In the past year, Facebook and Twitter have both had serious problems that made waves among the technically savvy.
People are starting to wonder if we, as an industry, just don't know anything about securing Web 2.0 applications. There's a bit of truth to that, but mostly the software development industry is just plain bad at creating secure software of any kind.
Part of the problem is that developers generally aren't security experts. Even in organizations where all developers receive software security training, it's rare for them to remember anything significant. Developers and development organizations are thinking about features, first and foremost. When it comes to security, they just go through the motions. The ability to log in with a password is a feature. SSL support is a feature. It's unusual for anybody to pay attention to doing things right -- until they get bitten publicly a few times.
Take Twitter, for example. The site has had a litany of security glitches over the past year, including cross-site scripting problems. Until it got burned, it wasn't so much that Twitter thought it didn't have to worry about security. It was more that it thought its people were smart enough to address the problem as a matter of course.
After a couple of incidents proved that the company didn't actually have it together, the Twitter guys wanted to do the right thing. They didn't want a bad reputation for security. As a result, they've brought in outside consultants to look for security flaws in their code. And they've been trying hard to recruit a full-time person to take ownership of product security. I expect that Twitter, like many other companies, is finding that it's extremely difficult to find high-caliber software security talent.
But if you take a closer look at Twitter, many of its problems aren't necessarily problems in the software platform (although some of them definitely are). For example, it isn't uncommon for bad guys to hack into a celebrity's Twitter account and make fake posts, or to break into the accounts of Twitter employees. Sure, the software platform can try to address those threats, but a big part of the problem is operational security.
Twitter's employees need to make sure they are selecting strong passwords. And they should be doing as much as possible to encourage their users to do the same.
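One way a service can encourage strong passwords is to reject weak ones at signup. Here's a minimal, illustrative sketch in Python; the specific rules and the tiny common-password list are my own assumptions for the example, not anything Twitter actually uses:

```python
import re

# A real deployment would check against a much larger list of known-weak
# passwords; these few entries are just for illustration.
COMMON_PASSWORDS = {"password", "123456", "letmein", "qwerty"}

def is_strong_password(password: str) -> bool:
    """Return True if the password passes some minimal strength rules."""
    if len(password) < 10:
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    # Require at least three of four character classes: lowercase,
    # uppercase, digits, and symbols.
    classes = [
        bool(re.search(r"[a-z]", password)),
        bool(re.search(r"[A-Z]", password)),
        bool(re.search(r"[0-9]", password)),
        bool(re.search(r"[^A-Za-z0-9]", password)),
    ]
    return sum(classes) >= 3
```

Checks like this raise the floor against guessing attacks, but they're no substitute for the operational habits discussed here; a strong password reused everywhere is still a weak link.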
To some degree, Twitter is already doing these things. But even if the company makes a big effort to encourage responsible behavior among its employees and customers, people are still going to get hacked.
Some people use the same password everywhere -- so when any one of those sites is hacked, all of their accounts are exposed. Others may try hard but still choose passwords that can't withstand guessing attacks. And still others may be victimized by phishing scams, tricked into typing their credentials into a phony Web form. This has been a big concern with Twitter, where there are lots of add-on services that ask for your credentials, including Bit.ly, Mr. Tweet and so on.
The truth is, most security breaches require the end user to take -- or fail to take -- some kind of action.
There are certainly issues with AJAX and cloud-centric application models that leave Web 2.0 applications open to attack. That's to be expected -- security always lags a bit behind innovation. But at the end of the day, those issues pale in comparison to the threat users pose to themselves. People are largely very trusting, and bad guys are always going to be able to take advantage of that trust. That will be true even if the day comes when our software has no holes in it and our software vendors are perfect citizens.
John Viega is chief technology officer of the software-as-a-service business unit at McAfee Inc. and author of The Myths of Security (O'Reilly Media, June 2009).