Security researchers in the US last week disagreed over how to educate Web users to prevent phishing attacks, but agreed on one thing: most current methods of user education are inadequate.
Moreover, it's difficult to find a single method that works because of the diversity of people who use the Web, said Lorrie Faith Cranor, associate research professor at Carnegie Mellon University.
"We've taken user education and found that some things work [with some users], but if we e-mail them out to others they don't work," she said at the Anti-Phishing Working Group (APWG) eCrime Researchers Summit.
Markus Jakobsson, an associate professor of informatics at Indiana University, said that some of the mainstream advice for Web users about phishing can be misleading, and phishers are changing tactics, making that advice obsolete.
For instance, he cited a recent article in a widely read consumer magazine that offered tips for surfing the Web safely, including this one: "Install security software and stay current with the latest patches." While well-meaning, he said, the tip can actually make users vulnerable.
"If we tell users that, then phishers may send out an e-mail saying, 'Here is the latest patch,'" Jakobsson said. A nervous user might follow the phisher's advice and unwittingly become prey, he said.
The situation isn't totally dire, however, and researchers are finding that some approaches do work. Education that appeals to human nature and people's intuition is usually more successful at making them less vulnerable to phishing, researchers said.
Aaron Emigh, executive vice president of technology at blog software and services provider Six Apart, said that people have been duped by miscreants for thousands of years, and that technology has made it easier for people to fall for scams in an infinitely scalable way. He said that security researchers should focus more on creating user interfaces that can't be compromised rather than trying to train users to identify scam sites.
"People learn a lot more from the experiences they have interacting with things than from declarative lessons," he said. "Right now a user can't tell the difference [between a good or bad URL] without a lot of passive indicators. The point is, people shouldn't even have to know what a URL is."
While that may be true, researchers said it doesn't solve the problem at hand. Cranor and her colleagues at Carnegie Mellon have had some success improving users' ability to identify phishing sites in two recent studies.
In one, users were paid to read materials about phishing for 10 minutes. However, this method is not something that could work on a broader scale, she said. "We found that if you forced people to read the materials they do work, but you don't always have the [opportunity] to do that," she said.
Another, more viable method to reach users is a game Carnegie Mellon researchers invented called "Anti-Phishing Phil." The animated online game has Web users control a fish swimming around in an ocean filled with other creatures. When he gets close to another sea creature, a URL will appear and users must decide if the URL is legitimate or a phishing URL.
If the users get an answer right, they -- as Phil -- will get praise from Phil's father, another fish sitting at the bottom of the ocean scene. If the answer is wrong, a message will come up with information about what was wrong with the URL -- for example, the URL contains a series of numbers before the actual bank domain name, or the ".com" is broken up as "c.om."
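The URL "tells" the game teaches -- a run of digits in front of the real-looking domain, or a top-level domain broken across dot-separated labels -- can be expressed as simple heuristic checks. The sketch below is purely illustrative (it is not code from the game, and the example hostnames are hypothetical):

```python
from urllib.parse import urlparse

def looks_suspicious(url):
    """Flag URLs showing two of the phishing 'tells' described above."""
    host = urlparse(url).hostname or ""
    labels = host.split(".")
    # Tell 1: numeric labels ahead of the apparent domain name,
    # e.g. a raw IP address or "12345.examplebank.com"
    if any(label.isdigit() for label in labels[:-2]):
        return True
    # Tell 2: a familiar TLD split across labels, e.g. ".c.om" for ".com"
    if len(labels) >= 2 and labels[-2] + labels[-1] in {"com", "net", "org"}:
        return True
    return False
```

A real anti-phishing filter would combine many more signals (blocklists, lookalike characters, certificate checks); the point here is only that the cues the game highlights are concrete and checkable.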
Researchers tested users before and after they played the game, and found that their ability to spot phishing sites "improved significantly" after playing, Cranor said.
Why has a child's game been more effective than other methods to help educate Web users about phishing? According to Cranor, Anti-Phishing Phil, unlike other more banal educational materials, appeals to human nature. "It's fun and people like to win things," she said. "The training is fast and we focus on teaching actionable steps people can take to prevent phishing."