The headline, "Antivirus tools pave the way for malware," seems a bit melodramatic. A company called n.runs AG claims to have found hundreds of security holes in multiple antivirus programs -- holes that can be exploited by the very malware the products are supposed to protect you against. The company's press release implies that many of the holes stem from the way various antivirus programs parse inspected data files. Not surprisingly, n.runs has a solution it can sell to worried companies.
I'm not sure I agree with the conclusion of the obviously self-interested paper (that buying another third-party product is the solution), but the overall theme -- that anti-malware software is often an avenue for exploitation -- is true. And parsing flaws have frequently been the route by which those vendors' products were attacked. Many of the top anti-malware vendors have found parsing issues in their products over the past few years (not all, but many), and some have repaired the original parsing flaw only to see the same issue raised again by new exploits discovered later.
I've even had to change the way I use the excellent Wireshark protocol analyzer on my honeynets. I use Wireshark to capture network traffic streams, but over the years, dozens of parsing errors have been found in Wireshark that would allow malicious exploitation. To prevent hackers from taking over my honeynet monitoring workstation, I changed my technique: I capture the data live, but do the parsing, formatting, flow analysis, and identification offline.
Even setting aside the parsing issues, some anti-malware tools seem plagued with buffer overflow holes -- not one or two total, but one or two a year, year after year. And I'd bet that many of the anti-malware tools in which no buffer overflow has yet been found are vulnerable anyway; researchers publicize the vulnerabilities they actually find, not those that merely might exist. Strange as it may sound, anti-malware vendors are no better at secure coding than any other software vendor. You would think that every computer security vendor would give its programmers special training in Security Development Lifecycle (SDL) techniques and offer incentives for secure coding, but you would be wrong. Some do, but most don't.
When I was at a penetration testing firm, we did code reviews for two of the big antivirus companies and found dozens of high-risk coding errors that would lead to buffer overflows and the like. Like most programmers, security software programmers are hired for their programming knowledge, with the (false) expectation that they will naturally code securely -- the same wrong assumption made by most other companies not practicing SDL.
You might even assume that a public, critical security hole in a software defense product would be tackled with all possible haste and resources by every anti-malware company. You would be wrong. A few years back, I was teaching a CISSP course at one of the world's largest anti-malware companies when its central product was found to have a nasty, remotely exploitable buffer overflow that was under active attack. The programmer who owned that portion of the code was in my class, and I was there when they came to pull him out to fix it. He refused to leave: the company had been promising for the previous five years that he could attend a class -- any class -- and every class he'd signed up for had been canceled. So he stayed, working part-time on the problem during class using a VM session on the class image, debugging and fixing the flaw while automated malware actively exploited it for nearly a week. He took five days to code the fix, and it was finally pushed out a few days later.