Dealing with false positives during a vulnerability assessment is a fact of life. As applications and infrastructure grow larger and more complex, the likelihood of running into these Type I errors grows right along with them.
Although these issues become more commonplace as you grow, there are a number of known ways to help decrease the number of false positives produced by automated tools. From a network and host standpoint, the easiest way to reduce false positives is to perform authenticated scans. Authenticated scanning dramatically reduces false positive findings, leaving you with a result set that likely requires some form of remediation. You can further break these results down with additional automated tools such as Metasploit to confirm exploitability. Having multiple tools evaluate the same infrastructure can increase your coverage as well.
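To make the multi-tool idea concrete, here is a minimal sketch of correlating two scanners' exports by host and CVE, so corroborated findings get triaged first. The file names and column names are assumptions, not any particular scanner's export format; adjust them to whatever your tools actually produce.

```python
import csv

def load_findings(path):
    """Load a scanner's CSV export into a set of (host, cve) pairs.
    The 'host' and 'cve' column names are hypothetical."""
    findings = set()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            findings.add((row["host"], row["cve"]))
    return findings

scanner_a = load_findings("scanner_a_export.csv")
scanner_b = load_findings("scanner_b_export.csv")

# Findings reported by both tools are less likely to be false positives,
# so surface them first; single-source findings go to a manual review queue.
corroborated = scanner_a & scanner_b
needs_review = scanner_a ^ scanner_b

print(f"{len(corroborated)} corroborated findings, {len(needs_review)} need review")
```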
Web application assessments are an altogether greater challenge to automate. That said, there are additional ways you can limit the number of false positives you have to deal with. One of the best methods, or at least the most thorough, is manual validation; however, if you're performing manual validation internally you're going to hit a scaling problem quickly, and if you're outsourcing it, the cost can climb depending on frequency and the number of applications. As with network vulnerability assessments, a diverse set of tools is your friend. If you happen to be running both dynamic application security testing (DAST) and static application security testing (SAST), in some cases you can correlate these results and actually use them against each other. One of the common complaints we hear about static analysis tools is the number of false positives they can produce and the large amount of tuning required. If you correlate your result sets from these tools, you can verify some of your SAST results via the DAST output, as sketched below. (By the way, we hear the opposite complaint about DAST, namely false negatives, but that's a post for another day.)
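Here's a hedged sketch of what that DAST/SAST correlation might look like. The JSON structure, field names, and the idea of a hand-maintained route map from source files to URLs are all assumptions for illustration; real tools export much richer data.

```python
import json

def correlate(sast_path, dast_path, route_map):
    """Flag SAST findings that a DAST scan independently confirms.

    route_map maps source files to the URLs they serve, e.g.
    {"login.py": "/login"}; assumed to be maintained by hand or pulled
    from the framework's routing table.
    """
    with open(sast_path) as f:
        sast = json.load(f)   # e.g. [{"file": "login.py", "class": "sqli"}, ...]
    with open(dast_path) as f:
        dast = json.load(f)   # e.g. [{"url": "/login", "class": "sqli"}, ...]

    dast_hits = {(d["url"], d["class"]) for d in dast}
    confirmed, unconfirmed = [], []
    for finding in sast:
        url = route_map.get(finding["file"])
        if url and (url, finding["class"]) in dast_hits:
            confirmed.append(finding)    # verified by dynamic testing
        else:
            unconfirmed.append(finding)  # still needs manual review or tuning
    return confirmed, unconfirmed
```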
If you intend to use virtual patches via web application firewalls and intrusion prevention systems, you're going to want to stay on top of your false positive rates to avoid implementing unnecessary rules. We have built several features within Risk I/O to support all of the methods above and make weeding these out easier. Another benefit of having a central repository for your vulnerabilities and security defects is the ability to flag a false positive once, regardless of source. For our enterprise customers, we can flag these through custom fields. I went ahead and created a short how-to video to get you started and embedded it below.
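The flag itself can be as simple as one custom field on the finding record in your repository. A minimal sketch, assuming hypothetical field names and a simple suppression rule (this is not Risk I/O's actual schema or API):

```python
def mark_false_positive(finding, reviewer, note=""):
    """Tag a finding as a false positive once, at the repository level,
    so every downstream consumer (reports, WAF/IPS rule generation,
    ticketing) can skip it regardless of which scanner reported it."""
    finding["custom_fields"] = {
        "false_positive": True,
        "reviewed_by": reviewer,
        "review_note": note,
    }
    return finding

def findings_for_virtual_patching(findings):
    """Only generate WAF/IPS rules for findings not flagged as false positives."""
    return [f for f in findings
            if not f.get("custom_fields", {}).get("false_positive")]
```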
I’d love to hear how others are dealing with these issues. I’ll likely be writing another post on false negatives in the near future. Happy viewing!