While false detections should be eliminated as much as possible, they are an inherent part of any vulnerability assessment tool. Possible causes include rapid changes in vendor-specific patches and updates, zero-day vulnerabilities, access restrictions, and network glitches.
The goal is to detect as few vulnerabilities as possible in an enterprise network, preferably with low scores/criticality levels. A low vulnerability score, being an objective and reproducible measurement, indicates that a host is vulnerable but still relatively secure. Top management and mitigation teams are understandably pleased with such results, and assessment tools that report fewer vulnerabilities tend to be applauded.
However, too many false negatives could mean something entirely different: real vulnerabilities go unreported. Missed detections can result from a variety of factors, including a “spray-and-pray” approach to vulnerability testing. Such gaps can leave a system more vulnerable and prone to exploitation if more targeted testing isn’t implemented.
So what’s the best approach when a detection is wrong: a false positive that declares your system vulnerable while it is actually secure, or a false negative that proclaims a vulnerable system secure?
Let us consider a few examples to determine which is the lesser evil:
- Environment: The Payment Card Industry (PCI) Security Standards Council takes the position that a false positive is better than a false negative, given how PCI DSS–compliant systems are implemented. Disputed findings are subsequently raised with the PCI Approved Scanning Vendor (ASV) along with supporting evidence.
- Rollback/Disaster Recovery: Red Hat allows retaining older kernel packages so that a rollback can be performed. Those older packages may be vulnerable even when the running kernel is not. If a vulnerable rollback package is not considered, the system could become susceptible the moment a recovery is performed from those packages.
- Configuration Change: Consider a Windows system that is not currently flagged for any vulnerability associated with Active Directory but is later updated or reconfigured and thereby becomes exposed to attacks that exploit the AD or LDAP architecture.
- Backwards Compatibility: Sometimes certain configurations are maintained to keep a system backward compatible. This is especially the case with legacy, vulnerable cryptographic algorithms. Even if your system no longer uses those ciphers by default, an attacker could still negotiate and exploit them.
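The backwards-compatibility case can be checked programmatically. Below is a minimal Python sketch that filters the cipher suites a server still offers for legacy algorithms; the sample suite list and the set of weak-name patterns are illustrative assumptions, not an exhaustive policy.

```python
# Flag legacy cipher suites kept only for backwards compatibility.
# The substring patterns below are illustrative, not a complete weak-cipher policy.
WEAK_PATTERNS = ("RC4", "3DES", "DES-CBC", "NULL", "EXPORT", "MD5")

def weak_ciphers(offered):
    """Return the subset of offered cipher-suite names matching a weak pattern."""
    return [name for name in offered
            if any(pattern in name for pattern in WEAK_PATTERNS)]

# Hypothetical list of suites a backward-compatible server might still offer.
offered = [
    "ECDHE-RSA-AES256-GCM-SHA384",
    "AES128-GCM-SHA256",
    "DES-CBC3-SHA",   # 3DES, kept for old clients
    "RC4-SHA",        # RC4, long deprecated
]

print(weak_ciphers(offered))  # → ['DES-CBC3-SHA', 'RC4-SHA']
```

In practice the `offered` list would come from an actual scan of the host (for example, a TLS cipher enumeration), but the filtering logic is the same: the suites flagged here are exactly the ones an attacker could still negotiate.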
The examples above teach us a few lessons:
- A host score alone does not present a true picture of a system’s security. A high score may reflect a genuinely weak posture, while a low score can create the illusion of a secure system.
- Risk acceptance is a legitimate part of a risk mitigation strategy, but it should be exercised carefully; accepting a risk is not always undesirable, provided the decision is informed and documented.
- Most importantly, configuration/change management matters. Whenever updates or rollbacks are applied, those changes should be recorded and a vulnerability scan should be run so the picture stays complete. Configuration management and vulnerability management should therefore be deployed in synergy.
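That synergy can be sketched as a simple change-detection loop: fingerprint the configuration, and rescan whenever the fingerprint drifts from the recorded baseline. This is a hypothetical Python sketch; the `run_scan` callback and the kernel-version strings are assumptions standing in for your scanner’s API and your real configuration data.

```python
import hashlib

def config_fingerprint(config_text: str) -> str:
    """Hash the configuration so any update or rollback is detectable."""
    return hashlib.sha256(config_text.encode("utf-8")).hexdigest()

def rescan_if_changed(config_text: str, baseline: str, run_scan) -> str:
    """Trigger a vulnerability scan when the configuration drifts from baseline.

    `run_scan` is a hypothetical callback into your scanner's API.
    Returns the current fingerprint to store as the new baseline.
    """
    current = config_fingerprint(config_text)
    if current != baseline:
        run_scan()  # e.g. kick off a targeted vulnerability scan
    return current

# Usage: simulate a kernel rollback changing the configuration.
baseline = config_fingerprint("kernel=5.14.0-362")
scans = []
baseline = rescan_if_changed("kernel=5.14.0-284", baseline,
                             lambda: scans.append("scan"))
print(scans)  # → ['scan']
```

A real deployment would fingerprint package manifests or configuration files rather than a single string, but the principle is the one stated above: every recorded change drives a scan, so the vulnerability picture never goes stale.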
It is not always undesirable to see false positives on your reports; each one should simply be investigated on its merits. After all, accepting the occasional false positive is a more secure practice than letting your system be exposed to vulnerabilities that could cause frustration, loss of trust, and hours spent reading audit logs, containing a potential attack, and restoring the system to a secured state.
Are you overwhelmed by vulnerabilities? Tripwire provides an enterprise-class Vulnerability Management solution that accurately prioritizes risk so you can take action on your most exposed assets. Find out more about Tripwire IP360 here.