30 May 2007, 14:15

Michael Barwise

Magic Numbers or Snake Oil?

The Common Vulnerability Scoring System

Can a single number sum up the full significance of a security vulnerability? The CVSS attempts to prove that it can, but it has its weak points.

The Common Vulnerability Scoring System (CVSS) is a relatively new attempt at consistent, vendor-independent ranking of software and system vulnerabilities. Originally a research project by the US National Infrastructure Advisory Council, the CVSS was officially launched in February 2005 and has been hosted publicly by the Forum of Incident Response and Security Teams (FIRST) since April 2005. By the end of 2006 it had been adopted by 31 organisations and put into operational service by 16 of them, including, significantly, US-CERT, a primary source of vulnerability information.

The CVSS attempts to reduce the complicated multi-variable problem of security vulnerability ranking to a single numerical index in the range 0 to 10 (maximum) that can be used in an operational context for tasks such as prioritising patching. It uses a total of 12 input variables, each of which can take two to four pre-determined alternative values. The calculation is broken into three cascaded stages, the first two of which yield visible intermediate results that are fed into the following stage. First, an absolute Base score describes ease of access and scale of impact. Next, a Temporal score applies a zero to moderate negative bias depending on the current exploitability and remediation position, both of which may well change over time. Finally, an Environmental score is calculated by end users to take into account their individual exposure landscape (target space and damage potential).
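The shape of that cascade is easiest to see in code. The sketch below assumes the equations and weighting conventions published in the original CVSS (version 1) documentation; it is intended only to illustrate how the three stages feed into one another, not to serve as a reference implementation.

def base_score(access_vector, access_complexity, authentication,
               conf_impact, integ_impact, avail_impact,
               conf_bias=0.333, integ_bias=0.333, avail_bias=0.333):
    # Stage 1: absolute Base score (ease of access and scale of impact).
    # Arguments are the numeric weightings already chosen from the v1 tables,
    # e.g. access_vector = 1.0 for "remote" access, 0.7 for "local".
    impact = (conf_impact * conf_bias
              + integ_impact * integ_bias
              + avail_impact * avail_bias)
    return round(10 * access_vector * access_complexity * authentication * impact, 1)

def temporal_score(base, exploitability, remediation_level, report_confidence):
    # Stage 2: zero to moderate downward adjustment of the Base score,
    # reflecting the current exploit and remediation position.
    return round(base * exploitability * remediation_level * report_confidence, 1)

def environmental_score(temporal, collateral_damage_potential, target_distribution):
    # Stage 3: applied by the end user to reflect their own exposure landscape.
    return round((temporal + (10 - temporal) * collateral_damage_potential)
                 * target_distribution, 1)

Note that every multiplier in the Temporal stage is at most 1.0, so that stage can only pull the Base score down, whereas the Environmental stage can move the result in either direction.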

The third (Environmental) stage has the greatest influence on the final result, and without it a CVSS ranking is really only a partial index. It must therefore be recognised that a published CVSS score, unlike the public conception of common vendor rankings (e.g. Microsoft "Critical"), is not the final answer. Of course, in reality vendor rankings are not final indices either. The end user should always expect to complete the ranking process by applying some kind of environmental calculation to any published index to allow for local priorities, and that task becomes very difficult where vendor-specific rankings are derived using differing proprietary methods. In the case of the CVSS, a maximal Temporal score of 10 may be reduced by the Environmental calculation even to zero, while very low Temporal scores may be raised to around 5, once the user's exposure landscape is taken into account. The second case is significant: while nobody would ignore, say, a Microsoft "Critical" rating, vulnerabilities classified as low priority by vendors could have a major impact on certain users, depending on how critical the vulnerable systems are to their specific business processes.
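Both extremes fall straight out of the environmental stage sketched above, again assuming the v1 formula and its published weightings for collateral damage potential (0.5 maximum) and target distribution (1.0 maximum):

# Maximal Temporal score, but no vulnerable targets in the environment:
print((10.0 + (10 - 10.0) * 0.5) * 0.0)   # 0.0

# Minimal Temporal score, maximum collateral damage potential (0.5)
# and full target distribution (1.0):
print((0.0 + (10 - 0.0) * 0.5) * 1.0)     # 5.0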

The Good, the Bad and the Ugly

So what are the pros and cons of the CVSS? On the positive side, it attempts to formalise and objectify the decision process applied to a very complicated problem, potentially improving consistency both over time and across platforms, vendors and products. It is quite simple, and the input variables and their pre-determined alternative numerical values for the most part appear well chosen. It is transparent, in that its mechanism is publicly documented. And it breaks new ground in attempting to give formal recognition to the user's all-important exposure landscape. On the other hand, no system is better than its inputs. A choice has to be made as to which value of each variable to select, and the quality of the result depends entirely on the quality of all the choices that lead to it. These choices are presented in natural language in the available calculators. Fortunately, the alternatives contributing to the Base and Temporal scores are expressed relatively unambiguously, and as these decisions will normally be made by experienced security specialists in reporting organisations, the opportunity for significant error is minimised.

However, while the inclusion of the environmental component in the calculation is one of the greatest potential strengths of the CVSS, it could also prove to be its Achilles' heel. Not only does the Environmental calculation have the greatest single influence on the final score, but the values of the two variables that contribute to it (collateral damage potential and target distribution) are expressed as "low", "medium" and "high": a notoriously subjective classification scheme. Poor decisions here will lead to serious errors that can completely undermine the quality of the more objective earlier stages. Furthermore, the techno-centric thinking of the originators of the CVSS is most apparent here. The guidance notes describe these two environmental variables solely in terms of the percentage of hosts that are vulnerable and the potential for physical damage. This completely misses the point that individual systems differ in business criticality, something that cannot, in the real world, be assessed "off the cuff" by technical personnel alone.
