
Using Databases to Automate Assessment and Remediation

Jan 31, 2013
Ed Bellis
Chief Technology Officer, Co-founder


The National Vulnerability Database (aka NVD) is a US Government repository for standards-based vulnerability management data. Its content is represented using the Security Content Automation Protocol, SCAP (pronounced “ess-cap”). SCAP is designed to facilitate reporting, collection, management, and monitoring of vulnerability data through automated software facilities. SCAP encompasses a wide range of inputs and information, and enables automation of vulnerability management, security measurement, and security compliance. The NVD includes collections of security checklists, security related software flaws, misconfigurations, product names, and a variety of security impact metrics to support risk-based management of vulnerabilities and potential exposures. NIST maintains SCAP specifications on its website. For some time now, Risk I/O has included a data feed from the NVD as part of its own RiskDB, a database that aggregates vulnerability data from numerous global vulnerability databases.
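To make the "data feed" idea concrete, here is a minimal sketch of what consuming a vulnerability feed can look like. The XML snippet and element names below are simplified placeholders, not the actual NVD feed schema (the real feeds use richer, namespaced SCAP schemas):

```python
import xml.etree.ElementTree as ET

# Illustrative only: a simplified stand-in for a vulnerability feed entry.
# The real NVD XML feeds use richer namespaced schemas; these element
# names are hypothetical simplifications for demonstration.
FEED_SNIPPET = """
<feed>
  <entry id="CVE-2013-0001">
    <summary>Example software flaw description.</summary>
    <cvss-base-score>7.5</cvss-base-score>
    <cpe>cpe:/a:example:product:1.0</cpe>
  </entry>
</feed>
"""

def parse_entries(xml_text):
    """Return a list of dicts, one per vulnerability entry."""
    root = ET.fromstring(xml_text)
    entries = []
    for entry in root.findall("entry"):
        entries.append({
            "id": entry.get("id"),
            "summary": entry.findtext("summary"),
            "cvss": float(entry.findtext("cvss-base-score")),
            "cpe": entry.findtext("cpe"),
        })
    return entries

entries = parse_entries(FEED_SNIPPET)
print(entries[0]["id"], entries[0]["cvss"])
```

Once feed entries are normalized into records like these, they can be merged with entries from other vulnerability databases, which is essentially what an aggregation layer such as RiskDB does at scale.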

Understanding the Security Content Automation Protocol (SCAP) and Its Components

Beneath the SCAP specifications themselves (version 1.0 is in widespread use, while versions 1.1 and 1.2 are defined and in final status), numerous open standards, known as SCAP components, are used to enumerate security-related software and configuration issues. The NVD describes data collected using these component standards as "data feeds" and "product integration services." The following list of standards, organized by SCAP version, shows the breadth and scope of coverage that SCAP, and by extension the NVD, provides (note that even where the SCAP 1.1 and 1.2 specifications are final, some or all of the new components they add are still in development):

1. SCAP 1.0 (current version in use)

  • Common Vulnerabilities and Exposures (CVE): an XML Schema, an XML ChangeLog, and various data files are available for CVE (see the NVD Data Feed and Product Integration page for more info).
  • Common Configuration Enumeration (CCE): originally developed by MITRE, CCE is represented via an XML Schema and a set of mappings to the NIST 800-53 data representation (still in beta as I write this post).
  • Common Platform Enumeration (CPE), a structured naming system for information technology systems, software, and packages, using URIs to establish a formal name format, name-checking methods, and a description format that binds text and tests to any given name.
  • Common Vulnerability Scoring System (CVSS), a framework for communicating characteristics and impacts associated with IT vulnerabilities, designed to ensure repeatable and accurate measurement with transparent access to vulnerability characteristics used in scoring.
  • Extensible Configuration Checklist Description Format (XCCDF), a specification language for writing security checklists, benchmarks, and related documents, where an XCCDF document represents a structured collection of security configuration rules for a set of target systems. The specification seeks to facilitate information exchange, document generation, organizational and situational tailoring of checklists and benchmarks, and automated compliance testing, and to produce compliance scores.
  • Open Vulnerability and Assessment Language (OVAL), an international infosec standard that promotes open, publicly available security content, and seeks to standardize the transfer of such content across the entire spectrum of security tools and services. The OVAL language is used to encode system details, which are stored in a variety of content repositories throughout the security community. The MITRE OVAL pages provide an excellent entry point into this standard.
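Of these components, CVSS is the one most directly tied to prioritization. As a rough illustration of how a CVSS v2 base score is derived from its metric values, here is a minimal sketch; the metric weights come from the CVSS v2 specification, while rounding details are simplified to one decimal place:

```python
# Minimal sketch of the CVSS v2 base score equation.
# Metric weights are taken from the CVSS v2 specification.

ACCESS_VECTOR  = {"L": 0.395, "A": 0.646, "N": 1.0}    # Local/Adjacent/Network
ACCESS_COMPLEX = {"H": 0.35,  "M": 0.61,  "L": 0.71}   # High/Medium/Low
AUTHENTICATION = {"M": 0.45,  "S": 0.56,  "N": 0.704}  # Multiple/Single/None
IMPACT         = {"N": 0.0,   "P": 0.275, "C": 0.660}  # None/Partial/Complete

def cvss2_base_score(av, ac, au, c, i, a):
    """Compute a CVSS v2 base score from metric letters, e.g. ('N','L','N','C','C','C')."""
    impact = 10.41 * (1 - (1 - IMPACT[c]) * (1 - IMPACT[i]) * (1 - IMPACT[a]))
    exploitability = 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEX[ac] * AUTHENTICATION[au]
    f_impact = 0 if impact == 0 else 1.176
    score = ((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact
    return round(score, 1)

# AV:N/AC:M/Au:N/C:P/I:P/A:P is a very common vector in the NVD.
print(cvss2_base_score("N", "M", "N", "P", "P", "P"))  # 6.8
```

The point is that the score is a deterministic function of published metric values, which is exactly what makes CVSS suitable for automated, repeatable scoring rather than ad hoc judgment calls.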

2. Add for SCAP 1.1

  • Open Checklist Interactive Language (OCIL) Version 2.0, a framework for expressing sets of questions to users, with matching procedures to interpret responses to such questions, developed for use with IT security checklists. OCIL is still under development, and not yet part of the current SCAP standard.

3. Add for SCAP 1.2

  • Asset Identification specification, which provides the constructs needed to uniquely identify assets based on known identifiers or on other information known about specific assets. The specification describes the purpose of asset identification, defines a data model for identifying assets, and offers guidance on how to make best use of asset identification, along with illustrations of known use cases.
  • Asset Reporting Format (ARF), a data model used to transport information about assets, and relationships between assets and reports. This standard data model enables reporting, correlating, and merging of asset information across and between organizations. ARF is vendor and technology neutral, designed to work across a wide spectrum of reporting applications.
  • Common Configuration Scoring System (CCSS), which defines metrics for the severity of software configuration issues, derived from the Common Vulnerability Scoring System (CVSS) developed to measure vulnerabilities associated with software flaws. CCSS is designed to help organizations decide how to address security configuration issues, and to provide data for quantitative assessment of a system's overall security posture.
  • Trust Model for Security Automation Data (TMSAD), a trust model that applies to specifications within security automation, such as SCAP, and focuses on processing of XML documents through recommendations on how to use digital signatures, hashes, keys, and identity information to ensure secure information transfer.
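The asset identification idea, matching on a known identifier where one exists and falling back to correlating attributes where it doesn't, can be sketched in a few lines. This is a toy model; the class and field names below are illustrative, not the NIST Asset Identification data model:

```python
from dataclasses import dataclass

# Toy illustration of the asset-identification idea: an asset can be
# matched either by a known identifier or by correlating attributes.
# The field names here are illustrative, not the NIST data model.

@dataclass
class Asset:
    asset_id: str = ""       # known identifier, if any
    fqdn: str = ""           # attributes usable for correlation
    mac_address: str = ""

def same_asset(a, b):
    """Match on a shared identifier first, then fall back to attributes."""
    if a.asset_id and a.asset_id == b.asset_id:
        return True
    return bool(a.fqdn and a.fqdn == b.fqdn) or bool(
        a.mac_address and a.mac_address == b.mac_address)

scan_result = Asset(fqdn="web01.example.com")
inventory   = Asset(asset_id="A-1001", fqdn="web01.example.com")
print(same_asset(scan_result, inventory))  # True
```

This kind of matching is what lets ARF-style reports from different tools be correlated and merged against a single asset inventory.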

How SCAP Is Used

SCAP is used to measure systems to find vulnerabilities, and to score those vulnerabilities in order to estimate their possible impact. SCAP checklists standardize and enable automation of the linkage between computer security configurations and the controls framework outlined in NIST Special Publication 800-53 (aka SP 800-53). Today's version of SCAP handles initial measurement and ongoing monitoring of security settings and their corresponding SP 800-53 controls. Planned future versions are expected to standardize and enable automation of implementing and changing security settings for SP 800-53 controls. This is how SCAP enables the implementation, assessment, and monitoring steps described in the NIST Risk Management Framework, which also makes SCAP integral to the NIST FISMA implementation project.

FISMA is the Federal Information Security Management Act of 2002. It requires all federal agencies to develop, document, and implement across-the-board programs to provide information security for the information and information systems that house agency assets and support agency operations, including any provided or managed by other agencies, contractors, or other third parties. FISMA emphasizes a "risk-based policy for cost-effective security" and includes annual security reviews/audits whose results are reported to the Office of Management and Budget (OMB). OMB uses this data to provide oversight, and to report to Congress annually on agency compliance with FISMA. Today, federal agencies spend in excess of $7B per year on securing the US Government's overall technology investment.

Where the NVD and RiskDB Come Into Play

Far from being a government-only project, the NVD accommodates considerable input from, and interaction with, research, academic, and private-sector interests as well. The entire NVD is available for instant download-on-demand, and NIST encourages widespread investigation into and use of its contents, tools, markup languages, and technologies. With its primary goal of enabling intelligent automation of security infrastructures using risk-based assessments and priorities, the NVD is as interesting for private and commercial software development and security monitoring as it is for the public sector. Furthermore, the NVD is designed to offer a useful data model for the international community as well. That's what makes the NVD so compelling, and well worth digging into and getting to know. That's also why Risk I/O uses the NVD as one of the major data feeds for RiskDB.

Automation has become necessary for managing vulnerabilities simply because—as stated clearly and cogently by Risk I/O’s Ed Bellis in the CSO story “The Vulnerability Arms Race”—the rate at which information security and compliance efforts introduce work into IT organizations exceeds their ability to complete it. In part, this is because of the sheer effort involved in patching vulnerabilities or implementing controls to match compliance requirements and objectives. In part, it’s also because all too often IT organizations with lots of other work to do must decide for themselves how to prioritize, schedule, and implement vulnerability and compliance responses, in the absence of sufficient guidance about what’s burning hot and needs immediate action, what must be done but can wait some time before responding, and what can simply be allowed to slide.

Fixing the problem of managing vulnerabilities and compliance requirements means not just capturing and cataloging vulnerabilities, but also assessing the level of associated risk and/or severity. That's what enables automation to become effective, and allows for more offloading of the effort involved in responding to the vast reams of information produced (at a rate of over 600 items per month, according to recent analyses). The real point of the NVD and RiskDB is that together they help create a holistic view of the data, and use a prioritized, risk-based approach to drive remediation and ensure the most serious, weighty issues get addressed. In addition, using standards such as SCAP helps provide a more objective, balanced (and data-driven) view of vulnerabilities, removing the knee-jerk element from reactions and resolutions by applying a rational scoring system to the overall threat and vulnerability posture within an organization. This helps keep priorities straight, and makes sure that the most critical issues get the attention and responses they need.
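The triage described above, deciding what's burning hot, what can be scheduled, and what can slide, boils down to ranking by a composite risk score. Here is a toy sketch of that idea; this is not Risk I/O's actual prioritization algorithm, and the scores, criticality weights, and tier thresholds are invented for illustration:

```python
# Illustrative sketch of risk-based triage: NOT Risk I/O's actual
# algorithm, just a toy composite of severity and asset value.

vulns = [
    {"id": "CVE-2013-0001", "cvss": 9.3, "asset_criticality": 0.9},
    {"id": "CVE-2013-0002", "cvss": 4.3, "asset_criticality": 0.2},
    {"id": "CVE-2013-0003", "cvss": 6.8, "asset_criticality": 0.7},
]

def triage(vulns, hot=6.0, warm=2.0):
    """Rank by a composite risk score and bucket into response tiers."""
    for v in vulns:
        v["risk"] = v["cvss"] * v["asset_criticality"]
    ranked = sorted(vulns, key=lambda v: v["risk"], reverse=True)
    for v in ranked:
        if v["risk"] >= hot:
            v["tier"] = "act now"
        elif v["risk"] >= warm:
            v["tier"] = "schedule"
        else:
            v["tier"] = "defer"
    return ranked

for v in triage(vulns):
    print(v["id"], round(v["risk"], 2), v["tier"])
```

Even a crude composite like this changes the conversation from "patch everything" to "patch these first," which is the practical payoff of a risk-based approach.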

About the Author: Ed Tittel is a full-time freelance writer and researcher who covers information security, markup languages, and Windows operating systems. A regular contributor to numerous TechTarget websites and Tom's IT Pro, Ed also blogs on Windows Enterprise Desktop and IT Career topics. His latest book is Unified Threat Management For Dummies. Learn more about or contact Ed at his website.
