As a security professional, you might want a list of vulnerabilities for each asset under your purview so you can assess risk and begin the remediation process. With the Kenna Security APIs, you can extract each asset and its vulnerability data. This blog demonstrates different strategies depending on how many assets you have.

As you might recall, the Inactivate an Asset blog demonstrated how to list up to 500 assets. If you want to list more than 500, you need to fetch pages. With the list assets API, you can fetch up to 20 pages of 500 assets each, bringing the total to 10,000 assets. If you have more than 10,000 assets, you will need to use the export data APIs. Let's look at how to fetch pages first.

Fetching Asset Pages

To use pagination with the list assets API, the page number is specified as a query parameter. The following code is from page_assets.py.

def get_asset_page(page_num):
    page_param = "?page=" + str(page_num)
    url = base_url + page_param

    # Obtain the specified page.
    try:
        response = requests.get(url, headers=headers)
    except Exception as exp:
        print("List Asset Error: " + exp.__str__())
        sys.exit(1)

The code snippet above adds the page query parameter to a base URL, in this case: https://api.kennasecurity.com/assets.
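For completeness, the snippet assumes that base_url and headers have already been defined earlier in page_assets.py. Here is a minimal setup sketch, assuming the API key is read from a KENNA_API_KEY environment variable; the actual script may obtain it differently.

import os
import sys
import requests

# Base URL for the list assets API, as noted above.
base_url = "https://api.kennasecurity.com/assets"

# The X-Risk-Token header carries the Kenna API key.
headers = {'Accept': 'application/json',
           'X-Risk-Token': os.environ['KENNA_API_KEY']}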

The rest of the code loops through the pages, up to the maximum number of pages obtained from the meta attribute.

# Obtain the first page.
resp_json = get_asset_page(page_num)

# Determine the number of pages and print an appropriate error message.
meta = resp_json['meta']
num_pages = meta['pages']
if num_pages > max_allowed_pages:
    print(f"Number of pages = {num_pages} which exceeds the maximum allowed of {max_allowed_pages}")
    print("Will only output the first 10,000 assets.")
    num_pages = max_allowed_pages

This code also checks whether the number of asset pages exceeds 20, the maximum number of pages, and prints a message if there are more than 20 pages (10,000 assets). Pages beyond 20 are not accessible, so you will have to use the export APIs to obtain all of the assets.
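The loop over the remaining pages is not shown above. A minimal sketch of what it could look like, assuming the list assets response holds the page's records under an 'assets' key (the actual page_assets.py may handle each asset differently):

# Loop over the remaining pages; page 1 was already fetched above.
for page_num in range(2, num_pages + 1):
    resp_json = get_asset_page(page_num)
    # 'assets' is assumed to be the key holding the page's asset records.
    for asset in resp_json['assets']:
        print(asset['id'])   # placeholder: handle each asset as needed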

Exporting Asset Data

To export all the assets, the request data exports API is used. Because an export can take anywhere from 30 seconds to 10 minutes, the check data export status API tells you when the data can be downloaded via the retrieve data export API.

The code located in export_assets.py requests a data export for assets, checks the status and downloads the gzip file containing all the assets. Let’s look at the code. 

    filter_params = {
        'status' : ['active'],
        #'records_updated_since' : 'now-01d',
        'export_settings': {
            'format': 'jsonl',
            'model': 'asset'
        }
    }

    try:
        response = requests.post(url, headers=headers, data=json.dumps(filter_params))
    except Exception as exp:
        print("Assets Data Exports Error: " + exp.__str__())
        exit(1)

The filter_params tell the request data exports API what to request. The model field is set to "asset" because we want assets, while the format field is set to "jsonl", which indicates JSON Lines. I used JSONL so that I didn't have to read the whole JSON output into memory at once. Only the "active" assets are being requested. The records_updated_since field is commented out for now; you can uncomment it when you want to perform incremental exports. A search ID is returned, which is used to check the export status and retrieve a gzip file.
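As a hedged sketch of that last step, assuming the response body carries the ID under a 'search_id' field:

# Pull the search ID out of the data export request response.
resp_json = response.json()
search_id = str(resp_json['search_id'])   # 'search_id' field name is an assumption
print("Export requested, search_id = " + search_id)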

Here is the code that checks the data export status using the check data export status API:

def get_export_status(api_key, base_url, search_id):
    check_status_url = base_url + "/data_exports/status?search_id=" + search_id
    headers = {'Accept': 'application/json',
               'Content-Type': 'application/json; charset=utf-8',
               'X-Risk-Token': api_key}

    try:
        response = requests.get(check_status_url, headers=headers)
    except Exception as exp:
        print("Get Export Status Error: " + exp.__str__())
        exit(1)

    resp_json = response.json()
    return resp_json['message'] == "Export ready for download"

The key points are that search_id is a query parameter, and the return message we’re looking for is “Export ready for download.”

The code estimates how long the export will take and checks the status every five or ten seconds, depending on the estimated time.
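A simplified version of that polling loop, using the get_export_status() function above; the fixed five-second interval and ten-minute cap are assumptions, since export_assets.py adjusts its interval based on the estimated time.

import time

# Poll until the export is ready, or give up after max_wait seconds.
def wait_for_export(api_key, base_url, search_id, interval=5, max_wait=600):
    waited = 0
    while not get_export_status(api_key, base_url, search_id):
        if waited >= max_wait:
            raise TimeoutError("Export not ready after " + str(max_wait) + "s")
        time.sleep(interval)
        waited += interval
    return waited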

To retrieve the asset data, use the retrieve data export API:

# Obtain the exported asset data.
def retrieve_asset_data(api_key, base_url, id, asset_file_name):
    get_data_url = base_url + "/data_exports/?search_id=" + id
    headers = {'Accept': 'application/gzip; charset=utf-8',
               'X-Risk-Token': api_key}

    gz_asset_file_name = asset_file_name + ".gz"
    try:
        response = requests.get(get_data_url, headers=headers, stream=True)

        with open(gz_asset_file_name, 'wb') as file_gz:
            for block in response.iter_content(8192):
                file_gz.write(block)

Again, search_id is a query parameter. One way to download the gzip file is to stream it, as shown here: stream=True is set in the requests.get() call, and Response.iter_content() writes the response in blocks to an open file.

After the gzip file is downloaded, it is unzipped and the lines are counted. The file names are assets_<search_id>.gz and assets_<search_id>. Currently, the gzip file is left in place for you to dispose of as you wish.
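A minimal sketch of the unzip-and-count step, reusing the gz_asset_file_name and asset_file_name names from the snippet above; export_assets.py may implement this differently.

import gzip
import shutil

# Decompress assets_<search_id>.gz into assets_<search_id>.
with gzip.open(gz_asset_file_name, 'rb') as f_in, \
     open(asset_file_name, 'wb') as f_out:
    shutil.copyfileobj(f_in, f_out)

# Each JSONL line is one asset, so counting lines counts assets.
with open(asset_file_name) as f:
    num_assets = sum(1 for _ in f)
print(f"{num_assets} assets written to {asset_file_name}")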

Obtaining Vulnerabilities per Asset

Now that we know how to obtain assets, let's look at how to obtain the vulnerability information for each asset. Each program style, page and export, has an accompanying program that obtains the vulnerabilities for each asset: page_asset_vulns.py and export_asset_vulns.py. The additional code is very similar in each style, so I will only look at export_asset_vulns.py.

After the assets file is unzipped, the code looks at each asset. Here's where the usefulness of JSONL comes in. The code reads each line of JSON, parses it, extracts the vulnerability URL, and calls get_vuln_info() with the vulnerability URL.

with open(asset_file_name) as asset_file:
    for json_line in asset_file:
        asset = json.loads(json_line)
        vuln_url = asset['urls']['vulnerabilities']

        vuln_cntr += get_vuln_info(api_key, vuln_url, str(asset['id']), avfp, avlfp)

The vuln_url is the show asset vulnerability API.  In get_vuln_info(), the API URL is invoked with some extra logic for “too many requests” and timeouts.

def get_vuln_info(api_key, vuln_url, asset_id, avfp, avlfp):
    headers = {'Accept': 'application/json',
               'Content-Type': 'application/json; charset=utf-8',
               'X-Risk-Token': api_key}

    vuln_url = "https://" + vuln_url

    retry_cnt = 0
    success = False
    while not success:
        try:
            response = requests.get(vuln_url, headers=headers)
            http_status_code = response.status_code

            # If too many requests, wait a second.  If it happens again, error out.
            if http_status_code == 429:
                time.sleep(1)
                response = requests.get(vuln_url, headers=headers)
                response.raise_for_status()

        except requests.Timeout as tme:
            retry_cnt += 1
            print(f"\nGet vuln info Timeout error: {tme.__str__()}.  Sleeping 60s ({retry_cnt})")
            if retry_cnt > 3:
                return 0    # return 0 so the caller's vulnerability counter is unaffected
            time.sleep(60)

        except Exception as exp:
            print(f"\nGet vuln info error: {exp.__str__()}")
            return 0        # return 0 so the caller's vulnerability counter is unaffected

        success = True

This function logs the asset_id and the number of vulnerabilities for the asset in a file named asset_vuln_log_<search_id>.

    vulns = resp_json['vulnerabilities']
    num_vulns = len(vulns)
    print(f"Vulnerabilities for asset ID {asset_id} ({num_vulns})", file=avlfp)

Vulnerability information for each asset is written to asset_vuln_info_<search_id>.

    for vuln in vulns:
        print(vuln, file=avfp)

Conclusion

These programs produce a lot of data, so you’ll have to determine what is important, like asset description, CVE ID, or CVE description.  The asset and vulnerability data could be sliced differently; for example, the appropriate asset and vulnerability information could be written into a database or separate files.
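For example, here is a hedged sketch of pulling just a few fields from each vulnerability inside get_vuln_info(), where vulns and asset_id are available, and writing them to a CSV file. The field names 'cve_id' and 'cve_description' are assumptions about the vulnerability records, and vulns.csv is an arbitrary output name.

import csv

# Write selected vulnerability fields to a CSV file for easier slicing.
with open("vulns.csv", "w", newline="") as csv_file:
    writer = csv.writer(csv_file)
    writer.writerow(["asset_id", "cve_id", "cve_description"])
    for vuln in vulns:
        writer.writerow([asset_id,
                         vuln.get("cve_id", ""),            # assumed field name
                         vuln.get("cve_description", "")])  # assumed field name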

If you’re interested in playing with these samples, they’re located in a Kenna Security blog_samples repo in the vulns_per_assets directory.

Rick Ehrhart  - May 10, 2021

API Evangelist

This blog was originally written for Kenna Security, which has been acquired by Cisco Systems.
Learn more about Cisco Vulnerability Management.

 
