288 Commits

SHA1 Message Date
a9289d8e47 Merge pull request 'Translation of the bug report template' (#1) from mes-modifs into master
Reviewed-on: #1
2025-07-07 12:56:24 +00:00
67ec8af3ae Translation of the bug report template 2025-07-07 14:41:21 +02:00
691f45a1dc Merge pull request #232 from HASecuritySolutions/dependabot/pip/lxml-4.6.5
Bump lxml from 4.1.1 to 4.6.5
2022-06-11 20:39:14 -05:00
80197454a3 Update README.md 2022-02-03 10:33:12 -06:00
841cd09f2d Bump lxml from 4.1.1 to 4.6.5
Bumps [lxml](https://github.com/lxml/lxml) from 4.1.1 to 4.6.5.
- [Release notes](https://github.com/lxml/lxml/releases)
- [Changelog](https://github.com/lxml/lxml/blob/master/CHANGES.txt)
- [Commits](https://github.com/lxml/lxml/compare/lxml-4.1.1...lxml-4.6.5)

---
updated-dependencies:
- dependency-name: lxml
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-12-13 19:44:12 +00:00
e7183864d0 Merge pull request #216 from Yashvendra/patch-1
Updated 3000_openvas.conf
2020-07-20 10:45:00 +02:00
12ac3dbf62 Merge pull request #217 from andrew-bailey/patch-1
Update README.md
2020-07-20 10:43:09 +02:00
e41ec93058 Update README.md
Fix license badge from MIT to Apache 2.0, which is the current license applied on GitHub
2020-07-20 11:57:36 +09:30
8a86e3142a Update 3000_openvas.conf
Fixed Description
2020-07-19 14:41:21 +05:30
9d003d12b4 improved error logging and exceptions 2020-04-08 12:01:47 +02:00
63c638751b Merge pull request #207 from spasaintk/patch-1
Update vulnwhisp.py
2020-02-29 20:05:51 +01:00
a3e85b7207 Update vulnwhisp.py
The code triggers a crash:
ERROR:root:main:local variable 'vw' referenced before assignment
ERROR: local variable 'vw' referenced before assignment

The proposed fix resolves the issue. After the fix:
INFO:vulnWhispererOpenVAS:process_openvas_scans:Processing complete
2020-02-28 00:33:38 +01:00
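A minimal sketch of the failure mode logged in a3e85b7207 above, and the usual guard of binding the name before the try block. The class and config names are hypothetical stand-ins, not the actual vulnwhisp.py code:

```python
import logging

class VulnWhisperer(object):
    """Hypothetical stand-in; assume the constructor can raise on a bad config."""
    def __init__(self, config):
        raise ValueError("cannot read config: %s" % config)

def main_buggy():
    try:
        vw = VulnWhisperer(config="missing.ini")  # raises before 'vw' is bound
    except Exception:
        # Referencing 'vw' here raises:
        # "local variable 'vw' referenced before assignment"
        logging.error(vw)

def main_fixed():
    vw = None  # bind the name up front so the except path is safe
    try:
        vw = VulnWhisperer(config="missing.ini")
    except Exception as e:
        logging.error("failed before processing could start: %s", e)
    return vw
```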
4974be02b4 fix of fix... 2020-02-21 16:17:00 +01:00
7fe2f9a5c1 casting port from jira local download to an int 2020-02-21 16:09:25 +01:00
f4634d03bd Merge pull request #206 from HASecuritySolutions/jira_ticket_download_attachment_data
Jira ticket download attachment data
2020-02-21 15:58:05 +01:00
e1ca9fadcd fixed issue where, when actioning all actions, a single failure exited the program 2020-02-21 15:50:14 +01:00
adb7700300 added an extra field to the Jira local download with the affected assets in JSON format, for further processing in Splunk/ELK 2020-02-21 11:00:07 +01:00
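A hedged sketch of what adb7700300 describes: serializing the affected assets as a JSON string on the downloaded ticket record so Splunk/ELK can parse it downstream. The field and function names are assumptions:

```python
import json

def enrich_ticket_record(record, affected_assets):
    # Hypothetical field name; the point is shipping assets as parseable JSON.
    record["affected_assets_json"] = json.dumps(affected_assets)
    return record

print(enrich_ticket_record({"key": "VULN-1"}, ["10.0.0.5", "10.0.0.9"]))
```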
ced0d4c2fc Hotfix #190 2020-02-04 16:47:37 +01:00
f483c76638 latest qualysapi version that supports python 2 is 6.0.0 2020-01-13 11:34:21 +01:00
f65116aec8 fix requirements issue, new version of qualysapi to be reviewed 2020-01-13 11:03:04 +01:00
bdcb6de4b2 Target CentOS 7 (issue #199) (#200) 2019-12-03 16:21:48 +01:00
af8e27d075 Bump requests from 2.18.3 to 2.20.0 (#196)
Bumps [requests](https://github.com/requests/requests) from 2.18.3 to 2.20.0.
- [Release notes](https://github.com/requests/requests/releases)
- [Changelog](https://github.com/psf/requests/blob/master/HISTORY.md)
- [Commits](https://github.com/requests/requests/compare/v2.18.3...v2.20.0)

Signed-off-by: dependabot[bot] <support@github.com>
2019-12-03 16:20:36 +01:00
accf926ff7 fixed ELK7 logstash compatibility, #187 2019-09-16 15:35:34 +02:00
acf387bd0e added ELK versions supported (6 and 7) 2019-08-24 15:06:33 +02:00
ab7a91e020 Update frameworks_example.ini (#186) 2019-08-10 05:32:19 +02:00
a1a0d6b757 Merge pull request #182 from HASecuritySolutions/save_assets_no_DNS_record
[JIRA] added local file save with assets not resolving hostname
2019-06-18 12:05:49 +02:00
2fb089805c [JIRA] added local file save with assets not resolving hostname 2019-06-18 10:53:55 +02:00
6cf2a94431 Support tenable API keys (#176)
* support tenable API keys

* more flexible config support

* add nessus API key support

* fix whitespace
2019-05-02 10:26:51 +02:00
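Both Tenable.io and Nessus accept API keys through the X-ApiKeys header as an alternative to username/password session login, which is presumably what #176 wires in. A minimal sketch of the mechanism, with illustrative keys and endpoint rather than the PR's exact code:

```python
import requests

def build_session(access_key, secret_key):
    # X-ApiKeys replaces session-based authentication.
    session = requests.Session()
    session.headers["X-ApiKeys"] = "accessKey={}; secretKey={}".format(
        access_key, secret_key)
    return session

session = build_session("MY_ACCESS_KEY", "MY_SECRET_KEY")
print(session.get("https://cloud.tenable.com/scans").status_code)  # scan listing
```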
162636e60f Fix newlines in MAC Address field output (#178)
* fix newlines in all MAC Address field

* remove newline

* only cleanse if col exists
2019-05-02 08:58:18 +02:00
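A minimal pandas sketch of the pattern in 162636e60f: strip embedded newlines from the field, guarded so it only runs when the column exists. The column name is an assumption:

```python
import pandas as pd

def cleanse_newlines(df, col="mac_address"):  # column name is hypothetical
    if col in df.columns:  # only cleanse if the column exists
        df[col] = df[col].astype(str).str.replace("\n", " ").str.replace("\r", " ")
    return df

df = pd.DataFrame({"mac_address": ["aa:bb:cc\ndd:ee:ff"], "ip": ["10.0.0.5"]})
print(cleanse_newlines(df)["mac_address"].iloc[0])  # "aa:bb:cc dd:ee:ff"
```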
60c56b778e Update README.md
Fixed link references
2019-04-17 10:52:13 +02:00
093f963adf Merge pull request #170 from HASecuritySolutions/beta-1.8
VulnWhisperer Release 1.8
2019-04-17 10:36:35 +02:00
3464cfed68 Merge pull request #174 from pemontto/docker-fixes
Docker fixes
2019-04-17 10:29:32 +02:00
c78f22ed88 Merge pull request #12 from pemontto/travis-docker-latest 2019-04-17 15:09:37 +10:00
c3167bd76b fix test output 2019-04-17 14:52:03 +10:00
30e3efe2cb set default path and fix restore 2019-04-17 14:52:03 +10:00
549791470a Set limit to bail out on 2019-04-17 14:52:03 +10:00
e9aba0796f increase timeout for ES sync 2019-04-17 14:52:03 +10:00
2c5fbfc3ef restore deleted files 2019-04-17 14:52:03 +10:00
60b9e2b3d9 Test updates 2019-04-17 14:52:03 +10:00
bb60fae67e Move vulnwhisperer tests to a script 2019-04-17 14:52:03 +10:00
e30dbe244b standardise /tmp to /opt 2019-04-17 14:52:03 +10:00
c3fb65e67a Update test 2019-04-17 14:52:03 +10:00
a7ae44f981 Add docker test script 2019-04-17 14:50:06 +10:00
e0de8c6818 Expose Logstash API port 2019-04-17 14:50:06 +10:00
47a96a2984 sudo chown 2019-04-17 14:50:06 +10:00
5828d05627 fix 2019-04-17 14:50:06 +10:00
bfcb10ea0e Fix permissions for ES 2019-04-17 14:50:06 +10:00
0102ccb2f7 Fix build command 2019-04-17 14:50:06 +10:00
3860438903 Test travis docker 2019-04-17 14:50:06 +10:00
e17ff42adb update kibana objects to match template 2019-04-17 14:41:25 +10:00
f7d47ae753 update index template 2019-04-17 14:41:14 +10:00
d67122a099 Retry template installation a few times 2019-04-17 14:40:07 +10:00
3433231bb4 Add initial ELK6 index template 2019-04-16 11:30:27 +10:00
d9ab33d6c9 Set logstash and vw to use the same volume 2019-04-16 11:18:27 +10:00
4d153ec7f2 Add index template to ES for docker 2019-04-16 09:57:20 +10:00
1d92f71f9c fix issue mentioned in #163, although not applied to ELK6 2019-04-15 17:06:09 +02:00
3ecb26886a added proxy config to instructions 2019-04-15 12:43:47 +02:00
4c9fa9d241 Merge pull request #172 from pemontto/feature-fixes
Feature fixes
2019-04-15 11:47:02 +02:00
bf5070f361 fix vulnwhisperer image 2019-04-12 17:55:59 +10:00
0227636c4c unify case among config 2019-04-12 17:54:17 +10:00
b35da1c79e reduce docker layers and support test data 2019-04-12 17:51:15 +10:00
668efe2b7a Add extra test case 2019-04-12 11:44:04 +10:00
8433055f17 Fix more unicode issues 2019-04-12 11:40:01 +10:00
90908bd0c6 Remove deps from docker image 2019-04-12 11:39:49 +10:00
f23dd0bc83 Merge pull request #171 from pemontto/feature-separate-qualys
Feature separate qualys
2019-04-11 22:06:28 +02:00
8dc3b2f8ac Add qualys paths to elk5 logstash config 2019-04-11 10:41:13 +10:00
d2a7513ed1 Fix nessus logstash field cvss3_vector 2019-04-11 10:36:41 +10:00
4ed6827ee6 Clean config and separate qualys data 2019-04-11 08:27:28 +10:00
b25c769a01 readme details 2019-04-10 15:46:57 +02:00
4405284015 Merge branch 'beta-1.8' of https://github.com/HASecuritySolutions/VulnWhisperer into beta-1.8 2019-04-10 15:30:18 +02:00
7960bd3c59 updating documentation 2019-04-10 15:29:29 +02:00
4800d42eef Merge pull request #169 from HASecuritySolutions/submodule
updating submodule
2019-04-10 12:07:41 +02:00
8b8938e7b3 updating submodule 2019-04-10 12:04:36 +02:00
db669c531a changing submodule reference 2019-04-10 11:47:58 +02:00
74db06b17a Merge pull request #168 from HASecuritySolutions/qualys_was_fix
Qualys was fix
2019-04-10 11:35:37 +02:00
45e23985d3 added comment 2019-04-10 11:25:28 +02:00
cde2fe2dd8 final commit for qualys web 2019-04-10 11:19:44 +02:00
001462a848 changed version to 1.8 2019-04-08 16:37:42 +02:00
36a8528abc Jira Workflow documentation 2019-04-08 12:27:26 +02:00
913bbfb2de Merge pull request #167 from pemontto/feature-nessus-stream
Feature nessus stream
2019-04-08 11:45:14 +02:00
302037893d Add test path to env vars 2019-04-08 19:41:48 +10:00
c8d906c05f Fix tenable downloads 2019-04-08 19:30:48 +10:00
e1f2c00b9e fix tests 2019-04-08 19:17:47 +10:00
3d2c939cfb Update .travis.yml 2019-04-08 19:13:45 +10:00
7b1ebb51fa Updates tests 2019-04-08 19:02:02 +10:00
8086e7cf9f Fix tests directory 2019-04-08 18:46:30 +10:00
1ef7289b8d redundant replace, formatting 2019-04-08 18:44:30 +10:00
a12e9f70a1 Remove redundant param 2019-04-08 18:38:03 +10:00
873066a419 reorder imports 2019-04-08 17:43:50 +10:00
973c69dffb Updates tests 2019-04-08 17:43:15 +10:00
12e6c6d0d5 Merge pull request #166 from pemontto/feature-fix-import
Fix missing sys import
2019-04-08 09:07:26 +02:00
ec5d6cd388 Iterate through nessus download data 2019-04-08 12:25:50 +10:00
33f2a5a3d1 Use a session and don't overwrite imports 2019-04-08 12:24:22 +10:00
5edde8760a Fix missing sys import 2019-04-06 11:02:42 +11:00
7370f5b608 Merge branch 'beta-1.8' of https://github.com/HASecuritySolutions/VulnWhisperer into beta-1.8 2019-04-05 23:37:41 +02:00
0a877ce267 fix nessus download 'imported' scans 2019-04-05 23:37:04 +02:00
1ef67d48be Feature error codes (#165)
* Use error codes for failed scans

* Fix indentations

* Fix more indentation

* Continue after failed download

* Add tests for failed scans

* Add more tests

* move definition

* Update nessus.py

This function was only used by `print_scans`, which was itself unused and had been deleted in this same PR.
2019-04-05 11:36:13 +02:00
27412d31b4 Merge branch 'beta-1.8' of https://github.com/HASecuritySolutions/VulnWhisperer into beta-1.8 2019-04-05 11:04:29 +02:00
71352aee57 Add external API mocking and travis tests (#164)
* Fix closing logging handlers

* Fix *some* unicode issues for nessus and qualys

* Prevent multiple requests to nessus scans endpoint

* More unicode fixes

* Remove unnecessary call

* Fix whitespace

* Add mock module and argument

* Add test config and data

* Fix whitespace again

* Disable qualys_web until data is available

* Use logging module

* Delete report_tracker.db

* Cleanup mock calls

* Add httpretty to requirements

* Refactor into a class

* Updates travis tests

* Fix exit codes

* Remove print statements

* Remove test

* Add test directory as submodule
2019-04-05 10:57:39 +02:00
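The mocking in #164 uses httpretty to intercept HTTP calls so the suite runs without live scanner endpoints. A minimal sketch of that pattern; the URL and payload are illustrative, not the project's actual test fixtures:

```python
import httpretty
import requests

@httpretty.activate
def test_scan_listing():
    # Serve a canned response instead of hitting a real Nessus server.
    httpretty.register_uri(
        httpretty.GET,
        "https://nessus.example:8834/scans",
        body='{"scans": []}',
        content_type="application/json",
    )
    assert requests.get("https://nessus.example:8834/scans").json() == {"scans": []}

test_scan_listing()
```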
eae64a745d cleanup of unused code and fixes, still breaks 2019-04-04 11:24:01 +02:00
03f7a4cedb fixed line 2019-04-04 11:05:39 +02:00
a30a22ab98 fix wrong parenthesis on qualys was 2019-04-03 15:15:31 +02:00
f33644b814 fix reported tracking for jira 2019-04-02 11:58:44 +02:00
fa0b3c867b added tracking of scans processed by jira; will now only process new scans (backwards compatible) 2019-04-01 15:55:02 +02:00
e32c9bf55d Fix *some* unicode issues for nessus and qualys (#160)
* Fix *some* unicode issues for nessus and qualys

* More unicode fixes
2019-04-01 10:06:16 +02:00
9619a47d7a Fix Tenable and Nessus scan listing (#162)
* Prevent multiple requests to nessus scans endpoint

* Remove unnecessary call
2019-04-01 10:04:12 +02:00
383e7f5478 Fix closing logging handlers (#159) 2019-04-01 09:07:29 +02:00
3601ace5e1 improved file logging format 2019-03-22 10:42:30 +01:00
97e4f073bf added logging to file 2019-03-22 10:38:55 +01:00
a4b1b9cdd4 fixed issue where the asset after a removed one was skipped, caused by removing items from a Python list while iterating over it 2019-03-21 15:52:18 +01:00
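A self-contained illustration of that pitfall (the root cause inferred from the commit message): removing items during iteration shifts the remaining elements, so the one right after each removal is never visited.

```python
assets = ["a", "b", "c", "d"]
for asset in assets:
    if asset == "b":
        assets.remove(asset)  # buggy: "c" slides into b's slot and is skipped

# Safe variant: iterate over a copy while mutating the original.
assets = ["a", "b", "c", "d"]
for asset in list(assets):
    if asset == "b":
        assets.remove(asset)
print(assets)  # ['a', 'c', 'd']
```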
843aac6a83 fixed issue where new vulns of already risk-accepted issues were no longer reported; a new ticket is now raised, excluding all assets previously marked risk-accepted in another ticket 2019-03-20 16:37:50 +01:00
47df1ee538 typo 2019-03-20 10:55:54 +01:00
a4420b7df8 reverse unintended change on frameworks_example.ini 2019-03-20 09:11:18 +01:00
9d52596be9 fix xml encoding issue #156 2019-03-20 08:49:36 +01:00
5cdb2552f0 Merge branch 'beta-1.8' of https://github.com/HASecuritySolutions/VulnWhisperer into beta-1.8 2019-03-20 08:35:32 +01:00
70e1d7703f fix missing section specification on qualys was connector #156 2019-03-20 08:35:03 +01:00
2d3a140042 fix bug 2019-03-19 15:19:27 +01:00
936c4a3e1b added automatic jira server_decommission label removal after x time 2019-03-19 12:58:38 +01:00
e7bd4d2a55 deleting dependency and pulling qualysapi official library, vulnwhisperer compatible 2019-03-15 12:03:02 +01:00
401dfec2c8 fix #143, added a temporary container to upload through kibana API 2019-03-04 15:10:51 +01:00
86e792f5aa workaround regarding ignoring ticket updates after risk accepted 2019-03-01 15:18:49 +01:00
a288f416f7 added label *false positive* for reporting on jira 2019-02-27 18:06:16 +01:00
623c881928 fix jira issue index when comparing created tickets 2019-02-27 11:27:44 +01:00
4e94bef245 fix bug not detecting existent label due to string format 2019-02-26 15:26:14 +01:00
a3da41e487 added to readme openvas supported versions 2019-02-26 09:59:50 +01:00
46ddee391b confirm openvas 9 works 2019-02-25 22:09:29 +01:00
b36e31566e fix #142 2019-02-25 22:02:20 +01:00
05420ddfd0 readding docker-compose credentials template 2019-02-25 12:32:32 +01:00
bdbe31d425 resources reorg 2 2019-02-25 12:29:00 +01:00
f170dcb05f reorg resources files 2019-02-25 12:27:30 +01:00
5dd6503d38 Merge branch 'beta-1.8' of https://github.com/HASecuritySolutions/VulnWhisperer into beta-1.8 2019-02-25 12:09:46 +01:00
2c7965d2d9 fix #151 2019-02-25 12:08:04 +01:00
521184d079 Update bug_report.md
added debug trail request
2019-02-21 22:20:19 +01:00
c2d80c7fce made host resolution optional from the config file with dns_resolv var 2019-02-15 16:24:52 +01:00
587546a726 fix typo 2019-02-14 14:16:31 +01:00
177c2548ba allow jira sync module to run after the rest 2019-02-12 18:18:24 +01:00
bc3367e310 exception of empty scans 2019-02-12 18:01:46 +01:00
8c53987270 tracking of processing was in debug instead of info logging 2019-02-12 16:56:00 +01:00
ccf2e4b1d1 fix #147 2019-02-12 16:51:26 +01:00
b0caccdc89 fixed issues plus jira comment formatting 2019-02-12 16:25:28 +01:00
4ea384c9cc fix issue #110 (one line) 2019-02-08 10:56:32 +01:00
699fc75446 Update README.md
Nessus v8 also supported
2019-02-08 09:10:04 +01:00
53dc65e492 fix qualysapi library dependencies 2019-02-08 09:08:21 +01:00
0ea144bf87 Qualysapi fix (#146)
* moved qualysapi to branch master-update

* fixing bug of qualys scan without vulnerabilities: vulnWhispererQualysVuln[1361] ERROR Could not process scan/1549159480.84792: 'severity'

* change to fixed qualysapi branch

* fix bug and changed to qualysapi fork master branch

* updated submodule to master branch
2019-02-06 17:00:43 +01:00
14b71a25b8 Created the version 6 for ELK. Fixed #135 (#145)
* Created the version 6 for ELK. Fixed #135

* Needed to make sure all the data volumes were set up properly.  Some paths had VulnWhisperer, vulnwhisperer, vulnwhisp/data.

* Delete 9998_output_broker_rabbitmq.conf

* Delete 9998_input_broker_rabbitmq.conf

* Delete 0001_input_beats.conf

* add to gitignore creds files + correct elk5 docker-compose

* elk changed to 6.6.0 from 6.5.2, output path from logstash to elasticsearch host
2019-02-05 17:30:51 +01:00
3cd13229a3 Update issue templates (#144)
* Update issue templates

Add an issue template for bug reports

* Update bug_report.md

Changing the "Desktop" label to "System in which VulnWhisperer runs"
2019-02-01 11:01:49 +01:00
177d384353 Fixed #134 (#139) 2019-01-15 23:57:09 -05:00
b1404cf0be change ./dep/qualysapi origin to https due to Github complaints 2018-12-14 15:47:11 +01:00
48b17c5cbe Add a Dockerfile (#132)
* updating my base to match original vulnwhisperer (#1)

* Create docker-compose.yml

* Update 9000_output_nessus.conf

* Added an argument for username and password, which takes precedence over nessus.  Fixed #5

* Update README.md

* Silence NoneType object

* Put in a check to make sure that the config file exists.  FIXES austin-taylor/VulnWhisperer#4

* remove leading and trailing spaces around all input switches. Fixes austin-taylor/VulnWhisperer#6

* Update README.md

* Allow for any directories to be monitored

* Addition of Qualys WebApp Processing

* Addition of Qualys WebApp Processing

* Fixed multiple bugs, cleaned up formatting, produces solid csv output for Qualys Web App scans

* Adding custom version of QualysAPI

* Field Cleanup

* Addition of submodules, update to connectors, base class start

* Addition of submodules, update to connectors, base class start

* Addition of submodules, update to connectors, base class start

* Refactored classes to be more modular, update to ini file and submodules

* Refactored classes to be more modular, update to ini file and submodules

* Removing commented code

* Addition of category class and special class for Qualys Scanning Reports. Also added additional enrichments to reports

* Column update for scans and N/A cleanup

* Fix for str casting

* Update README.md

* Update to README

* Update to README

* Update to README

* Update to requirements.txt

* Support for json output

* Database tracking for processed Qualys scans

* Database tracking for processed Qualys scans

* Bug fix for counter in Nessus and format fix for qualys

* Check for new records

* Update to count tracker

* Update to write path logic

* Better database handling

* Addition of VulnWhisperer-Qualys logstash files

* Addition of VulnWhisperer-Qualys logstash files

* Update to logstash template

* Updated dashboard

* Update to README

* Update to README

* Logo update

* Readme Update

* Readme Update

* Readme Update

* Adding name of scan and scan reference

* Plugin name converted to scan name

* Update to README

* Documentation update

* README Update

* README Update

* Update README.md

* Add free automated flake8 testing of pull requests

[Flake8](http://flake8.pycqa.org) tests can help to find Python syntax errors, undefined names, and other code quality issues.  [Travis CI](https://travis-ci.org) is a framework for running flake8 and other tests, and is free for open source projects like this one.  The owner of this repo would need to go to https://travis-ci.org/profile and flip the repository switch __on__ to enable free automated flake8 testing of each pull request.

* Testing build with no submodules

* flake8 --ignore=deps/qualysapi

* flake8 . --exclude=/deps/qualysapi

* Remove leading slash

* Add build status to README

* Travis Config update

* README Update

* README Update

* Create CNAME

* Set theme jekyll-theme-leap-day

* README Update

* Getting started steps

* Getting started steps

* Remind user to select section if using a config

* Update to readme

* Update to readme

* Update to readme

* Update to readme

* Update to README

* Update to README

* Update to example logstash config

* Update to qualys logstash conf to reflect example config

* Update to README

* Update to README

* Readme update

* Rename logstash-nessus-template.json to logstash-vulnwhisperer-template.json

* Update 1000_nessus_process_file.conf

* Delete LICENSE

* Create LICENSE

* Update to make nessus visualizations consistent with qualys

* Update to README

* Update to README

* Badge addition

* Badge addition

* Addition of OpenVAS Connector

* Addition of OpenVAS

* Update 9000_output_nessus.conf

* Delete 9000_output_nessus.conf

* Update 1000_nessus_process_file.conf

* Automatically create filepath and directory if it does not exist

* Addition of OpenVas -- ready for alpha

* Addition of OpenVas -- ready for alpha

* Allow template defined config form IDs

* Completion of OpenVAS module

* Completion of OpenVAS module

* Remove template format

* Addition of openvas logstash config

* Update setup.py

* Update README.md

* ELK Sample Install (#37)

Updated Readme.md to include a Sample ELK Install guide addressing multiple issues around ELK Cluster/Node Configuration.

* Update vulnwhisp.py

* VulnFramework Links (#39)

Quick update regarding issue #33

* Updating config to be consistent with conf files

*  Preserving newlines & carriage returns  (#48)

* Preserve newlines & carriage returns

* Convert '\n' & '\r' to newlines & carriage returns

* Removed no longer supported InsecureRequestWarning workaround. (#55)

* Removed no longer supported InsecureRequestWarning workaround.

* Add dependencies to README.md

* Update vulnwhisp.py

* Fix to apt-get install

* Nessus bugfixes (#68)

* Handle cases where no scans are present

* Prevent infinite login loop with incorrect creds

* Print actual config file path

* Don't overwrite Nessus Synopsis with Description

* Tenable.io support (#70)

* Basic tenable.io support

* Add tenable config section

* Use existing variable

* Fix indent

* Fix paren

* Use ternary syntax

* Update Logstash config for tenable.io

* Update README.md

* Update template to version 5.x (#73)

* Update template to Elasticsearch 5.x

* Update template to Elasticsearch 5.x

I think the _all field is no longer needed as of ES 5.x, since searches fall back to querying all fields when _all is disabled

* Qualys Vulnerability Management integration (#74)

* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Fix error: "Cannot convert non-finite values (NA or inf) to integer"

When trying to download the results of Qualys Vulnerability Management scans, the following error pops up:

[FAIL] - Could not process scan/xxxxxxxxxx.xxxxx - Cannot convert non-finite values (NA or inf) to integer

This error is caused by pandas processing the scan results JSON file: the last element of the JSON doesn't fit the schema of the rest of the response. That element is "target_distribution_across_scanner_appliances", which lists the scanners used and the IP ranges each scanner went through.

Taking out that last element solves the issue.

Also adds the qualys_vuln section to frameworks_example.ini

* Update README.md

* example.ini is frameworks_example.ini (#77)

* No need to specify section to run (#88)

* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Fix error: "Cannot convert non-finite values (NA or inf) to integer"

When trying to download the results of Qualys Vulnerability Management scans, the following error pops up:

[FAIL] - Could not process scan/xxxxxxxxxx.xxxxx - Cannot convert non-finite values (NA or inf) to integer

This error is caused by pandas processing the scan results JSON file: the last element of the JSON doesn't fit the schema of the rest of the response. That element is "target_distribution_across_scanner_appliances", which lists the scanners used and the IP ranges each scanner went through.

Taking out that last element solves the issue.

Also adds the qualys_vuln section to frameworks_example.ini

* No need to specify section to run

Until now, vulnwhisperer would not run if a section was not specified. Since each
module's config has an "enabled" variable, it now checks which modules are enabled
and runs them sequentially.

This was done mainly to allow automation with a docker-compose instance, as the
vulnwhisperer docker image (https://github.com/HASecuritySolutions/docker_vulnwhisperer)
runs that command at the end.

* added to readme + detectify

* Silence requests warnings

* Docker-compose fully working with vulnwhisperer integrated (#90)

* ignore nessus requests warnings

* docker-compose fully working with vulnwhisperer integrated

* remove comments docker-compose

* documenting docker-compose

* Readme corrections

* fix after recheck everything works out of the box

* fix exits that break the no specified section execution mode

* fix docker qualysapi issue, updated README

* revert change on deps/qualysapi/qualysapi/util.py (no effect)

* temporarily changed Dockerfile link to the working one

* Update README.md

* Update README.md

* Fix docker-compose logstash config (#92)

* ignore nessus requests warnings

* docker-compose fully working with vulnwhisperer integrated

* remove comments docker-compose

* documenting docker-compose

* Readme corrections

* fix after recheck everything works out of the box

* fix exits that break the no specified section execution mode

* fix docker qualysapi issue, updated README

* revert change on deps/qualysapi/qualysapi/util.py (no effect)

* temporarily changed Dockerfile link to the working one

* fix docker-compose logstash config

* permissions needed for logstash container to work

* changing default path qualys, there are no folders

* Update 1000_vulnWhispererBaseVisuals.json

Update field to include keyword to prevent error: TypeError: "field" is a required parameter

* Update docker-compose.yml (#93)

increase file descriptors to allow elasticsearch to start.

* Update Slack link on README.md

* Update README.md

Added to README.md @pemontto as contributor

* Jira module fully working (#104)

* clean OS X .DS_Store files

* fix nessus end-of-line carriage returns, added JIRA args

* JIRA module fully working

* jira module working with nessus

* added check on already existing jira config, update README

* qualys_vm<->jira working, qualys_vm database entries with qualys_vm, improved checks

* JIRA module updates ticket's assets and comments update

* added JIRA auto-close function for resolved vulnerabilities

* fix if components variable empty issue

* fix creation of new ticket after updating existing one

* final fixes, added extra line in template

* added vulnerability criticality as label in order to be able to filter

* Added jira section to config file and fail check for config variable (#105)

* clean OS X .DS_Store files

* fix nessus end-of-line carriage returns, added JIRA args

* JIRA module fully working

* jira module working with nessus

* added check on already existing jira config, update README

* qualys_vm<->jira working, qualys_vm database entries with qualys_vm, improved checks

* JIRA module updates ticket's assets and comments update

* added JIRA auto-close function for resolved vulnerabilities

* fix if components variable empty issue

* fix creation of new ticket after updating existing one

* final fixes, added extra line in template

* added vulnerability criticality as label in order to be able to filter

* jira module now gets the minimum criticality from the config file

* added jira config to frameworks_example.ini

* fail check for config variable in case it is left empty

* fix issue jira-qualys criticality comparison

* update qualysapi to latest + PR and refactored vulnwhisperer qualys module to qualys-web (#108)

* update qualysapi to latest + PR and refactored vulnwhisperer qualys module to qualys-web

* changing config template paths for qualys

* Update frameworks_example.ini

For now the qualys local folder will stay as "qualys" instead of one folder per module; this keeps it compatible with the current logstash setup and lets us update master to drop the qualysapi fork once the new version is uploaded to the PyPI repository.
The PR to the qualysapi repo has already been merged, so the only thing missing is the upload to PyPI.

* Rework logging using the stdlib machinery (#116)

* Rework logging using the stdlib machinery
Use the verbose or debug flag to enable/disable logging.DEBUG
Remove the vprint function from all classes
Remove bcolors from all code
Cleanup [INFO], [ERROR], {success} and similar

* fix some errors my local linter missed but Travis caught

* add coloredlogs and --fancy command line flag

* qualysapi dependency removal

* Qualysapi update (#118)

* update qualysapi to latest + PR and refactored vulnwhisperer qualys module to qualys-web

* changing config template paths for qualys

* Update frameworks_example.ini

For now the qualys local folder will stay as "qualys" instead of one folder per module; this keeps it compatible with the current logstash setup and lets us update master to drop the qualysapi fork once the new version is uploaded to the PyPI repository.
The PR to the qualysapi repo has already been merged, so the only thing missing is the upload to PyPI.

* delete qualysapi fork and added to requirements

* merge with testing

* Jira extras (#120)

* changing config template paths for qualys

* Update frameworks_example.ini

For now the qualys local folder will stay as "qualys" instead of one folder per module; this keeps it compatible with the current logstash setup and lets us update master to drop the qualysapi fork once the new version is uploaded to the PyPI repository.
The PR to the qualysapi repo has already been merged, so the only thing missing is the upload to PyPI.

* initialize variable fullpath to avoid break

* fix get latest scan entry from db and ignore 'potential' not verified vulns

* added host resolution + a cache to speed up already-resolved hosts, jira logging

* make sure that vulnerability criticality appears as a label on ticket + automatic actions

* jira bulk report of scans, fix on nessus logging, jira time resolution and list all ticket reported assets

* added jira ticket data download + change default time window from 6 to 12 months

* small fixes

* jira logstash files

* fix variable confusion (thx Travis :)

* update readme (#121)

* Add ansible provisioning (#122)

* first ansible skeleton

* first commit of ansible installation of vulnwhisperer outside docker

* first ansible skeleton

* first commit of ansible installation of vulnwhisperer outside docker

* refactor the ansible role a bit

* update readme, add fail validation step to provision.yml and fix a typo when calling a logging function

* removing ansible from vulnwhisperer, creating a new repo for ansible deployment

* closed ticket metrics only get last 12 months tickets

* Update README.md

Fixing travis link

* Restoring custom qualys wrapper

* Restoring custom qualys wrapper

* Update README.md

* Created the dockerfile

* Updating dockerfile

* in a production system, it is not advisable to have git pulling repos from inside a docker image when there is a pypi repo.

* builds the vulnwhisperer image without any of the ELK configs.  It can also be used in the same directory as the main project

* reverted the qualys call
2018-12-14 15:23:54 +01:00
a5972cfacd V6 Dashboard (#131)
* updating my base to match original vulnwhisperer (#1)

* Create docker-compose.yml

* Update 9000_output_nessus.conf

* Added an argument for username and password, which takes precedence over nessus.  Fixed #5

* Update README.md

* Silence NoneType object

* Put in a check to make sure that the config file exists.  FIXES austin-taylor/VulnWhisperer#4

* remove leading and trailing spaces around all input switches. Fixes austin-taylor/VulnWhisperer#6

* Update README.md

* Allow for any directories to be monitored

* Addition of Qualys WebApp Processing

* Addition of Qualys WebApp Processing

* Fixed multiple bugs, cleaned up formatting, produces solid csv output for Qualys Web App scans

* Adding custom version of QualysAPI

* Field Cleanup

* Addition of submodules, update to connectors, base class start

* Addition of submodules, update to connectors, base class start

* Addition of submodules, update to connectors, base class start

* Refactored classes to be more modular, update to ini file and submodules

* Refactored classes to be more modular, update to ini file and submodules

* Removing commented code

* Addition of category class and special class for Qualys Scanning Reports. Also added additional enrichments to reports

* Column update for scans and N/A cleanup

* Fix for str casting

* Update README.md

* Update to README

* Update to README

* Update to README

* Update to requirements.txt

* Support for json output

* Database tracking for processed Qualys scans

* Database tracking for processed Qualys scans

* Bug fix for counter in Nessus and format fix for qualys

* Check for new records

* Update to count tracker

* Update to write path logic

* Better database handling

* Addition of VulnWhisperer-Qualys logstash files

* Addition of VulnWhisperer-Qualys logstash files

* Update to logstash template

* Updated dashboard

* Update to README

* Update to README

* Logo update

* Readme Update

* Readme Update

* Readme Update

* Adding name of scan and scan reference

* Plugin name converted to scan name

* Update to README

* Documentation update

* README Update

* README Update

* Update README.md

* Add free automated flake8 testing of pull requests

[Flake8](http://flake8.pycqa.org) tests can help to find Python syntax errors, undefined names, and other code quality issues.  [Travis CI](https://travis-ci.org) is a framework for running flake8 and other tests, and is free for open source projects like this one.  The owner of this repo would need to go to https://travis-ci.org/profile and flip the repository switch __on__ to enable free automated flake8 testing of each pull request.

* Testing build with no submodules

* flake8 --ignore=deps/qualysapi

* flake8 . --exclude=/deps/qualysapi

* Remove leading slash

* Add build status to README

* Travis Config update

* README Update

* README Update

* Create CNAME

* Set theme jekyll-theme-leap-day

* README Update

* Getting started steps

* Getting started steps

* Remind user to select section if using a config

* Update to readme

* Update to readme

* Update to readme

* Update to readme

* Update to README

* Update to README

* Update to example logstash config

* Update to qualys logstash conf to reflect example config

* Update to README

* Update to README

* Readme update

* Rename logstash-nessus-template.json to logstash-vulnwhisperer-template.json

* Update 1000_nessus_process_file.conf

* Delete LICENSE

* Create LICENSE

* Update to make nessus visualizations consistent with qualys

* Update to README

* Update to README

* Badge addition

* Badge addition

* Addition of OpenVAS Connector

* Addition of OpenVAS

* Update 9000_output_nessus.conf

* Delete 9000_output_nessus.conf

* Update 1000_nessus_process_file.conf

* Automatically create filepath and directory if it does not exist

* Addition of OpenVas -- ready for alpha

* Addition of OpenVas -- ready for alpha

* Allow template defined config form IDs

* Completion of OpenVAS module

* Completion of OpenVAS module

* Remove template format

* Addition of openvas logstash config

* Update setup.py

* Update README.md

* ELK Sample Install (#37)

Updated Readme.md to include a Sample ELK Install guide addressing multiple issues around ELK Cluster/Node Configuration.

* Update vulnwhisp.py

* VulnFramework Links (#39)

Quick update regarding issue #33

* Updating config to be consistent with conf files

*  Preserving newlines & carriage returns  (#48)

* Preserve newlines & carriage returns

* Convert '\n' & '\r' to newlines & carriage returns

* Removed no longer supported InsecureRequestWarning workaround. (#55)

* Removed no longer supported InsecureRequestWarning workaround.

* Add dependencies to README.md

* Update vulnwhisp.py

* Fix to apt-get install

* Nessus bugfixes (#68)

* Handle cases where no scans are present

* Prevent infinite login loop with incorrect creds

* Print actual config file path

* Don't overwrite Nessus Synopsis with Description

* Tenable.io support (#70)

* Basic tenable.io support

* Add tenable config section

* Use existing variable

* Fix indent

* Fix paren

* Use ternary syntax

* Update Logstash config for tenable.io

* Update README.md

* Update template to version 5.x (#73)

* Update template to Elasticsearch 5.x

* Update template to Elasticsearch 5.x

I think the _all field is no longer needed as of ES 5.x, since searches fall back to querying all fields when _all is disabled

* Qualys Vulnerability Management integration (#74)

* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Fix error: "Cannot convert non-finite values (NA or inf) to integer"

When trying to download the results of Qualys Vulnerability Management scans, the following error pops up:

[FAIL] - Could not process scan/xxxxxxxxxx.xxxxx - Cannot convert non-finite values (NA or inf) to integer

This error is caused by pandas processing the scan results JSON file: the last element of the JSON doesn't fit the schema of the rest of the response. That element is "target_distribution_across_scanner_appliances", which lists the scanners used and the IP ranges each scanner went through.

Taking out that last element solves the issue.

Also adds the qualys_vuln section to frameworks_example.ini

* Update README.md

* example.ini is frameworks_example.ini (#77)

* No need to specify section to run (#88)

* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Fix error: "Cannot convert non-finite values (NA or inf) to integer"

When trying to download the results of Qualys Vulnerability Management scans, the following error pops up:

[FAIL] - Could not process scan/xxxxxxxxxx.xxxxx - Cannot convert non-finite values (NA or inf) to integer

This error is caused by pandas processing the scan results JSON file: the last element of the JSON doesn't fit the schema of the rest of the response. That element is "target_distribution_across_scanner_appliances", which lists the scanners used and the IP ranges each scanner went through.

Taking out that last element solves the issue.

Also adds the qualys_vuln section to frameworks_example.ini

* No need to specify section to run

Until now, vulnwhisperer would not run if a section was not specified. Since each
module's config has an "enabled" variable, it now checks which modules are enabled
and runs them sequentially.

This was done mainly to allow automation with a docker-compose instance, as the
vulnwhisperer docker image (https://github.com/HASecuritySolutions/docker_vulnwhisperer)
runs that command at the end.

* added to readme + detectify

* Silence requests warnings

* Docker-compose fully working with vulnwhisperer integrated (#90)

* ignore nessus requests warnings

* docker-compose fully working with vulnwhisperer integrated

* remove comments docker-compose

* documenting docker-compose

* Readme corrections

* fix after recheck everything works out of the box

* fix exits that break the no specified section execution mode

* fix docker qualysapi issue, updated README

* revert change on deps/qualysapi/qualysapi/util.py (no effect)

* temporarily changed Dockerfile link to the working one

* Update README.md

* Update README.md

* Fix docker-compose logstash config (#92)

* ignore nessus requests warnings

* docker-compose fully working with vulnwhisperer integrated

* remove comments docker-compose

* documenting docker-compose

* Readme corrections

* fix after recheck everything works out of the box

* fix exits that break the no specified section execution mode

* fix docker qualysapi issue, updated README

* revert change on deps/qualysapi/qualysapi/util.py (no effect)

* temporarily changed Dockerfile link to the working one

* fix docker-compose logstash config

* permissions needed for logstash container to work

* changing default path qualys, there are no folders

* Update 1000_vulnWhispererBaseVisuals.json

Update field to include keyword to prevent error: TypeError: "field" is a required parameter

* Update docker-compose.yml (#93)

increase file descriptors to allow elasticsearch to start.

* Update Slack link on README.md

* Update README.md

Added to README.md @pemontto as contributor

* Jira module fully working (#104)

* clean OS X .DS_Store files

* fix nessus end-of-line carriage returns, added JIRA args

* JIRA module fully working

* jira module working with nessus

* added check on already existing jira config, update README

* qualys_vm<->jira working, qualys_vm database entries with qualys_vm, improved checks

* JIRA module updates ticket's assets and comments update

* added JIRA auto-close function for resolved vulnerabilities

* fix if components variable empty issue

* fix creation of new ticket after updating existing one

* final fixes, added extra line in template

* added vulnerability criticality as label in order to be able to filter

* Added jira section to config file and fail check for config variable (#105)

* clean OS X .DS_Store files

* fix nessus end-of-line carriage returns, added JIRA args

* JIRA module fully working

* jira module working with nessus

* added check on already existing jira config, update README

* qualys_vm<->jira working, qualys_vm database entries with qualys_vm, improved checks

* JIRA module updates ticket's assets and comments update

* added JIRA auto-close function for resolved vulnerabilities

* fix if components variable empty issue

* fix creation of new ticket after updating existing one

* final fixes, added extra line in template

* added vulnerability criticality as label in order to be able to filter

* jira module now gets the minimum criticality from the config file

* added jira config to frameworks_example.ini

* fail check for config variable in case it is left empty

* fix issue jira-qualys criticality comparison

* update qualysapi to latest + PR and refactored vulnwhisperer qualys module to qualys-web (#108)

* update qualysapi to latest + PR and refactored vulnwhisperer qualys module to qualys-web

* changing config template paths for qualys

* Update frameworks_example.ini

For now the qualys local folder will stay as "qualys" instead of one folder per module; this keeps it compatible with the current logstash setup and lets us update master to drop the qualysapi fork once the new version is uploaded to the PyPI repository.
The PR to the qualysapi repo has already been merged, so the only thing missing is the upload to PyPI.

* Rework logging using the stdlib machinery (#116)

* Rework logging using the stdlib machinery
Use the verbose or debug flag to enable/disable logging.DEBUG
Remove the vprint function from all classes
Remove bcolors from all code
Cleanup [INFO], [ERROR], {success} and similar

* fix some errors my local linter missed but Travis caught

* add coloredlogs and --fancy command line flag

* qualysapi dependency removal

* Qualysapi update (#118)

* update qualysapi to latest + PR and refactored vulnwhisperer qualys module to qualys-web

* changing config template paths for qualys

* Update frameworks_example.ini

For now the qualys local folder will stay as "qualys" instead of one folder per module; this keeps it compatible with the current logstash setup and lets us update master to drop the qualysapi fork once the new version is uploaded to the PyPI repository.
The PR to the qualysapi repo has already been merged, so the only thing missing is the upload to PyPI.

* delete qualysapi fork and added to requirements

* merge with testing

* Jira extras (#120)

* changing config template paths for qualys

* Update frameworks_example.ini

For now the qualys local folder will stay as "qualys" instead of one folder per module; this keeps it compatible with the current logstash setup and lets us update master to drop the qualysapi fork once the new version is uploaded to the PyPI repository.
The PR to the qualysapi repo has already been merged, so the only thing missing is the upload to PyPI.

* initialize variable fullpath to avoid break

* fix get latest scan entry from db and ignore 'potential' not verified vulns

* added host resolution + a cache to speed up already-resolved hosts, jira logging

* make sure that vulnerability criticality appears as a label on ticket + automatic actions

* jira bulk report of scans, fix on nessus logging, jira time resolution and list all ticket reported assets

* added jira ticket data download + change default time window from 6 to 12 months

* small fixes

* jira logstash files

* fix variable confusion (thx Travis :)

* update readme (#121)

* Add ansible provisioning (#122)

* first ansible skeleton

* first commit of ansible installation of vulnwhisperer outside docker

* first ansible skeleton

* first commit of ansible installation of vulnwhisperer outside docker

* refactor the ansible role a bit

* update readme, add fail validation step to provision.yml and fix a typo when calling a logging function

* removing ansible from vulnwhisperer, creating a new repo for ansible deployment

* closed ticket metrics only get last 12 months tickets

* Update README.md

Fixing travis link

* Restoring custom qualys wrapper

* Restoring custom qualys wrapper

* Update README.md

* Updated the visualizations to support the 6.x ELK stack

* making the text message more generic

* removed visualizations that were not part of a dashboard

* Built a single file, since Kibana allows for that.  Created a new scripted value in the logstash-vulnwhisperer that will allow unique fingerprinting. Updated all visualizations to support the unique count of the scan_fingerprint. Fixes #130 Fixes #126 Fixes #111
2018-12-14 15:22:27 +01:00
ff8d078294 Update README.md 2018-12-04 16:33:18 -07:00
73bd289aa6 Restoring custom qualys wrapper 2018-12-04 16:23:41 -07:00
a63b19914c Restoring custom qualys wrapper 2018-12-04 16:23:28 -07:00
71227d6bd8 Update README.md
Fixing travis link
2018-12-04 15:56:32 -07:00
c88379dd2a closed ticket metrics only get last 12 months tickets 2018-11-16 09:38:18 +01:00
edabf8cda6 removing ansible from vulnwhisperer, creating a new repo for ansible deployment 2018-11-14 15:06:48 +01:00
3a09f60543 Add ansible provisioning (#122)
* first ansible skeleton

* first commit of ansible installation of vulnwhisperer outside docker

* first ansible skeleton

* first commit of ansible installation of vulnwhisperer outside docker

* refactor the ansible role a bit

* update readme, add fail validation step to provision.yml and fix a typo when calling a logging function
2018-11-14 10:14:12 +01:00
a8671a7303 update readme (#121) 2018-11-12 13:47:14 +01:00
8bd3c5cab9 Jira extras (#120)
* changing config template paths for qualys

* Update frameworks_example.ini

For now the qualys local folder will stay as "qualys" instead of one folder per module; this keeps it compatible with the current logstash setup and lets us update master to drop the qualysapi fork once the new version is uploaded to the PyPI repository.
The PR to the qualysapi repo has already been merged, so the only thing missing is the upload to PyPI.

* initialize variable fullpath to avoid break

* fix get latest scan entry from db and ignore 'potential' not verified vulns

* added host resolution + a cache to speed up already-resolved hosts, jira logging

* make sure that vulnerability criticality appears as a label on ticket + automatic actions

* jira bulk report of scans, fix on nessus logging, jira time resolution and list all ticket reported assets

* added jira ticket data download + change default time window from 6 to 12 months

* small fixes

* jira logstash files

* fix variable confusion (thx Travis :)
2018-11-08 09:24:24 +01:00
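One bullet in #120 adds host resolution with a cache so already-resolved assets are not looked up twice. A minimal sketch of that idea; the function and cache names are hypothetical:

```python
import socket

_dns_cache = {}  # ip -> hostname

def resolve_host(ip):
    if ip not in _dns_cache:
        try:
            _dns_cache[ip] = socket.gethostbyaddr(ip)[0]
        except (socket.herror, socket.gaierror):
            _dns_cache[ip] = ip  # fall back to the bare IP
    return _dns_cache[ip]

print(resolve_host("127.0.0.1"))
print(resolve_host("127.0.0.1"))  # second call is served from the cache
```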
0b571799dc merging testing 2018-11-05 15:18:16 +01:00
cf879b4731 merge with testing 2018-11-05 15:16:22 +01:00
b3f7144f85 Qualysapi update (#118)
* update qualysapi to latest + PR and refactored vulnwhisperer qualys module to qualys-web

* changing config template paths for qualys

* Update frameworks_example.ini

For now the qualys local folder will stay as "qualys" instead of one folder per module; this keeps it compatible with the current logstash setup and lets us update master to drop the qualysapi fork once the new version is uploaded to the PyPI repository.
The PR to the qualysapi repo has already been merged, so the only thing missing is the upload to PyPI.

* delete qualysapi fork and added to requirements
2018-11-05 15:07:25 +01:00
0d5b6479ac qualysapi dependency removal 2018-11-05 15:06:29 +01:00
e3e416fe44 Rework logging using the stdlib machinery (#116)
* Rework logging using the stdlib machinery
Use the verbose or debug flag to enable/disable logging.DEBUG
Remove the vprint function from all classes
Remove bcolors from all code
Cleanup [INFO], [ERROR], {success} and similar

* fix some errors my local linter missed but Travis caught

* add coloredlogs and --fancy command line flag
2018-11-04 05:39:27 -06:00
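A sketch of the setup #116 describes: drive the level from a verbose/debug flag, use stdlib logging instead of a custom vprint, and layer coloredlogs on top for --fancy. The flag wiring is illustrative, not the project's exact CLI:

```python
import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument("-v", "--verbose", action="store_true")
parser.add_argument("--fancy", action="store_true", help="colored log output")
args = parser.parse_args()

level = logging.DEBUG if args.verbose else logging.INFO
# Format mirrors the log lines seen elsewhere in this history,
# e.g. "INFO:vulnWhispererOpenVAS:process_openvas_scans:...".
logging.basicConfig(level=level,
                    format="%(levelname)s:%(name)s:%(funcName)s:%(message)s")

if args.fancy:
    import coloredlogs
    coloredlogs.install(level=level)  # swaps in a colored handler

logging.getLogger("vulnWhispererOpenVAS").info("logging configured")
```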
b7d6d6207f update qualysapi to latest + PR and refactored vulnwhisperer qualys module to qualys-web (#108)
* update qualysapi to latest + PR and refactored vulnwhisperer qualys module to qualys-web

* changing config template paths for qualys

* Update frameworks_example.ini

For now the qualys local folder will stay as "qualys" instead of one folder per module; this keeps it compatible with the current logstash setup and lets us update master to drop the qualysapi fork once the new version is uploaded to the PyPI repository.
The PR to the qualysapi repo has already been merged, so the only thing missing is the upload to PyPI.
2018-10-18 04:39:08 -05:00
46955bff75 Merge pull request #109 from qmontal/master
fix issue jira-qualys criticality comparison
2018-10-17 14:20:32 +02:00
911b9910a8 fix issue jira-qualys criticality comparison 2018-10-17 14:17:49 +02:00
9383c12495 Added jira section to config file and fail check for config variable (#105)
* clean OS X .DS_Store files

* fix nessus end-of-line carriage returns, added JIRA args

* JIRA module fully working

* jira module working with nessus

* added check on already existing jira config, update README

* qualys_vm<->jira working, qualys_vm database entries with qualys_vm, improved checks

* JIRA module updates ticket's assets and comments update

* added JIRA auto-close function for resolved vulnerabilities

* fix if components variable empty issue

* fix creation of new ticket after updating existing one

* final fixes, added extra line in template

* added vulnerability criticality as label in order to be able to filter

* jira module now gets the minimum criticality from the config file

* added jira config to frameworks_example.ini

* fail check for config variable in case it is left empty
2018-10-13 14:01:51 -05:00
4422db586d Jira module fully working (#104)
* clean OS X .DS_Store files

* fix nessus end-of-line carriage returns, added JIRA args

* JIRA module fully working

* jira module working with nessus

* added check on already existing jira config, update README

* qualys_vm<->jira working, qualys_vm database entries with qualys_vm, improved checks

* JIRA module updates ticket's assets and comments update

* added JIRA auto-close function for resolved vulnerabilities

* fix if components variable empty issue

* fix creation of new ticket after updating existing one

* final fixes, added extra line in template

* added vulnerability criticality as label in order to be able to filter
2018-10-12 09:30:14 -05:00
13bb288217 Update README.md
Added to README.md @pemontto as contributor
2018-10-06 20:45:38 +02:00
e1a54fc414 Update Slack link on README.md 2018-10-03 09:08:13 +02:00
bbb0cf3434 Merge pull request #95 from rogierm/rogierm-patch-3
Update 1000_vulnWhispererBaseVisuals.json
2018-09-27 08:43:42 +02:00
078bd9559e Update docker-compose.yml (#93)
increase file descriptors to allow elasticsearch to start.
2018-09-04 01:58:36 -04:00
258f9ae4ca Update 1000_vulnWhispererBaseVisuals.json
Update field to include keyword to prevent error: TypeError: "field" is a required parameter
2018-09-03 00:40:23 +02:00
fc5f9b5b7c Fix docker-compose logstash config (#92)
* ignore nessus requests warnings

* docker-compose fully working with vulnwhisperer integrated

* remove comments docker-compose

* documenting docker-compose

* Readme corrections

* fix after recheck everything works out of the box

* fix exits that break the no specified section execution mode

* fix docker qualysapi issue, updated README

* revert change on deps/qualysapi/qualysapi/util.py (no effect)

* temporarily changed Dockerfile link to the working one

* fix docker-compose logstash config

* permissions needed for logstash container to work

* changing default path qualys, there are no folders
2018-08-20 09:20:58 -04:00
a159d5b06f Update README.md 2018-08-19 12:03:38 -04:00
7b4202de52 Update README.md 2018-08-18 14:29:23 -04:00
8336b72314 Docker-compose fully working with vulnwhisperer integrated (#90)
* ignore nessus requests warnings

* docker-compose fully working with vulnwhisperer integrated

* remove comments docker-compose

* documenting docker-compose

* Readme corrections

* fix after recheck everything works out of the box

* fix exits that break the no specified section execution mode

* fix docker qualysapi issue, updated README

* revert change on deps/qualysapi/qualysapi/util.py (no effect)

* temporarily changed Dockerfile link to the working one
2018-08-17 08:51:28 -04:00
5b879e13c7 Silence requests warnings 2018-08-14 06:23:18 -04:00
a84576b551 No need to specify section to run (#88)
* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Fix error: "Cannot convert non-finite values (NA or inf) to integer"

When trying to download the results of Qualys Vulnerability Management scans, the following error pops up:

[FAIL] - Could not process scan/xxxxxxxxxx.xxxxx - Cannot convert non-finite values (NA or inf) to integer

This error is caused by pandas processing the scan results JSON file: the last element of the JSON doesn't fit the schema of the rest of the response. That element is "target_distribution_across_scanner_appliances", which lists the scanners used and the IP ranges each scanner went through.

Taking out that last element solves the issue.

Also adds the qualys_vuln section to frameworks_example.ini

* No need to specify section to run

Until now, vulnwhisperer would not run if a section was not specified. Since each
module's config has an "enabled" variable, it now checks which modules are enabled
and runs them sequentially.

This was done mainly to allow automation with a docker-compose instance, as the
vulnwhisperer docker image (https://github.com/HASecuritySolutions/docker_vulnwhisperer)
runs that command at the end.

* added to readme + detectify
2018-08-09 16:39:57 -07:00
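A minimal sketch of the "enabled" logic in #88: iterate the config sections and run every module flagged enabled, instead of requiring a section argument. The dispatch is a stand-in:

```python
try:
    from configparser import ConfigParser  # Python 3
except ImportError:
    from ConfigParser import ConfigParser  # Python 2, the project's era

config = ConfigParser()
config.read("frameworks_example.ini")

for section in config.sections():
    if config.has_option(section, "enabled") and config.getboolean(section, "enabled"):
        print("running module: %s" % section)  # stand-in for the real module run
```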
46be3c71ef example.ini is frameworks_example.ini (#77) 2018-07-06 22:18:26 -07:00
608a49d178 Update README.md 2018-07-05 13:47:22 -04:00
7f2c59f531 Qualys Vulnerability Management integration (#74)
* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Fix error: "Cannot convert non-finite values (NA or inf) to integer"

When trying to download the results of Qualys Vulnerability Management scans, the following error pops up:

[FAIL] - Could not process scan/xxxxxxxxxx.xxxxx - Cannot convert non-finite values (NA or inf) to integer

This error is caused by pandas processing the scan results JSON file: the last element of the JSON doesn't fit the schema of the rest of the response. That element is "target_distribution_across_scanner_appliances", which lists the scanners used and the IP ranges each scanner went through.

Taking out that last element solves the issue.

Also adds the qualys_vuln section to frameworks_example.ini
2018-07-05 10:34:02 -07:00
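A self-contained illustration of the pandas failure described in #74 and the fix of dropping the trailing element; the sample records are fabricated stand-ins for the Qualys response:

```python
import pandas as pd

records = [
    {"ip": "10.0.0.5", "severity": 3},
    {"ip": "10.0.0.9", "severity": 5},
    # Trailing metadata element shaped differently from the findings:
    {"target_distribution_across_scanner_appliances": "scanner1:10.0.0.0/24"},
]

# Buggy: the odd last element yields NaN in 'severity', and casting NaN to int
# raises "Cannot convert non-finite values (NA or inf) to integer".
# pd.DataFrame(records)["severity"].astype(int)

# Fix per the commit: drop the trailing element before loading into pandas.
if "target_distribution_across_scanner_appliances" in records[-1]:
    records = records[:-1]
print(pd.DataFrame(records)["severity"].astype(int).tolist())  # [3, 5]
```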
3ac9a8156a Update template to version 5.x (#73)
* Update template to Elasticsearch 5.x

* Update template to Elasticsearch 5.x

I think _all field is no longer needed from ES 5.x because of the search all field execution if _all is disabled
2018-06-30 13:25:29 -07:00
9a08acb2d6 Update README.md 2018-06-26 13:04:40 -04:00
38d2eec065 Tenable.io support (#70)
* Basic tenable.io support

* Add tenable config section

* Use existing variable

* Fix indent

* Fix paren

* Use ternary syntax

* Update Logstash config for tenable.io
2018-06-26 13:03:08 -04:00
9b10711d34 Nessus bugfixes (#68)
* Handle cases where no scans are present

* Prevent infinite login loop with incorrect creds

* Print actual config file path

* Don't overwrite Nessus Synopsis with Description
2018-06-13 02:56:06 -04:00
9049b1ff0f Fix to apt-get install 2018-06-04 20:23:17 -04:00
d1d679b12f Update vulnwhisp.py 2018-05-04 10:03:58 -04:00
8ca1c3540d Removed no longer supported InsecureRequestWarning workaround. (#55)
* Removed no longer supported InsecureRequestWarning workaround.

* Add dependencies to README.md
2018-04-17 13:27:23 -04:00
e4e9ed7f28 Preserving newlines & carriage returns (#48)
* Preserve newlines & carriage returns

* Convert '\n' & '\r' to newlines & carriage returns
2018-04-10 08:54:21 -04:00
0982e26197 Updating config to be consistent with conf files 2018-04-02 17:53:24 -04:00
9fc9af37f7 VulnFramework Links (#39)
Quick update regarding issue #33
2018-03-07 14:21:15 -05:00
3984c879cd Update vulnwhisp.py 2018-03-05 07:03:49 -05:00
f83a5d89a3 ELK Sample Install (#37)
Updated Readme.md to include a Sample ELK Install guide addressing multiple issues around ELK Cluster/Node Configuration.
2018-03-04 19:14:51 -05:00
1400cacfcb Update README.md 2018-03-04 17:18:34 -05:00
6f96536145 Update setup.py 2018-03-04 17:15:32 -05:00
4a60306bdd Addition of openvas logstash config 2018-03-04 16:06:53 -05:00
d509c03d68 Remove template format 2018-03-04 15:41:23 -05:00
f6745b00fd Completion of OpenVAS module 2018-03-04 15:06:09 -05:00
21b2a03b36 Completion of OpenVAS module 2018-03-04 14:33:18 -05:00
a658b7abab Allow template defined config form IDs 2018-03-04 08:43:35 -05:00
f21d3a3f64 Addition of OpenVas -- ready for alpha 2018-03-03 15:54:24 -05:00
53b0b27cb2 Addition of OpenVas -- ready for alpha 2018-03-03 15:53:23 -05:00
d8e813ff5a Merge branch 'master' of github.com:austin-taylor/VulnWhisperer 2018-02-25 21:15:54 -05:00
a0de072394 Automatically create filepath and directory if it does not exist 2018-02-25 21:15:50 -05:00
13dbc79b27 Update 1000_nessus_process_file.conf 2018-02-17 22:57:32 -05:00
42e72c36dd Delete 9000_output_nessus.conf 2018-02-17 22:30:16 -05:00
554b739146 Update 9000_output_nessus.conf 2018-02-17 22:29:41 -05:00
54337d3bfa Addition of OpenVAS 2018-02-11 16:07:50 -05:00
8b63aa4fbc Addition of OpenVAS Connector 2018-02-11 16:02:16 -05:00
5362d6f9e8 Badge addition 2018-01-31 10:12:47 -05:00
645e5707a4 Badge addition 2018-01-31 10:11:14 -05:00
03a2125dd1 Update to README 2018-01-31 10:04:39 -05:00
8e85eb0981 Merge branch 'master' of github.com:austin-taylor/VulnWhisperer 2018-01-31 09:51:55 -05:00
136cc3ac61 Update to README 2018-01-31 09:51:51 -05:00
0c6611711c Merge pull request #23 from HASecuritySolutions/master
HA Sync
2018-01-29 22:38:39 -05:00
f3eb2fbda1 Merge pull request #5 from austin-taylor/master
Fork Sync
2018-01-29 22:38:04 -05:00
124cbf2753 Merge branch 'master' of github.com:austin-taylor/VulnWhisperer 2018-01-29 22:35:55 -05:00
13a01fbfd0 Update to make nessus visualizations consistent with qualys 2018-01-29 22:35:45 -05:00
bbfe7ad71b Merge pull request #22 from austin-taylor/add-license-1
Create LICENSE
2018-01-23 12:07:01 -05:00
330e90c7a0 Create LICENSE 2018-01-23 12:06:48 -05:00
f9af977145 Delete LICENSE 2018-01-23 12:06:03 -05:00
1a2091ac54 Update 1000_nessus_process_file.conf 2018-01-05 09:43:57 -05:00
b2c230f43b Rename logstash-nessus-template.json to logstash-vulnwhisperer-template.json 2018-01-05 07:51:51 -05:00
cdaf743435 Readme update 2018-01-04 18:48:52 -05:00
59b688a117 Update to README 2018-01-04 18:06:43 -05:00
009ccc24f6 Update to README 2018-01-04 18:06:06 -05:00
3141dcabd2 Update to qualys logstash conf to reflect example config 2018-01-04 18:01:01 -05:00
02afd9c24d Update to example logstash config 2018-01-04 17:59:20 -05:00
d70238fbeb Update to README 2018-01-04 17:52:30 -05:00
36b028a78a Merge branch 'master' of github.com:austin-taylor/VulnWhisperer 2018-01-04 17:51:29 -05:00
16b04d7763 Update to README 2018-01-04 17:51:24 -05:00
4ea72650df Merge pull request #4 from austin-taylor/master
Fork Sync
2018-01-04 16:44:30 -05:00
a1b9ff6273 Update to readme 2018-01-04 13:53:38 -05:00
bbad599a73 Update to readme 2018-01-04 13:49:45 -05:00
882a4be275 Update to readme 2018-01-04 13:49:08 -05:00
2bf8c2be8b Update to readme 2018-01-04 13:36:19 -05:00
2b057f290b Remind user to select section if using a config 2018-01-03 18:33:14 -05:00
4359478e3d Getting started steps 2018-01-02 07:53:33 -05:00
ff50354bf9 Getting started steps 2018-01-02 07:10:57 -05:00
0ab53890ca Merge pull request #3 from austin-taylor/master
sync
2018-01-02 04:15:52 -05:00
ee6d61605b Merge branch 'master' of github.com:austin-taylor/VulnWhisperer 2018-01-02 03:12:15 -05:00
ada256cc46 README Update 2018-01-02 03:12:11 -05:00
8215f4e938 Set theme jekyll-theme-leap-day 2018-01-02 03:09:49 -05:00
30f966f354 Create CNAME 2018-01-02 02:59:41 -05:00
8af1ddd9e9 README Update 2018-01-02 02:51:34 -05:00
c3850247c9 Merge branch 'master' of github.com:austin-taylor/VulnWhisperer 2018-01-02 02:51:29 -05:00
745e4b3a0b README Update 2018-01-02 02:48:52 -05:00
c80383aaa6 Merge pull request #18 from cclauss/patch-1
Remove leading slash
2018-01-02 02:34:54 -05:00
e128d8c753 Travis Config update 2018-01-02 02:31:25 -05:00
66987810df Merge branch 'master' of github.com:austin-taylor/VulnWhisperer 2018-01-02 02:29:23 -05:00
6e500a4829 Add build status to README 2018-01-02 02:29:19 -05:00
92d6a7788c Remove leading slash 2018-01-02 08:26:15 +01:00
6ce3a254e4 Merge pull request #17 from cclauss/patch-1
flake8 . --exclude=/deps/qualysapi
2018-01-02 02:24:15 -05:00
9fe048fc5f flake8 . --exclude=/deps/qualysapi 2018-01-02 08:20:37 +01:00
67f9017f92 flake8 --ignore=deps/qualysapi 2018-01-02 08:17:03 +01:00
03d7954da9 Testing build with no submodules 2018-01-02 01:46:03 -05:00
ff02340e32 Merge pull request #16 from cclauss/patch-1
Add free automated flake8 testing of pull requests
2018-01-02 01:39:18 -05:00
45f8ea55d3 Add free automated flake8 testing of pull requests
[Flake8](http://flake8.pycqa.org) tests can help find Python syntax errors, undefined names, and other code quality issues.  [Travis CI](https://travis-ci.org) is a framework for running flake8 and other tests, and it is free for open-source projects like this one.  The owner of this repo would need to go to https://travis-ci.org/profile and flip the repository switch __on__ to enable free automated flake8 testing of each pull request.
2018-01-02 07:31:24 +01:00
05608b29bb Update README.md 2018-01-01 14:58:42 -05:00
4d6ad51b50 README Update 2018-01-01 07:28:48 -05:00
b953e1d97b README Update 2018-01-01 07:28:16 -05:00
8f536ed2ac Documentation update 2017-12-31 07:04:57 -05:00
c5115fba00 Update to README 2017-12-31 06:02:48 -05:00
ce529dd4f9 Plugin name converted to scan name 2017-12-30 23:57:58 -05:00
3d34916e4c Adding name of scan and scan reference 2017-12-30 23:54:47 -05:00
690841c4df Readme Update 2017-12-30 23:06:09 -05:00
5f3b02aa10 Readme Update 2017-12-30 22:35:16 -05:00
646a5f94ba Readme Update 2017-12-30 22:33:36 -05:00
c33fbb256a Logo update 2017-12-30 22:32:47 -05:00
a2a15094b4 Update to README 2017-12-30 22:24:40 -05:00
2bd32fd9dc Update to README 2017-12-30 22:23:24 -05:00
5c7137a606 Updated dashboard 2017-12-30 22:18:15 -05:00
78d9a077f5 Merge pull request #2 from austin-taylor/master
Sync with master
2017-12-30 21:27:36 -05:00
732237ad5a Update to logstash template 2017-12-30 21:24:19 -05:00
4a78387ce6 Merge branch 'master' of github.com:austin-taylor/VulnWhisperer 2017-12-30 21:23:14 -05:00
64751c47dd Addition of VulnWhisperer-Qualys logstash files 2017-12-30 21:20:45 -05:00
bec9cdd4d0 Addition of VulnWhisperer-Qualys logstash files 2017-12-30 20:21:08 -05:00
d7fc63c952 Better database handling 2017-12-30 14:40:49 -05:00
de62400730 Update to write path logic 2017-12-30 14:39:59 -05:00
0ba3cdf579 Update to count tracker 2017-12-30 14:07:02 -05:00
e03860d087 Check for new records 2017-12-30 13:12:58 -05:00
dc7ad082be Bug fix for counter in Nessus and format fix for qualys 2017-12-30 11:27:19 -05:00
0cd2e28ccd Database tracking for processed Qualys scans 2017-12-30 11:21:10 -05:00
07a99eda54 Database tracking for processed Qualys scans 2017-12-30 11:20:31 -05:00
469f3fee81 Support for json output 2017-12-29 22:42:32 -05:00
dd3a8bb649 Merge pull request #1 from austin-taylor/master
Sync with master
2017-12-26 17:10:07 -05:00
97 changed files with 8008 additions and 4430 deletions

41
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file
View File

@ -0,0 +1,41 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---

**Describe the bug**
A clear and concise description of what the bug is.

**Affected module**
Which of the modules is not working as expected, e.g. Nessus, Qualys WAS, Qualys VM, OpenVAS, ELK, Jira...

**VulnWhisperer debug trail**
If possible, please attach the debug trail of the run for further investigation (run with the `-d` flag).

**To reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. See the error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**System VulnWhisperer runs on (please complete the following information):**
- OS: [e.g. Ubuntu Server]
- Version: [e.g. 18.04.2 LTS]
- VulnWhisperer version: [e.g. 1.7.1]

**Additional context**
Add any other context about the problem here.

**Important note**
As VulnWhisperer relies on ELK for data aggregation, you are expected to already have an ELK instance or the knowledge to deploy one.
To speed up deployment, we provide an up-to-date, tested docker-compose file that deploys all the required infrastructure; we will support its deployment, but we will not provide support for ELK instances themselves.

11
.gitignore vendored
View File

@ -1,3 +1,11 @@
# Vulnwhisperer stuff
data/
logs/
elk6/vulnwhisperer.ini
resources/elk6/vulnwhisperer.ini
configs/frameworks_example.ini
tests/data
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
@ -100,3 +108,6 @@ ENV/
# mypy
.mypy_cache/
# Mac
.DS_Store

6
.gitmodules vendored
View File

@ -1,3 +1,3 @@
[submodule "qualysapi"]
path = deps/qualysapi
url = git@github.com:austin-taylor/qualysapi.git
[submodule "tests/data"]
path = tests/data
url = https://github.com/HASecuritySolutions/VulnWhisperer-tests.git

36
.travis.yml Normal file
View File

@ -0,0 +1,36 @@
group: travis_latest
language: python
cache: pip
python:
- 2.7
env:
- TEST_PATH=tests/data
services:
- docker
# - 3.6
#matrix:
# allow_failures:
# - python: 3.6 - Commenting out testing for Python 3.6 until ready
before_install:
- mkdir -p ./data/esdata1
- mkdir -p ./data/es_snapshots
- sudo chown -R 1000:1000 ./data/es*
- docker build -t vulnwhisperer-local .
- docker-compose -f docker-compose-test.yml up -d
install:
- pip install -r requirements.txt
- pip install flake8 # pytest # add another testing frameworks later
before_script:
# stop the build if there are Python syntax errors or undefined names
- flake8 . --count --exclude=deps/qualysapi --select=E901,E999,F821,F822,F823 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
- flake8 . --count --exit-zero --exclude=deps/qualysapi --max-complexity=10 --max-line-length=127 --statistics
script:
- python setup.py install
- bash tests/test-vuln_whisperer.sh
- bash tests/test-docker.sh
notifications:
on_success: change
on_failure: change # `always` will be the setting once code changes slow down

1
CNAME Normal file
View File

@ -0,0 +1 @@
www.vulnwhisperer.com

26
Dockerfile Normal file
View File

@ -0,0 +1,26 @@
FROM centos:7
MAINTAINER Justin Henderson justin@hasecuritysolutions.com
RUN yum update -y && \
yum install -y python python-devel git gcc && \
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py && \
python get-pip.py
WORKDIR /opt/VulnWhisperer
COPY requirements.txt requirements.txt
COPY setup.py setup.py
COPY vulnwhisp/ vulnwhisp/
COPY bin/ bin/
COPY configs/frameworks_example.ini frameworks_example.ini
RUN python setup.py clean --all && \
pip install -r requirements.txt
WORKDIR /opt/VulnWhisperer
RUN python setup.py install
CMD vuln_whisperer -c /opt/VulnWhisperer/frameworks_example.ini

214
LICENSE
View File

@ -1,21 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

172
README.md
View File

@ -1,78 +1,138 @@
<p align="center"><img src="https://github.com/austin-taylor/vulnwhisperer/blob/master/docs/source/vuln_whisperer_logo_s.png" width="400px"></p>
<p align="center"><img src="https://git.gudita.com/Cyberdefense/VulnWhisperer/raw/branch/master/docs/source/vuln_whisperer_logo_s.png" width="400px"></p>
<p align="center"> <i>Créez des <u><b>données exploitables</b></u> à partir de vos scans de vulnérabilités</i> </p>
<p align="center"> <i>Create <u><b>actionable data</b></u> from your vulnerability scans </i> </p>
<p align="center" style="width:400px"><img src="https://github.com/austin-taylor/vulnwhisperer/blob/master/docs/source/vulnwhisp_dashboard.jpg" style="width:400px"></p>
<p align="center" style="width:400px"><img src="https://git.gudita.com/Cyberdefense/VulnWhisperer/raw/branch/master/docs/source/vulnWhispererWebApplications.png" style="width:400px"></p>
VulnWhisperer is a vulnerability report aggregator. VulnWhisperer will pull all the reports
and create a file with a unique filename which is then fed into logstash. Logstash extracts data from the filename and tags all of the information inside the report (see logstash_vulnwhisp.conf file). Data is then shipped to elasticsearch to be indexed.
VulnWhisperer est un outil de gestion des vulnérabilités et un agrégateur de rapports. VulnWhisperer récupère tous les rapports des différents scanners de vulnérabilités et crée un fichier avec un nom unique pour chacun, utilisant ensuite ces données pour se synchroniser avec Jira et alimenter Logstash. Jira effectue une synchronisation complète en cycle fermé avec les données fournies par les scanners, tandis que Logstash indexe et étiquette toutes les informations contenues dans le rapport (voir les fichiers logstash dans `/resources/elk6/pipeline/`). Les données sont ensuite envoyées à ElasticSearch pour être indexées, et finissent dans un format visuel et consultable dans Kibana avec des tableaux de bord déjà définis.
VulnWhisperer est un projet open-source financé par la communauté. VulnWhisperer est actuellement fonctionnel mais nécessite une refonte de la documentation et une revue de code. Si vous souhaitez de l'aide, si vous êtes intéressé par de nouvelles fonctionnalités, ou si vous recherchez un support payant, veuillez nous contacter à **info@sahelcyber.com**.
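As a rough illustration of that pipeline (a hedged sketch only: the shipped configurations live in `/resources/elk6/pipeline/`, and the path and index name below are placeholders):

```
input {
  file {
    # unique filenames written by VulnWhisperer under its data directory
    path => "/opt/VulnWhisperer/data/nessus/**/*.csv"
    start_position => "beginning"
    mode => "read"
  }
}
filter {
  # tag the fields contained in the report
  csv { autodetect_column_names => true }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-vulnwhisperer-%{+YYYY.MM}"
  }
}
```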
### Supported Vulnerability Scanners

- [X] [Nessus (**v6**/**v7**/**v8**)](https://www.tenable.com/products/nessus/nessus-professional)
- [X] [Qualys Web Applications](https://www.qualys.com/apps/web-app-scanning/)
- [X] [Qualys Vulnerability Management](https://www.qualys.com/apps/vulnerability-management/)
- [X] [OpenVAS (**v7**/**v8**/**v9**)](http://www.openvas.org/)
- [X] [Tenable.io](https://www.tenable.com/products/tenable-io)
- [ ] [Detectify](https://detectify.com/)
- [ ] [Nexpose](https://www.rapid7.com/products/nexpose/)
- [ ] [Insight VM](https://www.rapid7.com/products/insightvm/)
- [ ] [NMAP](https://nmap.org/)
- [ ] [Burp Suite](https://portswigger.net/burp)
- [ ] [OWASP ZAP](https://www.zaproxy.org/)
- [ ] And more to come

### Supported Reporting Platforms

- [X] [Elastic Stack (**v6**/**v7**)](https://www.elastic.co/elk-stack)
- [ ] [OpenSearch - Being considered for the next update](https://opensearch.org/)
- [X] [Jira](https://www.atlassian.com/software/jira)
- [ ] [Splunk](https://www.splunk.com/)
## Getting Started

1) Follow the [install requirements](#installreq)
2) Fill out the section you want to process in the <a href="https://git.gudita.com/Cyberdefense/VulnWhisperer/src/branch/master/configs/frameworks_example.ini">frameworks_example.ini</a> file
3) [JIRA] If you are using Jira, fill out the Jira configuration in the config file mentioned above.
4) [ELK] Modify the IP settings in the <a href="https://git.gudita.com/Cyberdefense/VulnWhisperer/src/branch/master/resources/elk6/pipeline">Logstash files to match your environment</a> and import them into your logstash config directory (default is `/etc/logstash/conf.d/`)
5) [ELK] Import the <a href="https://git.gudita.com/Cyberdefense/VulnWhisperer/src/branch/master/resources/elk6/kibana.json">Kibana visualizations</a>
6) [Run Vulnwhisperer](#run)

> **Important note about Wiki links:** a Gitea migration does not always carry over a GitHub project's Wiki (which is technically a separate repository). If the Wiki links (such as the ELK deployment guide) do not work, you may need to recreate those pages manually in the "Wiki" tab of your repository on Gitea.

Need help or just want to chat? Join our [Slack channel](https://join.slack.com/t/vulnwhisperer/shared_invite/enQtNDQ5MzE4OTIyODU0LWQxZTcxYTY0MWUwYzA4MTlmMWZlYWY2Y2ZmM2EzNDFmNWVlOTM4MzNjYzI0YzdkMDA0YmQyYWRhZGI2NGUxNGI)

## Requirements

* Python 2.7
* A vulnerability scanner
* A reporting system: Jira / ElasticStack 6.6

<a id="installreq"></a>
## Install Requirements - VulnWhisperer (may require sudo)

**Install the OS package dependencies** (for Debian-based distributions; CentOS does not need them)

```shell
sudo apt-get install zlib1g-dev libxml2-dev libxslt1-dev
```

(Optional) Use a python virtualenv so you don't clutter the host's python libraries

```shell
virtualenv venv # will create the python 2.7 virtualenv
source venv/bin/activate # start the env, pip will run inside it and should install the libraries without sudo
deactivate # to leave the virtualenv once you are done
```

Install the python library dependencies

```shell
pip install -r /path/to/VulnWhisperer/requirements.txt
cd /path/to/VulnWhisperer
python setup.py install
```

(Optional) If you use a proxy, add the proxy URL as an environment variable

```shell
export HTTP_PROXY=http://example.com:8080
export HTTPS_PROXY=http://example.com:8080
```

You are now ready to pull down scans.
## Configuration

There are a few configuration steps to setting up VulnWhisperer:

* Configure the ini file
* Set up the Logstash files
* Import the ElasticSearch templates
* Import the Kibana dashboards

<a id="run"></a>
## Run

To run, fill out the configuration file with your vulnerability scanner settings. Then you can execute from the command line.

```shell
# (optional: -F -> provides "Fancy" log colouring, useful for following along when running VulnWhisperer manually)
vuln_whisperer -c configs/frameworks_example.ini -s nessus
# or
vuln_whisperer -c configs/frameworks_example.ini -s qualys
```

_For Windows, you may need to type the full path of the binary in vulnWhisperer located in the bin directory._
If no section is specified (e.g. -s nessus), vulnwhisperer will check the config file for modules with the property `enabled=true` and run them sequentially.
## Docker-compose

ELK is a whole world by itself, and for newcomers to the platform it requires basic Linux skills and usually a bit of troubleshooting until it is deployed and working as expected. As we are not able to provide support for every user's ELK problems, we put together a docker-compose which includes:

* VulnWhisperer
* Logstash 6.6
* ElasticSearch 6.6
* Kibana 6.6

The docker-compose just requires you to specify the paths where the VulnWhisperer data will be saved and where the config files reside. If it is run directly after a git clone, simply adding the scanner config to the VulnWhisperer config file (`/resources/elk6/vulnwhisperer.ini`) will make it work out of the box.

It also takes care of loading the Kibana dashboards and visualizations automatically through the API, which otherwise has to be done manually once Kibana starts.

For more information on the docker-compose, check the docker-compose wiki or the FAQ.
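For orientation only, a stripped-down compose file of that shape might look like the sketch below; the image names, versions, and mount paths are assumptions for illustration, not the repository's actual docker-compose.yml:

```yaml
version: '3'
services:
  vulnwhisperer:
    image: hasecuritysolutions/vulnwhisperer   # assumed image name
    volumes:
      - ./data:/opt/VulnWhisperer/data         # where VulnWhisperer saves its output
      - ./resources/elk6/vulnwhisperer.ini:/opt/VulnWhisperer/vulnwhisperer.ini
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
    environment:
      - discovery.type=single-node
  logstash:
    image: docker.elastic.co/logstash/logstash:6.6.0
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:6.6.0
    depends_on:
      - elasticsearch
```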
## Roadmap

Our current roadmap is as follows:

- [ ] Create a vulnerability standard
- [ ] Map every scanner's results to the standard
- [ ] Create scanner-module guidelines for easy integration of new scanners
- [ ] Refactor the code to reuse functions and allow full compatibility between modules
- [ ] Change the Nessus CSV output to JSON
- [ ] Adapt the single Logstash config to the standard and the Kibana dashboards
- [ ] Implement the Detectify scanner
- [ ] Implement Splunk reporting/dashboards

On top of that, we try to focus on fixing bugs as soon as possible, which can delay development. We also warmly welcome PRs, and once the new standard is implemented, adding compatibility with new scanners will be very easy.

The vulnerability standard will initially be a simple, one-level JSON carrying all the matching information from the different scanners under standardized variable names, while keeping the rest of the variables as they are.
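Purely as an illustration of what one record in that standard could look like (every field name below is hypothetical, since the standard has not been defined yet):

```json
{
  "scanner": "nessus",
  "scan_name": "weekly-external",
  "asset": "10.0.0.12",
  "plugin_id": "19506",
  "cve": "CVE-2017-0144",
  "risk": "critical",
  "synopsis": "SMBv1 remote code execution",
  "first_seen": "2019-08-01T02:00:00Z",
  "last_seen": "2019-08-08T02:00:00Z"
}
```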

1
_config.yml Normal file
View File

@ -0,0 +1 @@
theme: jekyll-theme-leap-day

bin/vuln_whisperer
View File

@ -4,10 +4,13 @@ __author__ = 'Austin Taylor'
from vulnwhisp.vulnwhisp import vulnWhisperer
from vulnwhisp.utils.cli import bcolors
from vulnwhisp.base.config import vwConfig
from vulnwhisp.test.mock import mockAPI
import os
import argparse
import sys
import logging
def isFileValid(parser, arg):
    if not os.path.exists(arg):
@ -15,6 +18,7 @@ def isFileValid(parser, arg):
    else:
        return arg


def main():
    parser = argparse.ArgumentParser(description=""" VulnWhisperer is designed to create actionable data from\
@ -23,36 +27,102 @@ def main():
                        help='Path of config file', type=lambda x: isFileValid(parser, x.strip()))
    parser.add_argument('-s', '--section', dest='section', required=False,
                        help='Section in config')
    parser.add_argument('--source', dest='source', required=False,
                        help='JIRA required only! Source scanner to report')
    parser.add_argument('-n', '--scanname', dest='scanname', required=False,
                        help='JIRA required only! Scan name from scan to report')
    parser.add_argument('-v', '--verbose', dest='verbose', action='store_true', default=True,
                        help='Prints status out to screen (defaults to True)')
    parser.add_argument('-u', '--username', dest='username', required=False, default=None,
                        help='The NESSUS username', type=lambda x: x.strip())
    parser.add_argument('-p', '--password', dest='password', required=False, default=None,
                        help='The NESSUS password', type=lambda x: x.strip())
    parser.add_argument('-F', '--fancy', action='store_true',
                        help='Enable colourful logging output')
    parser.add_argument('-d', '--debug', action='store_true',
                        help='Enable debugging messages')
    parser.add_argument('--mock', action='store_true',
                        help='Enable mocked API responses')
    parser.add_argument('--mock_dir', dest='mock_dir', required=False, default=None,
                        help='Path of test directory')

    args = parser.parse_args()
    # First setup logging
    logging.basicConfig(
        stream=sys.stdout,
        # format only applies when not using the -F flag for colouring
        format='%(levelname)s:%(name)s:%(funcName)s:%(message)s',
        level=logging.DEBUG if args.debug else logging.INFO
    )
    logger = logging.getLogger()
    # we set up the logger to log to file as well
    fh = logging.FileHandler('vulnwhisperer.log')
    fh.setLevel(logging.DEBUG if args.debug else logging.INFO)
    fh.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s - %(funcName)s:%(message)s", "%Y-%m-%d %H:%M:%S"))
    logger.addHandler(fh)

    if args.fancy:
        import coloredlogs
        coloredlogs.install(level='DEBUG' if args.debug else 'INFO')

    if args.mock:
        mock_api = mockAPI(args.mock_dir, args.verbose)
        mock_api.mock_endpoints()

    exit_code = 0

    try:
        if args.config and not args.section:
            logger.info('No section was specified, vulnwhisperer will scrape enabled modules from the config file.')
            config = vwConfig(config_in=args.config)
            enabled_sections = config.get_sections_with_attribute('enabled')

            for section in enabled_sections:
                try:
                    vw = vulnWhisperer(config=args.config,
                                       profile=section,
                                       verbose=args.verbose,
                                       username=args.username,
                                       password=args.password,
                                       source=args.source,
                                       scanname=args.scanname)
                    exit_code += vw.whisper_vulnerabilities()
                except Exception as e:
                    logger.error("VulnWhisperer was unable to perform the processing on '{}'".format(section))
        else:
            logger.info('Running vulnwhisperer for section {}'.format(args.section))
            vw = vulnWhisperer(config=args.config,
                               profile=args.section,
                               verbose=args.verbose,
                               username=args.username,
                               password=args.password,
                               source=args.source,
                               scanname=args.scanname)
            exit_code += vw.whisper_vulnerabilities()

        close_logging_handlers(logger)
        sys.exit(exit_code)

    except Exception as e:
        # this will remain a print since we are in the main binary
        logger.error('{}'.format(str(e)))
        print('ERROR: {error}'.format(error=e))
        # TODO: fix this to NOT be exit 2 unless in error
        close_logging_handlers(logger)
        sys.exit(2)


def close_logging_handlers(logger):
    for handler in logger.handlers:
        handler.close()
        logger.removeHandler(handler)


if __name__ == '__main__':
    main()

configs/frameworks_example.ini
View File

@ -2,38 +2,93 @@
enabled=true
hostname=localhost
port=8834
access_key=
secret_key=
username=nessus_username
password=nessus_password
write_path=/opt/VulnWhisperer/data/nessus/
db_path=/opt/VulnWhisperer/data/database
trash=false
verbose=true

[tenable]
enabled=true
hostname=cloud.tenable.com
port=443
access_key=
secret_key=
username=tenable.io_username
password=tenable.io_password
write_path=/opt/VulnWhisperer/data/tenable/
db_path=/opt/VulnWhisperer/data/database
trash=false
verbose=true

[qualys_web]
#Reference https://www.qualys.com/docs/qualys-was-api-user-guide.pdf to find your API
enabled = true
hostname = qualysapi.qg2.apps.qualys.com
username = exampleuser
password = examplepass
write_path=/opt/VulnWhisperer/data/qualys_web/
db_path=/opt/VulnWhisperer/data/database
verbose=true

# Set the maximum number of retries each connection should attempt.
#Note, this applies only to failed connections and timeouts, never to requests where the server returns a response.
max_retries = 10
# Template ID will need to be retrieved for each document. Please follow the reference guide above for instructions on how to get your template ID.
template_id = 126024

[qualys_vuln]
#Reference https://www.qualys.com/docs/qualys-api-vmpc-user-guide.pdf to find your API
enabled = true
hostname = qualysapi.qg2.apps.qualys.com
username = exampleuser
password = examplepass
write_path=/opt/VulnWhisperer/data/qualys_vuln/
db_path=/opt/VulnWhisperer/data/database
verbose=true

[detectify]
#Reference https://developer.detectify.com/
enabled = false
hostname = api.detectify.com
#username variable used as apiKey
username = exampleuser
#password variable used as secretKey
password = examplepass
write_path =/opt/VulnWhisperer/data/detectify/
db_path = /opt/VulnWhisperer/data/database
verbose = true

[openvas]
enabled = false
hostname = localhost
port = 4000
username = exampleuser
password = examplepass
write_path=/opt/VulnWhisperer/data/openvas/
db_path=/opt/VulnWhisperer/data/database
verbose=true

[jira]
enabled = false
hostname = jira-host
username = username
password = password
write_path = /opt/VulnWhisperer/data/jira/
db_path = /opt/VulnWhisperer/data/database
verbose = true
dns_resolv = False

#Sample jira report scan, will automatically be created for existent scans
#[jira.qualys_vuln.test_scan]
#source = qualys_vuln
#scan_name = Test Scan
#jira_project = PROJECT
; if multiple components, separate by "," = None
#components =
; minimum criticality to report (low, medium, high or critical) = None
#min_critical_to_report = high

94
configs/test.ini Executable file
View File

@ -0,0 +1,94 @@
[nessus]
enabled=true
hostname=nessus
port=443
access_key=
secret_key=
username=nessus_username
password=nessus_password
write_path=/opt/VulnWhisperer/data/nessus/
db_path=/opt/VulnWhisperer/data/database
trash=false
verbose=true
[tenable]
enabled=true
hostname=tenable
port=443
access_key=
secret_key=
username=tenable.io_username
password=tenable.io_password
write_path=/opt/VulnWhisperer/data/tenable/
db_path=/opt/VulnWhisperer/data/database
trash=false
verbose=true
[qualys_web]
#Reference https://www.qualys.com/docs/qualys-was-api-user-guide.pdf to find your API
enabled = false
hostname = qualys_web
username = exampleuser
password = examplepass
write_path=/opt/VulnWhisperer/data/qualys_web/
db_path=/opt/VulnWhisperer/data/database
verbose=true
# Set the maximum number of retries each connection should attempt.
#Note, this applies only to failed connections and timeouts, never to requests where the server returns a response.
max_retries = 10
# Template ID will need to be retrieved for each document. Please follow the reference guide above for instructions on how to get your template ID.
template_id = 126024
[qualys_vuln]
#Reference https://www.qualys.com/docs/qualys-api-vmpc-user-guide.pdf to find your API
enabled = true
hostname = qualys_vuln
username = exampleuser
password = examplepass
write_path=/opt/VulnWhisperer/data/qualys_vuln/
db_path=/opt/VulnWhisperer/data/database
verbose=true
[detectify]
#Reference https://developer.detectify.com/
enabled = false
hostname = detectify
#username variable used as apiKey
username = exampleuser
#password variable used as secretKey
password = examplepass
write_path =/opt/VulnWhisperer/data/detectify/
db_path = /opt/VulnWhisperer/data/database
verbose = true
[openvas]
enabled = false
hostname = openvas
port = 4000
username = exampleuser
password = examplepass
write_path=/opt/VulnWhisperer/data/openvas/
db_path=/opt/VulnWhisperer/data/database
verbose=true
[jira]
enabled = false
hostname = jira-host
username = username
password = password
write_path = /opt/VulnWhisperer/data/jira/
db_path = /opt/VulnWhisperer/data/database
verbose = true
dns_resolv = False
#Sample jira report scan, will automatically be created for existent scans
#[jira.qualys_vuln.test_scan]
#source = qualys_vuln
#scan_name = Test Scan
#jira_project = PROJECT
; if multiple components, separate by "," = None
#components =
; minimum criticality to report (low, medium, high or critical) = None
#min_critical_to_report = high

deps/qualysapi/.gitignore vendored
View File

@ -1,47 +0,0 @@
*.py[cod]
# C extensions
*.so
# Packages
*.egg
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
.tox
nosetests.xml
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Mac
.DS_Store
# Authentication configuration
*.qcrc
config.qcrc
config.ini
# PyCharm
.idea
.qcrc.swp

deps/qualysapi/MANIFEST.in vendored
View File

@ -1,2 +0,0 @@
include README.md
recursive-include examples *.py

deps/qualysapi/README.md vendored
View File

@ -1,107 +0,0 @@
qualysapi
=========
Python package, qualysapi, that makes calling any Qualys API very simple. Qualys API versions v1, v2, & WAS & AM (asset management) are all supported.
My focus was making the API super easy to use. The only parameters the user needs to provide are the call and, optionally, the data. It automates the following:
* Automatically identifies API version through the call requested.
* Automatically identifies url from the above step.
* Automatically identifies http method as POST or GET for the request per Qualys documentation.
Usage
=====
Check out the example scripts in the [/examples directory](https://github.com/paragbaxi/qualysapi/blob/master/examples/).
Example
-------
Detailed example found at [qualysapi-example.py](https://github.com/paragbaxi/qualysapi/blob/master/examples/qualysapi-example.py).
Sample example below.
```python
>>> import qualysapi
>>> a = qualysapi.connect()
QualysGuard Username: my_username
QualysGuard Password:
>>> print a.request('about.php')
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE ABOUT SYSTEM "https://qualysapi.qualys.com/about.dtd">
<ABOUT>
<API-VERSION MAJOR="1" MINOR="4" />
<WEB-VERSION>7.10.61-1</WEB-VERSION>
<SCANNER-VERSION>7.1.10-1</SCANNER-VERSION>
<VULNSIGS-VERSION>2.2.475-2</VULNSIGS-VERSION>
</ABOUT>
<!-- Generated for username="my_username" date="2013-07-03T10:31:57Z" -->
<!-- CONFIDENTIAL AND PROPRIETARY INFORMATION. Qualys provides the QualysGuard Service "As Is," without any warranty of any kind. Qualys makes no warranty that the information contained in this report is complete or error-free. Copyright 2013, Qualys, Inc. //-->
```
Installation
============
Use pip to install:
```Shell
pip install qualysapi
```
NOTE: If you would like to experiment without installing globally, look into 'virtualenv'.
Requirements
------------
* requests (http://docs.python-requests.org)
* lxml (http://lxml.de/)
Tested successfully on Python 2.7.
Configuration
=============
By default, the package will ask at the command prompt for username and password. By default, the package connects to the Qualys documented host (qualysapi.qualys.com).
You can override these settings and prevent yourself from typing credentials by doing any of the following:
1. By running the following Python, `qualysapi.connect(remember_me=True)`. This automatically generates a .qcrc file in your current working directory, scoping the configuration to that directory.
2. By running the following Python, `qualysapi.connect(remember_me_always=True)`. This automatically generates a .qcrc file in your home directory, scoping the configuration to all calls to qualysapi, regardless of the directory.
3. By creating a file called '.qcrc' (for Windows, the default filename is 'config.ini') in your home directory or directory of the Python script.
4. This supports multiple configuration files. Just add the filename in your call to qualysapi.connect('config.txt').
Example config file
-------------------
```INI
; Note, it should be possible to omit any of these entries.
[info]
hostname = qualysapi.serviceprovider.com
username = jerry
password = I<3Elaine
# Set the maximum number of retries each connection should attempt. Note, this applies only to failed connections and timeouts, never to requests where the server returns a response.
max_retries = 10
[proxy]
; This section is optional. Leave it out if you're not using a proxy.
; You can use environmental variables as well: http://www.python-requests.org/en/latest/user/advanced/#proxies
; proxy_protocol set to https, if not specified.
proxy_url = proxy.mycorp.com
; proxy_port will override any port specified in proxy_url
proxy_port = 8080
; proxy authentication
proxy_username = kramer
proxy_password = giddy up!
```
License
=======
Apache License, Version 2.0
http://www.apache.org/licenses/LICENSE-2.0.html
Acknowledgements
================
Special thank you to Colin Bell for qualysconnect.

View File

@ -1,12 +0,0 @@
3.5.0
- Retooled authentication.
3.4.0
- Allows choice of configuration filenames. Easy to support users with multiple Qualys accounts who need to automate tasks.
3.3.0
- Remove curl capability. Requests 2.0 and latest urllib3 can handle https proxy.
- Workaround for audience that does not have lxml. Warning: cannot handle lxml.builder E objects for AM & WAS APIs.
3.0.0
- Proxy support.

View File

@ -1 +0,0 @@
__author__ = 'pbaxi'

View File

@ -1,113 +0,0 @@
__author__ = 'Parag Baxi <parag.baxi@gmail.com>'
__license__ = 'Apache License 2.0'
import qualysapi
from lxml import objectify
from lxml.builder import E
# Setup connection to QualysGuard API.
qgc = qualysapi.connect('config.txt')
#
# API v1 call: Scan the New York & Las Vegas asset groups
# The call is our request's first parameter.
call = 'scan.php'
# The parameters to append to the url is our request's second parameter.
parameters = {'scan_title': 'Go big or go home', 'asset_groups': 'New York&Las Vegas', 'option': 'Initial+Options'}
# Note qualysapi will automatically convert spaces into plus signs for API v1 & v2.
# Let's call the API and store the result in xml_output.
xml_output = qgc.request(call, parameters, concurrent_scans_retries=2, concurrent_scans_retry_delay=600)
# concurrent_scans_retries: retry the call this many times if your subscription hits the concurrent scans limit.
# concurrent_scans_retry_delay: delay in seconds between retries when your subscription hits the concurrent scans limit.
# Example XML response when this happens below:
# <?xml version="1.0" encoding="UTF-8"?>
# <ServiceResponse xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://localhost:50205/qps/rest/app//xsd/3.0/was/wasscan.xsd">
# <responseCode>INVALID_REQUEST</responseCode>
# <responseErrorDetails>
# <errorMessage>You have reached the maximum number of concurrent running scans (10) for your account</errorMessage>
# <errorResolution>Please wait until your previous scans have completed</errorResolution>
# </responseErrorDetails>
#
print(xml_output)
#
# API v1 call: Print out all IPs associated with asset group "Looneyville Texas".
# Note that the question mark at the end is optional.
call = 'asset_group_list.php?'
# We can still use strings for the data (not recommended).
parameters = 'title=Looneyville Texas'
# Let's call the API and store the result in xml_output.
xml_output = qgc.request(call, parameters)
# Let's objectify the xml_output string.
root = objectify.fromstring(xml_output)
# Print out the IPs.
print(root.ASSET_GROUP.SCANIPS.IP.text)
# Prints out:
# 10.0.0.102
#
# API v2 call: Print out DNS name for a range of IPs.
call = '/api/2.0/fo/asset/host/'
parameters = {'action': 'list', 'ips': '10.0.0.10-10.0.0.11'}
xml_output = qgc.request(call, parameters)
root = objectify.fromstring(xml_output)
# Iterate hosts and print out DNS name.
for host in root.RESPONSE.HOST_LIST.HOST:
print(host.IP.text, host.DNS.text)
# Prints out:
# 10.0.0.10 mydns1.qualys.com
# 10.0.0.11 mydns2.qualys.com
#
# API v3 WAS call: Print out number of webapps.
call = '/count/was/webapp'
# Note that this call does not have a payload so we don't send any data parameters.
xml_output = qgc.request(call)
root = objectify.fromstring(xml_output)
# Print out count of webapps.
print(root.count.text)
# Prints out:
# 89
#
# API v3 WAS call: Print out number of webapps containing title 'Supafly'.
call = '/count/was/webapp'
# We can send a string XML for the data.
parameters = '<ServiceRequest><filters><Criteria operator="CONTAINS" field="name">Supafly</Criteria></filters></ServiceRequest>'
xml_output = qgc.request(call, parameters)
root = objectify.fromstring(xml_output)
# Print out count of webapps.
print(root.count.text)
# Prints out:
# 3
#
# API v3 WAS call: Print out number of webapps containing title 'Lightsabertooth Tiger'.
call = '/count/was/webapp'
# We can also send an lxml.builder E object.
parameters = (
E.ServiceRequest(
E.filters(
E.Criteria('Lightsabertooth Tiger', field='name',operator='CONTAINS'))))
xml_output = qgc.request(call, parameters)
root = objectify.fromstring(xml_output)
# Print out count of webapps.
print(root.count.text)
# Prints out:
# 0
# Too bad, because that is an awesome webapp name!
#
# API v3 Asset Management call: Count tags.
call = '/count/am/tag'
xml_output = qgc.request(call)
root = objectify.fromstring(xml_output)
# We can use XPATH to find the count.
print(root.xpath('count')[0].text)
# Prints out:
# 840
#
# API v3 Asset Management call: Find asset by name.
call = '/search/am/tag'
parameters = '''<ServiceRequest>
<preferences>
<limitResults>10</limitResults>
</preferences>
<filters>
<Criteria field="name" operator="CONTAINS">PB</Criteria>
</filters>
</ServiceRequest>'''
xml_output = qgc.request(call, parameters)

View File

@ -1,42 +0,0 @@
#!/usr/bin/env python
import sys
import logging
import qualysapi
# Questions? See:
# https://bitbucket.org/uWaterloo_IST_ISS/python-qualysconnect
if __name__ == '__main__':
# Basic command line processing.
if len(sys.argv) != 2:
print('A single IPv4 address is expected as the only argument')
sys.exit(2)
# Set the MAXIMUM level of log messages displayed @ runtime.
logging.basicConfig(level=logging.INFO)
# Call helper that creates a connection w/ HTTP-Basic to QualysGuard API.
qgs=qualysapi.connect()
# Logging must be set after instantiation of the connector class.
logger = logging.getLogger('qualysapi.connector')
logger.setLevel(logging.DEBUG)
# Log to sys.out.
logger_console = logging.StreamHandler()
logger_console.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
# Attach the formatter to the console handler, and the handler to the logger.
logger_console.setFormatter(formatter)
logger.addHandler(logger_console)
# Formulate a request to the QualysGuard V1 API.
# docs @
# https://community.qualys.com/docs/DOC-1324
# http://www.qualys.com/docs/QualysGuard_API_User_Guide.pdf
#
# Old way still works:
# ret = qgs.request(1,'asset_search.php', "target_ips=%s&"%(sys.argv[1]))
# New way is cleaner:
ret = qgs.request(1,'asset_search.php', {'target_ips': sys.argv[1]})
print(ret)

View File

@ -1,37 +0,0 @@
#!/usr/bin/env python
import sys
import logging
import qualysapi
if __name__ == '__main__':
# Basic command line processing.
if len(sys.argv) != 3:
print('A report template and scan reference respectively are expected as the only arguments.')
sys.exit(2)
# Set the MAXIMUM level of log messages displayed @ runtime.
logging.basicConfig(level=logging.DEBUG)
# Call helper that creates a connection w/ HTTP-Basic to QualysGuard v1 API
qgs=qualysapi.connect()
# Logging must be set after instanciation of connector class.
logger = logging.getLogger('qualysapi.connector')
logger.setLevel(logging.DEBUG)
# Log to console.
logger_console = logging.StreamHandler()
logger_console.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
logger_console.setFormatter(formatter)
logger.addHandler(logger_console)
# Formulate a request to the QualysGuard V1 API
# docs @
# https://community.qualys.com/docs/DOC-1324
# http://www.qualys.com/docs/QualysGuard_API_User_Guide.pdf
#
ret = qgs.request('/api/2.0/fo/report',{'action': 'launch', 'report_refs': sys.argv[2], 'output_format': 'xml', 'template_id': sys.argv[1], 'report_type': 'Scan'})
print(ret)


@ -1,43 +0,0 @@
#!/usr/bin/env python
import sys
import logging
import qualysapi
# Questions? See:
# https://bitbucket.org/uWaterloo_IST_ISS/python-qualysconnect
if __name__ == '__main__':
# Basic command line processing.
if len(sys.argv) != 2:
print('A single IPv4 address is expected as the only argument.')
sys.exit(2)
# Set the MAXIMUM level of log messages displayed @ runtime.
logging.basicConfig(level=logging.INFO)
# Call helper that creates a connection w/ HTTP-Basic to QualysGuard v1 API
qgs = qualysapi.connect()
# Logging must be set after instantiation of the connector class.
logger = logging.getLogger('qualysapi.connector')
logger.setLevel(logging.DEBUG)
# Log to console.
logger_console = logging.StreamHandler()
logger_console.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
logger_console.setFormatter(formatter)
logger.addHandler(logger_console)
# Formulate a request to the QualysGuard V1 API
# docs @
# https://community.qualys.com/docs/DOC-1324
# http://www.qualys.com/docs/QualysGuard_API_User_Guide.pdf
#
# Old way still works:
# ret = qgs.request(2, "asset/host","?action=list&ips=%s&"%(sys.argv[1]))
# New way is cleaner:
ret = qgs.request('/api/2.0/fo/asset/host',{'action': 'list', 'ips': sys.argv[1]})
print(ret)

deps/qualysapi/license vendored

@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2017 Parag Baxi
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@ -1,10 +0,0 @@
# This is the version string assigned to the entire egg post
# setup.py install
# Ownership and Copyright Information.
from __future__ import absolute_import
__author__ = "Parag Baxi <parag.baxi@gmail.com>"
__copyright__ = "Copyright 2011-2013, Parag Baxi"
__license__ = "BSD-new"
from qualysapi.util import connect


@ -1,181 +0,0 @@
from __future__ import absolute_import
import datetime
from lxml import objectify
import qualysapi.api_objects
from qualysapi.api_objects import *
class QGActions(object):
def getHost(self, host):
call = '/api/2.0/fo/asset/host/'
parameters = {'action': 'list', 'ips': host, 'details': 'All'}
hostData = objectify.fromstring(self.request(call, parameters)).RESPONSE
try:
hostData = hostData.HOST_LIST.HOST
return Host(hostData.DNS, hostData.ID, hostData.IP, hostData.LAST_VULN_SCAN_DATETIME, hostData.NETBIOS, hostData.OS, hostData.TRACKING_METHOD)
except AttributeError:
return Host("", "", host, "never", "", "", "")
def getHostRange(self, start, end):
call = '/api/2.0/fo/asset/host/'
parameters = {'action': 'list', 'ips': start + '-' + end}
hostData = objectify.fromstring(self.request(call, parameters))
hostArray = []
for host in hostData.RESPONSE.HOST_LIST.HOST:
hostArray.append(Host(host.DNS, host.ID, host.IP, host.LAST_VULN_SCAN_DATETIME, host.NETBIOS, host.OS, host.TRACKING_METHOD))
return hostArray
def listAssetGroups(self, groupName=''):
call = 'asset_group_list.php'
if groupName == '':
agData = objectify.fromstring(self.request(call))
else:
agData = objectify.fromstring(self.request(call, 'title=' + groupName)).RESPONSE
groupsArray = []
for group in agData.ASSET_GROUP:
# Reset per-group lists so entries don't accumulate across groups.
scanipsArray = []
scandnsArray = []
scannersArray = []
try:
for scanip in group.SCANIPS:
scanipsArray.append(scanip.IP)
except AttributeError:
scanipsArray = [] # No IPs defined to scan.
try:
for scanner in group.SCANNER_APPLIANCES.SCANNER_APPLIANCE:
scannersArray.append(scanner.SCANNER_APPLIANCE_NAME)
except AttributeError:
scannersArray = [] # No scanner appliances defined for this group.
try:
for dnsName in group.SCANDNS:
scandnsArray.append(dnsName.DNS)
except AttributeError:
scandnsArray = [] # No DNS names assigned to group.
groupsArray.append(AssetGroup(group.BUSINESS_IMPACT, group.ID, group.LAST_UPDATE, scanipsArray, scandnsArray, scannersArray, group.TITLE))
return groupsArray
def listReportTemplates(self):
call = 'report_template_list.php'
rtData = objectify.fromstring(self.request(call))
templatesArray = []
for template in rtData.REPORT_TEMPLATE:
templatesArray.append(ReportTemplate(template.GLOBAL, template.ID, template.LAST_UPDATE, template.TEMPLATE_TYPE, template.TITLE, template.TYPE, template.USER))
return templatesArray
def listReports(self, id=0):
call = '/api/2.0/fo/report'
if id == 0:
parameters = {'action': 'list'}
repData = objectify.fromstring(self.request(call, parameters)).RESPONSE
reportsArray = []
for report in repData.REPORT_LIST.REPORT:
reportsArray.append(Report(report.EXPIRATION_DATETIME, report.ID, report.LAUNCH_DATETIME, report.OUTPUT_FORMAT, report.SIZE, report.STATUS, report.TYPE, report.USER_LOGIN))
return reportsArray
else:
parameters = {'action': 'list', 'id': id}
repData = objectify.fromstring(self.request(call, parameters)).RESPONSE.REPORT_LIST.REPORT
return Report(repData.EXPIRATION_DATETIME, repData.ID, repData.LAUNCH_DATETIME, repData.OUTPUT_FORMAT, repData.SIZE, repData.STATUS, repData.TYPE, repData.USER_LOGIN)
def notScannedSince(self, days):
call = '/api/2.0/fo/asset/host/'
parameters = {'action': 'list', 'details': 'All'}
hostData = objectify.fromstring(self.request(call, parameters))
hostArray = []
today = datetime.date.today()
for host in hostData.RESPONSE.HOST_LIST.HOST:
last_scan = str(host.LAST_VULN_SCAN_DATETIME).split('T')[0]
last_scan = datetime.date(int(last_scan.split('-')[0]), int(last_scan.split('-')[1]), int(last_scan.split('-')[2]))
if (today - last_scan).days >= days:
hostArray.append(Host(host.DNS, host.ID, host.IP, host.LAST_VULN_SCAN_DATETIME, host.NETBIOS, host.OS, host.TRACKING_METHOD))
return hostArray
def addIP(self, ips, vmpc):
# 'ips' parameter accepts comma-separated list of IP addresses.
# 'vmpc' parameter accepts 'vm', 'pc', or 'both' (Vulnerability Management, Policy Compliance, or both).
call = '/api/2.0/fo/asset/ip/'
enablevm = 1
enablepc = 0
if vmpc == 'pc':
enablevm = 0
enablepc = 1
elif vmpc == 'both':
enablevm = 1
enablepc = 1
parameters = {'action': 'add', 'ips': ips, 'enable_vm': enablevm, 'enable_pc': enablepc}
self.request(call, parameters)
def listScans(self, launched_after="", state="", target="", type="", user_login=""):
# 'launched_after' parameter accepts a date in the format: YYYY-MM-DD
# 'state' parameter accepts "Running", "Paused", "Canceled", "Finished", "Error", "Queued", and "Loading".
# 'target' parameter accepts a string of scan target IPs.
# 'type' parameter accepts "On-Demand", and "Scheduled".
# 'user_login' parameter accepts a user name (string)
call = '/api/2.0/fo/scan/'
parameters = {'action': 'list', 'show_ags': 1, 'show_op': 1, 'show_status': 1}
if launched_after != "":
parameters['launched_after_datetime'] = launched_after
if state != "":
parameters['state'] = state
if target != "":
parameters['target'] = target
if type != "":
parameters['type'] = type
if user_login != "":
parameters['user_login'] = user_login
scanlist = objectify.fromstring(self.request(call, parameters))
scanArray = []
for scan in scanlist.RESPONSE.SCAN_LIST.SCAN:
try:
agList = []
for ag in scan.ASSET_GROUP_TITLE_LIST.ASSET_GROUP_TITLE:
agList.append(ag)
except AttributeError:
agList = []
scanArray.append(Scan(agList, scan.DURATION, scan.LAUNCH_DATETIME, scan.OPTION_PROFILE.TITLE, scan.PROCESSED, scan.REF, scan.STATUS, scan.TARGET, scan.TITLE, scan.TYPE, scan.USER_LOGIN))
return scanArray
def launchScan(self, title, option_title, iscanner_name, asset_groups="", ip=""):
# TODO: Add ability to scan by tag.
call = '/api/2.0/fo/scan/'
parameters = {'action': 'launch', 'scan_title': title, 'option_title': option_title, 'iscanner_name': iscanner_name, 'ip': ip, 'asset_groups': asset_groups}
if ip == "":
parameters.pop("ip")
if asset_groups == "":
parameters.pop("asset_groups")
scan_ref = objectify.fromstring(self.request(call, parameters)).RESPONSE.ITEM_LIST.ITEM[1].VALUE
call = '/api/2.0/fo/scan/'
parameters = {'action': 'list', 'scan_ref': scan_ref, 'show_status': 1, 'show_ags': 1, 'show_op': 1}
scan = objectify.fromstring(self.request(call, parameters)).RESPONSE.SCAN_LIST.SCAN
try:
agList = []
for ag in scan.ASSET_GROUP_TITLE_LIST.ASSET_GROUP_TITLE:
agList.append(ag)
except AttributeError:
agList = []
return Scan(agList, scan.DURATION, scan.LAUNCH_DATETIME, scan.OPTION_PROFILE.TITLE, scan.PROCESSED, scan.REF, scan.STATUS, scan.TARGET, scan.TITLE, scan.TYPE, scan.USER_LOGIN)
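# Example usage (a sketch added for illustration, not part of the original
# module). The connector returned by qualysapi.connect() subclasses QGActions,
# so these helpers are available on it directly, assuming a valid config file:
#
# import qualysapi
# conn = qualysapi.connect()
# for host in conn.notScannedSince(30):
#     print(host.ip, host.last_scan)
# conn.addIP('10.0.0.5', 'vm')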


@ -1,155 +0,0 @@
from __future__ import absolute_import
__author__ = 'pbaxi'
from collections import defaultdict
api_methods = defaultdict(set)
api_methods['1'] = set([
'about.php',
'action_log_report.php',
'asset_data_report.php',
'asset_domain.php',
'asset_domain_list.php',
'asset_group_delete.php',
'asset_group_list.php',
'asset_ip_list.php',
'asset_range_info.php',
'asset_search.php',
'get_host_info.php',
'ignore_vuln.php',
'iscanner_list.php',
'knowledgebase_download.php',
'map-2.php',
'map.php',
'map_report.php',
'map_report_list.php',
'password_change.php',
'scan.php',
'scan_cancel.php',
'scan_options.php',
'scan_report.php',
'scan_report_delete.php',
'scan_report_list.php',
'scan_running_list.php',
'scan_target_history.php',
'scheduled_scans.php',
'ticket_delete.php',
'ticket_edit.php',
'ticket_list.php',
'ticket_list_deleted.php',
'time_zone_code.php',
'user.php',
'user_list.php',
])
# API v1 POST methods.
api_methods['1 post'] = set([
'action_log_report.php',
'asset_group.php',
'asset_ip.php',
'ignore_vuln.php',
'knowledgebase_download.php',
'map-2.php',
'map.php',
'password_change.php',
'scan.php',
'scan_report.php',
'scan_target_history.php',
'scheduled_scans.php',
'ticket_delete.php',
'ticket_edit.php',
'ticket_list.php',
'ticket_list_deleted.php',
'user.php',
'user_list.php',
])
# API v2 methods (they're all POST).
api_methods['2'] = set([
'api/2.0/fo/appliance/',
'api/2.0/fo/asset/excluded_ip/',
'api/2.0/fo/asset/excluded_ip/history/',
'api/2.0/fo/asset/host/',
'api/2.0/fo/asset/host/cyberscope/',
'api/2.0/fo/asset/host/cyberscope/fdcc/policy/',
'api/2.0/fo/asset/host/cyberscope/fdcc/scan/',
'api/2.0/fo/asset/host/vm/detection/',
'api/2.0/fo/asset/ip/',
'api/2.0/fo/asset/ip/v4_v6/',
'api/2.0/fo/asset/vhost/',
'api/2.0/fo/auth/',
# 'api/2.0/fo/auth/{type}/', # Added below.
'api/2.0/fo/compliance/',
'api/2.0/fo/compliance/control',
'api/2.0/fo/compliance/fdcc/policy',
'api/2.0/fo/compliance/policy/',
'api/2.0/fo/compliance/posture/info/',
'api/2.0/fo/compliance/scap/arf/',
'api/2.0/fo/knowledge_base/vuln/',
'api/2.0/fo/report/',
'api/2.0/fo/report/scorecard/',
'api/2.0/fo/scan/',
'api/2.0/fo/scan/compliance/',
'api/2.0/fo/session/',
'api/2.0/fo/setup/restricted_ips/',
])
for auth_type in set([
'ibm_db2',
'ms_sql',
'oracle',
'oracle_listener',
'snmp',
'unix',
'windows',
]):
api_methods['2'].add('api/2.0/fo/auth/%s/' % auth_type)
# WAS GET methods when no POST data.
api_methods['was no data get'] = set([
'count/was/report',
'count/was/wasscan',
'count/was/wasscanschedule',
'count/was/webapp',
'download/was/report/',
'download/was/wasscan/',
])
# WAS GET methods.
api_methods['was get'] = set([
'download/was/report/',
'download/was/wasscan/',
'get/was/report/',
'get/was/wasscan/',
'get/was/wasscanschedule/',
'get/was/webapp/',
'status/was/report/',
'status/was/wasscan/',
])
# Asset Management GET methods.
api_methods['am get'] = set([
'count/am/asset',
'count/am/hostasset',
'count/am/tag',
'get/am/asset/',
'get/am/hostasset/',
'get/am/tag/',
])
# Asset Management v2 GET methods.
api_methods['am2 get'] = set([
'get/am/asset/',
'get/am/hostasset/',
'get/am/tag/',
'get/am/hostinstancevuln/',
'get/am/assetdataconnector/',
'get/am/awsassetdataconnector/',
'get/am/awsauthrecord/',
])
# Keep track of methods with trailing slashes to autocorrect the user when they forget the slash.
api_methods_with_trailing_slash = defaultdict(set)
# Iterate over every method group ('1', '1 post', 'was get', 'am2 get', ...);
# the group's first token is the api_version key used by the connector.
for method_group in list(api_methods):
for method in api_methods[method_group]:
if method[-1] == '/':
# Add applicable method with api_version preceding it.
# Example:
# WAS API has 'get/was/webapp/'.
# method_group = 'was get'
# method_group.split()[0] = 'was'
# Take off slash to match user provided method.
# api_methods_with_trailing_slash['was'] contains 'get/was/webapp'
api_methods_with_trailing_slash[method_group.split()[0]].add(method[:-1])
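# Illustration (added, not in the original module) of the autocorrect lookup.
# Given the group iteration above, a slash-less WAS call can be detected:
#
# >>> 'get/was/webapp' in api_methods_with_trailing_slash['was']
# True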


@ -1,120 +0,0 @@
from __future__ import absolute_import
import datetime
from lxml import objectify
class Host(object):
def __init__(self, dns, id, ip, last_scan, netbios, os, tracking_method):
self.dns = str(dns)
self.id = int(id)
self.ip = str(ip)
last_scan = str(last_scan).replace('T', ' ').replace('Z', '').split(' ')
date = last_scan[0].split('-')
time = last_scan[1].split(':')
self.last_scan = datetime.datetime(int(date[0]), int(date[1]), int(date[2]), int(time[0]), int(time[1]), int(time[2]))
self.netbios = str(netbios)
self.os = str(os)
self.tracking_method = str(tracking_method)
class AssetGroup(object):
def __init__(self, business_impact, id, last_update, scanips, scandns, scanner_appliances, title):
self.business_impact = str(business_impact)
self.id = int(id)
self.last_update = str(last_update)
self.scanips = scanips
self.scandns = scandns
self.scanner_appliances = scanner_appliances
self.title = str(title)
def addAsset(self, conn, ip):
call = '/api/2.0/fo/asset/group/'
parameters = {'action': 'edit', 'id': self.id, 'add_ips': ip}
conn.request(call, parameters)
self.scanips.append(ip)
def setAssets(self, conn, ips):
call = '/api/2.0/fo/asset/group/'
parameters = {'action': 'edit', 'id': self.id, 'set_ips': ips}
conn.request(call, parameters)
class ReportTemplate(object):
def __init__(self, isGlobal, id, last_update, template_type, title, type, user):
self.isGlobal = int(isGlobal)
self.id = int(id)
self.last_update = str(last_update).replace('T', ' ').replace('Z', '').split(' ')
self.template_type = template_type
self.title = title
self.type = type
self.user = user.LOGIN
class Report(object):
def __init__(self, expiration_datetime, id, launch_datetime, output_format, size, status, type, user_login):
self.expiration_datetime = str(expiration_datetime).replace('T', ' ').replace('Z', '').split(' ')
self.id = int(id)
self.launch_datetime = str(launch_datetime).replace('T', ' ').replace('Z', '').split(' ')
self.output_format = output_format
self.size = size
self.status = status.STATE
self.type = type
self.user_login = user_login
def download(self, conn):
call = '/api/2.0/fo/report'
parameters = {'action': 'fetch', 'id': self.id}
if self.status == 'Finished':
return conn.request(call, parameters)
class Scan(object):
def __init__(self, assetgroups, duration, launch_datetime, option_profile, processed, ref, status, target, title, type, user_login):
self.assetgroups = assetgroups
self.duration = str(duration)
launch_datetime = str(launch_datetime).replace('T', ' ').replace('Z', '').split(' ')
date = launch_datetime[0].split('-')
time = launch_datetime[1].split(':')
self.launch_datetime = datetime.datetime(int(date[0]), int(date[1]), int(date[2]), int(time[0]), int(time[1]), int(time[2]))
self.option_profile = str(option_profile)
self.processed = int(processed)
self.ref = str(ref)
self.status = str(status.STATE)
self.target = str(target).split(', ')
self.title = str(title)
self.type = str(type)
self.user_login = str(user_login)
def cancel(self, conn):
cancelled_statuses = ['Cancelled', 'Finished', 'Error']
if self.status in cancelled_statuses:
raise ValueError("Scan cannot be cancelled because its status is " + self.status)
else:
call = '/api/2.0/fo/scan/'
parameters = {'action': 'cancel', 'scan_ref': self.ref}
conn.request(call, parameters)
parameters = {'action': 'list', 'scan_ref': self.ref, 'show_status': 1}
self.status = objectify.fromstring(conn.request(call, parameters)).RESPONSE.SCAN_LIST.SCAN.STATUS.STATE
def pause(self, conn):
if self.status != "Running":
raise ValueError("Scan cannot be paused because its status is " + self.status)
else:
call = '/api/2.0/fo/scan/'
parameters = {'action': 'pause', 'scan_ref': self.ref}
conn.request(call, parameters)
parameters = {'action': 'list', 'scan_ref': self.ref, 'show_status': 1}
self.status = objectify.fromstring(conn.request(call, parameters)).RESPONSE.SCAN_LIST.SCAN.STATUS.STATE
def resume(self, conn):
if self.status != "Paused":
raise ValueError("Scan cannot be resumed because its status is " + self.status)
else:
call = '/api/2.0/fo/scan/'
parameters = {'action': 'resume', 'scan_ref': self.ref}
conn.request(call, parameters)
parameters = {'action': 'list', 'scan_ref': self.ref, 'show_status': 1}
self.status = objectify.fromstring(conn.request(call, parameters)).RESPONSE.SCAN_LIST.SCAN.STATUS.STATE


@ -1,221 +0,0 @@
""" Module providing a single class (QualysConnectConfig) that parses a config
file and provides the information required to build QualysGuard sessions.
"""
from __future__ import absolute_import
from __future__ import print_function
import os
import stat
import getpass
import logging
from six.moves import input
from six.moves.configparser import *
import qualysapi.settings as qcs
# Setup module level logging.
logger = logging.getLogger(__name__)
# try:
#     from requests_ntlm import HttpNtlmAuth
# except ImportError as e:
#     logger.warning('Warning: Cannot support NTLM authentication.')
__author__ = "Parag Baxi <parag.baxi@gmail.com> & Colin Bell <colin.bell@uwaterloo.ca>"
__updated_by__ = "Austin Taylor <vulnWhisperer@austintaylor.io>"
__copyright__ = "Copyright 2011-2013, Parag Baxi & University of Waterloo"
__license__ = "BSD-new"
class QualysConnectConfig:
""" Class to create a ConfigParser and read user/password details
from an ini file.
"""
def __init__(self, filename=qcs.default_filename, remember_me=False, remember_me_always=False):
self._cfgfile = None
# Prioritize local directory filename.
# Check for file existence.
if os.path.exists(filename):
self._cfgfile = filename
elif os.path.exists(os.path.join(os.path.expanduser("~"), filename)):
# Set home path for file.
self._cfgfile = os.path.join(os.path.expanduser("~"), filename)
# create ConfigParser to combine defaults and input from config file.
self._cfgparse = ConfigParser(qcs.defaults)
if self._cfgfile:
self._cfgfile = os.path.realpath(self._cfgfile)
mode = stat.S_IMODE(os.stat(self._cfgfile)[stat.ST_MODE])
# apply bitmask to current mode to check ONLY user access permissions.
if (mode & (stat.S_IRWXG | stat.S_IRWXO)) != 0:
logger.warning('%s permissions allow more than user access.' % (filename,))
self._cfgparse.read(self._cfgfile)
# If the 'qualys' section doesn't exist, create it.
if not self._cfgparse.has_section('qualys'):
self._cfgparse.add_section('qualys')
# Use default hostname (if one isn't provided).
if not self._cfgparse.has_option('qualys', 'hostname'):
if self._cfgparse.has_option('DEFAULT', 'hostname'):
hostname = self._cfgparse.get('DEFAULT', 'hostname')
self._cfgparse.set('qualys', 'hostname', hostname)
else:
raise Exception("No 'hostname' set. QualysConnect does not know who to connect to.")
# Use default max_retries (if one isn't provided).
if not self._cfgparse.has_option('qualys', 'max_retries'):
self.max_retries = qcs.defaults['max_retries']
else:
self.max_retries = self._cfgparse.get('qualys', 'max_retries')
try:
self.max_retries = int(self.max_retries)
except Exception:
logger.error('Value max_retries must be an integer.')
print('Value max_retries must be an integer.')
exit(1)
self._cfgparse.set('qualys', 'max_retries', str(self.max_retries))
self.max_retries = int(self.max_retries)
# Get template ID; the user will need to set this to pull back CSV reports.
if not self._cfgparse.has_option('qualys', 'template_id'):
self.report_template_id = qcs.defaults['template_id']
else:
self.report_template_id = self._cfgparse.get('qualys', 'template_id')
try:
self.report_template_id = int(self.report_template_id)
except Exception:
logger.error('Report template ID must be set and be an integer.')
print('Value template_id must be an integer.')
exit(1)
self._cfgparse.set('qualys', 'template_id', str(self.report_template_id))
self.report_template_id = int(self.report_template_id)
# Proxy support
proxy_config = proxy_url = proxy_protocol = proxy_port = proxy_username = proxy_password = None
# User requires proxy?
if self._cfgparse.has_option('proxy', 'proxy_url'):
proxy_url = self._cfgparse.get('proxy', 'proxy_url')
# Remove protocol prefix from url if included.
for prefix in ('http://', 'https://'):
if proxy_url.startswith(prefix):
proxy_protocol = prefix
proxy_url = proxy_url[len(prefix):]
# Default proxy protocol is https.
if not proxy_protocol:
proxy_protocol = 'https://'
# Check for proxy port request.
if ':' in proxy_url:
# Proxy port already specified in url.
# Set proxy port.
proxy_port = proxy_url[proxy_url.index(':') + 1:]
# Remove proxy port from proxy url.
proxy_url = proxy_url[:proxy_url.index(':')]
if self._cfgparse.has_option('proxy', 'proxy_port'):
# Proxy requires specific port.
if proxy_port:
# Warn that a proxy port was already specified in the url.
proxy_port_url = proxy_port
proxy_port = self._cfgparse.get('proxy', 'proxy_port')
logger.warning('Proxy port from url overwritten by specified proxy_port from config:')
logger.warning('%s --> %s' % (proxy_port_url, proxy_port))
else:
proxy_port = self._cfgparse.get('proxy', 'proxy_port')
if not proxy_port:
# No proxy port specified.
if proxy_protocol == 'http://':
# Use default HTTP Proxy port.
proxy_port = '8080'
else:
# Use default HTTPS Proxy port.
proxy_port = '443'
# Check for proxy authentication request.
if self._cfgparse.has_option('proxy', 'proxy_username'):
# Proxy requires username & password.
proxy_username = self._cfgparse.get('proxy', 'proxy_username')
proxy_password = self._cfgparse.get('proxy', 'proxy_password')
# Not sure if this use case below is valid.
# # Support proxy with username and empty password.
# try:
# proxy_password = self._cfgparse.get('proxy','proxy_password')
# except NoOptionError, e:
# # Set empty password.
# proxy_password = ''
# Sample proxy config:
# 'http://user:pass@10.10.1.10:3128'
if proxy_url:
# Proxy requested.
proxy_config = proxy_url
if proxy_port:
# Proxy port requested.
proxy_config += ':' + proxy_port
if proxy_username:
# Proxy authentication requested.
proxy_config = proxy_username + ':' + proxy_password + '@' + proxy_config
# Prefix by proxy protocol.
proxy_config = proxy_protocol + proxy_config
# Set up proxy if applicable.
if proxy_config:
self.proxies = {'https': proxy_config}
else:
self.proxies = None
# ask username (if one doesn't exist)
if not self._cfgparse.has_option('qualys', 'username'):
username = input('QualysGuard Username: ')
self._cfgparse.set('qualys', 'username', username)
# ask password (if one doesn't exist)
if not self._cfgparse.has_option('qualys', 'password'):
password = getpass.getpass('QualysGuard Password: ')
self._cfgparse.set('qualys', 'password', password)
logging.debug(self._cfgparse.items('qualys'))
if remember_me or remember_me_always:
# Let's create that config file for next time...
# Where to store this?
if remember_me:
# Store in current working directory.
config_path = filename
if remember_me_always:
# Store in home directory.
config_path = os.path.join(os.path.expanduser("~"), filename)
if not os.path.exists(config_path):
# Write the file only if it doesn't already exist.
# http://stackoverflow.com/questions/5624359/write-file-with-specific-permissions-in-python
mode = stat.S_IRUSR | stat.S_IWUSR # This is 0o600 in octal and 384 in decimal.
umask_original = os.umask(0)
try:
config_file = os.fdopen(os.open(config_path, os.O_WRONLY | os.O_CREAT, mode), 'w')
finally:
os.umask(umask_original)
# Add the settings to the structure of the file, and let's write it out...
self._cfgparse.write(config_file)
config_file.close()
def get_config_filename(self):
return self._cfgfile
def get_config(self):
return self._cfgparse
def get_auth(self):
''' Returns username and password from the config file. '''
return (self._cfgparse.get('qualys', 'username'), self._cfgparse.get('qualys', 'password'))
def get_hostname(self):
''' Returns hostname. '''
return self._cfgparse.get('qualys', 'hostname')
def get_template_id(self):
return self._cfgparse.get('qualys', 'template_id')
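# A minimal sample config (an illustration; all values are placeholders). On
# non-Windows systems the default filename is ~/.qcrc, per qualysapi.settings:
#
# [qualys]
# hostname = qualysapi.qualys.com
# username = your_username
# password = your_password
# max_retries = 3
# template_id = 00000
#
# [proxy]
# proxy_url = proxy.example.com
# proxy_port = 3128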


@ -1,363 +0,0 @@
from __future__ import absolute_import
from __future__ import print_function
__author__ = 'Parag Baxi <parag.baxi@gmail.com>'
__copyright__ = 'Copyright 2013, Parag Baxi'
__license__ = 'Apache License 2.0'
""" Module that contains classes for setting up connections to QualysGuard API
and requesting data from it.
"""
import logging
import time
try:
from urllib.parse import urlparse, parse_qs
except ImportError:
from urlparse import urlparse, parse_qs
from collections import defaultdict
import requests
import qualysapi.version
import qualysapi.api_methods
import qualysapi.api_actions
import qualysapi.api_actions as api_actions
# Setup module level logging.
logger = logging.getLogger(__name__)
try:
from lxml import etree
except ImportError as e:
logger.warning(
'Warning: Cannot consume lxml.builder E objects without lxml. Send XML strings for AM & WAS API calls.')
class QGConnector(api_actions.QGActions):
""" Qualys Connection class which allows requests to the QualysGuard API using HTTP-Basic Authentication (over SSL).
"""
def __init__(self, auth, server='qualysapi.qualys.com', proxies=None, max_retries=3):
# Read username & password from file, if possible.
self.auth = auth
# Remember QualysGuard API server.
self.server = server
# Remember rate limits per call.
self.rate_limit_remaining = defaultdict(int)
# api_methods: Define method algorithm in a dict of set.
# Naming convention: api_methods[api_version optional_blah] due to api_methods_with_trailing_slash testing.
self.api_methods = qualysapi.api_methods.api_methods
#
# Keep track of methods with ending slashes to autocorrect user when they forgot slash.
self.api_methods_with_trailing_slash = qualysapi.api_methods.api_methods_with_trailing_slash
self.proxies = proxies
logger.debug('proxies = \n%s' % proxies)
# Set up requests max_retries.
logger.debug('max_retries = \n%s' % max_retries)
self.session = requests.Session()
http_max_retries = requests.adapters.HTTPAdapter(max_retries=max_retries)
https_max_retries = requests.adapters.HTTPAdapter(max_retries=max_retries)
self.session.mount('http://', http_max_retries)
self.session.mount('https://', https_max_retries)
def __call__(self):
return self
def format_api_version(self, api_version):
""" Return QualysGuard API version for api_version specified.
"""
# Convert to int.
if type(api_version) == str:
api_version = api_version.lower()
if api_version[0] == 'v' and api_version[1].isdigit():
# Remove first 'v' in case the user typed 'v1' or 'v2', etc.
api_version = api_version[1:]
# Check for input matching Qualys modules.
if api_version in ('asset management', 'assets', 'tag', 'tagging', 'tags'):
# Convert to Asset Management API.
api_version = 'am'
elif api_version == 'am2':
# Convert to Asset Management API v2
api_version = 'am2'
elif api_version in ('webapp', 'web application scanning', 'webapp scanning'):
# Convert to WAS API.
api_version = 'was'
elif api_version in ('pol', 'pc'):
# Convert PC module to API number 2.
api_version = 2
else:
api_version = int(api_version)
return api_version
def which_api_version(self, api_call):
""" Return QualysGuard API version for api_call specified.
"""
# Leverage patterns of calls to API methods.
if api_call.endswith('.php'):
# API v1.
return 1
elif api_call.startswith('api/2.0/'):
# API v2.
return 2
elif '/am/' in api_call:
# Asset Management API.
return 'am'
elif '/was/' in api_call:
# WAS API.
return 'was'
return False
def url_api_version(self, api_version):
""" Return base API url string for the QualysGuard api_version and server.
"""
# Set base url depending on API version.
if api_version == 1:
# QualysGuard API v1 url.
url = "https://%s/msp/" % (self.server,)
elif api_version == 2:
# QualysGuard API v2 url.
url = "https://%s/" % (self.server,)
elif api_version == 'was':
# QualysGuard REST v3 API url (Portal API).
url = "https://%s/qps/rest/3.0/" % (self.server,)
elif api_version == 'am':
# QualysGuard REST v1 API url (Portal API).
url = "https://%s/qps/rest/1.0/" % (self.server,)
elif api_version == 'am2':
# QualysGuard REST v1 API url (Portal API).
url = "https://%s/qps/rest/2.0/" % (self.server,)
else:
raise Exception("Unknown QualysGuard API Version Number (%s)" % (api_version,))
logger.debug("Base url =\n%s" % (url))
return url
def format_http_method(self, api_version, api_call, data):
""" Return QualysGuard API http method, with POST preferred..
"""
# Define get methods for automatic http request methodology.
#
# All API v2 requests are POST methods.
if api_version == 2:
return 'post'
elif api_version == 1:
if api_call in self.api_methods['1 post']:
return 'post'
else:
return 'get'
elif api_version == 'was':
# WAS API call.
# Because WAS API enables user to GET API resources in URI, let's chop off the resource.
# '/download/was/report/18823' --> '/download/was/report/'
api_call_endpoint = api_call[:api_call.rfind('/') + 1]
if api_call_endpoint in self.api_methods['was get']:
return 'get'
# Post calls with no payload will result in HTTPError: 415 Client Error: Unsupported Media Type.
if data is None:
# No post data. Some calls change to GET with no post data.
if api_call_endpoint in self.api_methods['was no data get']:
return 'get'
else:
return 'post'
else:
# Call with post data.
return 'post'
else:
# Asset Management API call.
if api_call in self.api_methods['am get']:
return 'get'
else:
return 'post'
def preformat_call(self, api_call):
""" Return properly formatted QualysGuard API call.
"""
# Remove possible starting slashes or trailing question marks in call.
api_call_formatted = api_call.lstrip('/')
api_call_formatted = api_call_formatted.rstrip('?')
if api_call != api_call_formatted:
# Show difference
logger.debug('api_call post strip =\n%s' % api_call_formatted)
return api_call_formatted
def format_call(self, api_version, api_call):
""" Return properly formatted QualysGuard API call according to api_version etiquette.
"""
# Remove possible starting slashes or trailing question marks in call.
api_call = api_call.lstrip('/')
api_call = api_call.rstrip('?')
logger.debug('api_call post strip =\n%s' % api_call)
# Make sure call always ends in slash for API v2 calls.
if (api_version == 2 and api_call[-1] != '/'):
# Add slash.
logger.debug('Adding "/" to api_call.')
api_call += '/'
if api_call in self.api_methods_with_trailing_slash[api_version]:
# Add slash.
logger.debug('Adding "/" to api_call.')
api_call += '/'
return api_call
def format_payload(self, api_version, data):
""" Return appropriate QualysGuard API call.
"""
# Check if payload is for API v1 or API v2.
if (api_version in (1, 2)):
# Check if string type.
if type(data) == str:
# Convert to dictionary.
logger.debug('Converting string to dict:\n%s' % data)
# Remove possible starting question mark & ending ampersands.
data = data.lstrip('?')
data = data.rstrip('&')
# Convert the query string to a dictionary; urlparse() would only split
# the URL into components rather than parse the parameters.
data = parse_qs(data)
logger.debug('Converted:\n%s' % str(data))
elif api_version in ('am', 'was', 'am2'):
if type(data) == etree._Element:
logger.debug('Converting lxml.builder.E to string')
data = etree.tostring(data)
logger.debug('Converted:\n%s' % data)
return data
def request(self, api_call, data=None, api_version=None, http_method=None, concurrent_scans_retries=0,
concurrent_scans_retry_delay=0):
""" Return QualysGuard API response.
"""
logger.debug('api_call =\n%s' % api_call)
logger.debug('api_version =\n%s' % api_version)
logger.debug('data %s =\n %s' % (type(data), str(data)))
logger.debug('http_method =\n%s' % http_method)
logger.debug('concurrent_scans_retries =\n%s' % str(concurrent_scans_retries))
logger.debug('concurrent_scans_retry_delay =\n%s' % str(concurrent_scans_retry_delay))
concurrent_scans_retries = int(concurrent_scans_retries)
concurrent_scans_retry_delay = int(concurrent_scans_retry_delay)
#
# Determine API version.
# Preformat call.
api_call = self.preformat_call(api_call)
if api_version:
# API version specified, format API version inputted.
api_version = self.format_api_version(api_version)
else:
# API version not specified, determine automatically.
api_version = self.which_api_version(api_call)
#
# Set up base url.
url = self.url_api_version(api_version)
#
# Set up headers.
headers = {"X-Requested-With": "QualysAPI (python) v%s - VulnWhisperer" % (qualysapi.version.__version__,)}
logger.debug('headers =\n%s' % (str(headers)))
# Portal API takes in XML text, requiring custom header.
if api_version in ('am', 'was', 'am2'):
headers['Content-type'] = 'text/xml'
#
# Set up http request method, if not specified.
if not http_method:
http_method = self.format_http_method(api_version, api_call, data)
logger.debug('http_method =\n%s' % http_method)
#
# Format API call.
api_call = self.format_call(api_version, api_call)
logger.debug('api_call =\n%s' % (api_call))
# Append api_call to url.
url += api_call
#
# Format data, if applicable.
if data is not None:
data = self.format_payload(api_version, data)
# Make request at least once (more if concurrent_retry is enabled).
retries = 0
#
# set a warning threshold for the rate limit
rate_warn_threshold = 10
while retries <= concurrent_scans_retries:
# Make request.
logger.debug('url =\n%s' % (str(url)))
logger.debug('data =\n%s' % (str(data)))
logger.debug('headers =\n%s' % (str(headers)))
if http_method == 'get':
# GET
logger.debug('GET request.')
request = self.session.get(url, params=data, auth=self.auth, headers=headers, proxies=self.proxies)
else:
# POST
logger.debug('POST request.')
# Make POST request.
request = self.session.post(url, data=data, auth=self.auth, headers=headers, proxies=self.proxies)
logger.debug('response headers =\n%s' % (str(request.headers)))
#
# Remember how many calls the user has left against api_call.
try:
self.rate_limit_remaining[api_call] = int(request.headers['x-ratelimit-remaining'])
if self.rate_limit_remaining[api_call] > rate_warn_threshold:
logger.debug('rate limit for api_call, %s = %s' % (api_call, self.rate_limit_remaining[api_call]))
elif self.rate_limit_remaining[api_call] > 0:
logger.warning('Rate limit is about to be reached (remaining api calls = %s)' % self.rate_limit_remaining[api_call])
else:
logger.critical('ATTENTION! RATE LIMIT HAS BEEN REACHED (remaining api calls = %s)!' % self.rate_limit_remaining[api_call])
except KeyError as e:
# Likely a bad api_call.
logger.debug(e)
pass
except TypeError as e:
# Likely an asset search api_call.
logger.debug(e)
pass
# Response received.
response = str(request.content)
logger.debug('response text =\n%s' % (response))
# Keep track of how many retries.
retries += 1
# Check for concurrent scans limit.
if not ('<responseCode>INVALID_REQUEST</responseCode>' in response and
'<errorMessage>You have reached the maximum number of concurrent running scans' in response and
'<errorResolution>Please wait until your previous scans have completed</errorResolution>' in response):
# Did not hit concurrent scan limit.
break
else:
# Hit concurrent scan limit.
logger.critical(response)
# If trying again, delay next try by concurrent_scans_retry_delay.
if retries <= concurrent_scans_retries:
logger.warning('Waiting %d seconds until next try.' % concurrent_scans_retry_delay)
time.sleep(concurrent_scans_retry_delay)
# Inform user of how many retries.
logger.critical('Retry #%d' % retries)
else:
# Ran out of retries. Let user know.
print('Alert! Ran out of concurrent_scans_retries!')
logger.critical('Alert! Ran out of concurrent_scans_retries!')
return False
# Check to see if there was an error.
try:
request.raise_for_status()
except requests.HTTPError as e:
# Error
print('Error! Received a 4XX client error or 5XX server error response.')
print('Content = \n', response)
logger.error('Content = \n%s' % response)
print('Headers = \n', request.headers)
logger.error('Headers = \n%s' % str(request.headers))
request.raise_for_status()
if '<RETURN status="FAILED" number="2007">' in response:
print('Error! Your IP address is not in the list of secure IPs. Manager must include this IP (QualysGuard VM > Users > Security).')
print('Content = \n', response)
logger.error('Content = \n%s' % response)
print('Headers = \n', request.headers)
logger.error('Headers = \n%s' % str(request.headers))
return False
return response
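# Example usage (a sketch added for illustration, not part of the module).
# The connector infers the API version, base URL, and HTTP method from the
# call path alone:
#
# import qualysapi
# qgc = qualysapi.connect()
# xml = qgc.request('/api/2.0/fo/scan/', {'action': 'list', 'show_status': 1})
# remaining = qgc.rate_limit_remaining['api/2.0/fo/scan/']  # tracked per call path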


@ -1,290 +0,0 @@
# File for 3rd party contributions.
from __future__ import absolute_import
from __future__ import print_function
import six
from six.moves import range
__author__ = 'Parag Baxi <parag.baxi@gmail.com>'
__license__ = 'Apache License 2.0'
# csv, os, re, lxml.html and OrderedDict are used by the helpers below.
import csv
import logging
import os
import re
import time
import types
import unicodedata
from collections import OrderedDict, defaultdict
import lxml.html
from lxml import etree, objectify
# Set module level logger.
logger = logging.getLogger(__name__)
def generate_vm_report(self, report_details, startup_delay=60, polling_delay=30, max_checks=10):
''' Spool and download QualysGuard VM report.
startup_delay: Time in seconds to wait before initially checking.
polling_delay: Time in seconds to wait between checks.
max_checks: Maximum number of times to check for report spooling completion.
'''
# Merge parameters.
report_details['action'] = 'launch'
logger.debug(report_details)
xml_output = self.request(2, 'report', report_details)
report_id = etree.XML(xml_output).find('.//VALUE').text
logger.debug('report_id: %s' % (report_id))
# Wait for report to finish spooling; check up to max_checks times
# (with the default delays, roughly 10 minutes in total).
logger.info('Report sent to spooler. Checking for report in %s seconds.' % (startup_delay))
time.sleep(startup_delay)
for n in range(0, max_checks):
# Check to see if report is done.
xml_output = self.request(2, 'report', {'action': 'list', 'id': report_id})
tag_status = etree.XML(xml_output).findtext(".//STATE")
logger.debug('tag_status: %s' % (tag_status))
tag_status = etree.XML(xml_output).findtext(".//STATE")
logger.debug('tag_status: %s' % (tag_status))
if tag_status is not None:
# Report is showing up in the Report Center.
if tag_status == 'Finished':
# Report creation complete.
break
# Report not finished, wait.
logger.info('Report still spooling. Trying again in %s seconds.' % (polling_delay))
time.sleep(polling_delay)
# We now have to fetch the report. Use the report id.
report_xml = self.request(2, 'report', {'action': 'fetch', 'id': report_id})
return report_xml
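# Example (an added sketch): generate_vm_report is written to be bound to a
# connector instance (note the 'self' parameter), e.g. with types.MethodType;
# the template ID below is a placeholder:
#
# import types
# import qualysapi
# conn = qualysapi.connect()
# conn.generate_vm_report = types.MethodType(generate_vm_report, conn)
# report_xml = conn.generate_vm_report({'template_id': '1063695',
#                                       'output_format': 'xml',
#                                       'report_title': 'Nightly VM'})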
def qg_html_to_ascii(qg_html_text):
"""Convert and return QualysGuard's quasi HTML text to ASCII text."""
text = qg_html_text
# Handle tagged line breaks (<p>, <br>)
text = re.sub(r'(?i)<br>[ ]*', '\n', text)
text = re.sub(r'(?i)<p>[ ]*', '\n', text)
# Remove consecutive line breaks
text = re.sub(r"^\s+", "", text, flags=re.MULTILINE)
# Remove empty lines at the end.
text = re.sub('[\n]+$', '', text)
# Store anchor tags href attribute
links = list(lxml.html.iterlinks(text))
# Remove anchor tags
html_element = lxml.html.fromstring(text)
# Convert anchor tags to "link_text (link: link_url )".
logging.debug('Converting anchor tags...')
text = html_element.text_content().encode('ascii', 'ignore')
# Convert each link.
for l in links:
# Find and replace each link.
link_text = l[0].text_content().encode('ascii', 'ignore').strip()
link_url = l[2].strip()
# Replacing link_text
if link_text != link_url:
# Link text is different, most likely a description.
text = text.replace(link_text, '%s (link: %s )' % (link_text, link_url))
else:
# Link text is the same as the href. No need to duplicate link.
text = text.replace(link_text, '%s' % (link_url))
logging.debug('Done.')
return text
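# Example (an added sketch; the exact output depends on lxml's text
# extraction):
#
# qg_html_to_ascii('Patch available.<br>See <a href="http://example.com">advisory</a>.')
# would return plain text with the <br> converted to a newline and the link
# rewritten as 'advisory (link: http://example.com )'.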
def qg_parse_informational_qids(xml_report):
"""Return vulnerabilities of severity 1 and 2 levels due to a restriction of
QualysGuard's inability to report them in the internal ticketing system.
"""
# asset_group's vulnerability data map:
# {'qid_number': {
# # CSV info
# 'hosts': [{'ip': '10.28.0.1', 'dns': 'hostname', 'netbios': 'blah', 'vuln_id': 'remediation_ticket_number'}, {'ip': '10.28.0.3', 'dns': 'hostname2', 'netbios': '', 'vuln_id': 'remediation_ticket_number'}, ...],
# 'solution': '',
# 'impact': '',
# 'threat': '',
# 'severity': '',
# }
# 'qid_number2': ...
# }
# Add all vulnerabilities to list of dictionaries.
# Use defaultdict in case a new QID is encountered.
info_vulns = defaultdict(dict)
# Parse vulnerabilities in xml string.
tree = objectify.fromstring(xml_report)
# Write IP, DNS, & Result into each QID CSV file.
logging.debug('Parsing report...')
# TODO: Check against c_args.max to prevent creating CSV content for QIDs that we won't use.
for host in tree.HOST_LIST.HOST:
# Extract possible extra hostname information.
try:
netbios = unicodedata.normalize('NFKD', six.text_type(host.NETBIOS)).encode('ascii', 'ignore').strip()
except AttributeError:
netbios = ''
try:
dns = unicodedata.normalize('NFKD', six.text_type(host.DNS)).encode('ascii', 'ignore').strip()
except AttributeError:
dns = ''
ip = unicodedata.normalize('NFKD', six.text_type(host.IP)).encode('ascii', 'ignore').strip()
# Extract vulnerabilities host is affected by.
for vuln in host.VULN_INFO_LIST.VULN_INFO:
try:
result = unicodedata.normalize('NFKD', six.text_type(vuln.RESULT)).encode('ascii', 'ignore').strip()
except AttributeError:
result = ''
qid = unicodedata.normalize('NFKD', six.text_type(vuln.QID)).encode('ascii', 'ignore').strip()
# Attempt to add host to QID's list of affected hosts.
try:
info_vulns[qid]['hosts'].append({'ip': '%s' % (ip),
'dns': '%s' % (dns),
'netbios': '%s' % (netbios),
'vuln_id': '',
# Informational QIDs do not have vuln_id numbers. This is a flag to write the CSV file.
'result': '%s' % (result), })
except KeyError:
# New QID.
logging.debug('New QID found: %s' % (qid))
info_vulns[qid]['hosts'] = []
info_vulns[qid]['hosts'].append({'ip': '%s' % (ip),
'dns': '%s' % (dns),
'netbios': '%s' % (netbios),
'vuln_id': '',
# Informational QIDs do not have vuln_id numbers. This is a flag to write the CSV file.
'result': '%s' % (result), })
# All vulnerabilities added.
# Add all vulnerability information.
for vuln_details in tree.GLOSSARY.VULN_DETAILS_LIST.VULN_DETAILS:
qid = unicodedata.normalize('NFKD', six.text_type(vuln_details.QID)).encode('ascii', 'ignore').strip()
info_vulns[qid]['title'] = unicodedata.normalize('NFKD', six.text_type(vuln_details.TITLE)).encode('ascii',
'ignore').strip()
info_vulns[qid]['severity'] = unicodedata.normalize('NFKD', six.text_type(vuln_details.SEVERITY)).encode('ascii',
'ignore').strip()
info_vulns[qid]['solution'] = qg_html_to_ascii(
unicodedata.normalize('NFKD', six.text_type(vuln_details.SOLUTION)).encode('ascii', 'ignore').strip())
info_vulns[qid]['threat'] = qg_html_to_ascii(
unicodedata.normalize('NFKD', six.text_type(vuln_details.THREAT)).encode('ascii', 'ignore').strip())
info_vulns[qid]['impact'] = qg_html_to_ascii(
unicodedata.normalize('NFKD', six.text_type(vuln_details.IMPACT)).encode('ascii', 'ignore').strip())
# Ready to report informational vulnerabilities.
return info_vulns
# TODO: Implement required helpers qg_remediation_tickets(asset_group, status, qids),
# qg_command(...) and reaction_open_issue(...), which qg_ticket_list below depends on.
# TODO: Remove static 'report_template' value. Parameterize and document required report template.
def qg_ticket_list(asset_group, severity, qids=None):
"""Return dictionary of each vulnerability reported against asset_group of severity."""
global asset_group_details
# All vulnerabilities imported to list of dictionaries.
vulns = qg_remediation_tickets(asset_group, 'OPEN', qids) # vulns now holds all open remediation tickets.
if not vulns:
# No tickets to report.
return False
#
# Sort the vulnerabilities in order of prevalence -- number of hosts affected.
vulns = OrderedDict(sorted(list(vulns.items()), key=lambda t: len(t[1]['hosts'])))
logging.debug('vulns sorted = %s' % (vulns))
#
# Remove QIDs that have duplicate patches.
#
# Read in patch report.
# TODO: Allow for lookup of report_template.
# Report template is Patch report "Sev 5 confirmed patchable".
logging.debug('Retrieving patch report from QualysGuard.')
print('Retrieving patch report from QualysGuard.')
report_template = '1063695'
# Call QualysGuard for patch report.
csv_output = qg_command(2, 'report', {'action': 'launch', 'output_format': 'csv',
'asset_group_ids': asset_group_details['qg_asset_group_id'],
'template_id': report_template,
'report_title': 'QGIR Patch %s' % (asset_group)})
logging.debug('csv_output =')
logging.debug(csv_output)
logging.debug('Improving remediation efficiency by removing unneeded, redundant patches.')
print('Improving remediation efficiency by removing unneeded, redundant patches.')
# Find the line for Patches by Host data.
logging.debug('Header found at %s.' % (csv_output.find('Patch QID, IP, DNS, NetBIOS, OS, Vulnerability Count')))
starting_pos = csv_output.find('Patch QID, IP, DNS, NetBIOS, OS, Vulnerability Count') + 52
logging.debug('starting_pos = %s' % str(starting_pos))
# Data resides between line ending in 'Vulnerability Count' and a blank line.
patches_by_host = csv_output[starting_pos:csv_output[starting_pos:].find(
'Host Vulnerabilities Fixed by Patch') + starting_pos - 3]
logging.debug('patches_by_host =')
logging.debug(patches_by_host)
# Read in string patches_by_host csv to a dictionary.
f = patches_by_host.split(os.linesep)
reader = csv.DictReader(f, ['Patch QID', 'IP', 'DNS', 'NetBIOS', 'OS', 'Vulnerability Count'], delimiter=',')
# Mark Patch QIDs that fix multiple vulnerabilities with associated IP addresses.
redundant_qids = defaultdict(list)
for row in reader:
if int(row['Vulnerability Count']) > 1:
# Add to list of redundant QIDs.
redundant_qids[row['Patch QID']].append(row['IP'])
logging.debug('%s, %s, %s, %s' % (
row['Patch QID'],
row['IP'],
int(row['Vulnerability Count']),
redundant_qids[row['Patch QID']]))
# Log for debugging.
logging.debug('len(redundant_qids) = %s, redundant_qids =' % (len(redundant_qids)))
for patch_qid in list(redundant_qids.keys()):
logging.debug('%s, %s' % (str(patch_qid), str(redundant_qids[patch_qid])))
# Extract redundant QIDs with associated IP addresses.
# Find the line for Patches by Host data.
starting_pos = csv_output.find('Patch QID, IP, QID, Severity, Type, Title, Instance, Last Detected') + 66
# Data resides between line ending in 'Vulnerability Count' and end of string.
host_vulnerabilities_fixed_by_patch = csv_output[starting_pos:]
# Read in string host_vulnerabilities_fixed_by_patch csv to a dictionary.
f = host_vulnerabilities_fixed_by_patch.split(os.linesep)
reader = csv.DictReader(f, ['Patch QID', 'IP', 'QID', 'Severity', 'Type', 'Title', 'Instance', 'Last Detected'],
delimiter=',')
# Remove IP addresses associated with redundant QIDs.
qids_to_remove = defaultdict(list)
for row in reader:
# If the row's IP address's Patch QID was found to have multiple vulnerabilities...
if len(redundant_qids[row['Patch QID']]) > 0 and redundant_qids[row['Patch QID']].count(row['IP']) > 0:
# Add the QID column to the list of dictionaries {QID: [IP address, IP address, ...], QID2: [IP address], ...}
qids_to_remove[row['QID']].append(row['IP'])
# Log for debugging.
logging.debug('len(qids_to_remove) = %s, qids_to_remove =' % (len(qids_to_remove)))
for a_qid in list(qids_to_remove.keys()):
logging.debug('%s, %s' % (str(a_qid), str(qids_to_remove[a_qid])))
#
# Diff vulns against qids_to_remove and against open incidents.
#
vulns_length = len(vulns)
# Iterate over list of keys rather than original dictionary as some keys may be deleted changing the size of the dictionary.
for a_qid in list(vulns.keys()):
# Debug log original qid's hosts.
logging.debug('Before diffing vulns[%s] =' % (a_qid))
logging.debug(vulns[a_qid]['hosts'])
# Pop each host.
# The [:] returns a "slice" of x, which happens to contain all its elements, and is thus effectively a copy of x.
for host in vulns[a_qid]['hosts'][:]:
# If the QID for the host is a dupe or if a there is an open Reaction incident.
if qids_to_remove[a_qid].count(host['ip']) > 0 or reaction_open_issue(host['vuln_id']):
# Remove the host from the QID's list of target hosts.
logging.debug('Removing remediation ticket %s.' % (host['vuln_id']))
vulns[a_qid]['hosts'].remove(host)
else:
# Do not remove this vuln
logging.debug('Will report remediation %s.' % (host['vuln_id']))
# Debug log diff'd qid's hosts.
logging.debug('After diffing vulns[%s]=' % (a_qid))
logging.debug(vulns[a_qid]['hosts'])
# If there are no more hosts left to patch for the qid.
if len(vulns[a_qid]['hosts']) == 0:
# Remove the QID.
logging.debug('Deleting vulns[%s].' % (a_qid))
del vulns[a_qid]
# Diff completed
if not vulns_length == len(vulns):
print('A count of %s vulnerabilities have been consolidated to %s vulnerabilities, a reduction of %s%%.' % (
int(vulns_length),
int(len(vulns)),
int(round((int(vulns_length) - int(len(vulns))) / float(vulns_length) * 100))))
# Return vulns to report.
logging.debug('vulns =')
logging.debug(vulns)
return vulns
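A minimal sketch of how qg_ticket_list() is meant to be driven, assuming the surrounding script has loaded asset_group_details and implemented qg_remediation_tickets() per the TODO above; the asset group name and severity here are illustrative only:

def report_open_vulns(asset_group='Example Asset Group', severity=5):
    # Hypothetical driver -- not part of the original script.
    vulns = qg_ticket_list(asset_group, severity)
    if not vulns:
        print('No open remediation tickets for %s.' % asset_group)
        return
    # QIDs arrive sorted by prevalence (number of affected hosts).
    for qid, details in vulns.items():
        print('QID %s affects %s host(s).' % (qid, len(details['hosts'])))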


@@ -1,21 +0,0 @@
''' Module to hold global settings reused throughout qualysapi. '''
from __future__ import absolute_import
__author__ = "Colin Bell <colin.bell@uwaterloo.ca>"
__copyright__ = "Copyright 2011-2013, University of Waterloo"
__license__ = "BSD-new"
import os
global defaults
global default_filename
if os.name == 'nt':
    default_filename = "config.ini"
else:
    default_filename = ".qcrc"
defaults = {'hostname': 'qualysapi.qualys.com',
            'max_retries': '3',
            'template_id': '00000'}
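For illustration, a credentials file consumed alongside these defaults might look like the sketch below. The [info] section name and the username/password keys are assumptions about QualysConnectConfig (which is not shown here); only the hostname, max_retries, and template_id keys are taken from the defaults dict above.

# ~/.qcrc (or config.ini on Windows) -- hypothetical contents
[info]
hostname = qualysapi.qualys.com
username = example_user
password = example_password
max_retries = 3
template_id = 00000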


@@ -1,29 +0,0 @@
""" A set of utility functions for QualysConnect module. """
from __future__ import absolute_import
import logging
import qualysapi.config as qcconf
import qualysapi.connector as qcconn
import qualysapi.settings as qcs
__author__ = "Parag Baxi <parag.baxi@gmail.com> & Colin Bell <colin.bell@uwaterloo.ca>"
__copyright__ = "Copyright 2011-2013, Parag Baxi & University of Waterloo"
__license__ = 'Apache License 2.0'
# Set module level logger.
logger = logging.getLogger(__name__)
def connect(config_file=qcs.default_filename, remember_me=False, remember_me_always=False):
""" Return a QGAPIConnect object for v1 API pulling settings from config
file.
"""
# Retrieve login credentials.
conf = qcconf.QualysConnectConfig(filename=config_file, remember_me=remember_me,
remember_me_always=remember_me_always)
connect = qcconn.QGConnector(conf.get_auth(),
conf.get_hostname(),
conf.proxies,
conf.max_retries)
logger.info("Finished building connector.")
return connect
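Going by the signature above, a typical caller would look something like this sketch; the request() call and the about.php endpoint are assumptions about QGConnector's interface, which lives in qualysapi.connector rather than in this file:

import qualysapi

# Build a connector from the default config file (.qcrc, or config.ini on Windows).
qgc = qualysapi.connect(remember_me=True)
# Hypothetical v1 API call -- endpoint and arguments are illustrative only.
print(qgc.request(1, 'about.php'))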


@@ -1,3 +0,0 @@
__author__ = 'Austin Taylor'
__pkgname__ = 'qualysapi'
__version__ = '4.1.0'


@@ -1,51 +0,0 @@
#!/usr/bin/env python
from __future__ import absolute_import
import os
import setuptools
try:
    from setuptools import setup
except ImportError:
    from distutils.core import setup
__author__ = 'Austin Taylor <vulnWhisperer@austintaylor.io>'
__copyright__ = 'Copyright 2017, Austin Taylor'
__license__ = 'BSD-new'
# Make pyflakes happy.
__pkgname__ = None
__version__ = None
exec(compile(open('qualysapi/version.py').read(), 'qualysapi/version.py', 'exec'))
# A utility function to read the README file into the long_description field.
def read(fname):
    """ Takes a filename and returns the contents of said file relative to
    the current directory.
    """
    return open(os.path.join(os.path.dirname(__file__), fname)).read()
setup(name=__pkgname__,
      version=__version__,
      author='Austin Taylor',
      author_email='vulnWhisperer@austintaylor.io',
      description='QualysGuard(R) Qualys API Package modified for VulnWhisperer',
      license='BSD-new',
      keywords='Qualys QualysGuard API helper network security',
      url='https://github.com/austin-taylor/qualysapi',
      package_dir={'': '.'},
      # packages=setuptools.find_packages(),
      packages=['qualysapi'],
      # package_data={'qualysapi': ['LICENSE']},
      # scripts=['src/scripts/qhostinfo.py', 'src/scripts/qscanhist.py', 'src/scripts/qreports.py'],
      long_description=read('README.md'),
      classifiers=[
          'Development Status :: 5 - Production/Stable',
          'Topic :: Utilities',
          'License :: OSI Approved :: Apache Software License',
          'Intended Audience :: Developers',
      ],
      install_requires=[
          'requests',
      ],
      )
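Because the metadata above is pulled from qualysapi/version.py at build time via exec(), this vendored copy installs like any other setuptools project: run pip install . (or python setup.py install) from the directory containing setup.py.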

docker-compose-test.yml Normal file

@@ -0,0 +1,97 @@
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
    container_name: elasticsearch
    environment:
      - cluster.name=vulnwhisperer
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
      - xpack.security.enabled=false
      - cluster.routing.allocation.disk.threshold_enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    mem_limit: 8g
    volumes:
      - ./data/esdata1:/usr/share/elasticsearch/data
      - ./data/es_snapshots:/snapshots
    ports:
      - 9200:9200
    #restart: always
    networks:
      esnet:
        aliases:
          - elasticsearch.local
  kibana:
    image: docker.elastic.co/kibana/kibana:6.6.0
    container_name: kibana
    environment:
      SERVER_NAME: kibana
      ELASTICSEARCH_URL: http://elasticsearch:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    networks:
      esnet:
        aliases:
          - kibana.local
  kibana-config:
    image: alpine
    container_name: kibana-config
    volumes:
      - ./resources/elk6/init_kibana.sh:/opt/init_kibana.sh
      - ./resources/elk6/kibana_APIonly.json:/opt/kibana_APIonly.json
      - ./resources/elk6/logstash-vulnwhisperer-template.json:/opt/index-template.json
    command: sh -c "apk add --no-cache curl bash && chmod +x /opt/init_kibana.sh && chmod +r /opt/kibana_APIonly.json && cd /opt/ && /bin/bash /opt/init_kibana.sh" # /opt/kibana_APIonly.json"
    networks:
      esnet:
        aliases:
          - kibana-config.local
  logstash:
    image: docker.elastic.co/logstash/logstash:6.6.0
    container_name: logstash
    volumes:
      - ./resources/elk6/pipeline/:/usr/share/logstash/pipeline
      - ./data/vulnwhisperer/:/opt/VulnWhisperer/data
      # - ./resources/elk6/logstash.yml:/usr/share/logstash/config/logstash.yml
    environment:
      - xpack.monitoring.enabled=false
    depends_on:
      - elasticsearch
    ports:
      - 9600:9600
    networks:
      esnet:
        aliases:
          - logstash.local
  vulnwhisperer:
    # image: hasecuritysolutions/vulnwhisperer:latest
    image: vulnwhisperer-local
    container_name: vulnwhisperer
    entrypoint: [
      "vuln_whisperer",
      "-F",
      "-c",
      "/opt/VulnWhisperer/vulnwhisperer.ini",
      "--mock",
      "--mock_dir",
      "/tests/data"
    ]
    volumes:
      - ./data/vulnwhisperer/:/opt/VulnWhisperer/data
      # - ./resources/elk6/vulnwhisperer.ini:/opt/VulnWhisperer/vulnwhisperer.ini
      - ./configs/test.ini:/opt/VulnWhisperer/vulnwhisperer.ini
      - ./tests/data/:/tests/data
    network_mode: host
networks:
  esnet:
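The test stack above is wired for offline runs: the vulnwhisperer entrypoint passes --mock and --mock_dir so scan data is replayed from ./tests/data instead of querying live scanner APIs, and the image is expected to be built locally as vulnwhisperer-local. Assuming that image exists, docker-compose -f docker-compose-test.yml up brings up the full Elasticsearch/Kibana/Logstash pipeline against the mocked data.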

docker-compose.v6.yml Normal file

@@ -0,0 +1,86 @@
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
    container_name: elasticsearch
    environment:
      - cluster.name=vulnwhisperer
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
      - xpack.security.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    mem_limit: 8g
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    #restart: always
    networks:
      esnet:
        aliases:
          - elasticsearch.local
  kibana:
    image: docker.elastic.co/kibana/kibana:6.6.0
    container_name: kibana
    environment:
      SERVER_NAME: kibana
      ELASTICSEARCH_URL: http://elasticsearch:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    networks:
      esnet:
        aliases:
          - kibana.local
  kibana-config:
    image: alpine
    container_name: kibana-config
    volumes:
      - ./resources/elk6/init_kibana.sh:/opt/init_kibana.sh
      - ./resources/elk6/kibana_APIonly.json:/opt/kibana_APIonly.json
      - ./resources/elk6/logstash-vulnwhisperer-template.json:/opt/index-template.json
    command: sh -c "apk add --no-cache curl bash && chmod +x /opt/init_kibana.sh && chmod +r /opt/kibana_APIonly.json && cd /opt/ && /bin/bash /opt/init_kibana.sh" # /opt/kibana_APIonly.json"
    networks:
      esnet:
        aliases:
          - kibana-config.local
  logstash:
    image: docker.elastic.co/logstash/logstash:6.6.0
    container_name: logstash
    volumes:
      - ./resources/elk6/pipeline/:/usr/share/logstash/pipeline
      - ./data/:/opt/VulnWhisperer/data
      #- ./resources/elk6/logstash.yml:/usr/share/logstash/config/logstash.yml
    environment:
      - xpack.monitoring.enabled=false
    depends_on:
      - elasticsearch
    networks:
      esnet:
        aliases:
          - logstash.local
  vulnwhisperer:
    image: hasecuritysolutions/vulnwhisperer:latest
    container_name: vulnwhisperer
    entrypoint: [
      "vuln_whisperer",
      "-c",
      "/opt/VulnWhisperer/vulnwhisperer.ini"
    ]
    volumes:
      - ./data/:/opt/VulnWhisperer/data
      - ./resources/elk6/vulnwhisperer.ini:/opt/VulnWhisperer/vulnwhisperer.ini
    network_mode: host
volumes:
  esdata1:
    driver: local
networks:
  esnet:
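Compared with the test file, this stack pulls the published hasecuritysolutions/vulnwhisperer:latest image, drops the mock flags, and keeps Elasticsearch data in the named volume esdata1 (driver: local) rather than a bind-mounted ./data directory, so index data survives docker-compose down as long as the volume itself is not removed.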


@@ -1,40 +0,0 @@
version: '2'
services:
  vulnwhisp_es1:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.2
    container_name: vulnwhisp_es1
    environment:
      - cluster.name=vulnwhisperer
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 19200:9200
    networks:
      - esnet
  vulnwhisp_ks1:
    image: docker.elastic.co/kibana/kibana:5.6.2
    environment:
      SERVER_NAME: vulnwhisp_ks1
      ELASTICSEARCH_URL: http://vulnwhisp_es1:9200
    ports:
      - 15601:5601
    networks:
      - esnet
  vulnwhisp_ls1:
    image: docker.elastic.co/logstash/logstash:5.6.2
    networks:
      - esnet
volumes:
  esdata1:
    driver: local
networks:
  esnet:

Binary files not shown (six images added: 18 KiB, 81 KiB, 449 KiB, 20 KiB, 185 KiB, 273 KiB).


@@ -1,244 +0,0 @@
{
  "order": 0,
  "template": "logstash-nessus-*",
  "settings": {
    "index": {
      "routing": {
        "allocation": {
          "total_shards_per_node": "2"
        }
      },
      "mapping": {
        "total_fields": {
          "limit": "3000"
        }
      },
      "refresh_interval": "5s",
      "number_of_shards": "1",
      "number_of_replicas": "1"
    }
  },
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "message_field": {
            "mapping": {
              "fielddata": {
                "format": "disabled"
              },
              "index": "analyzed",
              "omit_norms": true,
              "type": "string"
            },
            "match_mapping_type": "string",
            "match": "message"
          }
        },
        {
          "string_fields": {
            "mapping": {
              "fielddata": {
                "format": "disabled"
              },
              "index": "analyzed",
              "omit_norms": true,
              "type": "string",
              "fields": {
                "raw": {
                  "ignore_above": 256,
                  "index": "not_analyzed",
                  "type": "string",
                  "doc_values": true
                }
              }
            },
            "match_mapping_type": "string",
            "match": "*"
          }
        },
        {
          "ip_address_fields": {
            "mapping": {
              "type": "ip"
            },
            "match": "*_ip"
          }
        },
        {
          "ipv6_address_fields": {
            "mapping": {
              "index": "not_analyzed",
              "type": "string"
            },
            "match": "*_ipv6"
          }
        },
        {
          "float_fields": {
            "mapping": {
              "type": "float",
              "doc_values": true
            },
            "match_mapping_type": "float",
            "match": "*"
          }
        },
        {
          "double_fields": {
            "mapping": {
              "type": "double",
              "doc_values": true
            },
            "match_mapping_type": "double",
            "match": "*"
          }
        },
        {
          "byte_fields": {
            "mapping": {
              "type": "byte",
              "doc_values": true
            },
            "match_mapping_type": "byte",
            "match": "*"
          }
        },
        {
          "short_fields": {
            "mapping": {
              "type": "short",
              "doc_values": true
            },
            "match_mapping_type": "short",
            "match": "*"
          }
        },
        {
          "integer_fields": {
            "mapping": {
              "type": "integer",
              "doc_values": true
            },
            "match_mapping_type": "integer",
            "match": "*"
          }
        },
        {
          "long_fields": {
            "mapping": {
              "type": "long",
              "doc_values": true
            },
            "match_mapping_type": "long",
            "match": "*"
          }
        },
        {
          "date_fields": {
            "mapping": {
              "type": "date",
              "doc_values": true
            },
            "match_mapping_type": "date",
            "match": "*"
          }
        },
        {
          "geo_point_fields": {
            "mapping": {
              "type": "geo_point",
              "doc_values": true
            },
            "match_mapping_type": "geo_point",
            "match": "*"
          }
        }
      ],
      "_all": {
        "omit_norms": true,
        "enabled": true
      },
      "properties": {
        "plugin_id": {
          "type": "integer"
        },
        "last_updated": {
          "type": "date",
          "doc_values": true
        },
        "geoip": {
          "dynamic": true,
          "type": "object",
          "properties": {
            "ip": {
              "type": "ip",
              "doc_values": true
            },
            "latitude": {
              "type": "float",
              "doc_values": true
            },
            "location": {
              "type": "geo_point",
              "doc_values": true
            },
            "longitude": {
              "type": "float",
              "doc_values": true
            }
          }
        },
        "risk_score": {
          "type": "float"
        },
        "source": {
          "index": "not_analyzed",
          "type": "string"
        },
        "synopsis": {
          "index": "not_analyzed",
          "type": "string"
        },
        "see_also": {
          "index": "not_analyzed",
          "type": "string"
        },
        "@timestamp": {
          "type": "date",
          "doc_values": true
        },
        "cve": {
          "index": "not_analyzed",
          "type": "string"
        },
        "solution": {
          "index": "not_analyzed",
          "type": "string"
        },
        "port": {
          "type": "integer"
        },
        "host": {
          "type": "ip"
        },
        "@version": {
          "index": "not_analyzed",
          "type": "string",
          "doc_values": true
        },
        "risk": {
          "index": "not_analyzed",
          "type": "string"
        },
        "assign_ip": {
          "type": "ip"
        },
        "cvss": {
          "type": "float"
        }
      }
    }
  },
  "aliases": {}
}
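This index template is normally loaded by the init script mounted in the compose files above; a manual equivalent, assuming an Elasticsearch node reachable on localhost:9200 as configured there and a local copy of the file (the filename below is hypothetical), would be:

import json
import requests

# Register the template above under the name 'logstash-nessus' so it applies
# to new logstash-nessus-* indices. The URL and template name are assumptions
# based on the compose files and the "template" pattern; adjust for your cluster.
with open('logstash-nessus-template.json') as f:
    template = json.load(f)
resp = requests.put('http://localhost:9200/_template/logstash-nessus', json=template)
resp.raise_for_status()
print(resp.json())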


@@ -1,548 +0,0 @@
[
{
"_id": "7e7fbc90-3df2-11e7-a44e-c79ca8efb780",
"_type": "visualization",
"_source": {
"title": "Nessus-PluginID",
"visState": "{\"title\":\"Nessus-PluginID\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"plugin_id.raw\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "c786bc20-3df4-11e7-a3dd-33f478b7be91",
"_type": "visualization",
"_source": {
"title": "Nessus-RiskPie",
"visState": "{\"aggs\":[{\"enabled\":true,\"id\":\"1\",\"params\":{},\"schema\":\"metric\",\"type\":\"count\"},{\"enabled\":true,\"id\":\"2\",\"params\":{\"field\":\"risk.raw\",\"order\":\"desc\",\"orderBy\":\"1\",\"size\":50},\"schema\":\"segment\",\"type\":\"terms\"},{\"enabled\":true,\"id\":\"3\",\"params\":{\"field\":\"name.raw\",\"order\":\"desc\",\"orderBy\":\"1\",\"size\":50},\"schema\":\"segment\",\"type\":\"terms\"},{\"enabled\":true,\"id\":\"4\",\"params\":{\"field\":\"synopsis.raw\",\"order\":\"desc\",\"orderBy\":\"1\",\"size\":50},\"schema\":\"segment\",\"type\":\"terms\"},{\"enabled\":true,\"id\":\"5\",\"params\":{\"field\":\"host\",\"order\":\"desc\",\"orderBy\":\"1\",\"size\":50},\"schema\":\"segment\",\"type\":\"terms\"}],\"listeners\":{},\"params\":{\"addLegend\":true,\"addTooltip\":true,\"isDonut\":true,\"legendPosition\":\"right\"},\"title\":\"Nessus-RiskPie\",\"type\":\"pie\"}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"!(None)\"}},\"filter\":[]}"
}
}
},
{
"_id": "5a3c0340-3eb3-11e7-a192-93f36fbd9d05",
"_type": "visualization",
"_source": {
"title": "Nessus-CVSSHeatmap",
"visState": "{\"title\":\"Nessus-CVSSHeatmap\",\"type\":\"heatmap\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"enableHover\":false,\"legendPosition\":\"right\",\"times\":[],\"colorsNumber\":4,\"colorSchema\":\"Yellow to Red\",\"setColorRange\":false,\"colorsRange\":[],\"invertColors\":false,\"percentageMode\":false,\"valueAxes\":[{\"show\":false,\"id\":\"ValueAxis-1\",\"type\":\"value\",\"scale\":{\"type\":\"linear\",\"defaultYExtents\":false},\"labels\":{\"show\":false,\"rotate\":0,\"color\":\"#555\"}}]},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"host\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\"}},{\"id\":\"3\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"cvss\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"_term\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"defaultColors\":{\"0 - 3500\":\"rgb(255,255,204)\",\"3500 - 7000\":\"rgb(254,217,118)\",\"7000 - 10500\":\"rgb(253,141,60)\",\"10500 - 14000\":\"rgb(227,27,28)\"}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "60418690-3eb1-11e7-90cb-918f9cb01e3d",
"_type": "visualization",
"_source": {
"title": "Nessus-TopPorts",
"visState": "{\"title\":\"Nessus-TopPorts\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"port\",\"size\":20,\"order\":\"desc\",\"orderBy\":\"1\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "983687e0-3df2-11e7-a44e-c79ca8efb780",
"_type": "visualization",
"_source": {
"title": "Nessus-Protocol",
"visState": "{\"title\":\"Nessus-Protocol\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"protocol.raw\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Protocol\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "995e2280-3df3-11e7-a44e-c79ca8efb780",
"_type": "visualization",
"_source": {
"title": "Nessus-Host",
"visState": "{\"title\":\"Nessus-Host\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"host\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Host IP\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "87338510-3df2-11e7-a44e-c79ca8efb780",
"_type": "visualization",
"_source": {
"title": "Nessus-PluginOutput",
"visState": "{\"title\":\"Nessus-PluginOutput\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"plugin_output.raw\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Plugin Output\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "068d4bc0-3df3-11e7-a44e-c79ca8efb780",
"_type": "visualization",
"_source": {
"title": "Nessus-SeeAlso",
"visState": "{\"title\":\"Nessus-SeeAlso\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"see_also.raw\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"See Also\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "1de9e550-3df1-11e7-a44e-c79ca8efb780",
"_type": "visualization",
"_source": {
"title": "Nessus-Description",
"visState": "{\"title\":\"Nessus-Description\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"description.raw\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Description\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "1e59fa50-3df3-11e7-a44e-c79ca8efb780",
"_type": "visualization",
"_source": {
"title": "Nessus-Synopsis",
"visState": "{\"title\":\"Nessus-Synopsis\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"synopsis.raw\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Synopsis\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "13c7d4e0-3df3-11e7-a44e-c79ca8efb780",
"_type": "visualization",
"_source": {
"title": "Nessus-Solution",
"visState": "{\"title\":\"Nessus-Solution\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"solution.raw\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Solution\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "69765d50-3f5e-11e7-98cc-d924fd28047d",
"_type": "visualization",
"_source": {
"title": "Nessus-CVE",
"visState": "{\"title\":\"Nessus-CVE\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"cve.raw\",\"size\":10,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"CVE ID\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"query\":\"!(nan)\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "852816e0-3eb1-11e7-90cb-918f9cb01e3d",
"_type": "visualization",
"_source": {
"title": "Nessus-CVSS",
"visState": "{\"title\":\"Nessus-CVSS\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"cvss\",\"size\":20,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"CVSS Score\"}},{\"id\":\"4\",\"enabled\":true,\"type\":\"cardinality\",\"schema\":\"metric\",\"params\":{\"field\":\"host\",\"customLabel\":\"# of Hosts\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":0,\"direction\":\"desc\"}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "099a3820-3f68-11e7-a6bd-e764d950e506",
"_type": "visualization",
"_source": {
"title": "Timelion Nessus Example",
"visState": "{\"type\":\"timelion\",\"title\":\"Timelion Nessus Example\",\"params\":{\"expression\":\".es(index=logstash-nessus-*,q=risk:high).label(\\\"Current High Risk\\\"),.es(index=logstash-nessus-*,q=risk:high,offset=-1y).label(\\\"Last 1 Year High Risk\\\"),.es(index=logstash-nessus-*,q=risk:medium).label(\\\"Current Medium Risk\\\"),.es(index=logstash-nessus-*,q=risk:medium,offset=-1y).label(\\\"Last 1 Year Medium Risk\\\")\",\"interval\":\"auto\"}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{}"
}
}
},
{
"_id": "297df800-3f7e-11e7-bd24-6903e3283192",
"_type": "visualization",
"_source": {
"title": "Nessus - Plugin Name",
"visState": "{\"title\":\"Nessus - Plugin Name\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"plugin_name.raw\",\"size\":10,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Plugin Name\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "de1a5f40-3f85-11e7-97f9-3777d794626d",
"_type": "visualization",
"_source": {
"title": "Nessus - ScanName",
"visState": "{\"title\":\"Nessus - ScanName\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"scan_name.raw\",\"size\":20,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Scan Name\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "ecbb99c0-3f84-11e7-97f9-3777d794626d",
"_type": "visualization",
"_source": {
"title": "Nessus - Total",
"visState": "{\"title\":\"Nessus - Total\",\"type\":\"metric\",\"params\":{\"handleNoResults\":true,\"fontSize\":60},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"Total\"}}],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "471a3580-3f6b-11e7-88e7-df1abe6547fb",
"_type": "visualization",
"_source": {
"title": "Nessus - Vulnerabilities by Tag",
"visState": "{\"title\":\"Nessus - Vulnerabilities by Tag\",\"type\":\"table\",\"params\":{\"perPage\":3,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"3\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"bucket\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"tags:has_hipaa_data\",\"analyze_wildcard\":true}}},\"label\":\"Systems with HIPAA data\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"tags:pci_asset\",\"analyze_wildcard\":true}}},\"label\":\"PCI Systems\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"tags:hipaa_asset\",\"analyze_wildcard\":true}}},\"label\":\"HIPAA Systems\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "35b6d320-3f7f-11e7-bd24-6903e3283192",
"_type": "visualization",
"_source": {
"title": "Nessus - Residual Risk",
"visState": "{\"title\":\"Nessus - Residual Risk\",\"type\":\"table\",\"params\":{\"perPage\":15,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":0,\"direction\":\"desc\"},\"showTotal\":false,\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"risk_score\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Risk Number\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":0,\"direction\":\"desc\"}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "a9225930-3df2-11e7-a44e-c79ca8efb780",
"_type": "visualization",
"_source": {
"title": "Nessus-Risk",
"visState": "{\"title\":\"Nessus-Risk\",\"type\":\"table\",\"params\":{\"perPage\":4,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"risk\",\"size\":10,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Risk Severity\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "2f979030-44b9-11e7-a818-f5f80dfc3590",
"_type": "visualization",
"_source": {
"title": "Nessus - ScanBarChart",
"visState": "{\"aggs\":[{\"enabled\":true,\"id\":\"1\",\"params\":{},\"schema\":\"metric\",\"type\":\"count\"},{\"enabled\":true,\"id\":\"2\",\"params\":{\"customLabel\":\"Scan Name\",\"field\":\"scan_name.raw\",\"order\":\"desc\",\"orderBy\":\"1\",\"size\":10},\"schema\":\"segment\",\"type\":\"terms\"}],\"listeners\":{},\"params\":{\"addLegend\":true,\"addTimeMarker\":false,\"addTooltip\":true,\"defaultYExtents\":false,\"legendPosition\":\"right\",\"mode\":\"stacked\",\"scale\":\"linear\",\"setYExtents\":false,\"times\":[]},\"title\":\"Nessus - ScanBarChart\",\"type\":\"histogram\"}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "67d432e0-44ec-11e7-a05f-d9719b331a27",
"_type": "visualization",
"_source": {
"title": "Nessus - TL-Critical Risk",
"visState": "{\"title\":\"Nessus - TL-Critical Risk\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-nessus-*',q='(risk_score:>=9 AND risk_score:<=10)').label(\\\"Original\\\"),.es(index='logstash-nessus-*',q='(risk_score:>=9 AND risk_score:<=10)',offset=-1w).label(\\\"One week offset\\\"),.es(index='logstash-nessus-*',q='(risk_score:>=9 AND risk_score:<=10)').subtract(.es(index='logstash-nessus-*',q='(risk_score:>=9 AND risk_score:<=10)',offset=-1w)).label(\\\"Difference\\\").lines(steps=3,fill=2,width=1)\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "a91b9fe0-44ec-11e7-a05f-d9719b331a27",
"_type": "visualization",
"_source": {
"title": "Nessus - TL-Medium Risk",
"visState": "{\"title\":\"Nessus - TL-Medium Risk\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-nessus-*',q='(risk_score:>=4 AND risk_score:<7)').label(\\\"Original\\\"),.es(index='logstash-nessus-*',q='(risk_score:>=4 AND risk_score:<7)',offset=-1w).label(\\\"One week offset\\\"),.es(index='logstash-nessus-*',q='(risk_score:>=4 AND risk_score:<7)').subtract(.es(index='logstash-nessus-*',q='(risk_score:>=4 AND risk_score:<7)',offset=-1w)).label(\\\"Difference\\\").lines(steps=3,fill=2,width=1)\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "8d9592d0-44ec-11e7-a05f-d9719b331a27",
"_type": "visualization",
"_source": {
"title": "Nessus - TL-High Risk",
"visState": "{\"title\":\"Nessus - TL-High Risk\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-nessus-*',q='(risk_score:>=7 AND risk_score:<9)').label(\\\"Original\\\"),.es(index='logstash-nessus-*',q='(risk_score:>=7 AND risk_score:<9)',offset=-1w).label(\\\"One week offset\\\"),.es(index='logstash-nessus-*',q='(risk_score:>=7 AND risk_score:<9)').subtract(.es(index='logstash-nessus-*',q='(risk_score:>=7 AND risk_score:<9)',offset=-1w)).label(\\\"Difference\\\").lines(steps=3,fill=2,width=1)\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "a2d66660-44ec-11e7-a05f-d9719b331a27",
"_type": "visualization",
"_source": {
"title": "Nessus - TL-Low Risk",
"visState": "{\"title\":\"Nessus - TL-Low Risk\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-nessus-*',q='(risk_score:>0 AND risk_score:<4)').label(\\\"Original\\\"),.es(index='logstash-nessus-*',q='(risk_score:>0 AND risk_score:<4)',offset=-1w).label(\\\"One week offset\\\"),.es(index='logstash-nessus-*',q='(risk_score:>0 AND risk_score:<4)').subtract(.es(index='logstash-nessus-*',q='(risk_score:>0 AND risk_score:<4)',offset=-1w)).label(\\\"Difference\\\").lines(steps=3,fill=2,width=1)\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "fb6eb020-49ab-11e7-8f8c-57ad64ec48a6",
"_type": "visualization",
"_source": {
"title": "Nessus - Critical Risk Score for Tagged Assets",
"visState": "{\"title\":\"Nessus - Critical Risk Score for Tagged Assets\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index=logstash-nessus-*,q='risk_score:>9 AND tags:hipaa_asset').label(\\\"HIPAA Assets\\\"),.es(index=logstash-nessus-*,q='risk_score:>9 AND tags:pci_asset').label(\\\"PCI Systems\\\"),.es(index=logstash-nessus-*,q='risk_score:>9 AND tags:has_hipaa_data').label(\\\"Has HIPAA Data\\\")\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "80158c90-57c1-11e7-b484-a970fc9d150a",
"_type": "visualization",
"_source": {
"title": "Nessus - HIPAA TL",
"visState": "{\"type\":\"timelion\",\"title\":\"Nessus - HIPAA TL\",\"params\":{\"expression\":\".es(index=logstash-nessus-*,q='risk_score:>9 AND tags:pci_asset').label(\\\"PCI Assets\\\"),.es(index=logstash-nessus-*,q='risk_score:>9 AND tags:has_hipaa_data').label(\\\"Has HIPAA Data\\\"),.es(index=logstash-nessus-*,q='risk_score:>9 AND tags:hipaa_asset').label(\\\"HIPAA Assets\\\")\",\"interval\":\"auto\"}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{}"
}
}
},
{
"_id": "a6508640-897a-11e7-bbc0-33592ce0be1e",
"_type": "visualization",
"_source": {
"title": "Nessus - Critical Assets Aggregated",
"visState": "{\"title\":\"Nessus - Critical Assets Aggregated\",\"type\":\"heatmap\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"enableHover\":true,\"legendPosition\":\"right\",\"times\":[],\"colorsNumber\":4,\"colorSchema\":\"Green to Red\",\"setColorRange\":true,\"colorsRange\":[{\"from\":0,\"to\":3},{\"from\":3,\"to\":7},{\"from\":7,\"to\":9},{\"from\":9,\"to\":11}],\"invertColors\":false,\"percentageMode\":false,\"valueAxes\":[{\"show\":false,\"id\":\"ValueAxis-1\",\"type\":\"value\",\"scale\":{\"type\":\"linear\",\"defaultYExtents\":false},\"labels\":{\"show\":true,\"rotate\":0,\"color\":\"white\"}}]},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"max\",\"schema\":\"metric\",\"params\":{\"field\":\"risk_score\",\"customLabel\":\"Residual Risk Score\"}},{\"id\":\"3\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{},\"customLabel\":\"Date\"}},{\"id\":\"4\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"host\",\"size\":10,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Critical Asset IP\"}},{\"id\":\"5\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"split\",\"params\":{\"field\":\"plugin_name.raw\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\",\"row\":true}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"colors\":{\"0 - 3\":\"#7EB26D\",\"3 - 7\":\"#EAB839\",\"7 - 9\":\"#EF843C\",\"8 - 10\":\"#BF1B00\",\"9 - 11\":\"#BF1B00\"},\"defaultColors\":{\"0 - 3\":\"rgb(0,104,55)\",\"3 - 7\":\"rgb(135,203,103)\",\"7 - 9\":\"rgb(255,255,190)\",\"9 - 11\":\"rgb(249,142,82)\"},\"legendOpen\":false}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[{\"$state\":{\"store\":\"appState\"},\"meta\":{\"alias\":\"Critical Asset\",\"disabled\":false,\"index\":\"logstash-nessus-*\",\"key\":\"tags\",\"negate\":false,\"type\":\"phrase\",\"value\":\"critical_asset\"},\"query\":{\"match\":{\"tags\":{\"query\":\"critical_asset\",\"type\":\"phrase\"}}}}]}"
}
}
},
{
"_id": "465c5820-8977-11e7-857e-e1d56b17746d",
"_type": "visualization",
"_source": {
"title": "Nessus - Critical Assets",
"visState": "{\"title\":\"Nessus - Critical Assets\",\"type\":\"heatmap\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"enableHover\":true,\"legendPosition\":\"right\",\"times\":[],\"colorsNumber\":4,\"colorSchema\":\"Green to Red\",\"setColorRange\":true,\"colorsRange\":[{\"from\":0,\"to\":3},{\"from\":3,\"to\":7},{\"from\":7,\"to\":9},{\"from\":9,\"to\":11}],\"invertColors\":false,\"percentageMode\":false,\"valueAxes\":[{\"show\":false,\"id\":\"ValueAxis-1\",\"type\":\"value\",\"scale\":{\"type\":\"linear\",\"defaultYExtents\":false},\"labels\":{\"show\":true,\"rotate\":0,\"color\":\"white\"}}]},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"max\",\"schema\":\"metric\",\"params\":{\"field\":\"risk_score\",\"customLabel\":\"Residual Risk Score\"}},{\"id\":\"2\",\"enabled\":false,\"type\":\"terms\",\"schema\":\"split\",\"params\":{\"field\":\"host\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\",\"row\":true}},{\"id\":\"3\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{},\"customLabel\":\"Date\"}},{\"id\":\"4\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"host\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Critical Asset IP\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"defaultColors\":{\"0 - 3\":\"rgb(0,104,55)\",\"3 - 7\":\"rgb(135,203,103)\",\"7 - 9\":\"rgb(255,255,190)\",\"9 - 11\":\"rgb(249,142,82)\"},\"colors\":{\"8 - 10\":\"#BF1B00\",\"9 - 11\":\"#BF1B00\",\"7 - 9\":\"#EF843C\",\"3 - 7\":\"#EAB839\",\"0 - 3\":\"#7EB26D\"},\"legendOpen\":false}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[{\"meta\":{\"index\":\"logstash-nessus-*\",\"negate\":false,\"disabled\":false,\"alias\":\"Critical Asset\",\"type\":\"phrase\",\"key\":\"tags\",\"value\":\"critical_asset\"},\"query\":{\"match\":{\"tags\":{\"query\":\"critical_asset\",\"type\":\"phrase\"}}},\"$state\":{\"store\":\"appState\"}}]}"
}
}
},
{
"_id": "56f0f5f0-3ebe-11e7-a192-93f36fbd9d05",
"_type": "visualization",
"_source": {
"title": "Nessus-RiskOverTime",
"visState": "{\"aggs\":[{\"enabled\":true,\"id\":\"1\",\"params\":{},\"schema\":\"metric\",\"type\":\"count\"},{\"enabled\":true,\"id\":\"2\",\"params\":{\"customInterval\":\"2h\",\"extended_bounds\":{},\"field\":\"@timestamp\",\"interval\":\"auto\",\"min_doc_count\":1},\"schema\":\"segment\",\"type\":\"date_histogram\"},{\"enabled\":true,\"id\":\"3\",\"params\":{\"field\":\"risk\",\"order\":\"desc\",\"orderBy\":\"1\",\"size\":5},\"schema\":\"group\",\"type\":\"terms\"}],\"listeners\":{},\"params\":{\"addLegend\":true,\"addTimeMarker\":false,\"addTooltip\":true,\"categoryAxes\":[{\"id\":\"CategoryAxis-1\",\"labels\":{\"show\":true,\"truncate\":100},\"position\":\"bottom\",\"scale\":{\"type\":\"linear\"},\"show\":true,\"style\":{},\"title\":{},\"type\":\"category\"}],\"defaultYExtents\":false,\"drawLinesBetweenPoints\":true,\"grid\":{\"categoryLines\":false,\"style\":{\"color\":\"#eee\"},\"valueAxis\":\"ValueAxis-1\"},\"interpolate\":\"linear\",\"legendPosition\":\"right\",\"orderBucketsBySum\":false,\"radiusRatio\":9,\"scale\":\"linear\",\"seriesParams\":[{\"data\":{\"id\":\"1\",\"label\":\"Count\"},\"drawLinesBetweenPoints\":true,\"interpolate\":\"linear\",\"mode\":\"normal\",\"show\":\"true\",\"showCircles\":true,\"type\":\"line\",\"valueAxis\":\"ValueAxis-1\"}],\"setYExtents\":false,\"showCircles\":true,\"times\":[],\"valueAxes\":[{\"id\":\"ValueAxis-1\",\"labels\":{\"filter\":false,\"rotate\":0,\"show\":true,\"truncate\":100},\"name\":\"LeftAxis-1\",\"position\":\"left\",\"scale\":{\"mode\":\"normal\",\"type\":\"linear\"},\"show\":true,\"style\":{},\"title\":{\"text\":\"Count\"},\"type\":\"value\"}]},\"title\":\"Nessus-RiskOverTime\",\"type\":\"line\"}",
"uiStateJSON": "{\"vis\":{\"colors\":{\"Critical\":\"#E24D42\",\"High\":\"#E0752D\",\"Low\":\"#7EB26D\",\"Medium\":\"#F2C96D\"}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "479deab0-8a39-11e7-a58a-9bfcb3761a3d",
"_type": "visualization",
"_source": {
"title": "Nessus - TL - TaggedAssetsPluginNames",
"visState": "{\"title\":\"Nessus - TL - TaggedAssetsPluginNames\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-nessus-*', q='tags:critical_asset OR tags:hipaa_asset OR tags:pci_asset', split=\\\"plugin_name.raw:10\\\").bars(width=4).label(regex=\\\".*:(.+)>.*\\\",label=\\\"$1\\\")\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "84f5c370-8a38-11e7-a58a-9bfcb3761a3d",
"_type": "visualization",
"_source": {
"title": "Nessus - TL - CriticalAssetsPluginNames",
"visState": "{\"title\":\"Nessus - TL - CriticalAssetsPluginNames\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-nessus-*', q='tags:critical_asset', split=\\\"plugin_name.raw:10\\\").bars(width=4).label(regex=\\\".*:(.+)>.*\\\",label=\\\"$1\\\")\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "307cdae0-8a38-11e7-a58a-9bfcb3761a3d",
"_type": "visualization",
"_source": {
"title": "Nessus - TL - PluginNames",
"visState": "{\"title\":\"Nessus - TL - PluginNames\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-nessus-*', split=\\\"plugin_name.raw:25\\\").bars(width=4).label(regex=\\\".*:(.+)>.*\\\",label=\\\"$1\\\")\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "d048c220-80b3-11e7-8790-73b60225f736",
"_type": "visualization",
"_source": {
"title": "Nessus - Risk: High",
"visState": "{\"title\":\"Nessus - Risk: High\",\"type\":\"goal\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"type\":\"gauge\",\"gauge\":{\"verticalSplit\":false,\"autoExtend\":false,\"percentageMode\":false,\"gaugeType\":\"Metric\",\"gaugeStyle\":\"Full\",\"backStyle\":\"Full\",\"orientation\":\"vertical\",\"useRanges\":false,\"colorSchema\":\"Green to Red\",\"gaugeColorMode\":\"Background\",\"colorsRange\":[{\"from\":0,\"to\":1000}],\"invertColors\":false,\"labels\":{\"show\":false,\"color\":\"black\"},\"scale\":{\"show\":true,\"labels\":false,\"color\":\"#333\",\"width\":2},\"type\":\"simple\",\"style\":{\"bgFill\":\"white\",\"bgColor\":true,\"labelColor\":false,\"subText\":\"\",\"fontSize\":\"34\"},\"extendRange\":true}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"High Risk\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk:High\",\"analyze_wildcard\":true}}},\"label\":\"\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"defaultColors\":{\"0 - 1000\":\"rgb(0,104,55)\"},\"legendOpen\":true,\"colors\":{\"0 - 10000\":\"#EF843C\",\"0 - 1000\":\"#E0752D\"}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "c1361da0-80b3-11e7-8790-73b60225f736",
"_type": "visualization",
"_source": {
"title": "Nessus - Risk: Medium",
"visState": "{\"title\":\"Nessus - Risk: Medium\",\"type\":\"goal\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"type\":\"gauge\",\"gauge\":{\"verticalSplit\":false,\"autoExtend\":false,\"percentageMode\":false,\"gaugeType\":\"Metric\",\"gaugeStyle\":\"Full\",\"backStyle\":\"Full\",\"orientation\":\"vertical\",\"useRanges\":false,\"colorSchema\":\"Green to Red\",\"gaugeColorMode\":\"Background\",\"colorsRange\":[{\"from\":0,\"to\":10000}],\"invertColors\":false,\"labels\":{\"show\":false,\"color\":\"black\"},\"scale\":{\"show\":true,\"labels\":false,\"color\":\"#333\",\"width\":2},\"type\":\"simple\",\"style\":{\"bgFill\":\"white\",\"bgColor\":true,\"labelColor\":false,\"subText\":\"\",\"fontSize\":\"34\"},\"extendRange\":false}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"Medium Risk\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk:Medium\",\"analyze_wildcard\":true}}},\"label\":\"Medium Risk\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":true,\"colors\":{\"0 - 10000\":\"#EAB839\"}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "e46ff7f0-897d-11e7-934b-67cec0a7da65",
"_type": "visualization",
"_source": {
"title": "Nessus - Risk: Low",
"visState": "{\"title\":\"Nessus - Risk: Low\",\"type\":\"goal\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"type\":\"gauge\",\"gauge\":{\"verticalSplit\":false,\"autoExtend\":false,\"percentageMode\":false,\"gaugeType\":\"Metric\",\"gaugeStyle\":\"Full\",\"backStyle\":\"Full\",\"orientation\":\"vertical\",\"useRanges\":false,\"colorSchema\":\"Green to Red\",\"gaugeColorMode\":\"Background\",\"colorsRange\":[{\"from\":0,\"to\":10000}],\"invertColors\":false,\"labels\":{\"show\":false,\"color\":\"black\"},\"scale\":{\"show\":true,\"labels\":false,\"color\":\"#333\",\"width\":2},\"type\":\"simple\",\"style\":{\"bgFill\":\"white\",\"bgColor\":true,\"labelColor\":false,\"subText\":\"\",\"fontSize\":\"34\"},\"extendRange\":false}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"Low Risk\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk:Low\",\"analyze_wildcard\":true}}},\"label\":\"Low Risk\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":true,\"colors\":{\"0 - 10000\":\"#629E51\"}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "db55bce0-80b3-11e7-8790-73b60225f736",
"_type": "visualization",
"_source": {
"title": "Nessus - Risk: Critical",
"visState": "{\"title\":\"Nessus - Risk: Critical\",\"type\":\"goal\",\"params\":{\"addLegend\":true,\"addTooltip\":true,\"gauge\":{\"autoExtend\":false,\"backStyle\":\"Full\",\"colorSchema\":\"Green to Red\",\"colorsRange\":[{\"from\":0,\"to\":10000}],\"gaugeColorMode\":\"Background\",\"gaugeStyle\":\"Full\",\"gaugeType\":\"Metric\",\"invertColors\":false,\"labels\":{\"color\":\"black\",\"show\":false},\"orientation\":\"vertical\",\"percentageMode\":false,\"scale\":{\"color\":\"#333\",\"labels\":false,\"show\":true,\"width\":2},\"style\":{\"bgColor\":true,\"bgFill\":\"white\",\"fontSize\":\"34\",\"labelColor\":false,\"subText\":\"Risk\"},\"type\":\"simple\",\"useRanges\":false,\"verticalSplit\":false},\"type\":\"gauge\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"Critical Risk\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk:Critical\",\"analyze_wildcard\":true}}},\"label\":\"Critical\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"colors\":{\"0 - 10000\":\"#BF1B00\"},\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "b2f2adb0-897f-11e7-a2d2-c57bca21b3aa",
"_type": "visualization",
"_source": {
"title": "Nessus - Risk: Total",
"visState": "{\"title\":\"Nessus - Risk: Total\",\"type\":\"goal\",\"params\":{\"addLegend\":true,\"addTooltip\":true,\"gauge\":{\"autoExtend\":false,\"backStyle\":\"Full\",\"colorSchema\":\"Green to Red\",\"colorsRange\":[{\"from\":0,\"to\":10000}],\"gaugeColorMode\":\"Background\",\"gaugeStyle\":\"Full\",\"gaugeType\":\"Metric\",\"invertColors\":false,\"labels\":{\"color\":\"black\",\"show\":false},\"orientation\":\"vertical\",\"percentageMode\":false,\"scale\":{\"color\":\"#333\",\"labels\":false,\"show\":true,\"width\":2},\"style\":{\"bgColor\":true,\"bgFill\":\"white\",\"fontSize\":\"34\",\"labelColor\":false,\"subText\":\"Risk\"},\"type\":\"simple\",\"useRanges\":false,\"verticalSplit\":false},\"type\":\"gauge\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"Total\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}}},\"label\":\"Critical\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"colors\":{\"0 - 10000\":\"#64B0C8\"},\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "5093c620-44e9-11e7-8014-ede06a7e69f8",
"_type": "visualization",
"_source": {
"title": "Nessus - Mitigation Readme",
"visState": "{\"title\":\"Nessus - Mitigation Readme\",\"type\":\"markdown\",\"params\":{\"markdown\":\"** Legend **\\n\\n* [Common Vulnerability Scoring System (CVSS)](https://nvd.nist.gov/vuln-metrics/cvss) is the NIST vulnerability scoring system\\n* Risk Number is residual risk score calculated from CVSS, which is adjusted to be specific to Heartland which accounts for services not in use such as Java and Flash\\n* Vulnerabilities by Tag are systems tagged with HIPAA and PCI identification.\\n\\n\\n** Workflow **\\n* Select 10.0 under Risk Number to identify Critical Vulnerabilities. \\n* For more information about a CVE, scroll down and click the CVE link.\\n* To filter by tags, use one of the following filters:\\n** tags:has_hipaa_data, tags:pci_asset, tags:hipaa_asset, tags:critical_asset**\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
}
]

View File

@ -1,139 +0,0 @@
# Author: Austin Taylor and Justin Henderson
# Email: email@austintaylor.io
# Last Update: 12/20/2017
# Version 0.3
# Description: Takes in Nessus reports from VulnWhisperer and pumps them into Logstash
input {
file {
path => "/opt/vulnwhisp/scans/**/*"
start_position => "beginning"
tags => "nessus"
type => "nessus"
}
}
filter {
if "nessus" in [tags]{
mutate {
gsub => [
"message", "\|\|\|", " ",
"message", "\t\t", " ",
"message", " ", " ",
"message", " ", " ",
"message", " ", " "
]
}
csv {
columns => ["plugin_id", "cve", "cvss", "risk", "host", "protocol", "port", "plugin_name", "synopsis", "description", "solution", "see_also", "plugin_output"]
separator => ","
source => "message"
}
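# The grok below pulls scan metadata out of the CSV filename; for example a
# file named "My_Scan_12_1_1503432000.csv" (hypothetical) yields
# scan_name=My_Scan, scan_id=12, history_id=1 and last_updated=1503432000,
# which the date filter further down promotes to @timestamp.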
grok {
match => { "path" => "(?<scan_name>[a-zA-Z0-9_.\-]+)_%{INT:scan_id}_%{INT:history_id}_%{INT:last_updated}.csv$" }
tag_on_failure => []
}
date {
match => [ "last_updated", "UNIX" ]
target => "@timestamp"
remove_field => ["last_updated"]
}
if [risk] == "None" {
mutate { add_field => { "risk_number" => 0 }}
}
if [risk] == "Low" {
mutate { add_field => { "risk_number" => 1 }}
}
if [risk] == "Medium" {
mutate { add_field => { "risk_number" => 2 }}
}
if [risk] == "High" {
mutate { add_field => { "risk_number" => 3 }}
}
if [risk] == "Critical" {
mutate { add_field => { "risk_number" => 4 }}
}
if [cve] == "nan" {
mutate { remove_field => [ "cve" ] }
}
if [see_also] == "nan" {
mutate { remove_field => [ "see_also" ] }
}
if [description] == "nan" {
mutate { remove_field => [ "description" ] }
}
if [plugin_output] == "nan" {
mutate { remove_field => [ "plugin_output" ] }
}
if [synopsis] == "nan" {
mutate { remove_field => [ "synopsis" ] }
}
mutate {
remove_field => [ "message" ]
add_field => { "risk_score" => "%{cvss}" }
}
mutate {
convert => { "risk_score" => "float" }
}
# Compensating controls - adjust risk_score
# Adobe and Java are not allowed to run in browser unless whitelisted
# Therefore, lower score by dividing by 3 (score is subjective to risk)
#Modify and uncomment when ready to use
#if [risk_score] != 0 {
# if [plugin_name] =~ "Adobe" and [risk_score] > 6 or [plugin_name] =~ "Java" and [risk_score] > 6 {
# ruby {
# code => "event.set('risk_score', event.get('risk_score') / 3)"
# }
# mutate {
# add_field => { "compensating_control" => "Adobe and Flash removed from browsers unless whitelisted site." }
# }
# }
#}
# Add tags for reporting based on assets or criticality
#if [host] == "192.168.0.1" or [host] == "192.168.0.50" or [host] =~ "^192\.168\.10\." or [host] =~ "^42.42.42." {
# mutate {
# add_tag => [ "critical_asset" ]
# }
#}
#if [host] =~ "^192\.168\.[45][0-9][0-9]\.1$" or [host] =~ "^192.168\.[50]\.[0-9]{1,2}\.1$"{
# mutate {
# add_tag => [ "has_hipaa_data" ]
# }
#}
#if [host] =~ "^192\.168\.[45][0-9][0-9]\." {
# mutate {
# add_tag => [ "hipaa_asset" ]
# }
#}
#if [host] =~ "^192\.168\.5\." {
# mutate {
# add_tag => [ "pci_asset" ]
# }
#}
#if [host] =~ "^10\.0\.50\." {
# mutate {
# add_tag => [ "web_servers" ]
# }
#}
}
}
output {
if "nessus" in [tags] or [type] == "nessus" {
#stdout { codec => rubydebug }
elasticsearch {
hosts => [ "localhost:9200" ]
index => "logstash-nessus-%{+YYYY.MM}"
}
}
}

View File

@ -1,14 +0,0 @@
# Author: Austin Taylor
# Email: email@austintaylor.io
# Last Update: 05/21/2017
# Creates logstash-nessus
output {
if "nessus" in [tags] or [type] == "nessus" {
#stdout { codec => rubydebug }
elasticsearch {
hosts => "localhost:9200"
index => "logstash-nessus-%{+YYYY.MM}"
}
}
}

View File

@ -1,6 +1,12 @@
pandas==0.20.3
setuptools==40.4.3
pytz==2017.2
Requests==2.20.0
lxml==4.6.5
future-fstrings
bs4
jira
bottle
coloredlogs
qualysapi==6.0.0
httpretty

View File

@ -0,0 +1,72 @@
version: '2'
services:
vulnwhisp-es1:
image: docker.elastic.co/elasticsearch/elasticsearch:5.6.2
container_name: vulnwhisp-es1
    environment:
      - cluster.name=vulnwhisperer
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=false
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
mem_limit: 8g
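    # ES heap is pinned to 512m via ES_JAVA_OPTS above; mem_limit only caps the
    # container as a whole, so the JVM will not grow to fill the full 8g.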
volumes:
- esdata1:/usr/share/elasticsearch/data
ports:
- 9200:9200
#restart: always
networks:
esnet:
aliases:
- vulnwhisp-es1.local
vulnwhisp-ks1:
image: docker.elastic.co/kibana/kibana:5.6.2
environment:
SERVER_NAME: vulnwhisp-ks1
ELASTICSEARCH_URL: http://vulnwhisp-es1:9200
ports:
- 5601:5601
depends_on:
- vulnwhisp-es1
networks:
esnet:
aliases:
- vulnwhisp-ks1.local
vulnwhisp-ls1:
image: docker.elastic.co/logstash/logstash:5.6.2
container_name: vulnwhisp-ls1
volumes:
- ./docker/1000_nessus_process_file.conf:/usr/share/logstash/pipeline/1000_nessus_process_file.conf
- ./docker/2000_qualys_web_scans.conf:/usr/share/logstash/pipeline/2000_qualys_web_scans.conf
- ./docker/3000_openvas.conf:/usr/share/logstash/pipeline/3000_openvas.conf
- ./docker/4000_jira.conf:/usr/share/logstash/pipeline/4000_jira.conf
- ./docker/logstash.yml:/usr/share/logstash/config/logstash.yml
- ./data/:/opt/VulnWhisperer/data
environment:
- xpack.monitoring.enabled=false
depends_on:
- vulnwhisp-es1
networks:
esnet:
aliases:
- vulnwhisp-ls1.local
vulnwhisp-vulnwhisperer:
image: hasecuritysolutions/vulnwhisperer:latest
container_name: vulnwhisp-vulnwhisperer
volumes:
- ./data/:/opt/VulnWhisperer/data
- ./configs/frameworks_example.ini:/opt/VulnWhisperer/frameworks_example.ini
network_mode: host
volumes:
esdata1:
driver: local
networks:
esnet:
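# A minimal way to bring the stack up from the repo root (assuming the docker/,
# configs/ and data/ paths mounted above exist):
#   docker-compose up -d
# Kibana is then published on localhost:5601 and Elasticsearch on localhost:9200.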

View File

@ -0,0 +1,220 @@
# Author: Austin Taylor and Justin Henderson
# Email: email@austintaylor.io
# Last Update: 12/20/2017
# Version 0.3
# Description: Takes in Nessus reports from VulnWhisperer and pumps them into Logstash
input {
file {
path => "/opt/VulnWhisperer/data/nessus/**/*"
start_position => "beginning"
tags => "nessus"
type => "nessus"
}
file {
path => "/opt/VulnWhisperer/data/tenable/*.csv"
start_position => "beginning"
tags => "tenable"
type => "tenable"
}
}
filter {
if "nessus" in [tags] or "tenable" in [tags] {
# Drop the CSV header row
if [message] =~ "^Plugin ID" { drop {} }
csv {
# columns => ["plugin_id", "cve", "cvss", "risk", "asset", "protocol", "port", "plugin_name", "synopsis", "description", "solution", "see_also", "plugin_output"]
columns => ["plugin_id", "cve", "cvss", "risk", "asset", "protocol", "port", "plugin_name", "synopsis", "description", "solution", "see_also", "plugin_output", "asset_uuid", "vulnerability_state", "ip", "fqdn", "netbios", "operating_system", "mac_address", "plugin_family", "cvss_base", "cvss_temporal", "cvss_temporal_vector", "cvss_vector", "cvss3_base", "cvss3_temporal", "cvss3_temporal_vector", "cvss_vector", "system_type", "host_start", "host_end"]
separator => ","
source => "message"
}
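# The ruby block below undoes the CSV export's backslash-escaping: the
# two-character sequences "\n" (92.chr + 'n') and "\r" (92.chr + 'r') are
# rewritten into real newline (10.chr) and carriage-return (13.chr) characters.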
ruby {
code => "if event.get('description')
event.set('description', event.get('description').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('synopsis')
event.set('synopsis', event.get('synopsis').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('solution')
event.set('solution', event.get('solution').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('see_also')
event.set('see_also', event.get('see_also').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('plugin_output')
event.set('plugin_output', event.get('plugin_output').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end"
}
#If using filebeats as your source, you will need to replace the "path" field to "source"
grok {
match => { "path" => "(?<scan_name>[a-zA-Z0-9_.\-]+)_%{INT:scan_id}_%{INT:history_id}_%{INT:last_updated}.csv$" }
tag_on_failure => []
}
date {
match => [ "last_updated", "UNIX" ]
target => "@timestamp"
remove_field => ["last_updated"]
}
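# Map the scanner's textual risk ratings onto a sortable 0-4 scale used by the dashboards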
if [risk] == "None" {
mutate { add_field => { "risk_number" => 0 }}
}
if [risk] == "Low" {
mutate { add_field => { "risk_number" => 1 }}
}
if [risk] == "Medium" {
mutate { add_field => { "risk_number" => 2 }}
}
if [risk] == "High" {
mutate { add_field => { "risk_number" => 3 }}
}
if [risk] == "Critical" {
mutate { add_field => { "risk_number" => 4 }}
}
if ![cve] or [cve] == "nan" {
mutate { remove_field => [ "cve" ] }
}
if ![cvss] or [cvss] == "nan" {
mutate { remove_field => [ "cvss" ] }
}
if ![cvss_base] or [cvss_base] == "nan" {
mutate { remove_field => [ "cvss_base" ] }
}
if ![cvss_temporal] or [cvss_temporal] == "nan" {
mutate { remove_field => [ "cvss_temporal" ] }
}
if ![cvss_temporal_vector] or [cvss_temporal_vector] == "nan" {
mutate { remove_field => [ "cvss_temporal_vector" ] }
}
if ![cvss_vector] or [cvss_vector] == "nan" {
mutate { remove_field => [ "cvss_vector" ] }
}
if ![cvss3_base] or [cvss3_base] == "nan" {
mutate { remove_field => [ "cvss3_base" ] }
}
if ![cvss3_temporal] or [cvss3_temporal] == "nan" {
mutate { remove_field => [ "cvss3_temporal" ] }
}
if ![cvss3_temporal_vector] or [cvss3_temporal_vector] == "nan" {
mutate { remove_field => [ "cvss3_temporal_vector" ] }
}
if ![description] or [description] == "nan" {
mutate { remove_field => [ "description" ] }
}
if ![mac_address] or [mac_address] == "nan" {
mutate { remove_field => [ "mac_address" ] }
}
if ![netbios] or [netbios] == "nan" {
mutate { remove_field => [ "netbios" ] }
}
if ![operating_system] or [operating_system] == "nan" {
mutate { remove_field => [ "operating_system" ] }
}
if ![plugin_output] or [plugin_output] == "nan" {
mutate { remove_field => [ "plugin_output" ] }
}
if ![see_also] or [see_also] == "nan" {
mutate { remove_field => [ "see_also" ] }
}
if ![synopsis] or [synopsis] == "nan" {
mutate { remove_field => [ "synopsis" ] }
}
if ![system_type] or [system_type] == "nan" {
mutate { remove_field => [ "system_type" ] }
}
mutate {
remove_field => [ "message" ]
add_field => { "risk_score" => "%{cvss}" }
}
mutate {
convert => { "risk_score" => "float" }
}
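# Bucket the numeric risk_score into names: 0 = info, (0,3) = low,
# [3,6) = medium, [6,9) = high, 9 and above = critical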
if [risk_score] == 0 {
mutate {
add_field => { "risk_score_name" => "info" }
}
}
if [risk_score] > 0 and [risk_score] < 3 {
mutate {
add_field => { "risk_score_name" => "low" }
}
}
if [risk_score] >= 3 and [risk_score] < 6 {
mutate {
add_field => { "risk_score_name" => "medium" }
}
}
if [risk_score] >=6 and [risk_score] < 9 {
mutate {
add_field => { "risk_score_name" => "high" }
}
}
if [risk_score] >= 9 {
mutate {
add_field => { "risk_score_name" => "critical" }
}
}
# Compensating controls - adjust risk_score
# Adobe and Java are not allowed to run in browser unless whitelisted
# Therefore, lower score by dividing by 3 (score is subjective to risk)
#Modify and uncomment when ready to use
#if [risk_score] != 0 {
# if [plugin_name] =~ "Adobe" and [risk_score] > 6 or [plugin_name] =~ "Java" and [risk_score] > 6 {
# ruby {
# code => "event.set('risk_score', event.get('risk_score') / 3)"
# }
# mutate {
# add_field => { "compensating_control" => "Adobe and Flash removed from browsers unless whitelisted site." }
# }
# }
#}
# Add tags for reporting based on assets or criticality
if [asset] == "dc01" or [asset] == "dc02" or [asset] == "pki01" or [asset] == "192.168.0.54" or [asset] =~ "^192\.168\.0\." or [asset] =~ "^42.42.42." {
mutate {
add_tag => [ "critical_asset" ]
}
}
#if [asset] =~ "^192\.168\.[45][0-9][0-9]\.1$" or [asset] =~ "^192.168\.[50]\.[0-9]{1,2}\.1$"{
# mutate {
# add_tag => [ "has_hipaa_data" ]
# }
#}
#if [asset] =~ "^192\.168\.[45][0-9][0-9]\." {
# mutate {
# add_tag => [ "hipaa_asset" ]
# }
#}
if [asset] =~ "^hr" {
mutate {
add_tag => [ "pci_asset" ]
}
}
#if [asset] =~ "^10\.0\.50\." {
# mutate {
# add_tag => [ "web_servers" ]
# }
#}
}
}
output {
if "nessus" in [tags] or "tenable" in [tags] or [type] in [ "nessus", "tenable" ] {
# stdout { codec => rubydebug }
elasticsearch {
hosts => [ "vulnwhisp-es1.local:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}

View File

@ -0,0 +1,153 @@
# Author: Austin Taylor and Justin Henderson
# Email: austin@hasecuritysolutions.com
# Last Update: 12/30/2017
# Version 0.3
# Description: Takes in Qualys scan reports from VulnWhisperer and pumps them into Logstash
input {
file {
path => "/opt/VulnWhisperer/data/qualys/*.json"
type => json
codec => json
start_position => "beginning"
tags => [ "qualys" ]
}
}
filter {
if "qualys" in [tags] {
grok {
match => { "path" => [ "(?<tags>qualys_vuln)_scan_%{DATA}_%{INT:last_updated}.json$", "(?<tags>qualys_web)_%{INT:app_id}_%{INT:last_updated}.json$" ] }
tag_on_failure => []
}
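# The replace below is effectively a no-op left as a placeholder; re-enable
# the gsub entries if your exports still carry ||| separators or literal "nan".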
mutate {
replace => [ "message", "%{message}" ]
#gsub => [
# "message", "\|\|\|", " ",
# "message", "\t\t", " ",
# "message", " ", " ",
# "message", " ", " ",
# "message", " ", " ",
# "message", "nan", " ",
# "message",'\n',''
#]
}
if "qualys_web" in [tags] {
mutate {
add_field => { "asset" => "%{web_application_name}" }
add_field => { "risk_score" => "%{cvss}" }
}
} else if "qualys_vuln" in [tags] {
mutate {
add_field => { "asset" => "%{ip}" }
add_field => { "risk_score" => "%{cvss}" }
}
}
if [risk] == "1" {
mutate { add_field => { "risk_number" => 0 }}
mutate { replace => { "risk" => "info" }}
}
if [risk] == "2" {
mutate { add_field => { "risk_number" => 1 }}
mutate { replace => { "risk" => "low" }}
}
if [risk] == "3" {
mutate { add_field => { "risk_number" => 2 }}
mutate { replace => { "risk" => "medium" }}
}
if [risk] == "4" {
mutate { add_field => { "risk_number" => 3 }}
mutate { replace => { "risk" => "high" }}
}
if [risk] == "5" {
mutate { add_field => { "risk_number" => 4 }}
mutate { replace => { "risk" => "critical" }}
}
mutate {
remove_field => "message"
}
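# Qualys reports timestamps like "12 Mar 2018 10:14AM GMT+02:00" (hypothetical
# example); the two patterns below cover values with and without a UTC offset.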
if [first_time_detected] {
date {
match => [ "first_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_detected"
}
}
if [first_time_tested] {
date {
match => [ "first_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_tested"
}
}
if [last_time_detected] {
date {
match => [ "last_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_detected"
}
}
if [last_time_tested] {
date {
match => [ "last_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_tested"
}
}
date {
match => [ "last_updated", "UNIX" ]
target => "@timestamp"
remove_field => "last_updated"
}
mutate {
convert => { "plugin_id" => "integer"}
convert => { "id" => "integer"}
convert => { "risk_number" => "integer"}
convert => { "risk_score" => "float"}
convert => { "total_times_detected" => "integer"}
convert => { "cvss_temporal" => "float"}
convert => { "cvss" => "float"}
}
if [risk_score] == 0 {
mutate {
add_field => { "risk_score_name" => "info" }
}
}
if [risk_score] > 0 and [risk_score] < 3 {
mutate {
add_field => { "risk_score_name" => "low" }
}
}
if [risk_score] >= 3 and [risk_score] < 6 {
mutate {
add_field => { "risk_score_name" => "medium" }
}
}
if [risk_score] >=6 and [risk_score] < 9 {
mutate {
add_field => { "risk_score_name" => "high" }
}
}
if [risk_score] >= 9 {
mutate {
add_field => { "risk_score_name" => "critical" }
}
}
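# Tag critical assets by domain suffix; swap yourdomain for your own domain(s)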
if [asset] =~ "\.yourdomain\.(com|net)$" {
mutate {
add_tag => [ "critical_asset" ]
}
}
}
}
output {
if "qualys" in [tags] {
stdout { codec => rubydebug }
elasticsearch {
hosts => [ "vulnwhisp-es1.local:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}

View File

@ -0,0 +1,146 @@
# Author: Austin Taylor and Justin Henderson
# Email: austin@hasecuritysolutions.com
# Last Update: 03/04/2018
# Version 0.3
# Description: Takes in OpenVAS scan reports from VulnWhisperer and pumps them into Logstash
input {
file {
path => "/opt/VulnWhisperer/data/openvas/*.json"
type => json
codec => json
start_position => "beginning"
tags => [ "openvas_scan", "openvas" ]
}
}
filter {
if "openvas_scan" in [tags] {
mutate {
replace => [ "message", "%{message}" ]
gsub => [
"message", "\|\|\|", " ",
"message", "\t\t", " ",
"message", " ", " ",
"message", " ", " ",
"message", " ", " ",
"message", "nan", " ",
"message",'\n',''
]
}
grok {
match => { "path" => "openvas_scan_%{DATA:scan_id}_%{INT:last_updated}.json$" }
tag_on_failure => []
}
mutate {
add_field => { "risk_score" => "%{cvss}" }
}
if [risk] == "1" {
mutate { add_field => { "risk_number" => 0 }}
mutate { replace => { "risk" => "info" }}
}
if [risk] == "2" {
mutate { add_field => { "risk_number" => 1 }}
mutate { replace => { "risk" => "low" }}
}
if [risk] == "3" {
mutate { add_field => { "risk_number" => 2 }}
mutate { replace => { "risk" => "medium" }}
}
if [risk] == "4" {
mutate { add_field => { "risk_number" => 3 }}
mutate { replace => { "risk" => "high" }}
}
if [risk] == "5" {
mutate { add_field => { "risk_number" => 4 }}
mutate { replace => { "risk" => "critical" }}
}
mutate {
remove_field => "message"
}
if [first_time_detected] {
date {
match => [ "first_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_detected"
}
}
if [first_time_tested] {
date {
match => [ "first_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_tested"
}
}
if [last_time_detected] {
date {
match => [ "last_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_detected"
}
}
if [last_time_tested] {
date {
match => [ "last_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_tested"
}
}
date {
match => [ "last_updated", "UNIX" ]
target => "@timestamp"
remove_field => "last_updated"
}
mutate {
convert => { "plugin_id" => "integer"}
convert => { "id" => "integer"}
convert => { "risk_number" => "integer"}
convert => { "risk_score" => "float"}
convert => { "total_times_detected" => "integer"}
convert => { "cvss_temporal" => "float"}
convert => { "cvss" => "float"}
}
if [risk_score] == 0 {
mutate {
add_field => { "risk_score_name" => "info" }
}
}
if [risk_score] > 0 and [risk_score] < 3 {
mutate {
add_field => { "risk_score_name" => "low" }
}
}
if [risk_score] >= 3 and [risk_score] < 6 {
mutate {
add_field => { "risk_score_name" => "medium" }
}
}
if [risk_score] >=6 and [risk_score] < 9 {
mutate {
add_field => { "risk_score_name" => "high" }
}
}
if [risk_score] >= 9 {
mutate {
add_field => { "risk_score_name" => "critical" }
}
}
# Add your critical assets by subnet or by hostname. Comment this field out if you don't want to tag any, but the asset panel will break.
if [asset] =~ "^10\.0\.100\." {
mutate {
add_tag => [ "critical_asset" ]
}
}
}
}
output {
if "openvas" in [tags] {
stdout { codec => rubydebug }
elasticsearch {
hosts => [ "vulnwhisp-es1.local:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}

View File

@ -0,0 +1,21 @@
# Description: Takes in JIRA tickets from VulnWhisperer and pumps them into Logstash
input {
file {
path => "/opt/Vulnwhisperer/jira/*.json"
type => json
codec => json
start_position => "beginning"
tags => [ "jira" ]
}
}
output {
if "jira" in [tags] {
stdout { codec => rubydebug }
elasticsearch {
hosts => [ "vulnwhisp-es1.local:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}

View File

@ -0,0 +1,5 @@
path.config: /usr/share/logstash/pipeline/
xpack.monitoring.elasticsearch.password: changeme
xpack.monitoring.elasticsearch.url: vulnwhisp-es1.local:9200
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.enabled: false
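# Note: monitoring is switched off above, so the elastic/changeme credentials
# are inert unless xpack.monitoring.enabled is flipped to true.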

View File

@ -0,0 +1,122 @@
{
"order": 0,
"template": "logstash-vulnwhisperer-*",
"settings": {
"index": {
"routing": {
"allocation": {
"total_shards_per_node": "2"
}
},
"mapping": {
"total_fields": {
"limit": "3000"
}
},
"refresh_interval": "5s",
"number_of_shards": "1",
"number_of_replicas": "0"
}
},
"mappings": {
"_default_": {
"_all": {
"enabled": false
},
"dynamic_templates": [
{
"message_field": {
"path_match": "message",
"match_mapping_type": "string",
"mapping": {
"type": "text",
"norms": false
}
}
},
{
"string_fields": {
"match": "*",
"match_mapping_type": "string",
"mapping": {
"type": "text",
"norms": false,
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
}
],
"properties": {
"plugin_id": {
"type": "float"
},
"last_updated": {
"type": "date"
},
"geoip": {
"dynamic": true,
"type": "object",
"properties": {
"ip": {
"type": "ip"
},
"latitude": {
"type": "float"
},
"location": {
"type": "geo_point"
},
"longitude": {
"type": "float"
}
}
},
"risk_score": {
"type": "float"
},
"source": {
"type": "keyword"
},
"synopsis": {
"type": "keyword"
},
"see_also": {
"type": "keyword"
},
"@timestamp": {
"type": "date"
},
"cve": {
"type": "keyword"
},
"solution": {
"type": "keyword"
},
"port": {
"type": "integer"
},
"host": {
"type": "text"
},
"@version": {
"type": "keyword"
},
"risk": {
"type": "keyword"
},
"assign_ip": {
"type": "ip"
},
"cvss": {
"type": "float"
}
}
}
},
"aliases": {}
}

View File

@ -0,0 +1,450 @@
[
{
"_id": "80158c90-57c1-11e7-b484-a970fc9d150a",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - HIPAA TL",
"visState": "{\"type\":\"timelion\",\"title\":\"VulnWhisperer - HIPAA TL\",\"params\":{\"expression\":\".es(index=logstash-vulnwhisperer-*,q='risk_score:>9 AND tags:pci_asset').label(\\\"PCI Assets\\\"),.es(index=logstash-vulnwhisperer-*,q='risk_score:>9 AND tags:has_hipaa_data').label(\\\"Has HIPAA Data\\\"),.es(index=logstash-vulnwhisperer-*,q='risk_score:>9 AND tags:hipaa_asset').label(\\\"HIPAA Assets\\\")\",\"interval\":\"auto\"}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{}"
}
}
},
{
"_id": "479deab0-8a39-11e7-a58a-9bfcb3761a3d",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - TL - TaggedAssetsPluginNames",
"visState": "{\"title\":\"VulnWhisperer - TL - TaggedAssetsPluginNames\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-vulnwhisperer-*', q='tags:critical_asset OR tags:hipaa_asset OR tags:pci_asset', split=\\\"plugin_name.keyword:10\\\").bars(width=4).label(regex=\\\".*:(.+)>.*\\\",label=\\\"$1\\\")\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "84f5c370-8a38-11e7-a58a-9bfcb3761a3d",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - TL - CriticalAssetsPluginNames",
"visState": "{\"title\":\"VulnWhisperer - TL - CriticalAssetsPluginNames\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-vulnwhisperer-*', q='tags:critical_asset', split=\\\"plugin_name.keyword:10\\\").bars(width=4).label(regex=\\\".*:(.+)>.*\\\",label=\\\"$1\\\")\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "307cdae0-8a38-11e7-a58a-9bfcb3761a3d",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - TL - PluginNames",
"visState": "{\"title\":\"VulnWhisperer - TL - PluginNames\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-vulnwhisperer-*', split=\\\"plugin_name.keyword:25\\\").bars(width=4).label(regex=\\\".*:(.+)>.*\\\",label=\\\"$1\\\")\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "5093c620-44e9-11e7-8014-ede06a7e69f8",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Mitigation Readme",
"visState": "{\"title\":\"VulnWhisperer - Mitigation Readme\",\"type\":\"markdown\",\"params\":{\"markdown\":\"** Legend **\\n\\n* [Common Vulnerability Scoring System (CVSS)](https://nvd.nist.gov/vuln-metrics/cvss) is the NIST vulnerability scoring system\\n* Risk Number is residual risk score calculated from CVSS, which is adjusted to be specific to the netowrk owner, which accounts for services not in use such as Java and Flash\\n* Vulnerabilities by Tag are systems tagged with HIPAA and PCI identification.\\n\\n\\n** Workflow **\\n* Select 10.0 under Risk Number to identify Critical Vulnerabilities. \\n* For more information about a CVE, scroll down and click the CVE link.\\n* To filter by tags, use one of the following filters:\\n** tags:has_hipaa_data, tags:pci_asset, tags:hipaa_asset, tags:critical_asset**\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "7e7fbc90-3df2-11e7-a44e-c79ca8efb780",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer-PluginID",
"visState": "{\"title\":\"VulnWhisperer-PluginID\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"plugin_id\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "5a3c0340-3eb3-11e7-a192-93f36fbd9d05",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer-CVSSHeatmap",
"visState": "{\"title\":\"VulnWhisperer-CVSSHeatmap\",\"type\":\"heatmap\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"enableHover\":false,\"legendPosition\":\"right\",\"times\":[],\"colorsNumber\":4,\"colorSchema\":\"Yellow to Red\",\"setColorRange\":false,\"colorsRange\":[],\"invertColors\":false,\"percentageMode\":false,\"valueAxes\":[{\"show\":false,\"id\":\"ValueAxis-1\",\"type\":\"value\",\"scale\":{\"type\":\"linear\",\"defaultYExtents\":false},\"labels\":{\"show\":false,\"rotate\":0,\"color\":\"#555\"}}]},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"host.keyword\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\"}},{\"id\":\"3\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"cvss.keyword\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"_term\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"defaultColors\":{\"0 - 3500\":\"rgb(255,255,204)\",\"3500 - 7000\":\"rgb(254,217,118)\",\"7000 - 10500\":\"rgb(253,141,60)\",\"10500 - 14000\":\"rgb(227,27,28)\"}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "1de9e550-3df1-11e7-a44e-c79ca8efb780",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer-Description",
"visState": "{\"title\":\"VulnWhisperer-Description\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"description.keyword\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Description\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "13c7d4e0-3df3-11e7-a44e-c79ca8efb780",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer-Solution",
"visState": "{\"title\":\"VulnWhisperer-Solution\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"solution.keyword\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Solution\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "297df800-3f7e-11e7-bd24-6903e3283192",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Plugin Name",
"visState": "{\"title\":\"VulnWhisperer - Plugin Name\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"plugin_name.keyword\",\"size\":10,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Plugin Name\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "de1a5f40-3f85-11e7-97f9-3777d794626d",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - ScanName",
"visState": "{\"title\":\"VulnWhisperer - ScanName\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"scan_name.keyword\",\"size\":20,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Scan Name\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "ecbb99c0-3f84-11e7-97f9-3777d794626d",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Total",
"visState": "{\"title\":\"VulnWhisperer - Total\",\"type\":\"metric\",\"params\":{\"handleNoResults\":true,\"fontSize\":60},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"Total\"}}],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "471a3580-3f6b-11e7-88e7-df1abe6547fb",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Vulnerabilities by Tag",
"visState": "{\"title\":\"VulnWhisperer - Vulnerabilities by Tag\",\"type\":\"table\",\"params\":{\"perPage\":3,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"3\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"bucket\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"tags:has_hipaa_data\",\"analyze_wildcard\":true}}},\"label\":\"Systems with HIPAA data\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"tags:pci_asset\",\"analyze_wildcard\":true}}},\"label\":\"PCI Systems\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"tags:hipaa_asset\",\"analyze_wildcard\":true}}},\"label\":\"HIPAA Systems\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "35b6d320-3f7f-11e7-bd24-6903e3283192",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Residual Risk",
"visState": "{\"title\":\"VulnWhisperer - Residual Risk\",\"type\":\"table\",\"params\":{\"perPage\":15,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":0,\"direction\":\"desc\"},\"showTotal\":false,\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"risk_score\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Risk Number\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":0,\"direction\":\"desc\"}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "a9225930-3df2-11e7-a44e-c79ca8efb780",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer-Risk",
"visState": "{\"title\":\"VulnWhisperer-Risk\",\"type\":\"table\",\"params\":{\"perPage\":4,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"risk\",\"size\":10,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Risk Severity\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "2f979030-44b9-11e7-a818-f5f80dfc3590",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - ScanBarChart",
"visState": "{\"aggs\":[{\"enabled\":true,\"id\":\"1\",\"params\":{},\"schema\":\"metric\",\"type\":\"count\"},{\"enabled\":true,\"id\":\"2\",\"params\":{\"customLabel\":\"Scan Name\",\"field\":\"plugin_name.keyword\",\"order\":\"desc\",\"orderBy\":\"1\",\"size\":10},\"schema\":\"segment\",\"type\":\"terms\"}],\"listeners\":{},\"params\":{\"addLegend\":true,\"addTimeMarker\":false,\"addTooltip\":true,\"defaultYExtents\":false,\"legendPosition\":\"right\",\"mode\":\"stacked\",\"scale\":\"linear\",\"setYExtents\":false,\"times\":[]},\"title\":\"VulnWhisperer - ScanBarChart\",\"type\":\"histogram\"}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "a6508640-897a-11e7-bbc0-33592ce0be1e",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Critical Assets Aggregated",
"visState": "{\"title\":\"VulnWhisperer - Critical Assets Aggregated\",\"type\":\"heatmap\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"enableHover\":true,\"legendPosition\":\"right\",\"times\":[],\"colorsNumber\":4,\"colorSchema\":\"Green to Red\",\"setColorRange\":true,\"colorsRange\":[{\"from\":0,\"to\":3},{\"from\":3,\"to\":7},{\"from\":7,\"to\":9},{\"from\":9,\"to\":11}],\"invertColors\":false,\"percentageMode\":false,\"valueAxes\":[{\"show\":false,\"id\":\"ValueAxis-1\",\"type\":\"value\",\"scale\":{\"type\":\"linear\",\"defaultYExtents\":false},\"labels\":{\"show\":true,\"rotate\":0,\"color\":\"white\"}}]},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"max\",\"schema\":\"metric\",\"params\":{\"field\":\"risk_score\",\"customLabel\":\"Residual Risk Score\"}},{\"id\":\"3\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{},\"customLabel\":\"Date\"}},{\"id\":\"4\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"host\",\"size\":10,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Critical Asset IP\"}},{\"id\":\"5\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"split\",\"params\":{\"field\":\"plugin_name.keyword\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\",\"row\":true}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"colors\":{\"0 - 3\":\"#7EB26D\",\"3 - 7\":\"#EAB839\",\"7 - 9\":\"#EF843C\",\"8 - 10\":\"#BF1B00\",\"9 - 11\":\"#BF1B00\"},\"defaultColors\":{\"0 - 3\":\"rgb(0,104,55)\",\"3 - 7\":\"rgb(135,203,103)\",\"7 - 9\":\"rgb(255,255,190)\",\"9 - 11\":\"rgb(249,142,82)\"},\"legendOpen\":false}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[{\"$state\":{\"store\":\"appState\"},\"meta\":{\"alias\":\"Critical Asset\",\"disabled\":false,\"index\":\"logstash-vulnwhisperer-*\",\"key\":\"tags\",\"negate\":false,\"type\":\"phrase\",\"value\":\"critical_asset\"},\"query\":{\"match\":{\"tags\":{\"query\":\"critical_asset\",\"type\":\"phrase\"}}}}]}"
}
}
},
{
"_id": "099a3820-3f68-11e7-a6bd-e764d950e506",
"_type": "visualization",
"_source": {
"title": "Timelion VulnWhisperer Example",
"visState": "{\"type\":\"timelion\",\"title\":\"Timelion VulnWhisperer Example\",\"params\":{\"expression\":\".es(index=logstash-vulnwhisperer-*,q=risk:high).label(\\\"Current High Risk\\\"),.es(index=logstash-vulnwhisperer-*,q=risk:high,offset=-1y).label(\\\"Last 1 Year High Risk\\\"),.es(index=logstash-vulnwhisperer-*,q=risk:medium).label(\\\"Current Medium Risk\\\"),.es(index=logstash-vulnwhisperer-*,q=risk:medium,offset=-1y).label(\\\"Last 1 Year Medium Risk\\\")\",\"interval\":\"auto\"}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{}"
}
}
},
{
"_id": "67d432e0-44ec-11e7-a05f-d9719b331a27",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - TL-Critical Risk",
"visState": "{\"title\":\"VulnWhisperer - TL-Critical Risk\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-vulnwhisperer-*',q='(risk_score:>=9 AND risk_score:<=10)').label(\\\"Original\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk_score:>=9 AND risk_score:<=10)',offset=-1w).label(\\\"One week offset\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk_score:>=9 AND risk_score:<=10)').subtract(.es(index='logstash-vulnwhisperer-*',q='(risk_score:>=9 AND risk_score:<=10)',offset=-1w)).label(\\\"Difference\\\").lines(steps=3,fill=2,width=1)\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "a91b9fe0-44ec-11e7-a05f-d9719b331a27",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - TL-Medium Risk",
"visState": "{\"title\":\"VulnWhisperer - TL-Medium Risk\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-vulnwhisperer-*',q='(risk_score:>=4 AND risk_score:<7)').label(\\\"Original\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk_score:>=4 AND risk_score:<7)',offset=-1w).label(\\\"One week offset\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk_score:>=4 AND risk_score:<7)').subtract(.es(index='logstash-vulnwhisperer-*',q='(risk_score:>=4 AND risk_score:<7)',offset=-1w)).label(\\\"Difference\\\").lines(steps=3,fill=2,width=1)\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "8d9592d0-44ec-11e7-a05f-d9719b331a27",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - TL-High Risk",
"visState": "{\"title\":\"VulnWhisperer - TL-High Risk\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-vulnwhisperer-*',q='(risk_score:>=7 AND risk_score:<9)').label(\\\"Original\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk_score:>=7 AND risk_score:<9)',offset=-1w).label(\\\"One week offset\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk_score:>=7 AND risk_score:<9)').subtract(.es(index='logstash-vulnwhisperer-*',q='(risk_score:>=7 AND risk_score:<9)',offset=-1w)).label(\\\"Difference\\\").lines(steps=3,fill=2,width=1)\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "a2d66660-44ec-11e7-a05f-d9719b331a27",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - TL-Low Risk",
"visState": "{\"title\":\"VulnWhisperer - TL-Low Risk\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-vulnwhisperer-*',q='(risk_score:>0 AND risk_score:<4)').label(\\\"Original\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk_score:>0 AND risk_score:<4)',offset=-1w).label(\\\"One week offset\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk_score:>0 AND risk_score:<4)').subtract(.es(index='logstash-vulnwhisperer-*',q='(risk_score:>0 AND risk_score:<4)',offset=-1w)).label(\\\"Difference\\\").lines(steps=3,fill=2,width=1)\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "fb6eb020-49ab-11e7-8f8c-57ad64ec48a6",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Critical Risk Score for Tagged Assets",
"visState": "{\"title\":\"VulnWhisperer - Critical Risk Score for Tagged Assets\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index=logstash-vulnwhisperer-*,q='risk_score:>9 AND tags:hipaa_asset').label(\\\"HIPAA Assets\\\"),.es(index=logstash-vulnwhisperer-*,q='risk_score:>9 AND tags:pci_asset').label(\\\"PCI Systems\\\"),.es(index=logstash-vulnwhisperer-*,q='risk_score:>9 AND tags:has_hipaa_data').label(\\\"Has HIPAA Data\\\")\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "b2f2adb0-897f-11e7-a2d2-c57bca21b3aa",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Risk: Total",
"visState": "{\"title\":\"VulnWhisperer - Risk: Total\",\"type\":\"goal\",\"params\":{\"addLegend\":true,\"addTooltip\":true,\"gauge\":{\"autoExtend\":false,\"backStyle\":\"Full\",\"colorSchema\":\"Green to Red\",\"colorsRange\":[{\"from\":0,\"to\":10000}],\"gaugeColorMode\":\"Background\",\"gaugeStyle\":\"Full\",\"gaugeType\":\"Metric\",\"invertColors\":false,\"labels\":{\"color\":\"black\",\"show\":false},\"orientation\":\"vertical\",\"percentageMode\":false,\"scale\":{\"color\":\"#333\",\"labels\":false,\"show\":true,\"width\":2},\"style\":{\"bgColor\":true,\"bgFill\":\"white\",\"fontSize\":\"34\",\"labelColor\":false,\"subText\":\"Risk\"},\"type\":\"simple\",\"useRanges\":false,\"verticalSplit\":false},\"type\":\"gauge\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"Total\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}}},\"label\":\"Critical\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"colors\":{\"0 - 10000\":\"#64B0C8\"},\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "465c5820-8977-11e7-857e-e1d56b17746d",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Critical Assets",
"visState": "{\"title\":\"VulnWhisperer - Critical Assets\",\"type\":\"heatmap\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"enableHover\":true,\"legendPosition\":\"right\",\"times\":[],\"colorsNumber\":4,\"colorSchema\":\"Green to Red\",\"setColorRange\":true,\"colorsRange\":[{\"from\":0,\"to\":3},{\"from\":3,\"to\":7},{\"from\":7,\"to\":9},{\"from\":9,\"to\":11}],\"invertColors\":false,\"percentageMode\":false,\"valueAxes\":[{\"show\":false,\"id\":\"ValueAxis-1\",\"type\":\"value\",\"scale\":{\"type\":\"linear\",\"defaultYExtents\":false},\"labels\":{\"show\":false,\"rotate\":0,\"color\":\"white\"}}],\"type\":\"heatmap\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"max\",\"schema\":\"metric\",\"params\":{\"field\":\"risk_score\",\"customLabel\":\"Residual Risk Score\"}},{\"id\":\"2\",\"enabled\":false,\"type\":\"terms\",\"schema\":\"split\",\"params\":{\"field\":\"risk_score\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\",\"row\":true}},{\"id\":\"3\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{},\"customLabel\":\"Date\"}},{\"id\":\"4\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"asset.keyword\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Critical Asset\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"defaultColors\":{\"0 - 3\":\"rgb(0,104,55)\",\"3 - 7\":\"rgb(135,203,103)\",\"7 - 9\":\"rgb(255,255,190)\",\"9 - 11\":\"rgb(249,142,82)\"},\"colors\":{\"8 - 10\":\"#BF1B00\",\"9 - 11\":\"#BF1B00\",\"7 - 9\":\"#EF843C\",\"3 - 7\":\"#EAB839\",\"0 - 3\":\"#7EB26D\"},\"legendOpen\":false}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[{\"meta\":{\"index\":\"logstash-vulnwhisperer-*\",\"negate\":false,\"disabled\":false,\"alias\":\"Critical Asset\",\"type\":\"phrase\",\"key\":\"tags\",\"value\":\"critical_asset\"},\"query\":{\"match\":{\"tags\":{\"query\":\"critical_asset\",\"type\":\"phrase\"}}},\"$state\":{\"store\":\"appState\"}}]}"
}
}
},
{
"_id": "852816e0-3eb1-11e7-90cb-918f9cb01e3d",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer-CVSS",
"visState": "{\"title\":\"VulnWhisperer-CVSS\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\",\"type\":\"table\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"cvss.keyword\",\"size\":20,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"CVSS Score\"}},{\"id\":\"4\",\"enabled\":true,\"type\":\"cardinality\",\"schema\":\"metric\",\"params\":{\"field\":\"asset.keyword\",\"customLabel\":\"# of Assets\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":0,\"direction\":\"desc\"}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "d048c220-80b3-11e7-8790-73b60225f736",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Risk: High",
"visState": "{\"title\":\"VulnWhisperer - Risk: High\",\"type\":\"goal\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"type\":\"gauge\",\"gauge\":{\"verticalSplit\":false,\"autoExtend\":false,\"percentageMode\":false,\"gaugeType\":\"Metric\",\"gaugeStyle\":\"Full\",\"backStyle\":\"Full\",\"orientation\":\"vertical\",\"useRanges\":false,\"colorSchema\":\"Green to Red\",\"gaugeColorMode\":\"Background\",\"colorsRange\":[{\"from\":0,\"to\":1000}],\"invertColors\":false,\"labels\":{\"show\":false,\"color\":\"black\"},\"scale\":{\"show\":true,\"labels\":false,\"color\":\"#333\",\"width\":2},\"type\":\"simple\",\"style\":{\"bgFill\":\"white\",\"bgColor\":true,\"labelColor\":false,\"subText\":\"\",\"fontSize\":\"34\"},\"extendRange\":true}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"High Risk\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk_score_name:high\"}}},\"label\":\"\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"defaultColors\":{\"0 - 1000\":\"rgb(0,104,55)\"},\"legendOpen\":true,\"colors\":{\"0 - 10000\":\"#EF843C\",\"0 - 1000\":\"#E0752D\"}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "db55bce0-80b3-11e7-8790-73b60225f736",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Risk: Critical",
"visState": "{\"title\":\"VulnWhisperer - Risk: Critical\",\"type\":\"goal\",\"params\":{\"addLegend\":true,\"addTooltip\":true,\"gauge\":{\"autoExtend\":false,\"backStyle\":\"Full\",\"colorSchema\":\"Green to Red\",\"colorsRange\":[{\"from\":0,\"to\":10000}],\"gaugeColorMode\":\"Background\",\"gaugeStyle\":\"Full\",\"gaugeType\":\"Metric\",\"invertColors\":false,\"labels\":{\"color\":\"black\",\"show\":false},\"orientation\":\"vertical\",\"percentageMode\":false,\"scale\":{\"color\":\"#333\",\"labels\":false,\"show\":true,\"width\":2},\"style\":{\"bgColor\":true,\"bgFill\":\"white\",\"fontSize\":\"34\",\"labelColor\":false,\"subText\":\"Risk\"},\"type\":\"simple\",\"useRanges\":false,\"verticalSplit\":false},\"type\":\"gauge\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"Critical Risk\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk_score_name:critical\"}}},\"label\":\"Critical\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"colors\":{\"0 - 10000\":\"#BF1B00\"},\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "56f0f5f0-3ebe-11e7-a192-93f36fbd9d05",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer-RiskOverTime",
"visState": "{\"title\":\"VulnWhisperer-RiskOverTime\",\"type\":\"line\",\"params\":{\"addLegend\":true,\"addTimeMarker\":false,\"addTooltip\":true,\"categoryAxes\":[{\"id\":\"CategoryAxis-1\",\"labels\":{\"show\":true,\"truncate\":100},\"position\":\"bottom\",\"scale\":{\"type\":\"linear\"},\"show\":true,\"style\":{},\"title\":{\"text\":\"@timestamp per 12 hours\"},\"type\":\"category\"}],\"defaultYExtents\":false,\"drawLinesBetweenPoints\":true,\"grid\":{\"categoryLines\":false,\"style\":{\"color\":\"#eee\"},\"valueAxis\":\"ValueAxis-1\"},\"interpolate\":\"linear\",\"legendPosition\":\"right\",\"orderBucketsBySum\":false,\"radiusRatio\":9,\"scale\":\"linear\",\"seriesParams\":[{\"data\":{\"id\":\"1\",\"label\":\"Count\"},\"drawLinesBetweenPoints\":true,\"interpolate\":\"linear\",\"mode\":\"normal\",\"show\":\"true\",\"showCircles\":true,\"type\":\"line\",\"valueAxis\":\"ValueAxis-1\"}],\"setYExtents\":false,\"showCircles\":true,\"times\":[],\"valueAxes\":[{\"id\":\"ValueAxis-1\",\"labels\":{\"filter\":false,\"rotate\":0,\"show\":true,\"truncate\":100},\"name\":\"LeftAxis-1\",\"position\":\"left\",\"scale\":{\"mode\":\"normal\",\"type\":\"linear\"},\"show\":true,\"style\":{},\"title\":{\"text\":\"Count\"},\"type\":\"value\"}],\"type\":\"line\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{}}},{\"id\":\"3\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk_score_name:info\"}}},\"label\":\"Info\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk_score_name:low\"}}},\"label\":\"Low\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk_score_name:medium\"}}},\"label\":\"Medium\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk_score_name:high\"}}},\"label\":\"High\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk_score_name:critical\"}}},\"label\":\"Critical\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"colors\":{\"Critical\":\"#962D82\",\"High\":\"#BF1B00\",\"Low\":\"#629E51\",\"Medium\":\"#EAB839\",\"Info\":\"#65C5DB\"}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "c1361da0-80b3-11e7-8790-73b60225f736",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Risk: Medium",
"visState": "{\"title\":\"VulnWhisperer - Risk: Medium\",\"type\":\"goal\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"type\":\"gauge\",\"gauge\":{\"verticalSplit\":false,\"autoExtend\":false,\"percentageMode\":false,\"gaugeType\":\"Metric\",\"gaugeStyle\":\"Full\",\"backStyle\":\"Full\",\"orientation\":\"vertical\",\"useRanges\":false,\"colorSchema\":\"Green to Red\",\"gaugeColorMode\":\"Background\",\"colorsRange\":[{\"from\":0,\"to\":10000}],\"invertColors\":false,\"labels\":{\"show\":false,\"color\":\"black\"},\"scale\":{\"show\":true,\"labels\":false,\"color\":\"#333\",\"width\":2},\"type\":\"simple\",\"style\":{\"bgFill\":\"white\",\"bgColor\":true,\"labelColor\":false,\"subText\":\"\",\"fontSize\":\"34\"},\"extendRange\":false},\"isDisplayWarning\":false},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"Medium Risk\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk_score_name:medium\"}}},\"label\":\"Medium Risk\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":true,\"colors\":{\"0 - 10000\":\"#EAB839\"}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "e46ff7f0-897d-11e7-934b-67cec0a7da65",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Risk: Low",
"visState": "{\"title\":\"VulnWhisperer - Risk: Low\",\"type\":\"goal\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"type\":\"gauge\",\"gauge\":{\"verticalSplit\":false,\"autoExtend\":false,\"percentageMode\":false,\"gaugeType\":\"Metric\",\"gaugeStyle\":\"Full\",\"backStyle\":\"Full\",\"orientation\":\"vertical\",\"useRanges\":false,\"colorSchema\":\"Green to Red\",\"gaugeColorMode\":\"Background\",\"colorsRange\":[{\"from\":0,\"to\":10000}],\"invertColors\":false,\"labels\":{\"show\":false,\"color\":\"black\"},\"scale\":{\"show\":true,\"labels\":false,\"color\":\"#333\",\"width\":2},\"type\":\"simple\",\"style\":{\"bgFill\":\"white\",\"bgColor\":true,\"labelColor\":false,\"subText\":\"\",\"fontSize\":\"34\"},\"extendRange\":false}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"Low Risk\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk_score_name:low\"}}},\"label\":\"Low Risk\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":true,\"colors\":{\"0 - 10000\":\"#629E51\"}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "995e2280-3df3-11e7-a44e-c79ca8efb780",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer-Asset",
"visState": "{\"title\":\"VulnWhisperer-Asset\",\"type\":\"table\",\"params\":{\"perPage\":15,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\",\"type\":\"table\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"asset.keyword\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Asset\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
}
]


@@ -1,49 +1,42 @@
[
{
"_id": "5dba30c0-3df3-11e7-a44e-c79ca8efb780",
"_type": "dashboard",
"_source": {
"title": "Nessus - Risk Mitigation",
"hits": 0,
"description": "",
"panelsJSON": "[{\"col\":11,\"id\":\"995e2280-3df3-11e7-a44e-c79ca8efb780\",\"panelIndex\":20,\"row\":7,\"size_x\":2,\"size_y\":6,\"type\":\"visualization\"},{\"col\":1,\"id\":\"852816e0-3eb1-11e7-90cb-918f9cb01e3d\",\"panelIndex\":21,\"row\":8,\"size_x\":3,\"size_y\":5,\"type\":\"visualization\"},{\"col\":4,\"id\":\"297df800-3f7e-11e7-bd24-6903e3283192\",\"panelIndex\":27,\"row\":8,\"size_x\":3,\"size_y\":5,\"type\":\"visualization\"},{\"col\":9,\"id\":\"35b6d320-3f7f-11e7-bd24-6903e3283192\",\"panelIndex\":28,\"row\":7,\"size_x\":2,\"size_y\":6,\"type\":\"visualization\"},{\"col\":1,\"id\":\"471a3580-3f6b-11e7-88e7-df1abe6547fb\",\"panelIndex\":30,\"row\":4,\"size_x\":3,\"size_y\":2,\"type\":\"visualization\"},{\"col\":7,\"id\":\"de1a5f40-3f85-11e7-97f9-3777d794626d\",\"panelIndex\":31,\"row\":8,\"size_x\":2,\"size_y\":5,\"type\":\"visualization\"},{\"col\":9,\"id\":\"5093c620-44e9-11e7-8014-ede06a7e69f8\",\"panelIndex\":37,\"row\":4,\"size_x\":4,\"size_y\":3,\"type\":\"visualization\"},{\"col\":1,\"columns\":[\"host\",\"risk\",\"risk_score\",\"cve\",\"plugin_name\",\"solution\",\"plugin_output\"],\"id\":\"54648700-3f74-11e7-852e-69207a3d0726\",\"panelIndex\":38,\"row\":13,\"size_x\":12,\"size_y\":6,\"sort\":[\"@timestamp\",\"desc\"],\"type\":\"search\"},{\"col\":1,\"id\":\"fb6eb020-49ab-11e7-8f8c-57ad64ec48a6\",\"panelIndex\":39,\"row\":6,\"size_x\":3,\"size_y\":2,\"type\":\"visualization\"},{\"col\":4,\"id\":\"465c5820-8977-11e7-857e-e1d56b17746d\",\"panelIndex\":40,\"row\":4,\"size_x\":5,\"size_y\":4,\"type\":\"visualization\"},{\"col\":7,\"id\":\"db55bce0-80b3-11e7-8790-73b60225f736\",\"panelIndex\":41,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":1,\"id\":\"e46ff7f0-897d-11e7-934b-67cec0a7da65\",\"panelIndex\":42,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":5,\"id\":\"d048c220-80b3-11e7-8790-73b60225f736\",\"panelIndex\":43,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":3,\"id\":\"c1361da0-80b3-11e7-8790-73b60225f736\",\"panelIndex\":44,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":9,\"id\":\"b2f2adb0-897f-11e7-a2d2-c57bca21b3aa\",\"panelIndex\":45,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"size_x\":2,\"size_y\":3,\"panelIndex\":46,\"type\":\"visualization\",\"id\":\"56f0f5f0-3ebe-11e7-a192-93f36fbd9d05\",\"col\":11,\"row\":1}]",
"optionsJSON": "{\"darkTheme\":false}",
"uiStateJSON": "{\"P-11\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-2\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-20\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-21\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":0,\"direction\":\"desc\"}}}},\"P-27\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-28\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":0,\"direction\":\"desc\"}}}},\"P-3\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":0,\"direction\":\"asc\"}}}},\"P-30\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-31\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-40\":{\"vis\":{\"defaultColors\":{\"0 - 3\":\"rgb(0,104,55)\",\"3 - 7\":\"rgb(135,203,103)\",\"7 - 9\":\"rgb(255,255,190)\",\"9 - 11\":\"rgb(249,142,82)\"}}},\"P-41\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-42\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-43\":{\"vis\":{\"defaultColors\":{\"0 - 1000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-44\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-5\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-6\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-8\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-45\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-46\":{\"vis\":{\"legendOpen\":false}}}",
"version": 1,
"timeRestore": true,
"timeTo": "now",
"timeFrom": "now-30d",
"refreshInterval": {
"display": "Off",
"pause": false,
"value": 0
},
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[{\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}}}],\"highlightAll\":true,\"version\":true}"
}
}
},
{
"_id": "72051530-448e-11e7-a818-f5f80dfc3590",
"_type": "dashboard",
"_source": {
"title": "Nessus - Reporting",
"title": "VulnWhisperer - Reporting",
"hits": 0,
"description": "",
"panelsJSON": "[{\"col\":1,\"id\":\"2f979030-44b9-11e7-a818-f5f80dfc3590\",\"panelIndex\":5,\"row\":12,\"size_x\":6,\"size_y\":4,\"type\":\"visualization\"},{\"col\":1,\"id\":\"8d9592d0-44ec-11e7-a05f-d9719b331a27\",\"panelIndex\":12,\"row\":8,\"size_x\":6,\"size_y\":4,\"type\":\"visualization\"},{\"col\":7,\"id\":\"67d432e0-44ec-11e7-a05f-d9719b331a27\",\"panelIndex\":14,\"row\":4,\"size_x\":6,\"size_y\":4,\"type\":\"visualization\"},{\"col\":10,\"id\":\"297df800-3f7e-11e7-bd24-6903e3283192\",\"panelIndex\":15,\"row\":8,\"size_x\":3,\"size_y\":4,\"type\":\"visualization\"},{\"col\":7,\"id\":\"471a3580-3f6b-11e7-88e7-df1abe6547fb\",\"panelIndex\":20,\"row\":8,\"size_x\":3,\"size_y\":2,\"type\":\"visualization\"},{\"col\":11,\"id\":\"995e2280-3df3-11e7-a44e-c79ca8efb780\",\"panelIndex\":22,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":9,\"id\":\"b2f2adb0-897f-11e7-a2d2-c57bca21b3aa\",\"panelIndex\":23,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":7,\"id\":\"db55bce0-80b3-11e7-8790-73b60225f736\",\"panelIndex\":25,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":5,\"id\":\"d048c220-80b3-11e7-8790-73b60225f736\",\"panelIndex\":26,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":1,\"id\":\"e46ff7f0-897d-11e7-934b-67cec0a7da65\",\"panelIndex\":27,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":3,\"id\":\"c1361da0-80b3-11e7-8790-73b60225f736\",\"panelIndex\":28,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"size_x\":6,\"size_y\":4,\"panelIndex\":29,\"type\":\"visualization\",\"id\":\"479deab0-8a39-11e7-a58a-9bfcb3761a3d\",\"col\":1,\"row\":4}]",
"panelsJSON": "[{\"col\":1,\"id\":\"2f979030-44b9-11e7-a818-f5f80dfc3590\",\"panelIndex\":5,\"row\":12,\"size_x\":6,\"size_y\":4,\"type\":\"visualization\"},{\"col\":1,\"id\":\"8d9592d0-44ec-11e7-a05f-d9719b331a27\",\"panelIndex\":12,\"row\":8,\"size_x\":6,\"size_y\":4,\"type\":\"visualization\"},{\"col\":7,\"id\":\"67d432e0-44ec-11e7-a05f-d9719b331a27\",\"panelIndex\":14,\"row\":4,\"size_x\":6,\"size_y\":4,\"type\":\"visualization\"},{\"col\":10,\"id\":\"297df800-3f7e-11e7-bd24-6903e3283192\",\"panelIndex\":15,\"row\":8,\"size_x\":3,\"size_y\":4,\"type\":\"visualization\"},{\"col\":7,\"id\":\"471a3580-3f6b-11e7-88e7-df1abe6547fb\",\"panelIndex\":20,\"row\":8,\"size_x\":3,\"size_y\":4,\"type\":\"visualization\"},{\"col\":11,\"id\":\"995e2280-3df3-11e7-a44e-c79ca8efb780\",\"panelIndex\":22,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":9,\"id\":\"b2f2adb0-897f-11e7-a2d2-c57bca21b3aa\",\"panelIndex\":23,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":7,\"id\":\"db55bce0-80b3-11e7-8790-73b60225f736\",\"panelIndex\":25,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":5,\"id\":\"d048c220-80b3-11e7-8790-73b60225f736\",\"panelIndex\":26,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":1,\"id\":\"e46ff7f0-897d-11e7-934b-67cec0a7da65\",\"panelIndex\":27,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":3,\"id\":\"c1361da0-80b3-11e7-8790-73b60225f736\",\"panelIndex\":28,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":1,\"id\":\"479deab0-8a39-11e7-a58a-9bfcb3761a3d\",\"panelIndex\":29,\"row\":4,\"size_x\":6,\"size_y\":4,\"type\":\"visualization\"}]",
"optionsJSON": "{\"darkTheme\":false}",
"uiStateJSON": "{\"P-15\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-20\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-21\":{\"vis\":{\"defaultColors\":{\"0 - 100\":\"rgb(0,104,55)\"}}},\"P-22\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-23\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-24\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-25\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-26\":{\"vis\":{\"defaultColors\":{\"0 - 1000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-27\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-28\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-5\":{\"vis\":{\"legendOpen\":false}}}",
"version": 1,
"timeRestore": true,
"timeTo": "now",
"timeFrom": "now-30d",
"timeFrom": "now-1y",
"refreshInterval": {
"display": "Off",
"pause": false,
"value": 0
},
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[{\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}}}],\"highlightAll\":true,\"version\":true}"
"searchSourceJSON": "{\"filter\":[{\"query\":{\"match_all\":{}}}],\"highlightAll\":true,\"version\":true}"
}
}
},
{
"_id": "AWCUqesWib22Ai8JwW3u",
"_type": "dashboard",
"_source": {
"title": "VulnWhisperer - Risk Mitigation",
"hits": 0,
"description": "",
"panelsJSON": "[{\"col\":11,\"id\":\"995e2280-3df3-11e7-a44e-c79ca8efb780\",\"panelIndex\":20,\"row\":8,\"size_x\":2,\"size_y\":6,\"type\":\"visualization\"},{\"col\":1,\"id\":\"852816e0-3eb1-11e7-90cb-918f9cb01e3d\",\"panelIndex\":21,\"row\":10,\"size_x\":3,\"size_y\":5,\"type\":\"visualization\"},{\"col\":4,\"id\":\"297df800-3f7e-11e7-bd24-6903e3283192\",\"panelIndex\":27,\"row\":8,\"size_x\":3,\"size_y\":5,\"type\":\"visualization\"},{\"col\":9,\"id\":\"35b6d320-3f7f-11e7-bd24-6903e3283192\",\"panelIndex\":28,\"row\":8,\"size_x\":2,\"size_y\":6,\"type\":\"visualization\"},{\"col\":11,\"id\":\"471a3580-3f6b-11e7-88e7-df1abe6547fb\",\"panelIndex\":30,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":7,\"id\":\"de1a5f40-3f85-11e7-97f9-3777d794626d\",\"panelIndex\":31,\"row\":8,\"size_x\":2,\"size_y\":5,\"type\":\"visualization\"},{\"col\":10,\"id\":\"5093c620-44e9-11e7-8014-ede06a7e69f8\",\"panelIndex\":37,\"row\":4,\"size_x\":3,\"size_y\":4,\"type\":\"visualization\"},{\"col\":1,\"columns\":[\"host\",\"risk\",\"risk_score\",\"cve\",\"plugin_name\",\"solution\",\"plugin_output\"],\"id\":\"54648700-3f74-11e7-852e-69207a3d0726\",\"panelIndex\":38,\"row\":15,\"size_x\":12,\"size_y\":6,\"sort\":[\"@timestamp\",\"desc\"],\"type\":\"search\"},{\"col\":1,\"id\":\"fb6eb020-49ab-11e7-8f8c-57ad64ec48a6\",\"panelIndex\":39,\"row\":8,\"size_x\":3,\"size_y\":2,\"type\":\"visualization\"},{\"col\":5,\"id\":\"465c5820-8977-11e7-857e-e1d56b17746d\",\"panelIndex\":40,\"row\":4,\"size_x\":5,\"size_y\":4,\"type\":\"visualization\"},{\"col\":1,\"id\":\"56f0f5f0-3ebe-11e7-a192-93f36fbd9d05\",\"panelIndex\":46,\"row\":4,\"size_x\":4,\"size_y\":4,\"type\":\"visualization\"},{\"col\":1,\"id\":\"e46ff7f0-897d-11e7-934b-67cec0a7da65\",\"panelIndex\":47,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":3,\"id\":\"c1361da0-80b3-11e7-8790-73b60225f736\",\"panelIndex\":48,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":5,\"id\":\"d048c220-80b3-11e7-8790-73b60225f736\",\"panelIndex\":49,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":7,\"id\":\"db55bce0-80b3-11e7-8790-73b60225f736\",\"panelIndex\":50,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":9,\"id\":\"b2f2adb0-897f-11e7-a2d2-c57bca21b3aa\",\"panelIndex\":51,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"}]",
"optionsJSON": "{\"darkTheme\":false}",
"uiStateJSON": "{\"P-11\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-2\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-20\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-21\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":0,\"direction\":\"desc\"}}}},\"P-27\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-28\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":0,\"direction\":\"desc\"}}}},\"P-3\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":0,\"direction\":\"asc\"}}}},\"P-30\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-31\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-40\":{\"vis\":{\"defaultColors\":{\"0 - 3\":\"rgb(0,104,55)\",\"3 - 7\":\"rgb(135,203,103)\",\"7 - 9\":\"rgb(255,255,190)\",\"9 - 11\":\"rgb(249,142,82)\"}}},\"P-41\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-42\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-43\":{\"vis\":{\"defaultColors\":{\"0 - 1000\":\"rgb(0,104,55)\"}}},\"P-44\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-45\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-46\":{\"vis\":{\"legendOpen\":true}},\"P-47\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-48\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-49\":{\"vis\":{\"defaultColors\":{\"0 - 1000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-5\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-50\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-51\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-6\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-8\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}}",
"version": 1,
"timeRestore": false,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[{\"query\":{\"match_all\":{}}}],\"highlightAll\":true,\"version\":true}"
}
}
}


@@ -0,0 +1,170 @@
[
{
"_id": "AWCUo-jRib22Ai8JwW1N",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Risk: High Qualys Scoring",
"visState": "{\"title\":\"VulnWhisperer - Risk: High Qualys Scoring\",\"type\":\"goal\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"type\":\"gauge\",\"gauge\":{\"verticalSplit\":false,\"autoExtend\":false,\"percentageMode\":false,\"gaugeType\":\"Metric\",\"gaugeStyle\":\"Full\",\"backStyle\":\"Full\",\"orientation\":\"vertical\",\"useRanges\":false,\"colorSchema\":\"Green to Red\",\"gaugeColorMode\":\"Background\",\"colorsRange\":[{\"from\":0,\"to\":1000}],\"invertColors\":false,\"labels\":{\"show\":false,\"color\":\"black\"},\"scale\":{\"show\":true,\"labels\":false,\"color\":\"#333\",\"width\":2},\"type\":\"simple\",\"style\":{\"bgFill\":\"white\",\"bgColor\":true,\"labelColor\":false,\"subText\":\"\",\"fontSize\":\"34\"},\"extendRange\":true}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"High Risk\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk:high\"}}},\"label\":\"\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"defaultColors\":{\"0 - 1000\":\"rgb(0,104,55)\"},\"legendOpen\":true,\"colors\":{\"0 - 10000\":\"#EF843C\",\"0 - 1000\":\"#E0752D\"}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "AWCUozGBib22Ai8JwW1B",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Risk: Medium Qualys Scoring",
"visState": "{\"title\":\"VulnWhisperer - Risk: Medium Qualys Scoring\",\"type\":\"goal\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"type\":\"gauge\",\"gauge\":{\"verticalSplit\":false,\"autoExtend\":false,\"percentageMode\":false,\"gaugeType\":\"Metric\",\"gaugeStyle\":\"Full\",\"backStyle\":\"Full\",\"orientation\":\"vertical\",\"useRanges\":false,\"colorSchema\":\"Green to Red\",\"gaugeColorMode\":\"Background\",\"colorsRange\":[{\"from\":0,\"to\":10000}],\"invertColors\":false,\"labels\":{\"show\":false,\"color\":\"black\"},\"scale\":{\"show\":true,\"labels\":false,\"color\":\"#333\",\"width\":2},\"type\":\"simple\",\"style\":{\"bgFill\":\"white\",\"bgColor\":true,\"labelColor\":false,\"subText\":\"\",\"fontSize\":\"34\"},\"extendRange\":false}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"Medium Risk\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk:medium\"}}},\"label\":\"Medium Risk\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":true,\"colors\":{\"0 - 10000\":\"#EAB839\"}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "AWCUpE3Kib22Ai8JwW1c",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Risk: Critical Qualys Scoring",
"visState": "{\"title\":\"VulnWhisperer - Risk: Critical Qualys Scoring\",\"type\":\"goal\",\"params\":{\"addLegend\":true,\"addTooltip\":true,\"gauge\":{\"autoExtend\":false,\"backStyle\":\"Full\",\"colorSchema\":\"Green to Red\",\"colorsRange\":[{\"from\":0,\"to\":10000}],\"gaugeColorMode\":\"Background\",\"gaugeStyle\":\"Full\",\"gaugeType\":\"Metric\",\"invertColors\":false,\"labels\":{\"color\":\"black\",\"show\":false},\"orientation\":\"vertical\",\"percentageMode\":false,\"scale\":{\"color\":\"#333\",\"labels\":false,\"show\":true,\"width\":2},\"style\":{\"bgColor\":true,\"bgFill\":\"white\",\"fontSize\":\"34\",\"labelColor\":false,\"subText\":\"Risk\"},\"type\":\"simple\",\"useRanges\":false,\"verticalSplit\":false},\"type\":\"gauge\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"Critical Risk\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk:critical\"}}},\"label\":\"Critical\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"colors\":{\"0 - 10000\":\"#BF1B00\"},\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "AWCUyeHGib22Ai8JwX62",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer-RiskOverTime Qualys Scoring",
"visState": "{\"title\":\"VulnWhisperer-RiskOverTime Qualys Scoring\",\"type\":\"line\",\"params\":{\"addLegend\":true,\"addTimeMarker\":false,\"addTooltip\":true,\"categoryAxes\":[{\"id\":\"CategoryAxis-1\",\"labels\":{\"show\":true,\"truncate\":100},\"position\":\"bottom\",\"scale\":{\"type\":\"linear\"},\"show\":true,\"style\":{},\"title\":{\"text\":\"@timestamp per 12 hours\"},\"type\":\"category\"}],\"defaultYExtents\":false,\"drawLinesBetweenPoints\":true,\"grid\":{\"categoryLines\":false,\"style\":{\"color\":\"#eee\"},\"valueAxis\":\"ValueAxis-1\"},\"interpolate\":\"linear\",\"legendPosition\":\"right\",\"orderBucketsBySum\":false,\"radiusRatio\":9,\"scale\":\"linear\",\"seriesParams\":[{\"data\":{\"id\":\"1\",\"label\":\"Count\"},\"drawLinesBetweenPoints\":true,\"interpolate\":\"linear\",\"mode\":\"normal\",\"show\":\"true\",\"showCircles\":true,\"type\":\"line\",\"valueAxis\":\"ValueAxis-1\"}],\"setYExtents\":false,\"showCircles\":true,\"times\":[],\"valueAxes\":[{\"id\":\"ValueAxis-1\",\"labels\":{\"filter\":false,\"rotate\":0,\"show\":true,\"truncate\":100},\"name\":\"LeftAxis-1\",\"position\":\"left\",\"scale\":{\"mode\":\"normal\",\"type\":\"linear\"},\"show\":true,\"style\":{},\"title\":{\"text\":\"Count\"},\"type\":\"value\"}],\"type\":\"line\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{}}},{\"id\":\"3\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk:info\"}}},\"label\":\"Info\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk:low\"}}},\"label\":\"Low\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk:medium\"}}},\"label\":\"Medium\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk:high\"}}},\"label\":\"High\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk:critical\"}}},\"label\":\"Critical\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"colors\":{\"Critical\":\"#962D82\",\"High\":\"#BF1B00\",\"Low\":\"#629E51\",\"Medium\":\"#EAB839\",\"Info\":\"#65C5DB\"}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "AWCUos-Fib22Ai8JwW0y",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Risk: Low Qualys Scoring",
"visState": "{\"title\":\"VulnWhisperer - Risk: Low Qualys Scoring\",\"type\":\"goal\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"type\":\"gauge\",\"gauge\":{\"verticalSplit\":false,\"autoExtend\":false,\"percentageMode\":false,\"gaugeType\":\"Metric\",\"gaugeStyle\":\"Full\",\"backStyle\":\"Full\",\"orientation\":\"vertical\",\"useRanges\":false,\"colorSchema\":\"Green to Red\",\"gaugeColorMode\":\"Background\",\"colorsRange\":[{\"from\":0,\"to\":10000}],\"invertColors\":false,\"labels\":{\"show\":false,\"color\":\"black\"},\"scale\":{\"show\":true,\"labels\":false,\"color\":\"#333\",\"width\":2},\"type\":\"simple\",\"style\":{\"bgFill\":\"white\",\"bgColor\":true,\"labelColor\":false,\"subText\":\"\",\"fontSize\":\"34\"},\"extendRange\":false}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"Low Risk\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk:low\"}}},\"label\":\"Low Risk\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":true,\"colors\":{\"0 - 10000\":\"#629E51\"}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "AWCg9Wsfib22Ai8Jww3v",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Qualys: Category Description",
"visState": "{\"title\":\"VulnWhisperer - Qualys: Category Description\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\",\"type\":\"table\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"category_description.keyword\",\"size\":20,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Category Description\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"match_all\":{}},\"filter\":[]}"
}
}
},
{
"_id": "AWCg88f1ib22Ai8Jww3C",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - QualysOS",
"visState": "{\"title\":\"VulnWhisperer - QualysOS\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\",\"type\":\"table\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"operating_system.keyword\",\"size\":20,\"order\":\"desc\",\"orderBy\":\"1\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"match_all\":{}},\"filter\":[]}"
}
}
},
{
"_id": "AWCg9JUAib22Ai8Jww3Y",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - QualysOwner",
"visState": "{\"title\":\"VulnWhisperer - QualysOwner\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\",\"type\":\"table\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"owner.keyword\",\"size\":20,\"order\":\"desc\",\"orderBy\":\"1\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"match_all\":{}},\"filter\":[]}"
}
}
},
{
"_id": "AWCg9tE6ib22Ai8Jww4R",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Qualys: Impact",
"visState": "{\"title\":\"VulnWhisperer - Qualys: Impact\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\",\"type\":\"table\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"impact.keyword\",\"size\":20,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Impact\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"match_all\":{}},\"filter\":[]}"
}
}
},
{
"_id": "AWCg9igvib22Ai8Jww36",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Qualys: Level",
"visState": "{\"title\":\"VulnWhisperer - Qualys: Level\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\",\"type\":\"table\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"level.keyword\",\"size\":20,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Level\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"match_all\":{}},\"filter\":[]}"
}
}
},
{
"_id": "AWCUsp_3ib22Ai8JwW7R",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - TL-Critical Risk Qualys Scoring",
"visState": "{\"title\":\"VulnWhisperer - TL-Critical Risk Qualys Scoring\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-vulnwhisperer-*',q='(risk:critical)').label(\\\"Original\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk:critical)',offset=-1w).label(\\\"One week offset\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk:critical)').subtract(.es(index='logstash-vulnwhisperer-*',q='(risk:critical)',offset=-1w)).label(\\\"Difference\\\").lines(steps=3,fill=2,width=1)\",\"interval\":\"auto\",\"type\":\"timelion\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "AWCUtHETib22Ai8JwW79",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - TL-High Risk Qualys Scoring",
"visState": "{\"title\":\"VulnWhisperer - TL-High Risk Qualys Scoring\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-vulnwhisperer-*',q='(risk:high)').label(\\\"Original\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk:high)',offset=-1w).label(\\\"One week offset\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk:high)').subtract(.es(index='logstash-vulnwhisperer-*',q='(risk:high)',offset=-1w)).label(\\\"Difference\\\").lines(steps=3,fill=2,width=1)\",\"interval\":\"auto\",\"type\":\"timelion\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
}
]


@@ -0,0 +1,50 @@
[
{
"_id": "AWCUrIBqib22Ai8JwW43",
"_type": "dashboard",
"_source": {
"title": "VulnWhisperer - Reporting Qualys Scoring",
"hits": 0,
"description": "",
"panelsJSON": "[{\"col\":1,\"id\":\"2f979030-44b9-11e7-a818-f5f80dfc3590\",\"panelIndex\":5,\"row\":11,\"size_x\":6,\"size_y\":4,\"type\":\"visualization\"},{\"col\":10,\"id\":\"297df800-3f7e-11e7-bd24-6903e3283192\",\"panelIndex\":15,\"row\":7,\"size_x\":3,\"size_y\":4,\"type\":\"visualization\"},{\"col\":7,\"id\":\"471a3580-3f6b-11e7-88e7-df1abe6547fb\",\"panelIndex\":20,\"row\":7,\"size_x\":3,\"size_y\":4,\"type\":\"visualization\"},{\"col\":11,\"id\":\"995e2280-3df3-11e7-a44e-c79ca8efb780\",\"panelIndex\":22,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":9,\"id\":\"b2f2adb0-897f-11e7-a2d2-c57bca21b3aa\",\"panelIndex\":23,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":1,\"id\":\"479deab0-8a39-11e7-a58a-9bfcb3761a3d\",\"panelIndex\":29,\"row\":4,\"size_x\":6,\"size_y\":4,\"type\":\"visualization\"},{\"size_x\":6,\"size_y\":3,\"panelIndex\":30,\"type\":\"visualization\",\"id\":\"AWCUtHETib22Ai8JwW79\",\"col\":1,\"row\":8},{\"size_x\":6,\"size_y\":3,\"panelIndex\":31,\"type\":\"visualization\",\"id\":\"AWCUsp_3ib22Ai8JwW7R\",\"col\":7,\"row\":4},{\"size_x\":2,\"size_y\":3,\"panelIndex\":33,\"type\":\"visualization\",\"id\":\"AWCUozGBib22Ai8JwW1B\",\"col\":3,\"row\":1},{\"size_x\":2,\"size_y\":3,\"panelIndex\":34,\"type\":\"visualization\",\"id\":\"AWCUo-jRib22Ai8JwW1N\",\"col\":5,\"row\":1},{\"size_x\":2,\"size_y\":3,\"panelIndex\":35,\"type\":\"visualization\",\"id\":\"AWCUpE3Kib22Ai8JwW1c\",\"col\":7,\"row\":1},{\"size_x\":2,\"size_y\":3,\"panelIndex\":36,\"type\":\"visualization\",\"id\":\"AWCUos-Fib22Ai8JwW0y\",\"col\":1,\"row\":1}]",
"optionsJSON": "{\"darkTheme\":false}",
"uiStateJSON": "{\"P-15\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-20\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-21\":{\"vis\":{\"defaultColors\":{\"0 - 100\":\"rgb(0,104,55)\"}}},\"P-22\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-23\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-24\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-5\":{\"vis\":{\"legendOpen\":false}},\"P-33\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-34\":{\"vis\":{\"defaultColors\":{\"0 - 1000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-35\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-27\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-28\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-26\":{\"vis\":{\"defaultColors\":{\"0 - 1000\":\"rgb(0,104,55)\"}}},\"P-25\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-32\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-36\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}}}",
"version": 1,
"timeRestore": true,
"timeTo": "now",
"timeFrom": "now-30d",
"refreshInterval": {
"display": "Off",
"pause": false,
"value": 0
},
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[{\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"-vulnerability_category:\\\"INFORMATION_GATHERED\\\"\"}}}],\"highlightAll\":true,\"version\":true}"
}
}
},
{
"_id": "5dba30c0-3df3-11e7-a44e-c79ca8efb780",
"_type": "dashboard",
"_source": {
"title": "VulnWhisperer - Risk Mitigation Qualys Web Scoring",
"hits": 0,
"description": "",
"panelsJSON": "[{\"col\":11,\"id\":\"995e2280-3df3-11e7-a44e-c79ca8efb780\",\"panelIndex\":20,\"row\":8,\"size_x\":2,\"size_y\":7,\"type\":\"visualization\"},{\"col\":1,\"id\":\"852816e0-3eb1-11e7-90cb-918f9cb01e3d\",\"panelIndex\":21,\"row\":10,\"size_x\":3,\"size_y\":5,\"type\":\"visualization\"},{\"col\":4,\"id\":\"297df800-3f7e-11e7-bd24-6903e3283192\",\"panelIndex\":27,\"row\":8,\"size_x\":3,\"size_y\":4,\"type\":\"visualization\"},{\"col\":9,\"id\":\"35b6d320-3f7f-11e7-bd24-6903e3283192\",\"panelIndex\":28,\"row\":8,\"size_x\":2,\"size_y\":7,\"type\":\"visualization\"},{\"col\":11,\"id\":\"471a3580-3f6b-11e7-88e7-df1abe6547fb\",\"panelIndex\":30,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":7,\"id\":\"de1a5f40-3f85-11e7-97f9-3777d794626d\",\"panelIndex\":31,\"row\":8,\"size_x\":2,\"size_y\":4,\"type\":\"visualization\"},{\"col\":10,\"id\":\"5093c620-44e9-11e7-8014-ede06a7e69f8\",\"panelIndex\":37,\"row\":4,\"size_x\":3,\"size_y\":4,\"type\":\"visualization\"},{\"col\":1,\"columns\":[\"host\",\"risk\",\"risk_score\",\"cve\",\"plugin_name\",\"solution\",\"plugin_output\"],\"id\":\"54648700-3f74-11e7-852e-69207a3d0726\",\"panelIndex\":38,\"row\":15,\"size_x\":12,\"size_y\":6,\"sort\":[\"@timestamp\",\"desc\"],\"type\":\"search\"},{\"col\":1,\"id\":\"fb6eb020-49ab-11e7-8f8c-57ad64ec48a6\",\"panelIndex\":39,\"row\":8,\"size_x\":3,\"size_y\":2,\"type\":\"visualization\"},{\"col\":5,\"id\":\"465c5820-8977-11e7-857e-e1d56b17746d\",\"panelIndex\":40,\"row\":4,\"size_x\":5,\"size_y\":4,\"type\":\"visualization\"},{\"col\":9,\"id\":\"b2f2adb0-897f-11e7-a2d2-c57bca21b3aa\",\"panelIndex\":45,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":1,\"id\":\"AWCUos-Fib22Ai8JwW0y\",\"panelIndex\":47,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":3,\"id\":\"AWCUozGBib22Ai8JwW1B\",\"panelIndex\":48,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":5,\"id\":\"AWCUo-jRib22Ai8JwW1N\",\"panelIndex\":49,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":7,\"id\":\"AWCUpE3Kib22Ai8JwW1c\",\"panelIndex\":50,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":1,\"id\":\"AWCUyeHGib22Ai8JwX62\",\"panelIndex\":51,\"row\":4,\"size_x\":4,\"size_y\":4,\"type\":\"visualization\"},{\"col\":4,\"id\":\"AWCg88f1ib22Ai8Jww3C\",\"panelIndex\":52,\"row\":12,\"size_x\":3,\"size_y\":3,\"type\":\"visualization\"},{\"col\":7,\"id\":\"AWCg9JUAib22Ai8Jww3Y\",\"panelIndex\":53,\"row\":12,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"}]",
"optionsJSON": "{\"darkTheme\":false}",
"uiStateJSON": "{\"P-11\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-2\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-20\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-21\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":0,\"direction\":\"desc\"}}}},\"P-27\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-28\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":0,\"direction\":\"desc\"}}}},\"P-3\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":0,\"direction\":\"asc\"}}}},\"P-30\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-31\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-40\":{\"vis\":{\"defaultColors\":{\"0 - 3\":\"rgb(0,104,55)\",\"3 - 7\":\"rgb(135,203,103)\",\"7 - 9\":\"rgb(255,255,190)\",\"9 - 11\":\"rgb(249,142,82)\"}}},\"P-41\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-42\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-43\":{\"vis\":{\"defaultColors\":{\"0 - 1000\":\"rgb(0,104,55)\"}}},\"P-44\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-45\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-47\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-48\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-49\":{\"vis\":{\"defaultColors\":{\"0 - 1000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-5\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-50\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-6\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-8\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-52\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-53\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}}",
"version": 1,
"timeRestore": true,
"timeTo": "now",
"timeFrom": "now-30d",
"refreshInterval": {
"display": "Off",
"pause": false,
"value": 0
},
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[{\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"-vulnerability_category:\\\"INFORMATION_GATHERED\\\"\"}}}],\"highlightAll\":true,\"version\":true}"
}
}
}
]


@@ -3,7 +3,7 @@
"_id": "54648700-3f74-11e7-852e-69207a3d0726",
"_type": "search",
"_source": {
"title": "Nessus - Saved Search",
"title": "VulnWhisperer - Saved Search",
"description": "",
"hits": 0,
"columns": [
@@ -21,7 +21,7 @@
],
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-nessus-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[],\"highlight\":{\"pre_tags\":[\"@kibana-highlighted-field@\"],\"post_tags\":[\"@/kibana-highlighted-field@\"],\"fields\":{\"*\":{}},\"require_field_match\":false,\"fragment_size\":2147483647}}"
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[],\"highlight\":{\"pre_tags\":[\"@kibana-highlighted-field@\"],\"post_tags\":[\"@/kibana-highlighted-field@\"],\"fields\":{\"*\":{}},\"require_field_match\":false,\"fragment_size\":2147483647}}"
}
}
}


@@ -0,0 +1,220 @@
# Author: Austin Taylor and Justin Henderson
# Email: email@austintaylor.io
# Last Update: 12/20/2017
# Version 0.3
# Description: Takes in Nessus reports from VulnWhisperer and pumps them into Logstash
input {
file {
path => "/opt/VulnWhisperer/nessus/**/*"
start_position => "beginning"
tags => "nessus"
type => "nessus"
}
file {
path => "/opt/VulnWhisperer/tenable/*.csv"
start_position => "beginning"
tags => "tenable"
type => "tenable"
}
}
filter {
if "nessus" in [tags] or "tenable" in [tags] {
# Drop the header column
if [message] =~ "^Plugin ID" { drop {} }
csv {
# columns => ["plugin_id", "cve", "cvss", "risk", "asset", "protocol", "port", "plugin_name", "synopsis", "description", "solution", "see_also", "plugin_output"]
columns => ["plugin_id", "cve", "cvss", "risk", "asset", "protocol", "port", "plugin_name", "synopsis", "description", "solution", "see_also", "plugin_output", "asset_uuid", "vulnerability_state", "ip", "fqdn", "netbios", "operating_system", "mac_address", "plugin_family", "cvss_base", "cvss_temporal", "cvss_temporal_vector", "cvss_vector", "cvss3_base", "cvss3_temporal", "cvss3_temporal_vector", "cvss3_vector", "system_type", "host_start", "host_end"]
separator => ","
source => "message"
}
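# Convert literal backslash-escape sequences left in the CSV text fields into real
# newlines/carriage returns (92.chr is "\", 10.chr is "\n", 13.chr is "\r")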
ruby {
code => "if event.get('description')
event.set('description', event.get('description').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('synopsis')
event.set('synopsis', event.get('synopsis').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('solution')
event.set('solution', event.get('solution').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('see_also')
event.set('see_also', event.get('see_also').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('plugin_output')
event.set('plugin_output', event.get('plugin_output').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end"
}
# If using Filebeat as your source, replace the "path" field below with "source" (a commented example follows the grok block)
grok {
match => { "path" => "(?<scan_name>[a-zA-Z0-9_.\-]+)_%{INT:scan_id}_%{INT:history_id}_%{INT:last_updated}.csv$" }
tag_on_failure => []
}
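# A minimal sketch of the same match for Filebeat-shipped events, assuming the file
# path arrives in the "source" field instead of "path" (uncomment to use):
#grok {
#  match => { "source" => "(?<scan_name>[a-zA-Z0-9_.\-]+)_%{INT:scan_id}_%{INT:history_id}_%{INT:last_updated}.csv$" }
#  tag_on_failure => []
#}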
date {
match => [ "last_updated", "UNIX" ]
target => "@timestamp"
remove_field => ["last_updated"]
}
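# Map the textual Nessus risk rating to a numeric value for sorting and aggregation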
if [risk] == "None" {
mutate { add_field => { "risk_number" => 0 }}
}
if [risk] == "Low" {
mutate { add_field => { "risk_number" => 1 }}
}
if [risk] == "Medium" {
mutate { add_field => { "risk_number" => 2 }}
}
if [risk] == "High" {
mutate { add_field => { "risk_number" => 3 }}
}
if [risk] == "Critical" {
mutate { add_field => { "risk_number" => 4 }}
}
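# Remove optional fields that are absent or contain the literal string "nan" (an empty-value placeholder from the export)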
if ![cve] or [cve] == "nan" {
mutate { remove_field => [ "cve" ] }
}
if ![cvss] or [cvss] == "nan" {
mutate { remove_field => [ "cvss" ] }
}
if ![cvss_base] or [cvss_base] == "nan" {
mutate { remove_field => [ "cvss_base" ] }
}
if ![cvss_temporal] or [cvss_temporal] == "nan" {
mutate { remove_field => [ "cvss_temporal" ] }
}
if ![cvss_temporal_vector] or [cvss_temporal_vector] == "nan" {
mutate { remove_field => [ "cvss_temporal_vector" ] }
}
if ![cvss_vector] or [cvss_vector] == "nan" {
mutate { remove_field => [ "cvss_vector" ] }
}
if ![cvss3_base] or [cvss3_base] == "nan" {
mutate { remove_field => [ "cvss3_base" ] }
}
if ![cvss3_temporal] or [cvss3_temporal] == "nan" {
mutate { remove_field => [ "cvss3_temporal" ] }
}
if ![cvss3_temporal_vector] or [cvss3_temporal_vector] == "nan" {
mutate { remove_field => [ "cvss3_temporal_vector" ] }
}
if ![description] or [description] == "nan" {
mutate { remove_field => [ "description" ] }
}
if ![mac_address] or [mac_address] == "nan" {
mutate { remove_field => [ "mac_address" ] }
}
if ![netbios] or [netbios] == "nan" {
mutate { remove_field => [ "netbios" ] }
}
if ![operating_system] or [operating_system] == "nan" {
mutate { remove_field => [ "operating_system" ] }
}
if ![plugin_output] or [plugin_output] == "nan" {
mutate { remove_field => [ "plugin_output" ] }
}
if ![see_also] or [see_also] == "nan" {
mutate { remove_field => [ "see_also" ] }
}
if ![synopsis] or [synopsis] == "nan" {
mutate { remove_field => [ "synopsis" ] }
}
if ![system_type] or [system_type] == "nan" {
mutate { remove_field => [ "system_type" ] }
}
mutate {
remove_field => [ "message" ]
add_field => { "risk_score" => "%{cvss}" }
}
mutate {
convert => { "risk_score" => "float" }
}
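# Bucket the CVSS-based risk_score into named severity bands (info/low/medium/high/critical)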
if [risk_score] == 0 {
mutate {
add_field => { "risk_score_name" => "info" }
}
}
if [risk_score] > 0 and [risk_score] < 3 {
mutate {
add_field => { "risk_score_name" => "low" }
}
}
if [risk_score] >= 3 and [risk_score] < 6 {
mutate {
add_field => { "risk_score_name" => "medium" }
}
}
if [risk_score] >= 6 and [risk_score] < 9 {
mutate {
add_field => { "risk_score_name" => "high" }
}
}
if [risk_score] >= 9 {
mutate {
add_field => { "risk_score_name" => "critical" }
}
}
# Compensating controls - adjust risk_score
# Example: Adobe and Java are not allowed to run in the browser unless whitelisted,
# so lower the score by dividing it by 3 (the adjustment is subjective; tune it to your risk)
# Modify and uncomment when ready to use
#if [risk_score] != 0 {
# if [plugin_name] =~ "Adobe" and [risk_score] > 6 or [plugin_name] =~ "Java" and [risk_score] > 6 {
# ruby {
# code => "event.set('risk_score', event.get('risk_score') / 3)"
# }
# mutate {
# add_field => { "compensating_control" => "Adobe and Flash removed from browsers unless whitelisted site." }
# }
# }
#}
# Add tags for reporting based on assets or criticality
if [asset] == "dc01" or [asset] == "dc02" or [asset] == "pki01" or [asset] == "192.168.0.54" or [asset] =~ "^192\.168\.0\." or [asset] =~ "^42\.42\.42\." {
mutate {
add_tag => [ "critical_asset" ]
}
}
#if [asset] =~ "^192\.168\.[45][0-9][0-9]\.1$" or [asset] =~ "^192.168\.[50]\.[0-9]{1,2}\.1$"{
# mutate {
# add_tag => [ "has_hipaa_data" ]
# }
#}
#if [asset] =~ "^192\.168\.[45][0-9][0-9]\." {
# mutate {
# add_tag => [ "hipaa_asset" ]
# }
#}
if [asset] =~ "^hr" {
mutate {
add_tag => [ "pci_asset" ]
}
}
#if [asset] =~ "^10\.0\.50\." {
# mutate {
# add_tag => [ "web_servers" ]
# }
#}
}
}
output {
if "nessus" in [tags] or "tenable" in [tags] or [type] in [ "nessus", "tenable" ] {
# stdout { codec => rubydebug }
elasticsearch {
hosts => [ "localhost:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}


@@ -0,0 +1,153 @@
# Author: Austin Taylor and Justin Henderson
# Email: austin@hasecuritysolutions.com
# Last Update: 12/30/2017
# Version 0.3
# Description: Takes in Qualys scan reports (web and VM) from VulnWhisperer and pumps them into Logstash
input {
file {
path => [ "/opt/VulnWhisperer/data/qualys/*.json" , "/opt/VulnWhisperer/data/qualys_web/*.json", "/opt/VulnWhisperer/data/qualys_vuln/*.json" ]
type => json
codec => json
start_position => "beginning"
tags => [ "qualys" ]
}
}
filter {
if "qualys" in [tags] {
grok {
match => { "path" => [ "(?<tags>qualys_vuln)_scan_%{DATA}_%{INT:last_updated}.json$", "(?<tags>qualys_web)_%{INT:app_id}_%{INT:last_updated}.json$" ] }
tag_on_failure => []
}
mutate {
replace => [ "message", "%{message}" ]
#gsub => [
# "message", "\|\|\|", " ",
# "message", "\t\t", " ",
# "message", " ", " ",
# "message", " ", " ",
# "message", " ", " ",
# "message", "nan", " ",
# "message",'\n',''
#]
}
if "qualys_web" in [tags] {
mutate {
add_field => { "asset" => "%{web_application_name}" }
add_field => { "risk_score" => "%{cvss}" }
}
} else if "qualys_vuln" in [tags] {
mutate {
add_field => { "asset" => "%{ip}" }
add_field => { "risk_score" => "%{cvss}" }
}
}
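# Map Qualys severity levels (1-5) to normalized risk names and numeric values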
if [risk] == "1" {
mutate { add_field => { "risk_number" => 0 }}
mutate { replace => { "risk" => "info" }}
}
if [risk] == "2" {
mutate { add_field => { "risk_number" => 1 }}
mutate { replace => { "risk" => "low" }}
}
if [risk] == "3" {
mutate { add_field => { "risk_number" => 2 }}
mutate { replace => { "risk" => "medium" }}
}
if [risk] == "4" {
mutate { add_field => { "risk_number" => 3 }}
mutate { replace => { "risk" => "high" }}
}
if [risk] == "5" {
mutate { add_field => { "risk_number" => 4 }}
mutate { replace => { "risk" => "critical" }}
}
mutate {
remove_field => "message"
}
if [first_time_detected] {
date {
match => [ "first_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_detected"
}
}
if [first_time_tested] {
date {
match => [ "first_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_tested"
}
}
if [last_time_detected] {
date {
match => [ "last_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_detected"
}
}
if [last_time_tested] {
date {
match => [ "last_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_tested"
}
}
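# Use the epoch timestamp parsed from the filename as the event @timestamp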
date {
match => [ "last_updated", "UNIX" ]
target => "@timestamp"
remove_field => "last_updated"
}
mutate {
convert => { "plugin_id" => "integer"}
convert => { "id" => "integer"}
convert => { "risk_number" => "integer"}
convert => { "risk_score" => "float"}
convert => { "total_times_detected" => "integer"}
convert => { "cvss_temporal" => "float"}
convert => { "cvss" => "float"}
}
if [risk_score] == 0 {
mutate {
add_field => { "risk_score_name" => "info" }
}
}
if [risk_score] > 0 and [risk_score] < 3 {
mutate {
add_field => { "risk_score_name" => "low" }
}
}
if [risk_score] >= 3 and [risk_score] < 6 {
mutate {
add_field => { "risk_score_name" => "medium" }
}
}
if [risk_score] >= 6 and [risk_score] < 9 {
mutate {
add_field => { "risk_score_name" => "high" }
}
}
if [risk_score] >= 9 {
mutate {
add_field => { "risk_score_name" => "critical" }
}
}
if [asset] =~ "\.yourdomain\.(com|net)$" {
mutate {
add_tag => [ "critical_asset" ]
}
}
}
}
output {
if "qualys" in [tags] {
stdout { codec => rubydebug }
elasticsearch {
hosts => [ "localhost:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}


@@ -0,0 +1,146 @@
# Author: Austin Taylor and Justin Henderson
# Email: austin@hasecuritysolutions.com
# Last Update: 03/04/2018
# Version 0.3
# Description: Takes in OpenVAS scan reports from VulnWhisperer and pumps them into Logstash
input {
file {
path => "/opt/VulnWhisperer/openvas/*.json"
type => json
codec => json
start_position => "beginning"
tags => [ "openvas_scan", "openvas" ]
}
}
filter {
if "openvas_scan" in [tags] {
mutate {
replace => [ "message", "%{message}" ]
gsub => [
"message", "\|\|\|", " ",
"message", "\t\t", " ",
"message", " ", " ",
"message", " ", " ",
"message", " ", " ",
"message", "nan", " ",
"message",'\n',''
]
}
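# Extract the scan_id and the last_updated epoch from the report filename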
grok {
match => { "path" => "openvas_scan_%{DATA:scan_id}_%{INT:last_updated}.json$" }
tag_on_failure => []
}
mutate {
add_field => { "risk_score" => "%{cvss}" }
}
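# Map OpenVAS severity levels (1-5) to normalized risk names and numeric values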
if [risk] == "1" {
mutate { add_field => { "risk_number" => 0 }}
mutate { replace => { "risk" => "info" }}
}
if [risk] == "2" {
mutate { add_field => { "risk_number" => 1 }}
mutate { replace => { "risk" => "low" }}
}
if [risk] == "3" {
mutate { add_field => { "risk_number" => 2 }}
mutate { replace => { "risk" => "medium" }}
}
if [risk] == "4" {
mutate { add_field => { "risk_number" => 3 }}
mutate { replace => { "risk" => "high" }}
}
if [risk] == "5" {
mutate { add_field => { "risk_number" => 4 }}
mutate { replace => { "risk" => "critical" }}
}
mutate {
remove_field => "message"
}
if [first_time_detected] {
date {
match => [ "first_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_detected"
}
}
if [first_time_tested] {
date {
match => [ "first_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_tested"
}
}
if [last_time_detected] {
date {
match => [ "last_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_detected"
}
}
if [last_time_tested] {
date {
match => [ "last_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_tested"
}
}
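# Use the epoch timestamp parsed from the filename as the event @timestamp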
date {
match => [ "last_updated", "UNIX" ]
target => "@timestamp"
remove_field => "last_updated"
}
mutate {
convert => { "plugin_id" => "integer"}
convert => { "id" => "integer"}
convert => { "risk_number" => "integer"}
convert => { "risk_score" => "float"}
convert => { "total_times_detected" => "integer"}
convert => { "cvss_temporal" => "float"}
convert => { "cvss" => "float"}
}
if [risk_score] == 0 {
mutate {
add_field => { "risk_score_name" => "info" }
}
}
if [risk_score] > 0 and [risk_score] < 3 {
mutate {
add_field => { "risk_score_name" => "low" }
}
}
if [risk_score] >= 3 and [risk_score] < 6 {
mutate {
add_field => { "risk_score_name" => "medium" }
}
}
if [risk_score] >= 6 and [risk_score] < 9 {
mutate {
add_field => { "risk_score_name" => "high" }
}
}
if [risk_score] >= 9 {
mutate {
add_field => { "risk_score_name" => "critical" }
}
}
# Add your critical assets by subnet or by hostname. Comment this block out if you don't want to tag any, but note that the critical-asset panel will then break.
if [asset] =~ "^10\.0\.100\." {
mutate {
add_tag => [ "critical_asset" ]
}
}
}
}
output {
if "openvas" in [tags] {
stdout { codec => rubydebug }
elasticsearch {
hosts => [ "localhost:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}


@@ -0,0 +1,21 @@
# Description: Takes in JIRA tickets from VulnWhisperer and pumps them into Logstash
input {
file {
path => "/opt/VulnWhisperer/jira/*.json"
type => json
codec => json
start_position => "beginning"
tags => [ "jira" ]
}
}
output {
if "jira" in [tags] {
stdout { codec => rubydebug }
elasticsearch {
hosts => [ "localhost:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}

116 resources/elk6/filebeat.yml Normal file

@@ -0,0 +1,116 @@
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.full.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
#=========================== Filebeat prospectors =============================
filebeat.prospectors:
# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.
- input_type: log
# Paths that should be crawled and fetched. Glob based paths.
paths:
# Linux Example
#- /var/log/*.log
#Windows Example
- c:\nessus\My Scans\*
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
#exclude_lines: ["^DBG"]
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
#include_lines: ["^ERR", "^WARN"]
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
#exclude_files: [".gz$"]
# Optional additional fields. These fields can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1
### Multiline options
# Multiline can be used for log messages spanning multiple lines. This is common
# for Java Stack Traces or C-Line Continuation
# The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
#multiline.pattern: ^\[
# Defines if the pattern set under pattern should be negated or not. Default is false.
#multiline.negate: false
# Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
# that was (not) matched before or after, or as long as a pattern is not matched based on negate.
# Note: After is the equivalent to previous and before is the equivalent to next in Logstash
#multiline.match: after
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
#================================ Outputs =====================================
# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
# Array of hosts to connect to.
# hosts: ["logstash01:9200"]
# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "elastic"
#password: "changeme"
#----------------------------- Logstash output --------------------------------
output.logstash:
# The Logstash hosts
hosts: ["logstashserver1:5044", "logstashserver2:5044", "logstashserver3:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

resources/elk6/init_kibana.sh Executable file

@ -0,0 +1,52 @@
#!/bin/bash
#kibana_url="localhost:5601"
kibana_url="kibana.local:5601"
elasticsearch_url="elasticsearch.local:9200"
add_saved_objects="curl -s -u elastic:changeme -k -XPOST 'http://"$kibana_url"/api/saved_objects/_bulk_create' -H 'Content-Type: application/json' -H \"kbn-xsrf: true\" -d @"
#Create all saved objects - including index pattern
saved_objects_file="kibana_APIonly.json"
#if [ `curl -I localhost:5601/status | head -n1 |cut -d$' ' -f2` -eq '200' ]; then echo "Loading VulnWhisperer Saved Objects"; eval $(echo $add_saved_objects$saved_objects_file); else echo "waiting for kibana"; fi
until curl -s "$elasticsearch_url/_cluster/health?pretty" | grep '"status"' | grep -qE "green|yellow"; do
curl -s "$elasticsearch_url/_cluster/health?pretty"
echo "Waiting for Elasticsearch..."
sleep 5
done
count=0
until curl -s --fail -XPUT "http://$elasticsearch_url/_template/vulnwhisperer" -H 'Content-Type: application/json' -d '@/opt/index-template.json'; do
echo "Loading VulnWhisperer index template..."
((count++)) && ((count==60)) && break
sleep 1
done
if [[ count -le 60 && $(curl -s -I http://$elasticsearch_url/_template/vulnwhisperer | head -n1 |cut -d$' ' -f2) == "200" ]]; then
echo -e "\n✅ VulnWhisperer index template loaded"
else
echo -e "\n❌ VulnWhisperer index template failed to load"
fi
until [ "`curl -s -I "$kibana_url"/status | head -n1 |cut -d$' ' -f2`" == "200" ]; do
curl -s -I "$kibana_url"/status
echo "Waiting for Kibana..."
sleep 5
done
echo "Loading VulnWhisperer Saved Objects"
echo $add_saved_objects$saved_objects_file
eval $(echo $add_saved_objects$saved_objects_file)
#set "*" as default index
#id_default_index="87f3bcc0-8b37-11e8-83be-afaed4786d8c"
#os.system("curl -X POST -H \"Content-Type: application/json\" -H \"kbn-xsrf: true\" -d '{\"value\":\""+id_default_index+"\"}' http://elastic:changeme@"+kibana_url+"kibana/settings/defaultIndex")
#Create vulnwhisperer index pattern
#index_name = "logstash-vulnwhisperer-*"
#os.system(add_index+index_name+"' '-d{\"attributes\":{\"title\":\""+index_name+"\",\"timeFieldName\":\"@timestamp\"}}'")
#Create jira index pattern, kept separate so that its fields do not clutter the Discover tab by default
#index_name = "logstash-jira-*"
#os.system(add_index+index_name+"' '-d{\"attributes\":{\"title\":\""+index_name+"\",\"timeFieldName\":\"@timestamp\"}}'")
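The script's wait-then-load flow can be sketched in Python for clarity; this is an illustration only, reusing the URLs, credentials, and saved-objects file from the script above:

import time

import requests

ES = "http://elasticsearch.local:9200"
KIBANA = "http://kibana.local:5601"

def wait_for(url, healthy):
    # Poll every 5 seconds until the predicate accepts a response.
    while True:
        try:
            if healthy(requests.get(url, timeout=5)):
                return
        except requests.RequestException:
            pass
        time.sleep(5)

wait_for(ES + "/_cluster/health", lambda r: r.json().get("status") in ("green", "yellow"))
wait_for(KIBANA + "/status", lambda r: r.status_code == 200)

with open("kibana_APIonly.json") as f:
    resp = requests.post(KIBANA + "/api/saved_objects/_bulk_create",
                         data=f.read(),
                         headers={"Content-Type": "application/json", "kbn-xsrf": "true"},
                         auth=("elastic", "changeme"))
resp.raise_for_status()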

resources/elk6/kibana.json Normal file

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@ -0,0 +1,233 @@
{
"index_patterns": "logstash-vulnwhisperer-*",
"mappings": {
"doc": {
"properties": {
"@timestamp": {
"type": "date"
},
"@version": {
"type": "keyword"
},
"asset": {
"type": "text",
"norms": false,
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"asset_uuid": {
"type": "keyword"
},
"assign_ip": {
"type": "ip"
},
"category": {
"type": "keyword"
},
"cve": {
"type": "keyword"
},
"cvss_base": {
"type": "float"
},
"cvss_temporal_vector": {
"type": "keyword"
},
"cvss_temporal": {
"type": "float"
},
"cvss_vector": {
"type": "keyword"
},
"cvss": {
"type": "float"
},
"cvss3_base": {
"type": "float"
},
"cvss3_temporal_vector": {
"type": "keyword"
},
"cvss3_temporal": {
"type": "float"
},
"cvss3_vector": {
"type": "keyword"
},
"cvss3": {
"type": "float"
},
"description": {
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
},
"norms": false,
"type": "text"
},
"dns": {
"type": "keyword"
},
"exploitability": {
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
},
"norms": false,
"type": "text"
},
"fqdn": {
"type": "keyword"
},
"geoip": {
"dynamic": true,
"type": "object",
"properties": {
"ip": {
"type": "ip"
},
"latitude": {
"type": "float"
},
"location": {
"type": "geo_point"
},
"longitude": {
"type": "float"
}
}
},
"history_id": {
"type": "keyword"
},
"host": {
"type": "keyword"
},
"host_end": {
"type": "date"
},
"host_start": {
"type": "date"
},
"impact": {
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
},
"norms": false,
"type": "text"
},
"ip_status": {
"type": "keyword"
},
"ip": {
"type": "ip"
},
"last_updated": {
"type": "date"
},
"operating_system": {
"type": "keyword"
},
"path": {
"type": "keyword"
},
"pci_vuln": {
"type": "keyword"
},
"plugin_family": {
"type": "keyword"
},
"plugin_id": {
"type": "keyword"
},
"plugin_name": {
"type": "keyword"
},
"plugin_output": {
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
},
"norms": false,
"type": "text"
},
"port": {
"type": "integer"
},
"protocol": {
"type": "keyword"
},
"results": {
"type": "text"
},
"risk_number": {
"type": "integer"
},
"risk_score_name": {
"type": "keyword"
},
"risk_score": {
"type": "float"
},
"risk": {
"type": "keyword"
},
"scan_id": {
"type": "keyword"
},
"scan_name": {
"type": "keyword"
},
"scan_reference": {
"type": "keyword"
},
"see_also": {
"type": "keyword"
},
"solution": {
"type": "keyword"
},
"source": {
"type": "keyword"
},
"ssl": {
"type": "keyword"
},
"synopsis": {
"type": "keyword"
},
"system_type": {
"type": "keyword"
},
"tags": {
"type": "keyword"
},
"threat": {
"type": "text"
},
"type": {
"type": "keyword"
},
"vendor_reference": {
"type": "keyword"
},
"vulnerability_state": {
"type": "keyword"
}
}
}
}
}


@ -0,0 +1,231 @@
{
"index_patterns": "logstash-vulnwhisperer-*",
"mappings": {
"properties": {
"@timestamp": {
"type": "date"
},
"@version": {
"type": "keyword"
},
"asset": {
"type": "text",
"norms": false,
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"asset_uuid": {
"type": "keyword"
},
"assign_ip": {
"type": "ip"
},
"category": {
"type": "keyword"
},
"cve": {
"type": "keyword"
},
"cvss_base": {
"type": "float"
},
"cvss_temporal_vector": {
"type": "keyword"
},
"cvss_temporal": {
"type": "float"
},
"cvss_vector": {
"type": "keyword"
},
"cvss": {
"type": "float"
},
"cvss3_base": {
"type": "float"
},
"cvss3_temporal_vector": {
"type": "keyword"
},
"cvss3_temporal": {
"type": "float"
},
"cvss3_vector": {
"type": "keyword"
},
"cvss3": {
"type": "float"
},
"description": {
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
},
"norms": false,
"type": "text"
},
"dns": {
"type": "keyword"
},
"exploitability": {
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
},
"norms": false,
"type": "text"
},
"fqdn": {
"type": "keyword"
},
"geoip": {
"dynamic": true,
"type": "object",
"properties": {
"ip": {
"type": "ip"
},
"latitude": {
"type": "float"
},
"location": {
"type": "geo_point"
},
"longitude": {
"type": "float"
}
}
},
"history_id": {
"type": "keyword"
},
"host": {
"type": "keyword"
},
"host_end": {
"type": "date"
},
"host_start": {
"type": "date"
},
"impact": {
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
},
"norms": false,
"type": "text"
},
"ip_status": {
"type": "keyword"
},
"ip": {
"type": "ip"
},
"last_updated": {
"type": "date"
},
"operating_system": {
"type": "keyword"
},
"path": {
"type": "keyword"
},
"pci_vuln": {
"type": "keyword"
},
"plugin_family": {
"type": "keyword"
},
"plugin_id": {
"type": "keyword"
},
"plugin_name": {
"type": "keyword"
},
"plugin_output": {
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
},
"norms": false,
"type": "text"
},
"port": {
"type": "integer"
},
"protocol": {
"type": "keyword"
},
"results": {
"type": "text"
},
"risk_number": {
"type": "integer"
},
"risk_score_name": {
"type": "keyword"
},
"risk_score": {
"type": "float"
},
"risk": {
"type": "keyword"
},
"scan_id": {
"type": "keyword"
},
"scan_name": {
"type": "keyword"
},
"scan_reference": {
"type": "keyword"
},
"see_also": {
"type": "keyword"
},
"solution": {
"type": "keyword"
},
"source": {
"type": "keyword"
},
"ssl": {
"type": "keyword"
},
"synopsis": {
"type": "keyword"
},
"system_type": {
"type": "keyword"
},
"tags": {
"type": "keyword"
},
"threat": {
"type": "text"
},
"type": {
"type": "keyword"
},
"vendor_reference": {
"type": "keyword"
},
"vulnerability_state": {
"type": "keyword"
}
}
}
}
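The two index templates above differ only in the mapping envelope: the first nests the field properties under a "doc" mapping type (Elasticsearch 6.x style), while the second is typeless (7.x style). A hedged Python sketch of loading whichever file matches your cluster; the file name and URL are assumptions:

import json

import requests

# Pick the template file matching your Elasticsearch major version.
with open("logstash-vulnwhisperer-template.json") as f:
    template = json.load(f)

resp = requests.put("http://localhost:9200/_template/vulnwhisperer", json=template)
resp.raise_for_status()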


@ -0,0 +1,9 @@
node.name: logstash
path.config: /usr/share/logstash/pipeline/
path.data: /tmp
queue.drain: true
queue.type: persisted
xpack.monitoring.elasticsearch.password: changeme
xpack.monitoring.elasticsearch.url: elasticsearch:9200
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.enabled: false


@ -0,0 +1,182 @@
# Author: Austin Taylor and Justin Henderson
# Email: email@austintaylor.io
# Last Update: 12/20/2017
# Version 0.3
# Description: Takes in Nessus reports from VulnWhisperer and pumps them into Logstash
input {
file {
path => "/opt/VulnWhisperer/data/nessus/**/*"
mode => "read"
start_position => "beginning"
file_completed_action => "delete"
tags => "nessus"
}
file {
path => "/opt/VulnWhisperer/data/tenable/*.csv"
mode => "read"
start_position => "beginning"
file_completed_action => "delete"
tags => "tenable"
}
}
filter {
if "nessus" in [tags] or "tenable" in [tags] {
# Drop the header column
if [message] =~ "^Plugin ID" { drop {} }
csv {
# columns => ["plugin_id", "cve", "cvss", "risk", "asset", "protocol", "port", "plugin_name", "synopsis", "description", "solution", "see_also", "plugin_output"]
columns => ["plugin_id", "cve", "cvss", "risk", "asset", "protocol", "port", "plugin_name", "synopsis", "description", "solution", "see_also", "plugin_output", "asset_uuid", "vulnerability_state", "ip", "fqdn", "netbios", "operating_system", "mac_address", "plugin_family", "cvss_base", "cvss_temporal", "cvss_temporal_vector", "cvss_vector", "cvss3_base", "cvss3_temporal", "cvss3_temporal_vector", "cvss3_vector", "system_type", "host_start", "host_end"]
separator => ","
source => "message"
}
ruby {
code => "if event.get('description')
event.set('description', event.get('description').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('synopsis')
event.set('synopsis', event.get('synopsis').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('solution')
event.set('solution', event.get('solution').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('see_also')
event.set('see_also', event.get('see_also').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('plugin_output')
event.set('plugin_output', event.get('plugin_output').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end"
}
#If using Filebeat as your source, you will need to replace the "path" field with "source"
# Remove when scan name is included in event (current method is error prone)
grok {
match => { "path" => "(?<scan_name>[a-zA-Z0-9_.\-]+)_%{INT:scan_id}_%{INT:history_id}_%{INT:last_updated}.csv$" }
tag_on_failure => []
}
# TODO remove when @timestamp is included in event
date {
match => [ "last_updated", "UNIX" ]
target => "@timestamp"
remove_field => ["last_updated"]
}
if [risk] == "None" {
mutate { add_field => { "risk_number" => 0 }}
}
if [risk] == "Low" {
mutate { add_field => { "risk_number" => 1 }}
}
if [risk] == "Medium" {
mutate { add_field => { "risk_number" => 2 }}
}
if [risk] == "High" {
mutate { add_field => { "risk_number" => 3 }}
}
if [risk] == "Critical" {
mutate { add_field => { "risk_number" => 4 }}
}
if ![cve] or [cve] == "nan" {
mutate { remove_field => [ "cve" ] }
}
if ![cvss] or [cvss] == "nan" {
mutate { remove_field => [ "cvss" ] }
}
if ![cvss_base] or [cvss_base] == "nan" {
mutate { remove_field => [ "cvss_base" ] }
}
if ![cvss_temporal] or [cvss_temporal] == "nan" {
mutate { remove_field => [ "cvss_temporal" ] }
}
if ![cvss_temporal_vector] or [cvss_temporal_vector] == "nan" {
mutate { remove_field => [ "cvss_temporal_vector" ] }
}
if ![cvss_vector] or [cvss_vector] == "nan" {
mutate { remove_field => [ "cvss_vector" ] }
}
if ![cvss3_base] or [cvss3_base] == "nan" {
mutate { remove_field => [ "cvss3_base" ] }
}
if ![cvss3_temporal] or [cvss3_temporal] == "nan" {
mutate { remove_field => [ "cvss3_temporal" ] }
}
if ![cvss3_temporal_vector] or [cvss3_temporal_vector] == "nan" {
mutate { remove_field => [ "cvss3_temporal_vector" ] }
}
if ![description] or [description] == "nan" {
mutate { remove_field => [ "description" ] }
}
if ![mac_address] or [mac_address] == "nan" {
mutate { remove_field => [ "mac_address" ] }
}
if ![netbios] or [netbios] == "nan" {
mutate { remove_field => [ "netbios" ] }
}
if ![operating_system] or [operating_system] == "nan" {
mutate { remove_field => [ "operating_system" ] }
}
if ![plugin_output] or [plugin_output] == "nan" {
mutate { remove_field => [ "plugin_output" ] }
}
if ![see_also] or [see_also] == "nan" {
mutate { remove_field => [ "see_also" ] }
}
if ![synopsis] or [synopsis] == "nan" {
mutate { remove_field => [ "synopsis" ] }
}
if ![system_type] or [system_type] == "nan" {
mutate { remove_field => [ "system_type" ] }
}
mutate {
remove_field => [ "message" ]
add_field => { "risk_score" => "%{cvss}" }
}
mutate {
convert => { "risk_score" => "float" }
}
if [risk_score] == 0 {
mutate {
add_field => { "risk_score_name" => "info" }
}
}
if [risk_score] > 0 and [risk_score] < 3 {
mutate {
add_field => { "risk_score_name" => "low" }
}
}
if [risk_score] >= 3 and [risk_score] < 6 {
mutate {
add_field => { "risk_score_name" => "medium" }
}
}
if [risk_score] >=6 and [risk_score] < 9 {
mutate {
add_field => { "risk_score_name" => "high" }
}
}
if [risk_score] >= 9 {
mutate {
add_field => { "risk_score_name" => "critical" }
}
}
}
}
output {
if "nessus" in [tags] or "tenable" in [tags]{
stdout {
codec => dots
}
elasticsearch {
hosts => [ "elasticsearch:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}
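The grok pattern in the filter above recovers scan metadata from the CSV file name. An equivalent Python regex, shown only to illustrate the expected naming scheme (the sample path is made up):

import re

# Mirrors the grok pattern: <scan_name>_<scan_id>_<history_id>_<last_updated>.csv
PATH_RE = re.compile(
    r"(?P<scan_name>[a-zA-Z0-9_.\-]+)_(?P<scan_id>\d+)_(?P<history_id>\d+)_(?P<last_updated>\d+)\.csv$")

m = PATH_RE.search("/opt/VulnWhisperer/data/nessus/My Scans/weekly_12_34_1553969144.csv")
if m:
    print(m.group("scan_name"), m.group("scan_id"), m.group("last_updated"))  # weekly 12 1553969144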


@ -0,0 +1,160 @@
# Author: Austin Taylor and Justin Henderson
# Email: austin@hasecuritysolutions.com
# Last Update: 12/30/2017
# Version 0.3
# Description: Takes in Qualys web scan reports from VulnWhisperer and pumps them into Logstash
input {
file {
path => [ "/opt/VulnWhisperer/data/qualys/*.json" , "/opt/VulnWhisperer/data/qualys_web/*.json", "/opt/VulnWhisperer/data/qualys_vuln/*.json"]
type => json
codec => json
start_position => "beginning"
tags => [ "qualys" ]
mode => "read"
file_completed_action => "delete"
}
}
filter {
if "qualys" in [tags] {
grok {
match => { "path" => [ "(?<tags>qualys_vuln)_scan_%{DATA}_%{INT:last_updated}.json$", "(?<tags>qualys_web)_%{INT:app_id}_%{INT:last_updated}.json$" ] }
tag_on_failure => []
}
mutate {
replace => [ "message", "%{message}" ]
#gsub => [
# "message", "\|\|\|", " ",
# "message", "\t\t", " ",
# "message", " ", " ",
# "message", " ", " ",
# "message", " ", " ",
# "message", "nan", " ",
# "message",'\n',''
#]
}
if "qualys_web" in [tags] {
mutate {
add_field => { "asset" => "%{web_application_name}" }
add_field => { "risk_score" => "%{cvss}" }
}
} else if "qualys_vuln" in [tags] {
mutate {
add_field => { "asset" => "%{ip}" }
add_field => { "risk_score" => "%{cvss}" }
}
}
if [risk] == "1" {
mutate { add_field => { "risk_number" => 0 }}
mutate { replace => { "risk" => "info" }}
}
if [risk] == "2" {
mutate { add_field => { "risk_number" => 1 }}
mutate { replace => { "risk" => "low" }}
}
if [risk] == "3" {
mutate { add_field => { "risk_number" => 2 }}
mutate { replace => { "risk" => "medium" }}
}
if [risk] == "4" {
mutate { add_field => { "risk_number" => 3 }}
mutate { replace => { "risk" => "high" }}
}
if [risk] == "5" {
mutate { add_field => { "risk_number" => 4 }}
mutate { replace => { "risk" => "critical" }}
}
mutate {
remove_field => "message"
}
if [first_time_detected] {
date {
match => [ "first_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_detected"
}
}
if [first_time_tested] {
date {
match => [ "first_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_tested"
}
}
if [last_time_detected] {
date {
match => [ "last_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_detected"
}
}
if [last_time_tested] {
date {
match => [ "last_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_tested"
}
}
# TODO remove when @timestamp is included in event
date {
match => [ "last_updated", "UNIX" ]
target => "@timestamp"
remove_field => "last_updated"
}
mutate {
convert => { "plugin_id" => "integer"}
convert => { "id" => "integer"}
convert => { "risk_number" => "integer"}
convert => { "risk_score" => "float"}
convert => { "total_times_detected" => "integer"}
convert => { "cvss_temporal" => "float"}
convert => { "cvss" => "float"}
}
if [risk_score] == 0 {
mutate {
add_field => { "risk_score_name" => "info" }
}
}
if [risk_score] > 0 and [risk_score] < 3 {
mutate {
add_field => { "risk_score_name" => "low" }
}
}
if [risk_score] >= 3 and [risk_score] < 6 {
mutate {
add_field => { "risk_score_name" => "medium" }
}
}
if [risk_score] >=6 and [risk_score] < 9 {
mutate {
add_field => { "risk_score_name" => "high" }
}
}
if [risk_score] >= 9 {
mutate {
add_field => { "risk_score_name" => "critical" }
}
}
if [asset] =~ "\.yourdomain\.(com|net)$" {
mutate {
add_tag => [ "critical_asset" ]
}
}
}
}
output {
if "qualys" in [tags] {
stdout {
codec => dots
}
elasticsearch {
hosts => [ "elasticsearch:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}
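The date filters above parse Qualys timestamps of the Joda form "dd MMM yyyy HH:mma 'GMT'", e.g. "28 Feb 2019 01:45PM GMT". A rough Python equivalent of the second pattern, for illustration only:

from datetime import datetime

ts = datetime.strptime("28 Feb 2019 01:45PM GMT", "%d %b %Y %I:%M%p GMT")
print(ts.isoformat())  # 2019-02-28T13:45:00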


@ -0,0 +1,154 @@
# Author: Austin Taylor and Justin Henderson
# Email: austin@hasecuritysolutions.com
# Last Update: 03/04/2018
# Version 0.3
# Description: Takes in OpenVAS scan reports from VulnWhisperer and pumps them into Logstash
input {
file {
path => "/opt/VulnWhisperer/data/openvas/*.json"
type => json
codec => json
start_position => "beginning"
tags => [ "openvas_scan", "openvas" ]
mode => "read"
file_completed_action => "delete"
}
}
filter {
if "openvas_scan" in [tags] {
mutate {
replace => [ "message", "%{message}" ]
gsub => [
"message", "\|\|\|", " ",
"message", "\t\t", " ",
"message", " ", " ",
"message", " ", " ",
"message", " ", " ",
"message", "nan", " ",
"message",'\n',''
]
}
grok {
match => { "path" => "openvas_scan_%{DATA:scan_id}_%{INT:last_updated}.json$" }
tag_on_failure => []
}
mutate {
add_field => { "risk_score" => "%{cvss}" }
}
if [risk] == "1" {
mutate { add_field => { "risk_number" => 0 }}
mutate { replace => { "risk" => "info" }}
}
if [risk] == "2" {
mutate { add_field => { "risk_number" => 1 }}
mutate { replace => { "risk" => "low" }}
}
if [risk] == "3" {
mutate { add_field => { "risk_number" => 2 }}
mutate { replace => { "risk" => "medium" }}
}
if [risk] == "4" {
mutate { add_field => { "risk_number" => 3 }}
mutate { replace => { "risk" => "high" }}
}
if [risk] == "5" {
mutate { add_field => { "risk_number" => 4 }}
mutate { replace => { "risk" => "critical" }}
}
mutate {
remove_field => "message"
}
if [first_time_detected] {
date {
match => [ "first_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_detected"
}
}
if [first_time_tested] {
date {
match => [ "first_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_tested"
}
}
if [last_time_detected] {
date {
match => [ "last_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_detected"
}
}
if [last_time_tested] {
date {
match => [ "last_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_tested"
}
}
# TODO remove when @timestamp is included in event
date {
match => [ "last_updated", "UNIX" ]
target => "@timestamp"
remove_field => "last_updated"
}
mutate {
convert => { "plugin_id" => "integer"}
convert => { "id" => "integer"}
convert => { "risk_number" => "integer"}
convert => { "risk_score" => "float"}
convert => { "total_times_detected" => "integer"}
convert => { "cvss_temporal" => "float"}
convert => { "cvss" => "float"}
}
if [risk_score] == 0 {
mutate {
add_field => { "risk_score_name" => "info" }
}
}
if [risk_score] > 0 and [risk_score] < 3 {
mutate {
add_field => { "risk_score_name" => "low" }
}
}
if [risk_score] >= 3 and [risk_score] < 6 {
mutate {
add_field => { "risk_score_name" => "medium" }
}
}
if [risk_score] >=6 and [risk_score] < 9 {
mutate {
add_field => { "risk_score_name" => "high" }
}
}
if [risk_score] >= 9 {
mutate {
add_field => { "risk_score_name" => "critical" }
}
}
# Add your critical assets by subnet or by hostname. Comment this block out if you don't want to tag any assets, but note that the asset panel will then break.
if [asset] =~ "^10\.0\.100\." {
mutate {
add_tag => [ "critical_asset" ]
}
}
}
}
output {
if "openvas" in [tags] {
stdout {
codec => dots
}
elasticsearch {
hosts => [ "elasticsearch:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}


@ -0,0 +1,25 @@
# Description: Takes in Jira tickets from VulnWhisperer and pumps them into Logstash
input {
file {
path => "/opt/VulnWhisperer/data/jira/*.json"
type => json
codec => json
start_position => "beginning"
mode => "read"
file_completed_action => "delete"
tags => [ "jira" ]
}
}
output {
if "jira" in [tags] {
stdout { codec => rubydebug }
elasticsearch {
hosts => [ "elasticsearch:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}


@ -0,0 +1,109 @@
[nessus]
enabled=true
hostname=localhost
port=8834
username=nessus_username
password=nessus_password
write_path=/opt/VulnWhisperer/data/nessus/
db_path=/opt/VulnWhisperer/database
trash=false
verbose=true
[tenable]
enabled=true
hostname=cloud.tenable.com
port=443
username=tenable.io_username
password=tenable.io_password
write_path=/opt/VulnWhisperer/data/tenable/
db_path=/opt/VulnWhisperer/data/database
trash=false
verbose=true
[qualys_web]
#Reference https://www.qualys.com/docs/qualys-was-api-user-guide.pdf to find your API server URL
enabled = true
hostname = qualysapi.qg2.apps.qualys.com
username = exampleuser
password = examplepass
write_path=/opt/VulnWhisperer/data/qualys/
db_path=/opt/VulnWhisperer/data/database
verbose=true
# Set the maximum number of retries each connection should attempt.
#Note, this applies only to failed connections and timeouts, never to requests where the server returns a response.
max_retries = 10
# Template ID will need to be retrieved for each document. Please follow the reference guide above for instructions on how to get your template ID.
template_id = 126024
[qualys_vuln]
#Reference https://www.qualys.com/docs/qualys-was-api-user-guide.pdf to find your API server URL
enabled = true
hostname = qualysapi.qg2.apps.qualys.com
username = exampleuser
password = examplepass
write_path=/opt/VulnWhisperer/data/qualys/
db_path=/opt/VulnWhisperer/data/database
verbose=true
# Set the maximum number of retries each connection should attempt.
#Note, this applies only to failed connections and timeouts, never to requests where the server returns a response.
max_retries = 10
# Template ID will need to be retrieved for each document. Please follow the reference guide above for instructions on how to get your template ID.
template_id = 126024
[detectify]
#Reference https://developer.detectify.com/
enabled = false
hostname = api.detectify.com
#username variable used as apiKey
username = exampleuser
#password variable used as secretKey
password = examplepass
write_path =/opt/VulnWhisperer/data/detectify/
db_path = /opt/VulnWhisperer/data/database
verbose = true
[openvas]
enabled = false
hostname = localhost
port = 4000
username = exampleuser
password = examplepass
write_path=/opt/VulnWhisperer/data/openvas/
db_path=/opt/VulnWhisperer/data/database
verbose=true
#[proxy]
; This section is optional. Leave it out if you're not using a proxy.
; You can use environmental variables as well: http://www.python-requests.org/en/latest/user/advanced/#proxies
; proxy_protocol set to https, if not specified.
#proxy_url = proxy.mycorp.com
; proxy_port will override any port specified in proxy_url
#proxy_port = 8080
; proxy authentication
#proxy_username = proxyuser
#proxy_password = proxypass
[jira]
hostname = jira-host
username = username
password = password
write_path = /opt/VulnWhisperer/data/jira/
db_path = /opt/VulnWhisperer/data/database
verbose = true
dns_resolv = False
#Sample Jira report scan section; sections like this are created automatically for existing scans
#[jira.qualys_vuln.test_scan]
#source = qualys_vuln
#scan_name = Test Scan
#jira_project = PROJECT
; if multiple components, separate by "," = None
#components =
; minimum criticality to report (low, medium, high or critical) = None
#min_critical_to_report = high
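VulnWhisperer reads this file with Python's ConfigParser (see the vwConfig class later in this diff); a minimal sketch of pulling one section's settings:

try:
    import configparser as cp  # Python 3
except ImportError:
    import ConfigParser as cp  # Python 2

config = cp.RawConfigParser()
config.read("frameworks_example.ini")
if config.getboolean("nessus", "enabled"):
    print(config.get("nessus", "hostname"), config.get("nessus", "port"))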


@ -4,7 +4,7 @@ from setuptools import setup, find_packages
setup(
name='VulnWhisperer',
version='1.0.1',
version='1.8',
packages=find_packages(),
url='https://github.com/austin-taylor/vulnwhisperer',
license="""MIT License
@ -26,7 +26,7 @@ setup(
SOFTWARE.""",
author='Austin Taylor',
author_email='email@austintaylor.io',
description='Vulnerability assessment framework aggregator',
description='Vulnerability Assessment Framework Aggregator',
scripts=['bin/vuln_whisperer']
)

tests/data Submodule

Submodule tests/data added at 55dc6832f8

tests/test-docker.sh Executable file

@ -0,0 +1,109 @@
#!/usr/bin/env bash
NORMAL=$(tput sgr0)
GREEN=$(tput setaf 2)
YELLOW=$(tput setaf 3)
RED=$(tput setaf 1)
function red() {
echo -e "$RED$*$NORMAL"
}
function green() {
echo -e "$GREEN$*$NORMAL"
}
function yellow() {
echo -e "$YELLOW$*$NORMAL"
}
return_code=0
elasticsearch_url="localhost:9200"
logstash_url="localhost:9600"
until curl -s "$elasticsearch_url/_cluster/health?pretty" | grep '"status"' | grep -qE "green|yellow"; do
yellow "Waiting for Elasticsearch..."
sleep 5
done
green "✅ Elasticsearch status is green..."
count=0
until [[ $(curl -s "$logstash_url/_node/stats" | jq '.events.out') -ge 1236 ]]; do
yellow "Waiting for Logstash load to finish... $(curl -s "$logstash_url/_node/stats" | jq '.events.out') of 1236 (attempt $count of 60)"
((count++)) && ((count==60)) && break
sleep 5
done
if [[ count -le 60 && $(curl -s "$logstash_url/_node/stats" | jq '.events.out') -ge 1236 ]]; then
green "✅ Logstash load finished..."
else
red "❌ Logstash load didn't complete... $(curl -s "$logstash_url/_node/stats" | jq '.events.out')"
fi
count=0
until [[ $(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_count" | jq '.count') -ge 1232 ]] ; do
yellow "Waiting for Elasticsearch index to sync... $(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_count" | jq '.count') of 1232 logs loaded (attempt $count of 150)"
((count++)) && ((count==150)) && break
sleep 2
done
if [[ count -le 150 && $(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_count" | jq '.count') -ge 1232 ]]; then
green "✅ logstash-vulnwhisperer-2019.03 document count >= 1232"
else
red "❌ TIMED OUT waiting for logstash-vulnwhisperer-2019.03 document count: $(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_count" | jq) != 1232"
fi
# if [[ $(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_count" | jq '.count') == 1232 ]]; then
# green "✅ Passed: logstash-vulnwhisperer-2019.03 document count == 1232"
# else
# red "❌ Failed: logstash-vulnwhisperer-2019.03 document count == 1232 was: $(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_count") instead"
# ((return_code = return_code + 1))
# fi
# Test Nessus plugin_name:Backported Security Patch Detection (FTP)
nessus_doc=$(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_search?q=plugin_name:%22Backported%20Security%20Patch%20Detection%20(FTP)%22%20AND%20asset:176.28.50.164%20AND%20tags:nessus" | jq '.hits.hits[]._source')
if echo $nessus_doc | jq '.risk' | grep -q "None"; then
green "✅ Passed: Nessus risk == None"
else
red "❌ Failed: Nessus risk == None was: $(echo $nessus_doc | jq '.risk') instead"
((return_code = return_code + 1))
fi
# Test Tenable plugin_name:Backported Security Patch Detection (FTP)
tenable_doc=$(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_search?q=plugin_name:%22Backported%20Security%20Patch%20Detection%20(FTP)%22%20AND%20asset:176.28.50.164%20AND%20tags:tenable" | jq '.hits.hits[]._source')
# Test asset
if echo $tenable_doc | jq .asset | grep -q '176.28.50.164'; then
green "✅ Passed: Tenable asset == 176.28.50.164"
else
red "❌ Failed: Tenable asset == 176.28.50.164 was: $(echo $tenable_doc | jq .asset) instead"
((return_code = return_code + 1))
fi
# Test @timestamp
if echo $tenable_doc | jq '.["@timestamp"]' | grep -q '2019-03-30T15:45:44.000Z'; then
green "✅ Passed: Tenable @timestamp == 2019-03-30T15:45:44.000Z"
else
red "❌ Failed: Tenable @timestamp == 2019-03-30T15:45:44.000Z was: $(echo $tenable_doc | jq '.["@timestamp"]') instead"
((return_code = return_code + 1))
fi
# Test Qualys plugin_name:OpenSSL Multiple Remote Security Vulnerabilities
qualys_vuln_doc=$(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_search?q=tags:qualys_vuln%20AND%20ip:%22176.28.50.164%22%20AND%20plugin_name:%22OpenSSL%20Multiple%20Remote%20Security%20Vulnerabilities%22%20AND%20port:465" | jq '.hits.hits[]._source')
# Test @timestamp
if echo $qualys_vuln_doc | jq '.["@timestamp"]' | grep -q '2019-03-30T10:17:41.000Z'; then
green "✅ Passed: Qualys VM @timestamp == 2019-03-30T10:17:41.000Z"
else
red "❌ Failed: Qualys VM @timestamp == 2019-03-30T10:17:41.000Z was: $(echo $qualys_vuln_doc | jq '.["@timestamp"]') instead"
((return_code = return_code + 1))
fi
# Test cvss
if echo $qualys_vuln_doc | jq '.cvss' | grep -q '6.8'; then
green "✅ Passed: Qualys VM cvss == 6.8"
else
red "❌ Failed: Qualys VM cvss == 6.8 was: $(echo $qualys_vuln_doc | jq '.cvss') instead"
((return_code = return_code + 1))
fi
exit $return_code

tests/test-vuln_whisperer.sh Executable file

@ -0,0 +1,97 @@
#!/usr/bin/env bash
NORMAL=$(tput sgr0)
GREEN=$(tput setaf 2)
YELLOW=$(tput setaf 3)
RED=$(tput setaf 1)
function red() {
echo -e "$RED$*$NORMAL"
}
function green() {
echo -e "$GREEN$*$NORMAL"
}
function yellow() {
echo -e "$YELLOW$*$NORMAL"
}
return_code=0
TEST_PATH=${TEST_PATH:-"tests/data"}
yellow "\n*********************************************"
yellow "* Test successful scan download and parsing *"
yellow "*********************************************"
rm -rf /opt/VulnWhisperer/*
if vuln_whisperer -F -c configs/test.ini --mock --mock_dir "${TEST_PATH}"; then
green "\n✅ Passed: Test successful scan download and parsing"
else
red "\n❌ Failed: Test successful scan download and parsing"
((return_code = return_code + 1))
fi
yellow "\n*********************************************"
yellow "* Test run with no scans to import *"
yellow "*********************************************"
if vuln_whisperer -F -c configs/test.ini --mock --mock_dir "${TEST_PATH}"; then
green "\n✅ Passed: Test run with no scans to import"
else
red "\n❌ Failed: Test run with no scans to import"
((return_code = return_code + 1))
fi
yellow "\n*********************************************"
yellow "* Test one failed scan *"
yellow "*********************************************"
rm -rf /opt/VulnWhisperer/*
yellow "Removing ${TEST_PATH}/nessus/GET_scans_exports_164_download"
mv "${TEST_PATH}/nessus/GET_scans_exports_164_download"{,.bak}
if vuln_whisperer -F -c configs/test.ini --mock --mock_dir "${TEST_PATH}"; [[ $? -eq 1 ]]; then
green "\n✅ Passed: Test one failed scan"
else
red "\n❌ Failed: Test one failed scan"
((return_code = return_code + 1))
fi
yellow "\n*********************************************"
yellow "* Test two failed scans *"
yellow "*********************************************"
rm -rf /opt/VulnWhisperer/*
yellow "Removing ${TEST_PATH}/qualys_vuln/scan_1553941061.87241"
mv "${TEST_PATH}/qualys_vuln/scan_1553941061.87241"{,.bak}
if vuln_whisperer -F -c configs/test.ini --mock --mock_dir "${TEST_PATH}"; [[ $? -eq 2 ]]; then
green "\n✅ Passed: Test two failed scans"
else
red "\n❌ Failed: Test two failed scans"
((return_code = return_code + 1))
fi
yellow "\n*********************************************"
yellow "* Test only nessus with one failed scan *"
yellow "*********************************************"
rm -rf /opt/VulnWhisperer/*
if vuln_whisperer -F -c configs/test.ini -s nessus --mock --mock_dir "${TEST_PATH}"; [[ $? -eq 1 ]]; then
green "\n✅ Passed: Test only nessus with one failed scan"
else
red "\n❌ Failed: Test only nessus with one failed scan"
((return_code = return_code + 1))
fi
yellow "*********************************************"
yellow "* Test only Qualys VM with one failed scan *"
yellow "*********************************************"
rm -rf /opt/VulnWhisperer/*
if vuln_whisperer -F -c configs/test.ini -s qualys_vuln --mock --mock_dir "${TEST_PATH}"; [[ $? -eq 1 ]]; then
green "\n✅ Passed: Test only Qualys VM with one failed scan"
else
red "\n❌ Failed: Test only Qualys VM with one failed scan"
((return_code = return_code + 1))
fi
# Restore the removed files
mv "${TEST_PATH}/qualys_vuln/scan_1553941061.87241.bak" "${TEST_PATH}/qualys_vuln/scan_1553941061.87241"
mv "${TEST_PATH}/nessus/GET_scans_exports_164_download.bak" "${TEST_PATH}/nessus/GET_scans_exports_164_download"
exit $return_code


@ -1 +0,0 @@
from utils.cli import bcolors


@ -1,8 +1,8 @@
import os
import sys
import logging
# Support for python3
if (sys.version_info > (3, 0)):
if sys.version_info > (3, 0):
import configparser as cp
else:
import ConfigParser as cp
@ -14,9 +14,70 @@ class vwConfig(object):
self.config_in = config_in
self.config = cp.RawConfigParser()
self.config.read(self.config_in)
self.logger = logging.getLogger('vwConfig')
def get(self, section, option):
self.logger.debug('Calling get for {}:{}'.format(section, option))
return self.config.get(section, option)
def getbool(self, section, option):
return self.config.getboolean(section, option)
self.logger.debug('Calling getbool for {}:{}'.format(section, option))
return self.config.getboolean(section, option)
def get_sections_with_attribute(self, attribute):
sections = []
# TODO: does this not also need the "yes" case?
check = ["true", "True", "1"]
for section in self.config.sections():
try:
if self.get(section, attribute) in check:
sections.append(section)
except:
self.logger.warn("Section {} has no option '{}'".format(section, attribute))
return sections
def exists_jira_profiles(self, profiles):
# get list of profiles source_scanner.scan_name
for profile in profiles:
if not self.config.has_section(self.normalize_section(profile)):
self.logger.warn("JIRA Scan Profile missing")
return False
return True
def update_jira_profiles(self, profiles):
# create JIRA profiles in the ini config file
self.logger.debug('Updating Jira profiles: {}'.format(str(profiles)))
for profile in profiles:
#IMPORTANT: profile scan names are normalized to lowercase with "_" instead of spaces for the ini file section name
section_name = self.normalize_section(profile)
try:
self.get(section_name, "source")
self.logger.info("Skipping creating of section '{}'; already exists".format(section_name))
except:
self.logger.warn("Creating config section for '{}'".format(section_name))
self.config.add_section(section_name)
self.config.set(section_name, 'source', profile.split('.')[0])
# in case any scan name contains '.' character
self.config.set(section_name, 'scan_name', '.'.join(profile.split('.')[1:]))
self.config.set(section_name, 'jira_project', '')
self.config.set(section_name, '; if multiple components, separate by ","')
self.config.set(section_name, 'components', '')
self.config.set(section_name, '; minimum criticality to report (low, medium, high or critical)')
self.config.set(section_name, 'min_critical_to_report', 'high')
self.config.set(section_name, '; automatically report, boolean value ')
self.config.set(section_name, 'autoreport', 'false')
# TODO: try/catch this
# writing changes back to file
with open(self.config_in, 'w') as configfile:
self.config.write(configfile)
self.logger.debug('Written configuration to {}'.format(self.config_in))
# FIXME: this is the same as return None, that is the default return for return-less functions
return
def normalize_section(self, profile):
profile = "jira.{}".format(profile.lower().replace(" ", "_"))
self.logger.debug('Normalized profile as: {}'.format(profile))
return profile
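A short usage sketch of the class above; the import path assumes the repository layout and is not confirmed by this diff:

from vulnwhisp.base.config import vwConfig  # import path is an assumption

config = vwConfig(config_in="configs/frameworks_example.ini")
for section in config.get_sections_with_attribute("enabled"):
    print("{} is enabled".format(section))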


@ -1,215 +1,184 @@
import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
import pytz
from datetime import datetime
import json
import sys
import time
class NessusAPI(object):
SESSION = '/session'
FOLDERS = '/folders'
SCANS = '/scans'
SCAN_ID = SCANS + '/{scan_id}'
HOST_VULN = SCAN_ID + '/hosts/{host_id}'
PLUGINS = HOST_VULN + '/plugins/{plugin_id}'
EXPORT = SCAN_ID + '/export'
EXPORT_TOKEN_DOWNLOAD = '/scans/exports/{token_id}/download'
EXPORT_FILE_DOWNLOAD = EXPORT + '/{file_id}/download'
EXPORT_STATUS = EXPORT + '/{file_id}/status'
EXPORT_HISTORY = EXPORT + '?history_id={history_id}'
def __init__(self, hostname=None, port=None, username=None, password=None, verbose=True):
if username is None or password is None:
raise Exception('ERROR: Missing username or password.')
self.user = username
self.password = password
self.base = 'https://{hostname}:{port}'.format(hostname=hostname, port=port)
self.verbose = verbose
self.headers = {
'Origin': self.base,
'Accept-Encoding': 'gzip, deflate, br',
'Accept-Language': 'en-US,en;q=0.8',
'User-Agent': 'VulnWhisperer for Nessus',
'Content-Type': 'application/json',
'Accept': 'application/json, text/javascript, */*; q=0.01',
'Referer': self.base,
'X-Requested-With': 'XMLHttpRequest',
'Connection': 'keep-alive',
'X-Cookie': None
}
self.login()
self.scan_ids = self.get_scan_ids()
def vprint(self, msg):
if self.verbose:
print(msg)
def login(self):
resp = self.get_token()
if resp.status_code is 200:
self.headers['X-Cookie'] = 'token={token}'.format(token=resp.json()['token'])
else:
raise Exception('[FAIL] Could not login to Nessus')
def request(self, url, data=None, headers=None, method='POST', download=False, json=False):
if headers is None:
headers = self.headers
timeout = 0
success = False
url = self.base + url
methods = {'GET': requests.get,
'POST': requests.post,
'DELETE': requests.delete}
while (timeout <= 10) and (not success):
data = methods[method](url, data=data, headers=self.headers, verify=False)
if data.status_code == 401:
try:
self.login()
timeout += 1
self.vprint('[INFO] Token refreshed')
except Exception as e:
self.vprint('[FAIL] Could not refresh token\nReason: %s' % e)
else:
success = True
if json:
data = data.json()
if download:
return data.content
return data
def get_token(self):
auth = '{"username":"%s", "password":"%s"}' % (self.user, self.password)
token = self.request(self.SESSION, data=auth, json=False)
return token
def logout(self):
self.request(self.SESSION, method='DELETE')
def get_folders(self):
folders = self.request(self.FOLDERS, method='GET', json=True)
return folders
def get_scans(self):
scans = self.request(self.SCANS, method='GET', json=True)
return scans
def get_scan_ids(self):
scans = self.get_scans()
scan_ids = [scan_id['id'] for scan_id in scans['scans']]
return scan_ids
def count_scan(self, scans, folder_id):
count = 0
for scan in scans:
if scan['folder_id'] == folder_id: count = count + 1
return count
def print_scans(self, data):
for folder in data['folders']:
print("\\{0} - ({1})\\".format(folder['name'], self.count_scan(data['scans'], folder['id'])))
for scan in data['scans']:
if scan['folder_id'] == folder['id']:
print(
"\t\"{0}\" - sid:{1} - uuid: {2}".format(scan['name'].encode('utf-8'), scan['id'], scan['uuid']))
def get_scan_details(self, scan_id):
data = self.request(self.SCAN_ID.format(scan_id=scan_id), method='GET', json=True)
return data
def get_scan_history(self, scan_id):
data = self.request(self.SCAN_ID.format(scan_id=scan_id), method='GET', json=True)
return data['history']
def get_scan_hosts(self, scan_id):
data = self.request(self.SCAN_ID.format(scan_id=scan_id), method='GET', json=True)
return data['hosts']
def get_host_vulnerabilities(self, scan_id, host_id):
query = self.HOST_VULN.format(scan_id=scan_id, host_id=host_id)
data = self.request(query, method='GET', json=True)
return data
def get_plugin_info(self, scan_id, host_id, plugin_id):
query = self.PLUGINS.format(scan_id=scan_id, host_id=host_id, plugin_id=plugin_id)
data = self.request(query, method='GET', json=True)
return data
def export_scan(self, scan_id, history_id):
data = {'format': 'csv'}
query = self.EXPORT_REPORT.format(scan_id=scan_id, history_id=history_id)
req = self.request(query, data=data, method='POST')
return req
def download_scan(self, scan_id=None, history=None, export_format="", chapters="", dbpasswd=""):
running = True
counter = 0
data = {'format': export_format}
if not history:
query = self.EXPORT.format(scan_id=scan_id)
else:
query = self.EXPORT_HISTORY.format(scan_id=scan_id, history_id=history)
scan_id = str(scan_id)
req = self.request(query, data=json.dumps(data), method='POST', json=True)
try:
file_id = req['file']
token_id = req['token']
except Exception as e:
print("[ERROR] %s" % e)
print('Download for file id ' + str(file_id) + '.')
while running:
time.sleep(2)
counter += 2
report_status = self.request(self.EXPORT_STATUS.format(scan_id=scan_id, file_id=file_id), method='GET',
json=True)
running = report_status['status'] != 'ready'
sys.stdout.write(".")
sys.stdout.flush()
if counter % 60 == 0:
print("")
print("")
content = self.request(self.EXPORT_TOKEN_DOWNLOAD.format(token_id=token_id), method='GET', download=True)
return content
@staticmethod
def merge_dicts(self, *dict_args):
"""
Given any number of dicts, shallow copy and merge into a new dict,
precedence goes to key value pairs in latter dicts.
"""
result = {}
for dictionary in dict_args:
result.update(dictionary)
return result
def get_utc_from_local(self, date_time, local_tz=None, epoch=True):
date_time = datetime.fromtimestamp(date_time)
if local_tz is None:
local_tz = pytz.timezone('US/Central')
else:
local_tz = pytz.timezone(local_tz)
# print date_time
local_time = local_tz.normalize(local_tz.localize(date_time))
local_time = local_time.astimezone(pytz.utc)
if epoch:
naive = local_time.replace(tzinfo=None)
local_time = int((naive - datetime(1970, 1, 1)).total_seconds())
return local_time
def tz_conv(self, tz):
time_map = {'Eastern Standard Time': 'US/Eastern',
'Central Standard Time': 'US/Central',
'Pacific Standard Time': 'US/Pacific',
'None': 'US/Central'}
return time_map.get(tz, None)
import json
import logging
import sys
import time
from datetime import datetime
import pytz
import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
class NessusAPI(object):
SESSION = '/session'
FOLDERS = '/folders'
SCANS = '/scans'
SCAN_ID = SCANS + '/{scan_id}'
HOST_VULN = SCAN_ID + '/hosts/{host_id}'
PLUGINS = HOST_VULN + '/plugins/{plugin_id}'
EXPORT = SCAN_ID + '/export'
EXPORT_TOKEN_DOWNLOAD = '/scans/exports/{token_id}/download'
EXPORT_FILE_DOWNLOAD = EXPORT + '/{file_id}/download'
EXPORT_STATUS = EXPORT + '/{file_id}/status'
EXPORT_HISTORY = EXPORT + '?history_id={history_id}'
def __init__(self, hostname=None, port=None, username=None, password=None, verbose=True, profile=None, access_key=None, secret_key=None):
self.logger = logging.getLogger('NessusAPI')
if verbose:
self.logger.setLevel(logging.DEBUG)
if not all((username, password)) and not all((access_key, secret_key)):
raise Exception('ERROR: Missing username, password or API keys.')
self.profile = profile
self.user = username
self.password = password
self.api_keys = False
self.access_key = access_key
self.secret_key = secret_key
self.base = 'https://{hostname}:{port}'.format(hostname=hostname, port=port)
self.verbose = verbose
self.session = requests.Session()
self.session.verify = False
self.session.stream = True
self.session.headers = {
'Origin': self.base,
'Accept-Encoding': 'gzip, deflate, br',
'Accept-Language': 'en-US,en;q=0.8',
'User-Agent': 'VulnWhisperer for Nessus',
'Content-Type': 'application/json',
'Accept': 'application/json, text/javascript, */*; q=0.01',
'Referer': self.base,
'X-Requested-With': 'XMLHttpRequest',
'Connection': 'keep-alive',
'X-Cookie': None
}
if all((self.access_key, self.secret_key)):
self.logger.debug('Using {} API keys'.format(self.profile))
self.api_keys = True
self.session.headers['X-ApiKeys'] = 'accessKey={}; secretKey={}'.format(self.access_key, self.secret_key)
else:
self.login()
self.scans = self.get_scans()
self.scan_ids = self.get_scan_ids()
def login(self):
auth = '{"username":"%s", "password":"%s"}' % (self.user, self.password)
resp = self.request(self.SESSION, data=auth, json_output=False)
if resp.status_code == 200:
self.session.headers['X-Cookie'] = 'token={token}'.format(token=resp.json()['token'])
else:
raise Exception('[FAIL] Could not login to Nessus')
def request(self, url, data=None, headers=None, method='POST', download=False, json_output=False):
timeout = 0
success = False
method = method.lower()
url = self.base + url
self.logger.debug('Requesting to url {}'.format(url))
while (timeout <= 10) and (not success):
response = getattr(self.session, method)(url, data=data)
if response.status_code == 401:
if url == self.base + self.SESSION:
break
try:
timeout += 1
if self.api_keys:
continue
self.login()
self.logger.info('Token refreshed')
except Exception as e:
self.logger.error('Could not refresh token\nReason: {}'.format(str(e)))
else:
success = True
if json_output:
return response.json()
if download:
self.logger.debug('Returning data.content')
response_data = ''
count = 0
for chunk in response.iter_content(chunk_size=8192):
count += 1
if chunk:
response_data += chunk
self.logger.debug('Processed {} chunks'.format(count))
return response_data
return response
def get_scans(self):
scans = self.request(self.SCANS, method='GET', json_output=True)
return scans
def get_scan_ids(self):
scans = self.scans
scan_ids = [scan_id['id'] for scan_id in scans['scans']] if scans['scans'] else []
self.logger.debug('Found {} scan_ids'.format(len(scan_ids)))
return scan_ids
def get_scan_history(self, scan_id):
data = self.request(self.SCAN_ID.format(scan_id=scan_id), method='GET', json_output=True)
return data['history']
def download_scan(self, scan_id=None, history=None, export_format=""):
running = True
counter = 0
data = {'format': export_format}
if not history:
query = self.EXPORT.format(scan_id=scan_id)
else:
query = self.EXPORT_HISTORY.format(scan_id=scan_id, history_id=history)
scan_id = str(scan_id)
req = self.request(query, data=json.dumps(data), method='POST', json_output=True)
try:
file_id = req['file']
if self.profile == 'nessus':
token_id = req['token'] if 'token' in req else req['temp_token']
except Exception as e:
self.logger.error('{}'.format(str(e)))
self.logger.info('Download for file id {}'.format(str(file_id)))
while running:
time.sleep(2)
counter += 2
report_status = self.request(self.EXPORT_STATUS.format(scan_id=scan_id, file_id=file_id), method='GET',
json_output=True)
running = report_status['status'] != 'ready'
sys.stdout.write(".")
sys.stdout.flush()
# FIXME: why? can this be removed in favour of a counter?
if counter % 60 == 0:
self.logger.info("Completed: {}".format(counter))
self.logger.info("Done: {}".format(counter))
if self.profile == 'tenable' or self.api_keys:
content = self.request(self.EXPORT_FILE_DOWNLOAD.format(scan_id=scan_id, file_id=file_id), method='GET', download=True)
else:
content = self.request(self.EXPORT_TOKEN_DOWNLOAD.format(token_id=token_id), method='GET', download=True)
return content
def get_utc_from_local(self, date_time, local_tz=None, epoch=True):
date_time = datetime.fromtimestamp(date_time)
if local_tz is None:
local_tz = pytz.timezone('UTC')
else:
local_tz = pytz.timezone(local_tz)
local_time = local_tz.normalize(local_tz.localize(date_time))
local_time = local_time.astimezone(pytz.utc)
if epoch:
naive = local_time.replace(tzinfo=None)
local_time = int((naive - datetime(1970, 1, 1)).total_seconds())
self.logger.debug('Converted timestamp {} in datetime {}'.format(date_time, local_time))
return local_time
def tz_conv(self, tz):
time_map = {'Eastern Standard Time': 'US/Eastern',
'Central Standard Time': 'US/Central',
'Pacific Standard Time': 'US/Pacific',
'None': 'US/Central'}
return time_map.get(tz, None)
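A hedged usage sketch of the rewritten class; the import path and key values are assumptions. As the constructor above shows, when API keys are supplied the X-ApiKeys header is sent instead of performing a session login:

from vulnwhisp.frameworks.nessus import NessusAPI  # import path is an assumption

nessus = NessusAPI(hostname="localhost", port=8834,
                   access_key="your-access-key", secret_key="your-secret-key",
                   profile="tenable")
for scan_id in nessus.scan_ids:
    # Polls the export status and returns the raw CSV content once ready.
    csv_data = nessus.download_scan(scan_id=scan_id, export_format="csv")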


@ -0,0 +1,192 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
__author__ = 'Austin Taylor'
import datetime as dt
import io
import logging
import pandas as pd
import requests
from bs4 import BeautifulSoup
class OpenVAS_API(object):
OMP = '/omp'
def __init__(self,
hostname=None,
port=None,
username=None,
password=None,
report_format_id=None,
verbose=True):
self.logger = logging.getLogger('OpenVAS_API')
if verbose:
self.logger.setLevel(logging.DEBUG)
if username is None or password is None:
raise Exception('ERROR: Missing username or password.')
self.username = username
self.password = password
self.base = 'https://{hostname}:{port}'.format(hostname=hostname, port=port)
self.verbose = verbose
self.processed_reports = 0
self.report_format_id = report_format_id
self.headers = {
'Origin': self.base,
'Accept-Encoding': 'gzip, deflate, br',
'Accept-Language': 'en-US,en;q=0.8',
'User-Agent': 'VulnWhisperer for OpenVAS',
'Content-Type': 'application/x-www-form-urlencoded',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
'Cache-Control': 'max-age=0',
'Referer': self.base,
'X-Requested-With': 'XMLHttpRequest',
'Connection': 'keep-alive',
}
self.login()
self.openvas_reports = self.get_reports()
self.report_formats = self.get_report_formats()
def login(self):
resp = self.get_token()
if resp.status_code == 200:
xml_response = BeautifulSoup(resp.content, 'lxml')
self.token = xml_response.find(attrs={'id': 'gsa-token'}).text
self.cookies = resp.cookies.get_dict()
else:
raise Exception('[FAIL] Could not login to OpenVAS')
def request(self, url, data=None, params=None, headers=None, cookies=None, method='POST', download=False,
json=False):
if headers is None:
headers = self.headers
if cookies is None:
cookies = self.cookies
timeout = 0
success = False
url = self.base + url
methods = {'GET': requests.get,
'POST': requests.post,
'DELETE': requests.delete}
while (timeout <= 10) and (not success):
data = methods[method](url,
data=data,
headers=self.headers,
params=params,
cookies=cookies,
verify=False)
if data.status_code == 401:
try:
self.login()
timeout += 1
self.logger.info(' Token refreshed')
except Exception as e:
self.logger.error('Could not refresh token\nReason: {}'.format(str(e)))
else:
success = True
if json:
data = data.json()
if download:
return data.content
return data
def get_token(self):
data = [
('cmd', 'login'),
('text', '/omp?r=1'),
('login', self.username),
('password', self.password),
]
token = requests.post(self.base + self.OMP, data=data, verify=False)
return token
def get_report_formats(self):
params = (
('cmd', 'get_report_formats'),
('token', self.token)
)
self.logger.info('Retrieving available report formats')
data = self.request(url=self.OMP, method='GET', params=params)
bs = BeautifulSoup(data.content, "lxml")
table_body = bs.find('tbody')
rows = table_body.find_all('tr')
format_mapping = {}
for row in rows:
cols = row.find_all('td')
for x in cols:
for y in x.find_all('a'):
if y.get_text() != '':
format_mapping[y.get_text()] = \
[h.split('=')[1] for h in y['href'].split('&') if 'report_format_id' in h][0]
return format_mapping
def get_reports(self, complete=True):
self.logger.info('Retrieving OpenVAS report data...')
params = (('cmd', 'get_reports'),
('token', self.token),
('max_results', 1),
('ignore_pagination', 1),
('filter', 'apply_overrides=1 min_qod=70 autofp=0 first=1 rows=0 levels=hml sort-reverse=severity'),
)
reports = self.request(self.OMP, params=params, method='GET')
soup = BeautifulSoup(reports.text, 'lxml')
data = []
links = []
table = soup.find('table', attrs={'class': 'gbntable'})
table_body = table.find('tbody')
rows = table_body.find_all('tr')
for row in rows:
cols = row.find_all('td')
links.extend([a['href'] for a in row.find_all('a', href=True) if 'get_report' in str(a)])
cols = [ele.text.strip() for ele in cols]
data.append([ele for ele in cols if ele])
report = pd.DataFrame(data, columns=['date', 'status', 'task', 'scan_severity', 'high', 'medium', 'low', 'log',
'false_pos'])
if report.shape[0] != 0:
report['links'] = links
report['report_ids'] = report.links.str.extract('.*report_id=([a-z-0-9]*)', expand=False)
report['epoch'] = (pd.to_datetime(report['date']) - dt.datetime(1970, 1, 1)).dt.total_seconds().astype(int)
else:
raise Exception("Could not retrieve OpenVAS Reports - Please check your settings and try again")
if complete:
report = report[report.status == 'Done']
severity_extraction = report.scan_severity.str.extract('([0-9.]*) \(([\w]+)\)', expand=False)
severity_extraction.columns = ['scan_highest_severity', 'severity_rate']
report_with_severity = pd.concat([report, severity_extraction], axis=1)
return report_with_severity
def process_report(self, report_id):
params = (
('token', self.token),
('cmd', 'get_report'),
('report_id', report_id),
('filter', 'apply_overrides=0 min_qod=70 autofp=0 levels=hml first=1 rows=0 sort-reverse=severity'),
('ignore_pagination', '1'),
('report_format_id', '{report_format_id}'.format(report_format_id=self.report_formats['CSV Results'])),
('submit', 'Download'),
)
self.logger.info('Retrieving {}'.format(report_id))
req = self.request(self.OMP, params=params, method='GET')
report_df = pd.read_csv(io.BytesIO(req.text.encode('utf-8')))
report_df['report_ids'] = report_id
self.processed_reports += 1
merged_df = pd.merge(report_df, self.openvas_reports, on='report_ids').reset_index().drop('index', axis=1)
return merged_df
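# Usage sketch for the OpenVAS methods above (the enclosing class name and
# constructor arguments are not shown in this hunk, so 'OpenVAS_API' and its
# parameters are placeholders):
#
#   ov = OpenVAS_API(hostname='openvas.example', user='admin', password='...')
#   ov.report_formats = ov.get_report_formats()        # name -> report_format_id
#   ov.openvas_reports = ov.get_reports(complete=True) # finished scans only
#   frames = [ov.process_report(rid) for rid in ov.openvas_reports['report_ids']]
#   all_findings = pd.concat(frames, axis=0)           # one row per finding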


@@ -1,836 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
__author__ = 'Austin Taylor'
from lxml import objectify
from lxml.builder import E
import xml.etree.ElementTree as ET
import pandas as pd
import qualysapi
import qualysapi.config as qcconf
import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
import sys
import os
import csv
import dateutil.parser as dp
class qualysWhisperAPI(object):
COUNT_WEBAPP = '/count/was/webapp'
COUNT_WASSCAN = '/count/was/wasscan'
DELETE_REPORT = '/delete/was/report/{report_id}'
GET_WEBAPP_DETAILS = '/get/was/webapp/{was_id}'
QPS_REST_3 = '/qps/rest/3.0'
REPORT_DETAILS = '/get/was/report/{report_id}'
REPORT_STATUS = '/status/was/report/{report_id}'
REPORT_CREATE = '/create/was/report'
REPORT_DOWNLOAD = '/download/was/report/{report_id}'
SCAN_DETAILS = '/get/was/wasscan/{scan_id}'
SCAN_DOWNLOAD = '/download/was/wasscan/{scan_id}'
SEARCH_REPORTS = '/search/was/report'
SEARCH_WEB_APPS = '/search/was/webapp'
SEARCH_WAS_SCAN = '/search/was/wasscan'
VERSION = '/qps/rest/portal/version'
def __init__(self, config=None):
self.config = config
try:
self.qgc = qualysapi.connect(config)
print('[SUCCESS] - Connected to Qualys at %s' % self.qgc.server)
except Exception as e:
print('[ERROR] Could not connect to Qualys - %s' % e)
self.headers = {
"content-type": "text/xml"}
self.config_parse = qcconf.QualysConnectConfig(config)
try:
self.template_id = self.config_parse.get_template_id()
except:
print('ERROR - Could not retrieve template ID')
def request(self, path, method='get', data=None):
methods = {'get': requests.get,
'post': requests.post}
base = 'https://' + self.qgc.server + path
req = methods[method](base, auth=self.qgc.auth, data=data, headers=self.headers).content
return req
def get_version(self):
return self.request(self.VERSION)
def get_scan_count(self, scan_name):
parameters = (
E.ServiceRequest(
E.filters(
E.Criteria({'field': 'name', 'operator': 'CONTAINS'}, scan_name))))
xml_output = self.qgc.request(self.COUNT_WEBAPP, parameters)
root = objectify.fromstring(xml_output)
return root.count.text
def get_was_scan_count(self, status):
parameters = (
E.ServiceRequest(
E.filters(
E.Criteria({'field': 'status', 'operator': 'EQUALS'}, status))))
xml_output = self.qgc.request(self.COUNT_WASSCAN, parameters)
root = objectify.fromstring(xml_output)
return root.count.text
def get_reports(self):
return self.qgc.request(self.SEARCH_REPORTS)
def xml_parser(self, xml, dupfield=None):
all_records = []
root = ET.XML(xml)
for i, child in enumerate(root):
for subchild in child:
record = {}
dup_tracker = 0
for p in subchild:
record[p.tag] = p.text
for o in p:
if o.tag in record:
dup_tracker += 1
record[o.tag + '_%s' % dup_tracker] = o.text
else:
record[o.tag] = o.text
all_records.append(record)
return pd.DataFrame(all_records)
def get_report_list(self):
"""Returns a dataframe of reports"""
return self.xml_parser(self.get_reports(), dupfield='user_id')
def get_web_apps(self):
"""Returns webapps available for account"""
return self.qgc.request(self.SEARCH_WEB_APPS)
def get_web_app_list(self):
"""Returns dataframe of webapps"""
return self.xml_parser(self.get_web_apps(), dupfield='user_id')
def get_web_app_details(self, was_id):
"""Get webapp details - use to retrieve app ID tag"""
return self.qgc.request(self.GET_WEBAPP_DETAILS.format(was_id=was_id))
def get_scans_by_app_id(self, app_id):
data = self.generate_app_id_scan_XML(app_id)
return self.qgc.request(self.SEARCH_WAS_SCAN, data)
def get_scan_info(self, limit=1000, offset=1, status='FINISHED'):
""" Returns XML of ALL WAS Scans"""
data = self.generate_scan_result_XML(limit=limit, offset=offset, status=status)
return self.qgc.request(self.SEARCH_WAS_SCAN, data)
def get_all_scans(self, limit=1000, offset=1, status='FINISHED'):
qualys_api_limit = limit
dataframes = []
_records = []
total = int(self.get_was_scan_count(status=status))
print('Processing %s total scans' % total)
for i in range(0, total):
if i % limit == 0:
if (total - i) < limit:
qualys_api_limit = total - i
print('Making a request with a limit of %s at offset %s' % (str(qualys_api_limit), str(i + 1)))
scan_info = self.get_scan_info(limit=qualys_api_limit, offset=i + 1, status=status)
_records.append(scan_info)
print('Converting XML to DataFrame')
dataframes = [self.xml_parser(xml) for xml in _records]
return pd.concat(dataframes, axis=0).reset_index().drop('index', axis=1)
def get_scan_details(self, scan_id):
return self.qgc.request(self.SCAN_DETAILS.format(scan_id=scan_id))
def get_report_details(self, report_id):
return self.qgc.request(self.REPORT_DETAILS.format(report_id=report_id))
def get_report_status(self, report_id):
return self.qgc.request(self.REPORT_STATUS.format(report_id=report_id))
def download_report(self, report_id):
return self.qgc.request(self.REPORT_DOWNLOAD.format(report_id=report_id))
def download_scan_results(self, scan_id):
return self.qgc.request(self.SCAN_DOWNLOAD.format(scan_id=scan_id))
def generate_scan_result_XML(self, limit=1000, offset=1, status='FINISHED'):
report_xml = E.ServiceRequest(
E.filters(
E.Criteria({'field': 'status', 'operator': 'EQUALS'}, status
),
),
E.preferences(
E.startFromOffset(str(offset)),
E.limitResults(str(limit))
),
)
return report_xml
def generate_scan_report_XML(self, scan_id):
"""Generates a CSV report for an asset based on template defined in .ini file"""
report_xml = E.ServiceRequest(
E.data(
E.Report(
E.name('<![CDATA[API Scan Report generated by VulnWhisperer]]>'),
E.description('<![CDATA[CSV Scanning report for VulnWhisperer]]>'),
E.format('CSV'),
E.type('WAS_SCAN_REPORT'),
E.template(
E.id(self.template_id)
),
E.config(
E.scanReport(
E.target(
E.scans(
E.WasScan(
E.id(scan_id)
)
),
),
),
)
)
)
)
return report_xml
def generate_webapp_report_XML(self, app_id):
"""Generates a CSV report for an asset based on template defined in .ini file"""
report_xml = E.ServiceRequest(
E.data(
E.Report(
E.name('<![CDATA[API Web Application Report generated by VulnWhisperer]]>'),
E.description('<![CDATA[CSV WebApp report for VulnWhisperer]]>'),
E.format('CSV'),
E.template(
E.id(self.template_id)
),
E.config(
E.webAppReport(
E.target(
E.webapps(
E.WebApp(
E.id(app_id)
)
),
),
),
)
)
)
)
return report_xml
def generate_app_id_scan_XML(self, app_id):
report_xml = E.ServiceRequest(
E.filters(
E.Criteria({'field': 'webApp.id', 'operator': 'EQUALS'}, app_id
),
),
)
return report_xml
def create_report(self, report_id, kind='scan'):
mapper = {'scan': self.generate_scan_report_XML,
'webapp': self.generate_webapp_report_XML}
try:
# print lxml.etree.tostring(mapper[kind](report_id), pretty_print=True)
data = mapper[kind](report_id)
except Exception as e:
print(e)
return self.qgc.request(self.REPORT_CREATE, data)
def delete_report(self, report_id):
return self.qgc.request(self.DELETE_REPORT.format(report_id=report_id))
class qualysReportFields:
CATEGORIES = ['VULNERABILITY',
'SENSITIVECONTENT',
'INFORMATION_GATHERED']
# URL Vulnerability Information
VULN_BLOCK = [
CATEGORIES[0],
'ID',
'QID',
'Url',
'Param',
'Function',
'Form Entry Point',
'Access Path',
'Authentication',
'Ajax Request',
'Ajax Request ID',
'Ignored',
'Ignore Reason',
'Ignore Date',
'Ignore User',
'Ignore Comments',
'First Time Detected',
'Last Time Detected',
'Last Time Tested',
'Times Detected',
'Payload #1',
'Request Method #1',
'Request URL #1',
'Request Headers #1',
'Response #1',
'Evidence #1',
]
INFO_HEADER = [
'Vulnerability Category',
'ID',
'QID',
'Response #1',
'Last Time Detected',
]
INFO_BLOCK = [
CATEGORIES[2],
'ID',
'QID',
'Results',
'Detection Date',
]
QID_HEADER = [
'QID',
'Id',
'Title',
'Category',
'Severity Level',
'Groups',
'OWASP',
'WASC',
'CWE',
'CVSS Base',
'CVSS Temporal',
'Description',
'Impact',
'Solution',
]
GROUP_HEADER = ['GROUP', 'Name', 'Category']
OWASP_HEADER = ['OWASP', 'Code', 'Name']
WASC_HEADER = ['WASC', 'Code', 'Name']
SCAN_META = ['Web Application Name', 'URL', 'Owner', 'Scope', 'Operating System']
CATEGORY_HEADER = ['Category', 'Severity', 'Level', 'Description']
class qualysUtils:
def __init__(self):
pass
def grab_section(
self,
report,
section,
end=[],
pop_last=False,
):
temp_list = []
max_col_count = 0
with open(report, 'rb') as csvfile:
q_report = csv.reader(csvfile, delimiter=',', quotechar='"')
for line in q_report:
if set(line) == set(section):
break
# Reads text until the end of the block:
for line in q_report: # This keeps reading the file
temp_list.append(line)
if line in end:
break
if pop_last and len(temp_list) > 1:
temp_list.pop(-1)
return temp_list
def iso_to_epoch(self, dt):
return dp.parse(dt).strftime('%s')
def cleanser(self, _data):
repls = (('\n', '|||'), ('\r', '|||'), (',', ';'), ('\t', '|||'))
if _data:
_data = reduce(lambda a, kv: a.replace(*kv), repls, str(_data))
return _data
class qualysWebAppReport:
# URL Vulnerability Information
WEB_APP_VULN_BLOCK = list(qualysReportFields.VULN_BLOCK)
WEB_APP_VULN_BLOCK.insert(0, 'Web Application Name')
WEB_APP_VULN_BLOCK.insert(WEB_APP_VULN_BLOCK.index('Ignored'), 'Status')
WEB_APP_VULN_HEADER = list(WEB_APP_VULN_BLOCK)
WEB_APP_VULN_HEADER[WEB_APP_VULN_BLOCK.index(qualysReportFields.CATEGORIES[0])] = \
'Vulnerability Category'
WEB_APP_SENSITIVE_HEADER = list(WEB_APP_VULN_HEADER)
WEB_APP_SENSITIVE_HEADER.insert(WEB_APP_SENSITIVE_HEADER.index('Url'
), 'Content')
WEB_APP_SENSITIVE_BLOCK = list(WEB_APP_SENSITIVE_HEADER)
WEB_APP_SENSITIVE_BLOCK[WEB_APP_SENSITIVE_BLOCK.index('Vulnerability Category'
)] = qualysReportFields.CATEGORIES[1]
WEB_APP_INFO_HEADER = list(qualysReportFields.INFO_HEADER)
WEB_APP_INFO_HEADER.insert(0, 'Web Application Name')
WEB_APP_INFO_BLOCK = list(qualysReportFields.INFO_BLOCK)
WEB_APP_INFO_BLOCK.insert(0, 'Web Application Name')
QID_HEADER = list(qualysReportFields.QID_HEADER)
GROUP_HEADER = list(qualysReportFields.GROUP_HEADER)
OWASP_HEADER = list(qualysReportFields.OWASP_HEADER)
WASC_HEADER = list(qualysReportFields.WASC_HEADER)
SCAN_META = list(qualysReportFields.SCAN_META)
CATEGORY_HEADER = list(qualysReportFields.CATEGORY_HEADER)
def __init__(
self,
config=None,
file_in=None,
file_stream=False,
delimiter=',',
quotechar='"',
):
self.file_in = file_in
self.file_stream = file_stream
self.report = None
self.utils = qualysUtils()
if config:
try:
self.qw = qualysWhisperAPI(config=config)
except Exception as e:
print('Could not load config! Please check settings. Error: %s' % e)
if file_stream:
self.open_file = file_in.splitlines()
elif file_in:
self.open_file = open(file_in, 'rb')
self.downloaded_file = None
def get_hostname(self, report):
host = ''
with open(report, 'rb') as csvfile:
q_report = csv.reader(csvfile, delimiter=',', quotechar='"')
for x in q_report:
if 'Web Application Name' in x[0]:
host = q_report.next()[0]
return host
def get_scanreport_name(self, report):
scan_name = ''
with open(report, 'rb') as csvfile:
q_report = csv.reader(csvfile, delimiter=',', quotechar='"')
for x in q_report:
if 'Scans' in x[0]:
scan_name = x[1]
return scan_name
def grab_sections(self, report):
all_dataframes = []
with open(report, 'rb') as csvfile:
all_dataframes.append(pd.DataFrame(self.utils.grab_section(report,
self.WEB_APP_VULN_BLOCK,
end=[self.WEB_APP_SENSITIVE_BLOCK,
self.WEB_APP_INFO_BLOCK],
pop_last=True),
columns=self.WEB_APP_VULN_HEADER))
all_dataframes.append(pd.DataFrame(self.utils.grab_section(report,
self.WEB_APP_SENSITIVE_BLOCK,
end=[self.WEB_APP_INFO_BLOCK,
self.WEB_APP_SENSITIVE_BLOCK],
pop_last=True),
columns=self.WEB_APP_SENSITIVE_HEADER))
all_dataframes.append(pd.DataFrame(self.utils.grab_section(report,
self.WEB_APP_INFO_BLOCK,
end=[self.QID_HEADER],
pop_last=True),
columns=self.WEB_APP_INFO_HEADER))
all_dataframes.append(pd.DataFrame(self.utils.grab_section(report,
self.QID_HEADER,
end=[self.GROUP_HEADER],
pop_last=True),
columns=self.QID_HEADER))
all_dataframes.append(pd.DataFrame(self.utils.grab_section(report,
self.GROUP_HEADER,
end=[self.OWASP_HEADER],
pop_last=True),
columns=self.GROUP_HEADER))
all_dataframes.append(pd.DataFrame(self.utils.grab_section(report,
self.OWASP_HEADER,
end=[self.WASC_HEADER],
pop_last=True),
columns=self.OWASP_HEADER))
all_dataframes.append(pd.DataFrame(self.utils.grab_section(report,
self.WASC_HEADER, end=[['APPENDIX']],
pop_last=True),
columns=self.WASC_HEADER))
all_dataframes.append(pd.DataFrame(self.utils.grab_section(report,
self.CATEGORY_HEADER),
columns=self.CATEGORY_HEADER))
return all_dataframes
def data_normalizer(self, dataframes):
"""
Merge and clean data
:param dataframes:
:return:
"""
merged_df = pd.concat([dataframes[0], dataframes[1],
dataframes[2]], axis=0,
ignore_index=False)
merged_df = pd.merge(merged_df, dataframes[3], left_on='QID',
right_on='Id')
if 'Content' not in merged_df:
merged_df['Content'] = ''
columns_to_cleanse = ['Payload #1', 'Request Method #1', 'Request URL #1',
'Request Headers #1', 'Response #1', 'Evidence #1',
'Description', 'Impact', 'Solution', 'Url', 'Content']
for col in columns_to_cleanse:
merged_df[col] = merged_df[col].astype(str).apply(self.utils.cleanser)
merged_df = merged_df.drop(['QID_y', 'QID_x'], axis=1)
merged_df = merged_df.rename(columns={'Id': 'QID'})
merged_df = merged_df.replace('N/A','').fillna('')
try:
merged_df = \
merged_df[~merged_df.Title.str.contains('Links Crawled|External Links Discovered'
)]
except Exception as e:
print(e)
return merged_df
def download_file(self, file_id):
report = self.qw.download_report(file_id)
filename = str(file_id) + '.csv'
file_out = open(filename, 'w')
for line in report.splitlines():
file_out.write(line + '\n')
file_out.close()
print('[ACTION] - File written to %s' % filename)
return filename
def remove_file(self, filename):
os.remove(filename)
def process_data(self, file_id, scan=True, cleanup=True):
"""Downloads a file from qualys and normalizes it"""
download_file = self.download_file(file_id)
print('[ACTION] - Downloading file ID: %s' % file_id)
report_data = self.grab_sections(download_file)
merged_data = self.data_normalizer(report_data)
if scan:
scan_name = self.get_scanreport_name(download_file)
merged_data['ScanName'] = scan_name
# TODO cleanup old data (delete)
return merged_data
def whisper_reports(self, report_id, updated_date, cleanup=False):
"""
report_id: App ID
updated_date: Last time scan was ran for app_id
"""
vuln_ready = None
try:
if 'Z' in updated_date:
updated_date = self.utils.iso_to_epoch(updated_date)
report_name = 'qualys_web_' + str(report_id) \
+ '_{last_updated}'.format(last_updated=updated_date) \
+ '.csv'
if os.path.isfile(report_name):
print('[ACTION] - File already exists! Skipping...')
pass
else:
print('[ACTION] - Generating report for %s' % report_id)
status = self.qw.create_report(report_id)
root = objectify.fromstring(status)
if root.responseCode == 'SUCCESS':
print('[INFO] - Successfully generated report for webapp: %s' \
% report_id)
generated_report_id = root.data.Report.id
print ('[INFO] - New Report ID: %s' \
% generated_report_id)
vuln_ready = self.process_data(generated_report_id)
vuln_ready.to_csv(report_name, index=False, header=True)  # TODO: add timestamp of when it occurred
print('[SUCCESS] - Report written to %s' \
% report_name)
if cleanup:
print('[ACTION] - Removing report %s' \
% generated_report_id)
cleaning_up = \
self.qw.delete_report(generated_report_id)
self.remove_file(str(generated_report_id) + '.csv')
print('[ACTION] - Deleted report: %s' \
% generated_report_id)
else:
print('Could not process report ID: %s' % status)
except Exception as e:
print('[ERROR] - Could not process %s - %s' % (report_id, e))
return vuln_ready
class qualysScanReport:
# URL Vulnerability Information
WEB_SCAN_VULN_BLOCK = list(qualysReportFields.VULN_BLOCK)
WEB_SCAN_VULN_BLOCK.insert(WEB_SCAN_VULN_BLOCK.index('QID'), 'Detection ID')
WEB_SCAN_VULN_HEADER = list(WEB_SCAN_VULN_BLOCK)
WEB_SCAN_VULN_HEADER[WEB_SCAN_VULN_BLOCK.index(qualysReportFields.CATEGORIES[0])] = \
'Vulnerability Category'
WEB_SCAN_SENSITIVE_HEADER = list(WEB_SCAN_VULN_HEADER)
WEB_SCAN_SENSITIVE_HEADER.insert(WEB_SCAN_SENSITIVE_HEADER.index('Url'
), 'Content')
WEB_SCAN_SENSITIVE_BLOCK = list(WEB_SCAN_SENSITIVE_HEADER)
WEB_SCAN_SENSITIVE_BLOCK.insert(WEB_SCAN_SENSITIVE_BLOCK.index('QID'), 'Detection ID')
WEB_SCAN_SENSITIVE_BLOCK[WEB_SCAN_SENSITIVE_BLOCK.index('Vulnerability Category'
)] = qualysReportFields.CATEGORIES[1]
WEB_SCAN_INFO_HEADER = list(qualysReportFields.INFO_HEADER)
WEB_SCAN_INFO_HEADER.insert(WEB_SCAN_INFO_HEADER.index('QID'), 'Detection ID')
WEB_SCAN_INFO_BLOCK = list(qualysReportFields.INFO_BLOCK)
WEB_SCAN_INFO_BLOCK.insert(WEB_SCAN_INFO_BLOCK.index('QID'), 'Detection ID')
QID_HEADER = list(qualysReportFields.QID_HEADER)
GROUP_HEADER = list(qualysReportFields.GROUP_HEADER)
OWASP_HEADER = list(qualysReportFields.OWASP_HEADER)
WASC_HEADER = list(qualysReportFields.WASC_HEADER)
SCAN_META = list(qualysReportFields.SCAN_META)
CATEGORY_HEADER = list(qualysReportFields.CATEGORY_HEADER)
def __init__(
self,
config=None,
file_in=None,
file_stream=False,
delimiter=',',
quotechar='"',
):
self.file_in = file_in
self.file_stream = file_stream
self.report = None
self.utils = qualysUtils()
if config:
try:
self.qw = qualysWhisperAPI(config=config)
except Exception as e:
print('Could not load config! Please check settings. Error: %s' % e)
if file_stream:
self.open_file = file_in.splitlines()
elif file_in:
self.open_file = open(file_in, 'rb')
self.downloaded_file = None
def grab_sections(self, report):
all_dataframes = []
dict_tracker = {}
with open(report, 'rb') as csvfile:
dict_tracker['WEB_SCAN_VULN_BLOCK'] = pd.DataFrame(self.utils.grab_section(report,
self.WEB_SCAN_VULN_BLOCK,
end=[
self.WEB_SCAN_SENSITIVE_BLOCK,
self.WEB_SCAN_INFO_BLOCK],
pop_last=True),
columns=self.WEB_SCAN_VULN_HEADER)
dict_tracker['WEB_SCAN_SENSITIVE_BLOCK'] = pd.DataFrame(self.utils.grab_section(report,
self.WEB_SCAN_SENSITIVE_BLOCK,
end=[
self.WEB_SCAN_INFO_BLOCK,
self.WEB_SCAN_SENSITIVE_BLOCK],
pop_last=True),
columns=self.WEB_SCAN_SENSITIVE_HEADER)
dict_tracker['WEB_SCAN_INFO_BLOCK'] = pd.DataFrame(self.utils.grab_section(report,
self.WEB_SCAN_INFO_BLOCK,
end=[self.QID_HEADER],
pop_last=True),
columns=self.WEB_SCAN_INFO_HEADER)
dict_tracker['QID_HEADER'] = pd.DataFrame(self.utils.grab_section(report,
self.QID_HEADER,
end=[self.GROUP_HEADER],
pop_last=True),
columns=self.QID_HEADER)
dict_tracker['GROUP_HEADER'] = pd.DataFrame(self.utils.grab_section(report,
self.GROUP_HEADER,
end=[self.OWASP_HEADER],
pop_last=True),
columns=self.GROUP_HEADER)
dict_tracker['OWASP_HEADER'] = pd.DataFrame(self.utils.grab_section(report,
self.OWASP_HEADER,
end=[self.WASC_HEADER],
pop_last=True),
columns=self.OWASP_HEADER)
dict_tracker['WASC_HEADER'] = pd.DataFrame(self.utils.grab_section(report,
self.WASC_HEADER, end=[['APPENDIX']],
pop_last=True),
columns=self.WASC_HEADER)
dict_tracker['SCAN_META'] = pd.DataFrame(self.utils.grab_section(report,
self.SCAN_META,
end=[self.CATEGORY_HEADER],
pop_last=True),
columns=self.SCAN_META)
dict_tracker['CATEGORY_HEADER'] = pd.DataFrame(self.utils.grab_section(report,
self.CATEGORY_HEADER),
columns=self.CATEGORY_HEADER)
all_dataframes.append(dict_tracker)
return all_dataframes
def data_normalizer(self, dataframes):
"""
Merge and clean data
:param dataframes:
:return:
"""
df_dict = dataframes[0]
merged_df = pd.concat([df_dict['WEB_SCAN_VULN_BLOCK'], df_dict['WEB_SCAN_SENSITIVE_BLOCK'],
df_dict['WEB_SCAN_INFO_BLOCK']], axis=0,
ignore_index=False)
merged_df = pd.merge(merged_df, df_dict['QID_HEADER'], left_on='QID',
right_on='Id')
if 'Content' not in merged_df:
merged_df['Content'] = ''
columns_to_cleanse = ['Payload #1', 'Request Method #1', 'Request URL #1',
'Request Headers #1', 'Response #1', 'Evidence #1',
'Description', 'Impact', 'Solution', 'Url', 'Content']
for col in columns_to_cleanse:
merged_df[col] = merged_df[col].apply(self.utils.cleanser)
merged_df = merged_df.drop(['QID_y', 'QID_x'], axis=1)
merged_df = merged_df.rename(columns={'Id': 'QID'})
merged_df = merged_df.assign(**df_dict['SCAN_META'].to_dict(orient='records')[0])
merged_df = pd.merge(merged_df, df_dict['CATEGORY_HEADER'], how='left', left_on=['Category', 'Severity Level'],
right_on=['Category', 'Severity'], suffixes=('Severity', 'CatSev'))
merged_df = merged_df.replace('N/A', '').fillna('')
try:
merged_df = \
merged_df[~merged_df.Title.str.contains('Links Crawled|External Links Discovered'
)]
except Exception as e:
print(e)
return merged_df
def download_file(self, path='', file_id=None):
report = self.qw.download_report(file_id)
filename = path + str(file_id) + '.csv'
file_out = open(filename, 'w')
for line in report.splitlines():
file_out.write(line + '\n')
file_out.close()
print('[ACTION] - File written to %s' % filename)
return filename
def remove_file(self, filename):
os.remove(filename)
def process_data(self, path='', file_id=None, cleanup=True):
"""Downloads a file from qualys and normalizes it"""
download_file = self.download_file(path=path, file_id=file_id)
print('[ACTION] - Downloading file ID: %s' % file_id)
report_data = self.grab_sections(download_file)
merged_data = self.data_normalizer(report_data)
merged_data.sort_index(axis=1, inplace=True)
# TODO cleanup old data (delete)
return merged_data
def whisper_reports(self, report_id, updated_date, cleanup=False):
"""
report_id: App ID
updated_date: Last time scan was ran for app_id
"""
vuln_ready = None
try:
if 'Z' in updated_date:
updated_date = self.utils.iso_to_epoch(updated_date)
report_name = 'qualys_web_' + str(report_id) \
+ '_{last_updated}'.format(last_updated=updated_date) \
+ '.csv'
if os.path.isfile(report_name):
print('[ACTION] - File already exists! Skipping...')
pass
else:
print('[ACTION] - Generating report for %s' % report_id)
status = self.qw.create_report(report_id)
root = objectify.fromstring(status)
if root.responseCode == 'SUCCESS':
print('[INFO] - Successfully generated report for webapp: %s' \
% report_id)
generated_report_id = root.data.Report.id
print ('[INFO] - New Report ID: %s' \
% generated_report_id)
vuln_ready = self.process_data(generated_report_id)
vuln_ready.to_csv(report_name, index=False, header=True)  # TODO: add timestamp of when it occurred
print('[SUCCESS] - Report written to %s' \
% report_name)
if cleanup:
print('[ACTION] - Removing report %s' \
% generated_report_id)
cleaning_up = \
self.qw.delete_report(generated_report_id)
self.remove_file(str(generated_report_id) + '.csv')
print('[ACTION] - Deleted report: %s' \
% generated_report_id)
else:
print('Could not process report ID: %s' % status)
except Exception as e:
print('[ERROR] - Could not process %s - %s' % (report_id, e))
return vuln_ready
maxInt = sys.maxsize
decrement = True
while decrement:
decrement = False
try:
csv.field_size_limit(maxInt)
except OverflowError:
maxInt = int(maxInt/10)
decrement = True
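# The loop above raises csv's field size limit as high as the platform allows:
# csv.field_size_limit(sys.maxsize) overflows where the limit must fit in a
# C long (e.g. 32-bit builds), so the candidate value is divided by 10 until
# the call succeeds. Equivalent standalone sketch:
#
#   import csv, sys
#   limit = sys.maxsize
#   while True:
#       try:
#           csv.field_size_limit(limit)
#           break
#       except OverflowError:
#           limit //= 10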


@@ -0,0 +1,124 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
__author__ = 'Nathan Young'
import logging
import sys
import xml.etree.ElementTree as ET
import dateutil.parser as dp
import pandas as pd
import qualysapi
class qualysWhisperAPI(object):
SCANS = 'api/2.0/fo/scan'
def __init__(self, config=None):
self.logger = logging.getLogger('qualysWhisperAPI')
self.config = config
try:
self.qgc = qualysapi.connect(config, 'qualys_vuln')
# Fail early if we can't make a request or auth is incorrect
self.qgc.request('about.php')
self.logger.info('Connected to Qualys at {}'.format(self.qgc.server))
except Exception as e:
self.logger.error('Could not connect to Qualys: {}'.format(str(e)))
sys.exit(1)
def scan_xml_parser(self, xml):
all_records = []
root = ET.XML(xml.encode("utf-8"))
for child in root.find('.//SCAN_LIST'):
all_records.append({
'name': child.find('TITLE').text,
'id': child.find('REF').text,
'date': child.find('LAUNCH_DATETIME').text,
'type': child.find('TYPE').text,
'duration': child.find('DURATION').text,
'status': child.find('.//STATE').text,
})
return pd.DataFrame(all_records)
def get_all_scans(self):
parameters = {
'action': 'list',
'echo_request': 0,
'show_op': 0,
'launched_after_datetime': '0001-01-01'
}
scans_xml = self.qgc.request(self.SCANS, parameters)
return self.scan_xml_parser(scans_xml)
def get_scan_details(self, scan_id=None):
parameters = {
'action': 'fetch',
'echo_request': 0,
'output_format': 'json_extended',
'mode': 'extended',
'scan_ref': scan_id
}
scan_json = self.qgc.request(self.SCANS, parameters)
# First two columns are metadata we already have
# Last column corresponds to "target_distribution_across_scanner_appliances" element
# which doesn't follow the schema and breaks the pandas data manipulation
return pd.read_json(scan_json).iloc[2:-1]
class qualysUtils:
def __init__(self):
self.logger = logging.getLogger('qualysUtils')
def iso_to_epoch(self, dt):
out = dp.parse(dt).strftime('%s')
self.logger.info('Converted {} to {}'.format(dt, out))
return out
class qualysVulnScan:
def __init__(
self,
config=None,
file_in=None,
file_stream=False,
delimiter=',',
quotechar='"',
):
self.logger = logging.getLogger('qualysVulnScan')
self.file_in = file_in
self.file_stream = file_stream
self.report = None
self.utils = qualysUtils()
if config:
try:
self.qw = qualysWhisperAPI(config=config)
except Exception as e:
self.logger.error('Could not load config! Please check settings. Error: {}'.format(str(e)))
if file_stream:
self.open_file = file_in.splitlines()
elif file_in:
self.open_file = open(file_in, 'rb')
self.downloaded_file = None
def process_data(self, scan_id=None):
"""Downloads a file from Qualys and normalizes it"""
self.logger.info('Downloading scan ID: {}'.format(scan_id))
scan_report = self.qw.get_scan_details(scan_id=scan_id)
if not scan_report.empty:
keep_columns = ['category', 'cve_id', 'cvss3_base', 'cvss3_temporal', 'cvss_base',
'cvss_temporal', 'dns', 'exploitability', 'fqdn', 'impact', 'ip', 'ip_status',
'netbios', 'os', 'pci_vuln', 'port', 'protocol', 'qid', 'results', 'severity',
'solution', 'ssl', 'threat', 'title', 'type', 'vendor_reference']
scan_report = scan_report.filter(keep_columns)
scan_report['severity'] = scan_report['severity'].astype(int).astype(str)
scan_report['qid'] = scan_report['qid'].astype(int).astype(str)
else:
self.logger.warn('Scan ID {} has no vulnerabilities, skipping.'.format(scan_id))
return scan_report
return scan_report
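# Usage sketch (the config path is illustrative; the 'qualys_vuln' section name
# comes from the qualysapi.connect() call in qualysWhisperAPI above):
#
#   scanner = qualysVulnScan(config='frameworks.ini')
#   scans = scanner.qw.get_all_scans()       # DataFrame: name, id, date, type, ...
#   for scan_id in scans['id']:
#       report = scanner.process_data(scan_id=scan_id)
#       # report keeps only the columns listed in keep_columns above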


@@ -0,0 +1,465 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
__author__ = 'Austin Taylor'
from lxml import objectify
from lxml.builder import E
import xml.etree.ElementTree as ET
import pandas as pd
import qualysapi
import qualysapi.config as qcconf
import requests
import sys
import os
import csv
import logging
import dateutil.parser as dp
class qualysWhisperAPI(object):
COUNT_WEBAPP = '/count/was/webapp'
COUNT_WASSCAN = '/count/was/wasscan'
DELETE_REPORT = '/delete/was/report/{report_id}'
GET_WEBAPP_DETAILS = '/get/was/webapp/{was_id}'
QPS_REST_3 = '/qps/rest/3.0'
REPORT_DETAILS = '/get/was/report/{report_id}'
REPORT_STATUS = '/status/was/report/{report_id}'
REPORT_CREATE = '/create/was/report'
REPORT_DOWNLOAD = '/download/was/report/{report_id}'
SCAN_DETAILS = '/get/was/wasscan/{scan_id}'
SCAN_DOWNLOAD = '/download/was/wasscan/{scan_id}'
SEARCH_REPORTS = '/search/was/report'
SEARCH_WEB_APPS = '/search/was/webapp'
SEARCH_WAS_SCAN = '/search/was/wasscan'
VERSION = '/qps/rest/portal/version'
def __init__(self, config=None):
self.logger = logging.getLogger('qualysWhisperAPI')
self.config = config
try:
self.qgc = qualysapi.connect(config, 'qualys_web')
self.logger.info('Connected to Qualys at {}'.format(self.qgc.server))
except Exception as e:
self.logger.error('Could not connect to Qualys: {}'.format(str(e)))
self.headers = {
#"content-type": "text/xml"}
"Accept" : "application/json",
"Content-Type": "application/json"}
self.config_parse = qcconf.QualysConnectConfig(config, 'qualys_web')
try:
self.template_id = self.config_parse.get_template_id()
except:
self.logger.error('Could not retrieve template ID')
####
#### GET SCANS TO PROCESS
####
def get_was_scan_count(self, status):
"""
Checks number of scans, used to control the api limits
"""
parameters = (
E.ServiceRequest(
E.filters(
E.Criteria({'field': 'status', 'operator': 'EQUALS'}, status))))
xml_output = self.qgc.request(self.COUNT_WASSCAN, parameters)
root = objectify.fromstring(xml_output.encode('utf-8'))
return root.count.text
def generate_scan_result_XML(self, limit=1000, offset=1, status='FINISHED'):
report_xml = E.ServiceRequest(
E.filters(
E.Criteria({'field': 'status', 'operator': 'EQUALS'}, status
),
),
E.preferences(
E.startFromOffset(str(offset)),
E.limitResults(str(limit))
),
)
return report_xml
def get_scan_info(self, limit=1000, offset=1, status='FINISHED'):
""" Returns XML of ALL WAS Scans"""
data = self.generate_scan_result_XML(limit=limit, offset=offset, status=status)
return self.qgc.request(self.SEARCH_WAS_SCAN, data)
def xml_parser(self, xml, dupfield=None):
all_records = []
root = ET.XML(xml)
for i, child in enumerate(root):
for subchild in child:
record = {}
dup_tracker = 0
for p in subchild:
record[p.tag] = p.text
for o in p:
if o.tag in record:
dup_tracker += 1
record[o.tag + '_%s' % dup_tracker] = o.text
else:
record[o.tag] = o.text
all_records.append(record)
return pd.DataFrame(all_records)
def get_all_scans(self, limit=1000, offset=1, status='FINISHED'):
qualys_api_limit = limit
dataframes = []
_records = []
try:
total = int(self.get_was_scan_count(status=status))
self.logger.debug('Retrieved WAS scan count')
self.logger.info('Retrieving information for {} scans'.format(total))
for i in range(0, total):
if i % limit == 0:
if (total - i) < limit:
qualys_api_limit = total - i
self.logger.info('Making a request with a limit of {} at offset {}'.format((str(qualys_api_limit)), str(i + 1)))
scan_info = self.get_scan_info(limit=qualys_api_limit, offset=i + 1, status=status)
_records.append(scan_info)
self.logger.debug('Converting XML to DataFrame')
dataframes = [self.xml_parser(xml) for xml in _records]
except Exception as e:
self.logger.error("Couldn't process all scans: {}".format(e))
return pd.concat(dataframes, axis=0).reset_index().drop('index', axis=1)
####
#### CREATE VULNERABILITY REPORT AND DOWNLOAD IT
####
def get_report_status(self, report_id):
return self.qgc.request(self.REPORT_STATUS.format(report_id=report_id))
def download_report(self, report_id):
return self.qgc.request(self.REPORT_DOWNLOAD.format(report_id=report_id))
def generate_scan_report_XML(self, scan_id):
"""Generates a CSV report for an asset based on template defined in .ini file"""
report_xml = E.ServiceRequest(
E.data(
E.Report(
E.name('<![CDATA[API Scan Report generated by VulnWhisperer]]>'),
E.description('<![CDATA[CSV Scanning report for VulnWhisperer]]>'),
E.format('CSV'),
#type is not needed, as the template already has it
E.type('WAS_SCAN_REPORT'),
E.template(
E.id(self.template_id)
),
E.config(
E.scanReport(
E.target(
E.scans(
E.WasScan(
E.id(scan_id)
)
),
),
),
)
)
)
)
return report_xml
def create_report(self, report_id, kind='scan'):
mapper = {'scan': self.generate_scan_report_XML}
try:
data = mapper[kind](report_id)
except Exception as e:
self.logger.error('Error creating report: {}'.format(str(e)))
return self.qgc.request(self.REPORT_CREATE, data).encode('utf-8')
def delete_report(self, report_id):
return self.qgc.request(self.DELETE_REPORT.format(report_id=report_id))
class qualysReportFields:
CATEGORIES = ['VULNERABILITY',
'SENSITIVECONTENT',
'INFORMATION_GATHERED']
# URL Vulnerability Information
VULN_BLOCK = [
CATEGORIES[0],
'ID',
'QID',
'Url',
'Param',
'Function',
'Form Entry Point',
'Access Path',
'Authentication',
'Ajax Request',
'Ajax Request ID',
'Ignored',
'Ignore Reason',
'Ignore Date',
'Ignore User',
'Ignore Comments',
'First Time Detected',
'Last Time Detected',
'Last Time Tested',
'Times Detected',
'Payload #1',
'Request Method #1',
'Request URL #1',
'Request Headers #1',
'Response #1',
'Evidence #1',
]
INFO_HEADER = [
'Vulnerability Category',
'ID',
'QID',
'Response #1',
'Last Time Detected',
]
INFO_BLOCK = [
CATEGORIES[2],
'ID',
'QID',
'Results',
'Detection Date',
]
QID_HEADER = [
'QID',
'Id',
'Title',
'Category',
'Severity Level',
'Groups',
'OWASP',
'WASC',
'CWE',
'CVSS Base',
'CVSS Temporal',
'Description',
'Impact',
'Solution',
]
GROUP_HEADER = ['GROUP', 'Name', 'Category']
OWASP_HEADER = ['OWASP', 'Code', 'Name']
WASC_HEADER = ['WASC', 'Code', 'Name']
SCAN_META = ['Web Application Name', 'URL', 'Owner', 'Scope', 'Operating System']
CATEGORY_HEADER = ['Category', 'Severity', 'Level', 'Description']
class qualysUtils:
def __init__(self):
self.logger = logging.getLogger('qualysUtils')
def grab_section(
self,
report,
section,
end=[],
pop_last=False,
):
temp_list = []
max_col_count = 0
with open(report, 'rb') as csvfile:
q_report = csv.reader(csvfile, delimiter=',', quotechar='"')
for line in q_report:
if set(line) == set(section):
break
# Reads text until the end of the block:
for line in q_report: # This keeps reading the file
temp_list.append(line)
if line in end:
break
if pop_last and len(temp_list) > 1:
temp_list.pop(-1)
return temp_list
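# grab_section() streams the CSV until it finds the header row matching
# `section`, then accumulates every subsequent row until one equals an entry
# in `end`; with pop_last=True that trailing end-marker row is dropped. This
# is how grab_sections() below splits one flat Qualys CSV export into the
# per-block DataFrames (vulnerabilities, QID details, OWASP/WASC mappings...).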
def iso_to_epoch(self, dt):
return dp.parse(dt).strftime('%s')
def cleanser(self, _data):
repls = (('\n', '|||'), ('\r', '|||'), (',', ';'), ('\t', '|||'))
if _data:
_data = reduce(lambda a, kv: a.replace(*kv), repls, str(_data))
return _data
class qualysScanReport:
# URL Vulnerability Information
WEB_SCAN_VULN_BLOCK = list(qualysReportFields.VULN_BLOCK)
WEB_SCAN_VULN_BLOCK.insert(WEB_SCAN_VULN_BLOCK.index('QID'), 'Detection ID')
WEB_SCAN_VULN_HEADER = list(WEB_SCAN_VULN_BLOCK)
WEB_SCAN_VULN_HEADER[WEB_SCAN_VULN_BLOCK.index(qualysReportFields.CATEGORIES[0])] = \
'Vulnerability Category'
WEB_SCAN_SENSITIVE_HEADER = list(WEB_SCAN_VULN_HEADER)
WEB_SCAN_SENSITIVE_HEADER.insert(WEB_SCAN_SENSITIVE_HEADER.index('Url'
), 'Content')
WEB_SCAN_SENSITIVE_BLOCK = list(WEB_SCAN_SENSITIVE_HEADER)
WEB_SCAN_SENSITIVE_BLOCK.insert(WEB_SCAN_SENSITIVE_BLOCK.index('QID'), 'Detection ID')
WEB_SCAN_SENSITIVE_BLOCK[WEB_SCAN_SENSITIVE_BLOCK.index('Vulnerability Category'
)] = qualysReportFields.CATEGORIES[1]
WEB_SCAN_INFO_HEADER = list(qualysReportFields.INFO_HEADER)
WEB_SCAN_INFO_HEADER.insert(WEB_SCAN_INFO_HEADER.index('QID'), 'Detection ID')
WEB_SCAN_INFO_BLOCK = list(qualysReportFields.INFO_BLOCK)
WEB_SCAN_INFO_BLOCK.insert(WEB_SCAN_INFO_BLOCK.index('QID'), 'Detection ID')
QID_HEADER = list(qualysReportFields.QID_HEADER)
GROUP_HEADER = list(qualysReportFields.GROUP_HEADER)
OWASP_HEADER = list(qualysReportFields.OWASP_HEADER)
WASC_HEADER = list(qualysReportFields.WASC_HEADER)
SCAN_META = list(qualysReportFields.SCAN_META)
CATEGORY_HEADER = list(qualysReportFields.CATEGORY_HEADER)
def __init__(
self,
config=None,
file_in=None,
file_stream=False,
delimiter=',',
quotechar='"',
):
self.logger = logging.getLogger('qualysScanReport')
self.file_in = file_in
self.file_stream = file_stream
self.report = None
self.utils = qualysUtils()
if config:
try:
self.qw = qualysWhisperAPI(config=config)
except Exception as e:
self.logger.error('Could not load config! Please check settings. Error: {}'.format(str(e)))
if file_stream:
self.open_file = file_in.splitlines()
elif file_in:
self.open_file = open(file_in, 'rb')
self.downloaded_file = None
def grab_sections(self, report):
all_dataframes = []
dict_tracker = {}
with open(report, 'rb') as csvfile:
dict_tracker['WEB_SCAN_VULN_BLOCK'] = pd.DataFrame(self.utils.grab_section(report,
self.WEB_SCAN_VULN_BLOCK,
end=[
self.WEB_SCAN_SENSITIVE_BLOCK,
self.WEB_SCAN_INFO_BLOCK],
pop_last=True),
columns=self.WEB_SCAN_VULN_HEADER)
dict_tracker['WEB_SCAN_SENSITIVE_BLOCK'] = pd.DataFrame(self.utils.grab_section(report,
self.WEB_SCAN_SENSITIVE_BLOCK,
end=[
self.WEB_SCAN_INFO_BLOCK,
self.WEB_SCAN_SENSITIVE_BLOCK],
pop_last=True),
columns=self.WEB_SCAN_SENSITIVE_HEADER)
dict_tracker['WEB_SCAN_INFO_BLOCK'] = pd.DataFrame(self.utils.grab_section(report,
self.WEB_SCAN_INFO_BLOCK,
end=[self.QID_HEADER],
pop_last=True),
columns=self.WEB_SCAN_INFO_HEADER)
dict_tracker['QID_HEADER'] = pd.DataFrame(self.utils.grab_section(report,
self.QID_HEADER,
end=[self.GROUP_HEADER],
pop_last=True),
columns=self.QID_HEADER)
dict_tracker['GROUP_HEADER'] = pd.DataFrame(self.utils.grab_section(report,
self.GROUP_HEADER,
end=[self.OWASP_HEADER],
pop_last=True),
columns=self.GROUP_HEADER)
dict_tracker['OWASP_HEADER'] = pd.DataFrame(self.utils.grab_section(report,
self.OWASP_HEADER,
end=[self.WASC_HEADER],
pop_last=True),
columns=self.OWASP_HEADER)
dict_tracker['WASC_HEADER'] = pd.DataFrame(self.utils.grab_section(report,
self.WASC_HEADER, end=[['APPENDIX']],
pop_last=True),
columns=self.WASC_HEADER)
dict_tracker['SCAN_META'] = pd.DataFrame(self.utils.grab_section(report,
self.SCAN_META,
end=[self.CATEGORY_HEADER],
pop_last=True),
columns=self.SCAN_META)
dict_tracker['CATEGORY_HEADER'] = pd.DataFrame(self.utils.grab_section(report,
self.CATEGORY_HEADER),
columns=self.CATEGORY_HEADER)
all_dataframes.append(dict_tracker)
return all_dataframes
def data_normalizer(self, dataframes):
"""
Merge and clean data
:param dataframes:
:return:
"""
df_dict = dataframes[0]
merged_df = pd.concat([df_dict['WEB_SCAN_VULN_BLOCK'], df_dict['WEB_SCAN_SENSITIVE_BLOCK'],
df_dict['WEB_SCAN_INFO_BLOCK']], axis=0,
ignore_index=False)
merged_df = pd.merge(merged_df, df_dict['QID_HEADER'], left_on='QID',
right_on='Id')
if 'Content' not in merged_df:
merged_df['Content'] = ''
columns_to_cleanse = ['Payload #1', 'Request Method #1', 'Request URL #1',
'Request Headers #1', 'Response #1', 'Evidence #1',
'Description', 'Impact', 'Solution', 'Url', 'Content']
for col in columns_to_cleanse:
merged_df[col] = merged_df[col].apply(self.utils.cleanser)
merged_df = merged_df.drop(['QID_y', 'QID_x'], axis=1)
merged_df = merged_df.rename(columns={'Id': 'QID'})
merged_df = merged_df.assign(**df_dict['SCAN_META'].to_dict(orient='records')[0])
merged_df = pd.merge(merged_df, df_dict['CATEGORY_HEADER'], how='left', left_on=['Category', 'Severity Level'],
right_on=['Category', 'Severity'], suffixes=('Severity', 'CatSev'))
merged_df = merged_df.replace('N/A', '').fillna('')
try:
merged_df = \
merged_df[~merged_df.Title.str.contains('Links Crawled|External Links Discovered')]
except Exception as e:
self.logger.error('Error normalizing: {}'.format(str(e)))
return merged_df
def download_file(self, path='', file_id=None):
report = self.qw.download_report(file_id)
filename = path + str(file_id) + '.csv'
file_out = open(filename, 'w')
for line in report.splitlines():
file_out.write(line + '\n')
file_out.close()
self.logger.info('File written to {}'.format(filename))
return filename
def process_data(self, path='', file_id=None, cleanup=True):
"""Downloads a file from qualys and normalizes it"""
download_file = self.download_file(path=path, file_id=file_id)
self.logger.info('Downloading file ID: {}'.format(file_id))
report_data = self.grab_sections(download_file)
merged_data = self.data_normalizer(report_data)
merged_data.sort_index(axis=1, inplace=True)
return merged_data
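# Usage sketch (report generation is asynchronous on the Qualys side; the
# polling step is elided, 'report_id' stands for the id parsed from the
# create_report() response, and the scan DataFrame's column names follow the
# Qualys XML tags):
#
#   qsr = qualysScanReport(config='frameworks.ini')
#   scans = qsr.qw.get_all_scans(status='FINISHED')
#   xml = qsr.qw.create_report(scans.iloc[0]['id'], kind='scan')
#   # ...poll qsr.qw.get_report_status(report_id) until the CSV is ready...
#   df = qsr.process_data(path='/tmp/', file_id=report_id)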


@@ -0,0 +1,669 @@
import json
import os
from datetime import datetime, date, timedelta
from jira import JIRA
import requests
import logging
from bottle import template
import re
class JiraAPI(object):
def __init__(self, hostname=None, username=None, password=None, path="", debug=False, clean_obsolete=True, max_time_window=12, decommission_time_window=3):
self.logger = logging.getLogger('JiraAPI')
if debug:
self.logger.setLevel(logging.DEBUG)
if "https://" not in hostname:
hostname = "https://{}".format(hostname)
self.username = username
self.password = password
self.jira = JIRA(options={'server': hostname}, basic_auth=(self.username, self.password))
self.logger.info("Created vjira service for {}".format(hostname))
self.all_tickets = []
self.excluded_tickets = []
self.JIRA_REOPEN_ISSUE = "Reopen Issue"
self.JIRA_CLOSE_ISSUE = "Close Issue"
self.JIRA_RESOLUTION_OBSOLETE = "Obsolete"
self.JIRA_RESOLUTION_FIXED = "Fixed"
self.template_path = 'vulnwhisp/reporting/resources/ticket.tpl'
self.max_ips_ticket = 30
self.attachment_filename = "vulnerable_assets.txt"
self.max_time_tracking = max_time_window #in months
if path:
self.download_tickets(path)
else:
self.logger.warn("No local path specified, skipping Jira ticket download.")
self.max_decommission_time = decommission_time_window #in months
# [HYGIENE] close tickets older than 12 months (max_time_window) as obsolete
if clean_obsolete:
self.close_obsolete_tickets()
# deletes the tag "server_decommission" from those tickets closed <=3 months ago
self.decommission_cleanup()
self.jira_still_vulnerable_comment = '''This ticket has been reopened due to the vulnerability not having been fixed (if multiple assets are affected, all need to be fixed; if the server is down, the latest known vulnerability might be the one reported).
- In the case of the team accepting the risk and wanting to close the ticket, please add the label "*risk_accepted*" to the ticket before closing it.
- If server has been decommissioned, please add the label "*server_decommission*" to the ticket before closing it.
- If when checking the vulnerability it looks like a false positive, _+please elaborate in a comment+_ and add the label "*false_positive*" before closing it; we will review it and report it to the vendor.
If you have further doubts, please contact the Security Team.'''
def create_ticket(self, title, desc, project="IS", components=[], tags=[], attachment_contents = []):
labels = ['vulnerability_management']
for tag in tags:
labels.append(str(tag))
self.logger.info("Creating ticket for project {} title: {}".format(project, title[:20]))
self.logger.debug("project {} has a component requirement: {}".format(project, components))
project_obj = self.jira.project(project)
components_ticket = []
for component in components:
exists = False
for c in project_obj.components:
if component == c.name:
self.logger.debug("resolved component name {} to id {}".format(c.name, c.id))
components_ticket.append({ "id": c.id })
exists=True
if not exists:
self.logger.error("Error creating Ticket: component {} not found".format(component))
return 0
try:
new_issue = self.jira.create_issue(project=project,
summary=title,
description=desc,
issuetype={'name': 'Bug'},
labels=labels,
components=components_ticket)
self.logger.info("Ticket {} created successfully".format(new_issue))
if attachment_contents:
self.add_content_as_attachment(new_issue, attachment_contents)
except Exception as e:
self.logger.error("Failed to create ticket on Jira Project '{}'. Error: {}".format(project, e))
new_issue = False
return new_issue
#Basic JIRA Metrics
def metrics_open_tickets(self, project=None):
jql = "labels= vulnerability_management and resolution = Unresolved"
if project:
jql += " and (project='{}')".format(project)
self.logger.debug('Executing: {}'.format(jql))
return len(self.jira.search_issues(jql, maxResults=0))
def metrics_closed_tickets(self, project=None):
jql = "labels= vulnerability_management and NOT resolution = Unresolved AND created >=startOfMonth(-{})".format(self.max_time_tracking)
if project:
jql += " and (project='{}')".format(project)
return len(self.jira.search_issues(jql, maxResults=0))
def sync(self, vulnerabilities, project, components=[]):
#JIRA structure of each vulnerability: [source, scan_name, title, diagnosis, consequence, solution, ips, risk, references]
self.logger.info("JIRA Sync started")
for vuln in vulnerabilities:
# JIRA doesn't allow labels with spaces, so making sure that the scan_name doesn't have spaces
# if it has, they will be replaced by "_"
if " " in vuln['scan_name']:
vuln['scan_name'] = "_".join(vuln['scan_name'].split(" "))
# we exclude from the vulnerabilities to report those assets that already exist with *risk_accepted*/*server_decommission*
vuln = self.exclude_accepted_assets(vuln)
# make sure after exclusion of risk_accepted assets there are still assets
if vuln['ips']:
exists = False
to_update = False
ticketid = ""
ticket_assets = []
exists, to_update, ticketid, ticket_assets = self.check_vuln_already_exists(vuln)
if exists:
# If ticket "resolved" -> reopen, as vulnerability is still existent
self.reopen_ticket(ticketid=ticketid, comment=self.jira_still_vulnerable_comment)
self.add_label(ticketid, vuln['risk'])
continue
elif to_update:
self.ticket_update_assets(vuln, ticketid, ticket_assets)
self.add_label(ticketid, vuln['risk'])
continue
attachment_contents = []
# if assets >30, add as attachment
# create local text file with assets, attach it to ticket
if len(vuln['ips']) > self.max_ips_ticket:
attachment_contents = vuln['ips']
vuln['ips'] = ["Affected hosts ({assets}) exceed Jira's allowed character limit, added as an attachment.".format(assets = len(attachment_contents))]
try:
tpl = template(self.template_path, vuln)
except Exception as e:
self.logger.error('Exception templating: {}'.format(str(e)))
return 0
self.create_ticket(title=vuln['title'], desc=tpl, project=project, components=components, tags=[vuln['source'], vuln['scan_name'], 'vulnerability', vuln['risk']], attachment_contents = attachment_contents)
else:
self.logger.info("Ignoring vulnerability as all assets are already reported in a risk_accepted ticket")
self.close_fixed_tickets(vulnerabilities)
# we reinitialize so the next sync redoes the query with their specific variables
self.all_tickets = []
self.excluded_tickets = []
return True
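# Usage sketch: each vulnerability dict follows the structure documented at
# the top of sync() (source, scan_name, title, diagnosis, consequence,
# solution, ips, risk, references); the component name is illustrative:
#
#   jira = JiraAPI(hostname='jira.example.com', username='svc_vuln',
#                  password='...', path='/opt/VulnWhisperer/data/jira/')
#   jira.sync(vulnerabilities, project='IS', components=['Security'])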
def exclude_accepted_assets(self, vuln):
# we want to check JIRA tickets with risk_accepted/server_decommission or false_positive labels sharing the same source
# will exclude tickets older than 12 months; old tickets get closed for hygiene and recreated if still vulnerable
labels = [vuln['source'], vuln['scan_name'], 'vulnerability_management', 'vulnerability']
if not self.excluded_tickets:
jql = "{} AND labels in (risk_accepted,server_decommission, false_positive) AND NOT labels=advisory AND created >=startOfMonth(-{})".format(" AND ".join(["labels={}".format(label) for label in labels]), self.max_time_tracking)
self.excluded_tickets = self.jira.search_issues(jql, maxResults=0)
title = vuln['title']
#WARNING: function IGNORES DUPLICATES; after finding a "duplicate" it just returns that it exists,
#it won't iterate over the rest of the tickets looking for other possible duplicates/similar issues
self.logger.info("Comparing vulnerability to risk_accepted tickets")
assets_to_exclude = []
tickets_excluded_assets = []
for index in range(len(self.excluded_tickets)):
checking_ticketid, checking_title, checking_assets = self.ticket_get_unique_fields(self.excluded_tickets[index])
if title.encode('ascii') == checking_title.encode('ascii'):
if checking_assets:
#checking_assets is a list, we add to our full list for later delete all assets
assets_to_exclude+=checking_assets
tickets_excluded_assets.append(checking_ticketid)
if assets_to_exclude:
assets_to_remove = []
self.logger.warn("Vulnerable Assets seen on an already existing risk_accepted Jira ticket: {}".format(', '.join(tickets_excluded_assets)))
self.logger.debug("Original assets: {}".format(vuln['ips']))
#assets in vulnerability have the structure "ip - hostname - port", so we need to match by partial
for exclusion in assets_to_exclude:
# for efficiency, we walk the array of ips from the scanners backwards, as we will be popping out the matches
# and we don't want that to affect the rest of the processing (otherwise, it would miss the asset right after the removed one)
for index in range(len(vuln['ips']))[::-1]:
if exclusion == vuln['ips'][index].split(" - ")[0]:
self.logger.debug("Deleting asset {} from vulnerability {}, seen in risk_accepted.".format(vuln['ips'][index], title))
vuln['ips'].pop(index)
self.logger.debug("Modified assets: {}".format(vuln['ips']))
return vuln
def check_vuln_already_exists(self, vuln):
'''
This function compares a vulnerability with a collection of tickets.
Returns [exists (bool), is equal (bool), ticketid (str), assets (array)]
'''
# we need to return if the vulnerability has already been reported and the ID of the ticket for further processing
#function returns array [duplicated(bool), update(bool), ticketid, ticket_assets]
title = vuln['title']
labels = [vuln['source'], vuln['scan_name'], 'vulnerability_management', 'vulnerability']
#list(set()) to remove duplicates
assets = list(set(re.findall(r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b", ",".join(vuln['ips']))))
if not self.all_tickets:
self.logger.info("Retrieving all JIRA tickets with the following tags {}".format(labels))
# we want to check all JIRA tickets, to include tickets moved to other queues
# will exclude tickets older than 12 months; old tickets get closed for hygiene and recreated if still vulnerable
jql = "{} AND NOT labels=advisory AND created >=startOfMonth(-{})".format(" AND ".join(["labels={}".format(label) for label in labels]), self.max_time_tracking)
self.all_tickets = self.jira.search_issues(jql, maxResults=0)
#WARNING: function IGNORES DUPLICATES; after finding a "duplicate" it just returns that it exists,
#it won't iterate over the rest of the tickets looking for other possible duplicates/similar issues
self.logger.info("Comparing Vulnerabilities to created tickets")
for index in range(len(self.all_tickets)):
checking_ticketid, checking_title, checking_assets = self.ticket_get_unique_fields(self.all_tickets[index])
# added "not risk_accepted", as if it is risk_accepted, we will create a new ticket excluding the accepted assets
if title.encode('ascii') == checking_title.encode('ascii') and not self.is_risk_accepted(self.jira.issue(checking_ticketid)):
difference = list(set(assets).symmetric_difference(checking_assets))
#to check intersection - set(assets) & set(checking_assets)
if difference:
self.logger.info("Asset mismatch, ticket to update. Ticket ID: {}".format(checking_ticketid))
return False, True, checking_ticketid, checking_assets #this will automatically validate
else:
self.logger.info("Confirmed duplicate. Ticket ID: {}".format(checking_ticketid))
return True, False, checking_ticketid, [] #this will automatically validate
return False, False, "", []
def ticket_get_unique_fields(self, ticket):
title = ticket.raw.get('fields', {}).get('summary').encode("ascii").strip()
ticketid = ticket.key.encode("ascii")
assets = self.get_assets_from_description(ticket)
if not assets:
#check if attachment, if so, get assets from attachment
assets = self.get_assets_from_attachment(ticket)
return ticketid, title, assets
def get_assets_from_description(self, ticket, _raw = False):
# Get the assets as a string "host - protocol/port - hostname" separated by "\n"
# structure the text to have the same structure as the assets from the attachment
affected_assets = ""
try:
affected_assets = ticket.raw.get('fields', {}).get('description').encode("ascii").split("{panel:title=Affected Assets}")[1].split("{panel}")[0].replace('\n','').replace(' * ','\n').replace('\n', '', 1)
except Exception as e:
self.logger.error("Unable to process the Ticket's 'Affected Assets'. Ticket ID: {}. Reason: {}".format(ticket, e))
if affected_assets:
if _raw:
# check whether the panel text indicates that the assets were added as an attachment (see the message set in sync() / ticket_update_assets())
if "added as an attachment" in affected_assets:
return False
return affected_assets
try:
# if _raw is not true, we return only the IPs of the affected assets
return list(set(re.findall(r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b", affected_assets)))
except Exception as e:
self.logger.error("Ticket IPs regex failed. Ticket ID: {}. Reason: {}".format(ticket, e))
return False
def get_assets_from_attachment(self, ticket, _raw = False):
# Get the assets as a string "host - protocol/port - hostname" separated by "\n"
affected_assets = []
try:
fields = self.jira.issue(ticket.key).raw.get('fields', {})
attachments = fields.get('attachment', {})
affected_assets = ""
#we will make sure we get the latest version of the file
latest = ''
attachment_id = ''
if attachments:
for item in attachments:
if item.get('filename') == self.attachment_filename:
if not latest:
latest = item.get('created')
attachment_id = item.get('id')
else:
if latest < item.get('created'):
latest = item.get('created')
attachment_id = item.get('id')
affected_assets = self.jira.attachment(attachment_id).get()
except Exception as e:
self.logger.error("Failed to get assets from ticket attachment. Ticket ID: {}. Reason: {}".format(ticket, e))
if affected_assets:
if _raw:
return affected_assets
try:
# if _raw is not true, we return only the IPs of the affected assets
affected_assets = list(set(re.findall(r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b", affected_assets)))
return affected_assets
except Exception as e:
self.logger.error("Ticket IPs Attachment regex failed. Ticket ID: {}. Reason: {}".format(ticket, e))
return False
def parse_asset_to_json(self, asset):
hostname, protocol, port = "", "", ""
asset_info = asset.split(" - ")
ip = asset_info[0]
proto_port = asset_info[1]
# handle the case where the hostname is not reported at all
if len(asset_info) == 3:
hostname = asset_info[2]
if proto_port != "N/A/N/A":
protocol, port = proto_port.split("/")
port = int(float(port))
asset_dict = {
"host": ip,
"protocol": protocol,
"port": port,
"hostname": hostname
}
return asset_dict
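# Example (derived from the parsing above):
#   parse_asset_to_json("10.0.0.1 - tcp/443 - web01.example.com")
#   -> {"host": "10.0.0.1", "protocol": "tcp", "port": 443,
#       "hostname": "web01.example.com"}
# When the asset reports "N/A/N/A" for protocol/port, both stay "".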
def clean_old_attachments(self, ticket):
fields = ticket.raw.get('fields')
attachments = fields.get('attachment')
if attachments:
for item in attachments:
if item.get('filename') == self.attachment_filename:
self.jira.delete_attachment(item.get('id'))
def add_content_as_attachment(self, issue, contents):
try:
#Create the file locally with the data
attachment_file = open(self.attachment_filename, "w")
attachment_file.write("\n".join(contents))
attachment_file.close()
#Push the created file to the ticket
attachment_file = open(self.attachment_filename, "rb")
self.jira.add_attachment(issue, attachment_file, self.attachment_filename)
attachment_file.close()
#remove the temp file
os.remove(self.attachment_filename)
self.logger.info("Added attachment successfully.")
except:
self.logger.error("Error while attaching file to ticket.")
return False
return True
def get_ticket_reported_assets(self, ticket):
#[METRICS] return a list with all the affected assets for that vulnerability (including already resolved ones)
return list(set(re.findall(r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b",str(self.jira.issue(ticket).raw))))
def get_resolution_time(self, ticket):
#get time a ticket took to be resolved
ticket_obj = self.jira.issue(ticket)
if self.is_ticket_resolved(ticket_obj):
ticket_data = ticket_obj.raw.get('fields')
#dates follow format '2018-11-06T10:36:13.849+0100'
created = [int(x) for x in ticket_data['created'].split('.')[0].replace('T', '-').replace(':','-').split('-')]
resolved =[int(x) for x in ticket_data['resolutiondate'].split('.')[0].replace('T', '-').replace(':','-').split('-')]
start = datetime(created[0],created[1],created[2],created[3],created[4],created[5])
end = datetime(resolved[0],resolved[1],resolved[2],resolved[3],resolved[4],resolved[5])
return (end-start).days
else:
self.logger.error("Ticket {ticket} is not resolved, can't calculate resolution time".format(ticket=ticket))
return False
def ticket_update_assets(self, vuln, ticketid, ticket_assets):
# the correct description is always the one in the vulnerability being reported; we only need to update the ticket description to the new one
self.logger.info("Ticket {} exists, UPDATE requested".format(ticketid))
#for now, if a vulnerability has been accepted ('risk_accepted'), the ticket is completely ignored and not updated (no new assets)
#TODO when vulnerability accepted, create a new ticket with only the non-accepted vulnerable assets
#this would require going through the downloaded tickets, checking duplicates/accepted ones, and if found,
#checking their assets to exclude them from the new ticket
risk_accepted = False
ticket_obj = self.jira.issue(ticketid)
if self.is_ticket_resolved(ticket_obj):
if self.is_risk_accepted(ticket_obj):
return 0
self.reopen_ticket(ticketid=ticketid, comment=self.jira_still_vulnerable_comment)
#First will do the comparison of assets
ticket_obj.update()
assets = list(set(re.findall(r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b", ",".join(vuln['ips']))))
difference = list(set(assets).symmetric_difference(ticket_assets))
comment = ''
added = ''
removed = ''
#put a comment with the assets that have been added/removed
for asset in difference:
if asset in assets:
if not added:
added = '\nThe following assets *have been newly detected*:\n'
added += '* {}\n'.format(asset)
elif asset in ticket_assets:
if not removed:
removed= '\nThe following assets *have been resolved*:\n'
removed += '* {}\n'.format(asset)
comment = added + removed
#then will check if assets are too many that need to be added as an attachment
attachment_contents = []
if len(vuln['ips']) > self.max_ips_ticket:
attachment_contents = vuln['ips']
vuln['ips'] = ["Affected hosts ({assets}) exceed Jira's allowed character limit, added as an attachment.".format(assets = len(attachment_contents))]
#fill the ticket description template
try:
tpl = template(self.template_path, vuln)
except Exception as e:
self.logger.error('Exception updating assets: {}'.format(str(e)))
return 0
#proceed checking if it requires adding as an attachment
try:
#update attachment with hosts and delete the old versions
if attachment_contents:
self.clean_old_attachments(ticket_obj)
self.add_content_as_attachment(ticket_obj, attachment_contents)
ticket_obj.update(description=tpl, comment=comment, fields={"labels":ticket_obj.fields.labels})
self.logger.info("Ticket {} updated successfully".format(ticketid))
self.add_label(ticketid, 'updated')
except Exception as e:
self.logger.error("Error while trying up update ticket {ticketid}.\nReason: {e}".format(ticketid = ticketid, e=e))
return 0
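    # Worked example of the added/removed computation above (illustrative values):
    #   assets        = ['10.0.0.1', '10.0.0.2']   # currently vulnerable, from the scan
    #   ticket_assets = ['10.0.0.2', '10.0.0.3']   # already reported on the ticket
    #   symmetric_difference -> {'10.0.0.1', '10.0.0.3'}
    #   '10.0.0.1' appears only in assets, so it is reported as newly detected;
    #   '10.0.0.3' appears only in ticket_assets, so it is reported as resolved.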
    def add_label(self, ticketid, label):
        ticket_obj = self.jira.issue(ticketid)
        if label not in [x.encode('utf8') for x in ticket_obj.fields.labels]:
            ticket_obj.fields.labels.append(label)
            try:
                ticket_obj.update(fields={"labels": ticket_obj.fields.labels})
                self.logger.info("Added label {label} to ticket {ticket}".format(label=label, ticket=ticketid))
            except Exception as e:
                self.logger.error("Error while trying to add label {label} to ticket {ticket}: {e}".format(label=label, ticket=ticketid, e=e))
        return 0
    def remove_label(self, ticketid, label):
        ticket_obj = self.jira.issue(ticketid)
        if label in [x.encode('utf8') for x in ticket_obj.fields.labels]:
            ticket_obj.fields.labels.remove(label)
            try:
                ticket_obj.update(fields={"labels": ticket_obj.fields.labels})
                self.logger.info("Removed label {label} from ticket {ticket}".format(label=label, ticket=ticketid))
            except Exception as e:
                self.logger.error("Error while trying to remove label {label} from ticket {ticket}: {e}".format(label=label, ticket=ticketid, e=e))
        else:
            self.logger.error("Error: label {label} not in ticket {ticket}".format(label=label, ticket=ticketid))
        return 0
    def close_fixed_tickets(self, vulnerabilities):
        '''
        Close tickets whose vulnerabilities have been resolved but are still open.
        This hygiene cleanup affects all tickets created by the module; it filters by the label 'vulnerability_management'.
        '''
        found_vulns = []
        for vuln in vulnerabilities:
            found_vulns.append(vuln['title'])
        comment = '''This ticket is being closed as it appears that the vulnerability no longer exists.
        If the vulnerability reappears, a new ticket will be opened.'''
        for ticket in self.all_tickets:
            if ticket.raw['fields']['summary'].strip() in found_vulns:
                self.logger.info("Ticket {} is still vulnerable".format(ticket))
                continue
            self.logger.info("Ticket {} is no longer vulnerable".format(ticket))
            self.close_ticket(ticket, self.JIRA_RESOLUTION_FIXED, comment)
        return 0
    def is_ticket_reopenable(self, ticket_obj):
        transitions = self.jira.transitions(ticket_obj)
        for transition in transitions:
            if transition.get('name') == self.JIRA_REOPEN_ISSUE:
                self.logger.debug("Ticket is reopenable")
                return True
        self.logger.error("Ticket {} can't be reopened. Check Jira transitions.".format(ticket_obj))
        return False
    def is_ticket_closeable(self, ticket_obj):
        transitions = self.jira.transitions(ticket_obj)
        for transition in transitions:
            if transition.get('name') == self.JIRA_CLOSE_ISSUE:
                return True
        self.logger.error("Ticket {} can't be closed. Check Jira transitions.".format(ticket_obj))
        return False
    def is_ticket_resolved(self, ticket_obj):
        # check whether a ticket is resolved or not
        if ticket_obj is not None:
            if ticket_obj.raw['fields'].get('resolution') is not None:
                if ticket_obj.raw['fields'].get('resolution').get('name') != 'Unresolved':
                    self.logger.debug("Checked ticket {} is already closed".format(ticket_obj))
                    self.logger.info("Ticket {} is closed".format(ticket_obj))
                    return True
        self.logger.debug("Checked ticket {} is already open".format(ticket_obj))
        return False
    def is_risk_accepted(self, ticket_obj):
        if ticket_obj is not None:
            if ticket_obj.raw['fields'].get('labels') is not None:
                labels = ticket_obj.raw['fields'].get('labels')
                if "risk_accepted" in labels:
                    self.logger.warn("Ticket {} accepted risk, will be ignored".format(ticket_obj))
                    return True
                elif "server_decommission" in labels:
                    self.logger.warn("Ticket {} server decommissioned, will be ignored".format(ticket_obj))
                    return True
                elif "false_positive" in labels:
                    self.logger.warn("Ticket {} flagged false positive, will be ignored".format(ticket_obj))
                    return True
        self.logger.info("Ticket {} risk has not been accepted".format(ticket_obj))
        return False
    def reopen_ticket(self, ticketid, ignore_labels=False, comment=""):
        # reopen a ticket by ticketid
        self.logger.debug("Ticket {} exists, REOPEN requested".format(ticketid))
        ticket_obj = self.jira.issue(ticketid)
        if self.is_ticket_resolved(ticket_obj):
            if not self.is_risk_accepted(ticket_obj) or ignore_labels:
                try:
                    if self.is_ticket_reopenable(ticket_obj):
                        self.jira.transition_issue(issue=ticketid, transition=self.JIRA_REOPEN_ISSUE, comment=comment)
                        self.logger.info("Ticket {} reopened successfully".format(ticketid))
                        if not ignore_labels:
                            self.add_label(ticketid, 'reopened')
                        return 1
                except Exception as e:
                    # continue with the ticket data so that a new ticket is created in place of the "lost" one
                    self.logger.error("Error reopening ticket {}: {}".format(ticketid, e))
                    return 0
        return 0
    def close_ticket(self, ticketid, resolution, comment):
        # close a ticket by ticketid
        self.logger.debug("Ticket {} exists, CLOSE requested".format(ticketid))
        ticket_obj = self.jira.issue(ticketid)
        if not self.is_ticket_resolved(ticket_obj):
            try:
                if self.is_ticket_closeable(ticket_obj):
                    # the label needs to be added before closing the ticket
                    self.add_label(ticketid, 'closed')
                    self.jira.transition_issue(issue=ticketid, transition=self.JIRA_CLOSE_ISSUE, comment=comment, resolution={"name": resolution})
                    self.logger.info("Ticket {} closed successfully".format(ticketid))
                    return 1
            except Exception as e:
                # continue with the ticket data so that a new ticket is created in place of the "lost" one
                self.logger.error("Error closing ticket {}: {}".format(ticketid, e))
                return 0
        return 0
    def close_obsolete_tickets(self):
        # close tickets older than x months (default 12); vulnerabilities that are still unresolved will get a new ticket created
        self.logger.info("Closing obsolete tickets older than {} months".format(self.max_time_tracking))
        jql = "labels=vulnerability_management AND NOT labels=advisory AND created <startOfMonth(-{}) and resolution=Unresolved".format(self.max_time_tracking)
        tickets_to_close = self.jira.search_issues(jql, maxResults=0)
        comment = '''This ticket is being closed for hygiene, as it is more than {} months old.
        If the vulnerability still exists, a new ticket will be opened.'''.format(self.max_time_tracking)
        for ticket in tickets_to_close:
            self.close_ticket(ticket, self.JIRA_RESOLUTION_OBSOLETE, comment)
        return 0
    def project_exists(self, project):
        try:
            self.jira.project(project)
            return True
        except Exception:
            return False
    def download_tickets(self, path):
        '''
        Saves all tickets locally, as a local snapshot of the vulnerability_management tickets.
        '''
        # check whether the file already exists
        check_date = str(date.today())
        fname = '{}jira_{}.json'.format(path, check_date)
        if os.path.isfile(fname):
            self.logger.info("File {} already exists, skipping ticket download".format(fname))
            return True
        try:
            self.logger.info("Saving tickets locally from the last {} months".format(self.max_time_tracking))
            jql = "labels=vulnerability_management AND NOT labels=advisory AND created >=startOfMonth(-{})".format(self.max_time_tracking)
            tickets_data = self.jira.search_issues(jql, maxResults=0)
            # TODO process tickets, creating a new field called "_metadata" with all the affected assets well structured,
            # for future processing in ELK/Splunk; this includes downloading attachments with assets and processing them
            processed_tickets = []
            for ticket in tickets_data:
                assets = self.get_assets_from_description(ticket, _raw=True)
                if not assets:
                    # no assets in the description; try to get them from the attachment instead
                    assets = self.get_assets_from_attachment(ticket, _raw=True)
                # process the affected assets, saving them as a JSON structure in a new field of the ticket JSON
                _metadata = {"affected_hosts": []}
                if assets:
                    if "\n" in assets:
                        for asset in assets.split("\n"):
                            assets_json = self.parse_asset_to_json(asset)
                            _metadata["affected_hosts"].append(assets_json)
                    else:
                        assets_json = self.parse_asset_to_json(assets)
                        _metadata["affected_hosts"].append(assets_json)
                temp_ticket = ticket.raw.get('fields')
                temp_ticket['_metadata'] = _metadata
                processed_tickets.append(temp_ticket)
            # a trailing newline is needed, as writelines() doesn't add it automatically; otherwise everything ends up on one big line
            to_save = [json.dumps(ticket) + "\n" for ticket in processed_tickets]
            with open(fname, 'w') as outfile:
                outfile.writelines(to_save)
            self.logger.info("Tickets saved successfully.")
            return True
        except Exception as e:
            self.logger.error("Tickets could not be saved locally: {}.".format(e))
            return False
    def decommission_cleanup(self):
        '''
        Deletes the server_decommission tag from tickets that have already been
        closed for more than x months (default is 3 months), in order to clean up
        solved issues for statistics purposes.
        '''
        self.logger.info("Deleting 'server_decommission' tag from tickets closed more than {} months ago".format(self.max_decommission_time))
        jql = "labels=vulnerability_management AND labels=server_decommission and resolutiondate <=startOfMonth(-{})".format(self.max_decommission_time)
        decommissioned_tickets = self.jira.search_issues(jql, maxResults=0)
        comment = '''The *server_decommission* tag is being removed from this ticket, as it is more than {} months old and the server is expected to already have been decommissioned.
        If that is not the case and the vulnerability still exists, the ticket will be opened again.'''.format(self.max_decommission_time)
        for ticket in decommissioned_tickets:
            # reopen the ticket first, to make sure the process is not blocked by
            # a nonexistent Jira workflow transition or a disallowed edit on closed tickets
            self.reopen_ticket(ticketid=ticket, ignore_labels=True)
            self.remove_label(ticket, 'server_decommission')
            self.close_ticket(ticket, self.JIRA_RESOLUTION_FIXED, comment)
        return 0
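
For context, download_tickets() writes one JSON object per line. A minimal consumer sketch for that snapshot follows; the file path and the printed field values are assumptions for illustration, while the "_metadata"/"affected_hosts" structure comes from the code above.

import json

with open('data/jira_2020-02-21.json') as snapshot:  # hypothetical path
    for line in snapshot:
        ticket = json.loads(line)
        for host in ticket.get('_metadata', {}).get('affected_hosts', []):
            # each entry follows the parse_asset_to_json() structure
            print(host.get('host'), host.get('protocol'), host.get('port'), host.get('hostname'))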

Jira ticket description template (new file, 34 lines):
{panel:title=Description}
{{ !diagnosis}}
{panel}
{panel:title=Consequence}
{{ !consequence}}
{panel}
{panel:title=Solution}
{{ !solution}}
{panel}
{panel:title=Affected Assets}
% for ip in ips:
* {{ip}}
% end
{panel}
{panel:title=References}
% for ref in references:
* {{ref}}
% end
{panel}
Please do not delete or modify the ticket's assigned tags or title, as they are used to keep it in sync. If the ticket ceases to be recognised, a new ticket will be raised.
If the team accepts the risk and wants to close the ticket, please add the label "*risk_accepted*" to the ticket before closing it.
If the server has been decommissioned, please add the label "*server_decommission*" to the ticket before closing it.
If, when checking the vulnerability, it looks like a false positive, _+please elaborate in a comment+_ and add the label "*false_positive*" before closing it; we will review it and report it to the vendor.
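
The {{ !variable }} and % for ... % end syntax above matches bottle's SimpleTemplate, which the template() call in ticket_update_assets appears to use. A minimal rendering sketch, with made-up vulnerability values and a hypothetical template filename:

# A minimal rendering sketch, assuming the template() helper is bottle's
# SimpleTemplate (suggested by the {{ !var }} / % for syntax above).
# All values below are made up for illustration.
from bottle import template

vuln = {
    'diagnosis': 'OpenSSH before 8.0 contains multiple vulnerabilities.',
    'consequence': 'A remote attacker may gain unauthorized access.',
    'solution': 'Upgrade to OpenSSH 8.0 or later.',
    'ips': ['10.0.0.5 - tcp/22 - ssh-gw01'],
    'references': ['CVE-2019-6111'],
}

with open('jira_description.tpl') as f:  # hypothetical template path
    print(template(f.read(), **vuln))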

vulnwhisp/test/mock.py (new file, 76 lines):
import os
import logging

import httpretty


class mockAPI(object):
    def __init__(self, mock_dir=None, debug=False):
        self.mock_dir = mock_dir
        if not self.mock_dir:
            # Try to guess the mock_dir if "python setup.py develop" was used
            self.mock_dir = '/'.join(__file__.split('/')[:-3]) + '/tests/data'
        self.logger = logging.getLogger('mockAPI')
        if debug:
            self.logger.setLevel(logging.DEBUG)
        self.logger.info('mockAPI initialised, API requests will be mocked')
        self.logger.debug('Test path resolved as {}'.format(self.mock_dir))

    def get_directories(self, path):
        _, subdirs, _ = next(os.walk(path))
        return subdirs

    def get_files(self, path):
        _, _, files = next(os.walk(path))
        return files
    def qualys_vuln_callback(self, request, uri, response_headers):
        self.logger.debug('Simulating response for {} ({})'.format(uri, request.body))
        if 'list' in request.parsed_body['action']:
            return [200,
                    response_headers,
                    open('{}/{}'.format(self.qualys_vuln_path, 'scans')).read()]
        elif 'fetch' in request.parsed_body['action']:
            try:
                response_body = open('{}/{}'.format(
                    self.qualys_vuln_path,
                    request.parsed_body['scan_ref'][0].replace('/', '_'))
                ).read()
            except Exception:
                # Can't find the file, just send an empty response
                response_body = ''
            return [200, response_headers, response_body]
    def create_nessus_resource(self, framework):
        for filename in self.get_files('{}/{}'.format(self.mock_dir, framework)):
            method, resource = filename.split('_', 1)
            resource = resource.replace('_', '/')
            self.logger.debug('Adding mocked {} endpoint {} {}'.format(framework, method, resource))
            httpretty.register_uri(
                getattr(httpretty, method), 'https://{}:443/{}'.format(framework, resource),
                body=open('{}/{}/{}'.format(self.mock_dir, framework, filename)).read()
            )

    def create_qualys_vuln_resource(self, framework):
        # Create the health check endpoint
        self.logger.debug('Adding mocked {} endpoint {} {}'.format(framework, 'GET', 'msp/about.php'))
        httpretty.register_uri(
            httpretty.GET,
            'https://{}:443/{}'.format(framework, 'msp/about.php'),
            body='')

        self.logger.debug('Adding mocked {} endpoint {} {}'.format(framework, 'POST', 'api/2.0/fo/scan'))
        httpretty.register_uri(
            httpretty.POST, 'https://{}:443/{}'.format(framework, 'api/2.0/fo/scan/'),
            body=self.qualys_vuln_callback)

    def mock_endpoints(self):
        for framework in self.get_directories(self.mock_dir):
            if framework in ['nessus', 'tenable']:
                self.create_nessus_resource(framework)
            elif framework == 'qualys_vuln':
                self.qualys_vuln_path = self.mock_dir + '/' + framework
                self.create_qualys_vuln_resource(framework)
        httpretty.enable()
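
A usage sketch for mockAPI follows. The fixture layout is an assumption based on the code above: files are named method_resource, so tests/data/nessus/GET_scans would be served for GET https://nessus:443/scans.

import httpretty
import requests

from vulnwhisp.test.mock import mockAPI

mock = mockAPI(debug=True)   # mock_dir is guessed relative to the package
mock.mock_endpoints()        # registers the fixtures and enables httpretty

response = requests.get('https://nessus:443/scans', verify=False)
print(response.text)         # contents of tests/data/nessus/GET_scans

httpretty.disable()
httpretty.reset()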

Deleted file (17 lines):
class bcolors:
    """
    Utility to add colors to shell output for scripts
    """
    HEADERS = '\033[95m'
    OKBLUE = '\033[94m'
    OKGREEN = '\033[92m'
    WARNING = '\033[93m'
    FAIL = '\033[91m'
    ENDC = '\033[0m'
    BOLD = '\033[1m'
    UNDERLINE = '\033[4m'

    INFO = '{info}[INFO]{endc}'.format(info=OKBLUE, endc=ENDC)
    ACTION = '{info}[ACTION]{endc}'.format(info=OKBLUE, endc=ENDC)
    SUCCESS = '{green}[SUCCESS]{endc}'.format(green=OKGREEN, endc=ENDC)
    FAIL = '{red}[FAIL]{endc}'.format(red=FAIL, endc=ENDC)
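
This class was removed in the changeset. For reference, a minimal sketch of how such ANSI-color constants are typically used; the messages are made up for illustration:

# Illustrative only: prefix console output with the colored tags defined above.
print('{} starting scan download'.format(bcolors.INFO))
print('{} scan results written to disk'.format(bcolors.SUCCESS))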