348 Commits

SHA1 Message Date
a9289d8e47 Merge pull request 'Translation of the bug report template' (#1) from mes-modifs into master
Reviewed-on: #1
2025-07-07 12:56:24 +00:00
67ec8af3ae Translation of the bug report template 2025-07-07 14:41:21 +02:00
691f45a1dc Merge pull request #232 from HASecuritySolutions/dependabot/pip/lxml-4.6.5
Bump lxml from 4.1.1 to 4.6.5
2022-06-11 20:39:14 -05:00
80197454a3 Update README.md 2022-02-03 10:33:12 -06:00
841cd09f2d Bump lxml from 4.1.1 to 4.6.5
Bumps [lxml](https://github.com/lxml/lxml) from 4.1.1 to 4.6.5.
- [Release notes](https://github.com/lxml/lxml/releases)
- [Changelog](https://github.com/lxml/lxml/blob/master/CHANGES.txt)
- [Commits](https://github.com/lxml/lxml/compare/lxml-4.1.1...lxml-4.6.5)

---
updated-dependencies:
- dependency-name: lxml
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-12-13 19:44:12 +00:00
e7183864d0 Merge pull request #216 from Yashvendra/patch-1
Updated 3000_openvas.conf
2020-07-20 10:45:00 +02:00
12ac3dbf62 Merge pull request #217 from andrew-bailey/patch-1
Update README.md
2020-07-20 10:43:09 +02:00
e41ec93058 Update README.md
Fix license badge from MIT to Apache 2.0, which is the current license applied on GitHub
2020-07-20 11:57:36 +09:30
8a86e3142a Update 3000_openvas.conf
Fixed Description
2020-07-19 14:41:21 +05:30
9d003d12b4 improved error logging and exceptions 2020-04-08 12:01:47 +02:00
63c638751b Merge pull request #207 from spasaintk/patch-1
Update vulnwhisp.py
2020-02-29 20:05:51 +01:00
a3e85b7207 Update vulnwhisp.py
Code triggers a crash:
ERROR:root:main:local variable 'vw' referenced before assignment
ERROR: local variable 'vw' referenced before assignment

Proposed fix deals with the issue.
After fix:
INFO:vulnWhispererOpenVAS:process_openvas_scans:Processing complete
2020-02-28 00:33:38 +01:00
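For context, the "referenced before assignment" error above is Python's UnboundLocalError wording: `vw` is assigned only inside a `try:` block, so when construction fails before the assignment, the later use of `vw` crashes. A minimal sketch of the pattern and a guard-style fix, with hypothetical names (the real vulnwhisp.py flow may differ):

```python
import logging

class VulnWhisperer(object):  # stand-in for the real class, for illustration only
    def __init__(self, config):
        raise IOError("could not read config: %s" % config)  # simulate a failure

    def whisper_vulnerabilities(self):
        pass

def main():
    vw = None  # fix: initialize before the try block
    try:
        vw = VulnWhisperer(config="frameworks.ini")  # may raise before vw is bound
    except Exception as e:
        logging.error("main:%s" % e)
    if vw is not None:  # fix: only use vw if it was actually created
        vw.whisper_vulnerabilities()

main()
```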
4974be02b4 fix of fix... 2020-02-21 16:17:00 +01:00
7fe2f9a5c1 casting port from jira local download to an int 2020-02-21 16:09:25 +01:00
f4634d03bd Merge pull request #206 from HASecuritySolutions/jira_ticket_download_attachment_data
Jira ticket download attachment data
2020-02-21 15:58:05 +01:00
e1ca9fadcd fixed issue where, when running all actions, a single failure exited the program 2020-02-21 15:50:14 +01:00
adb7700300 added an extra field to the Jira local download with the affected assets in JSON format, for further processing in Splunk/ELK 2020-02-21 11:00:07 +01:00
ced0d4c2fc Hotfix #190 2020-02-04 16:47:37 +01:00
f483c76638 latest qualysapi version that supports python 2 is 6.0.0 2020-01-13 11:34:21 +01:00
f65116aec8 fix requirements issue, new version of qualysapi to be reviewed 2020-01-13 11:03:04 +01:00
bdcb6de4b2 Target CentOS 7 (issue #199) (#200) 2019-12-03 16:21:48 +01:00
af8e27d075 Bump requests from 2.18.3 to 2.20.0 (#196)
Bumps [requests](https://github.com/requests/requests) from 2.18.3 to 2.20.0.
- [Release notes](https://github.com/requests/requests/releases)
- [Changelog](https://github.com/psf/requests/blob/master/HISTORY.md)
- [Commits](https://github.com/requests/requests/compare/v2.18.3...v2.20.0)

Signed-off-by: dependabot[bot] <support@github.com>
2019-12-03 16:20:36 +01:00
accf926ff7 fixed ELK7 logstash compatibility, #187 2019-09-16 15:35:34 +02:00
acf387bd0e added ELK versions supported (6 and 7) 2019-08-24 15:06:33 +02:00
ab7a91e020 Update frameworks_example.ini (#186) 2019-08-10 05:32:19 +02:00
a1a0d6b757 Merge pull request #182 from HASecuritySolutions/save_assets_no_DNS_record
[JIRA] added local file save with assets not resolving hostname
2019-06-18 12:05:49 +02:00
2fb089805c [JIRA] added local file save with assets not resolving hostname 2019-06-18 10:53:55 +02:00
6cf2a94431 Support tenable API keys (#176)
* support tenable API keys

* more flexible config support

* add nessus API key support

* fix whitespace
2019-05-02 10:26:51 +02:00
162636e60f Fix newlines in MAC Address field output (#178)
* fix newlines in all MAC Address field

* remove newline

* only cleanse if col exists
2019-05-02 08:58:18 +02:00
60c56b778e Update README.md
Fixed link references
2019-04-17 10:52:13 +02:00
093f963adf Merge pull request #170 from HASecuritySolutions/beta-1.8
VulnWhisperer Release 1.8
2019-04-17 10:36:35 +02:00
3464cfed68 Merge pull request #174 from pemontto/docker-fixes
Docker fixes
2019-04-17 10:29:32 +02:00
c78f22ed88 Merge pull request #12 from pemontto/travis-docker-latest 2019-04-17 15:09:37 +10:00
c3167bd76b fix test output 2019-04-17 14:52:03 +10:00
30e3efe2cb set default path and fix restore 2019-04-17 14:52:03 +10:00
549791470a Set limit to bail out on 2019-04-17 14:52:03 +10:00
e9aba0796f increase timeout for ES sync 2019-04-17 14:52:03 +10:00
2c5fbfc3ef restore deleted files 2019-04-17 14:52:03 +10:00
60b9e2b3d9 Test updates 2019-04-17 14:52:03 +10:00
bb60fae67e Move vulnwhisperer tests to a script 2019-04-17 14:52:03 +10:00
e30dbe244b standardise /tmp to /opt 2019-04-17 14:52:03 +10:00
c3fb65e67a Update test 2019-04-17 14:52:03 +10:00
a7ae44f981 Add docker test script 2019-04-17 14:50:06 +10:00
e0de8c6818 Expose Logstash API port 2019-04-17 14:50:06 +10:00
47a96a2984 sudo chown 2019-04-17 14:50:06 +10:00
5828d05627 fix 2019-04-17 14:50:06 +10:00
bfcb10ea0e Fix permissions for ES 2019-04-17 14:50:06 +10:00
0102ccb2f7 Fix build command 2019-04-17 14:50:06 +10:00
3860438903 Test travis docker 2019-04-17 14:50:06 +10:00
e17ff42adb update kibana objects to match template 2019-04-17 14:41:25 +10:00
f7d47ae753 update index template 2019-04-17 14:41:14 +10:00
d67122a099 Retry template installation a few times 2019-04-17 14:40:07 +10:00
3433231bb4 Add initial ELK6 index template 2019-04-16 11:30:27 +10:00
d9ab33d6c9 Set logstash and vw to use the same volume 2019-04-16 11:18:27 +10:00
4d153ec7f2 Add index template to ES for docker 2019-04-16 09:57:20 +10:00
1d92f71f9c fix issue mentioned in #163, although not applied to ELK6 2019-04-15 17:06:09 +02:00
3ecb26886a added proxy config to instructions 2019-04-15 12:43:47 +02:00
4c9fa9d241 Merge pull request #172 from pemontto/feature-fixes
Feature fixes
2019-04-15 11:47:02 +02:00
bf5070f361 fix vulnwhisperer image 2019-04-12 17:55:59 +10:00
0227636c4c unify case among config 2019-04-12 17:54:17 +10:00
b35da1c79e reduce docker layers and support test data 2019-04-12 17:51:15 +10:00
668efe2b7a Add extra test case 2019-04-12 11:44:04 +10:00
8433055f17 Fix more unicode issues 2019-04-12 11:40:01 +10:00
90908bd0c6 Remove deps from docker image 2019-04-12 11:39:49 +10:00
f23dd0bc83 Merge pull request #171 from pemontto/feature-separate-qualys
Feature separate qualys
2019-04-11 22:06:28 +02:00
8dc3b2f8ac Add qualys paths to elk5 logstash config 2019-04-11 10:41:13 +10:00
d2a7513ed1 Fix nessus logstash field cvss3_vector 2019-04-11 10:36:41 +10:00
4ed6827ee6 Clean config and separate qualys data 2019-04-11 08:27:28 +10:00
b25c769a01 readme details 2019-04-10 15:46:57 +02:00
4405284015 Merge branch 'beta-1.8' of https://github.com/HASecuritySolutions/VulnWhisperer into beta-1.8 2019-04-10 15:30:18 +02:00
7960bd3c59 updating documentation 2019-04-10 15:29:29 +02:00
4800d42eef Merge pull request #169 from HASecuritySolutions/submodule
updating submodule
2019-04-10 12:07:41 +02:00
8b8938e7b3 updating submodule 2019-04-10 12:04:36 +02:00
db669c531a changing submodule reference 2019-04-10 11:47:58 +02:00
74db06b17a Merge pull request #168 from HASecuritySolutions/qualys_was_fix
Qualys was fix
2019-04-10 11:35:37 +02:00
45e23985d3 added comment 2019-04-10 11:25:28 +02:00
cde2fe2dd8 final commit for qualys web 2019-04-10 11:19:44 +02:00
001462a848 changed version to 1.8 2019-04-08 16:37:42 +02:00
36a8528abc Jira Workflow documentation 2019-04-08 12:27:26 +02:00
913bbfb2de Merge pull request #167 from pemontto/feature-nessus-stream
Feature nessus stream
2019-04-08 11:45:14 +02:00
302037893d Add test path to env vars 2019-04-08 19:41:48 +10:00
c8d906c05f Fix tenable downloads 2019-04-08 19:30:48 +10:00
e1f2c00b9e fix tests 2019-04-08 19:17:47 +10:00
3d2c939cfb Update .travis.yml 2019-04-08 19:13:45 +10:00
7b1ebb51fa Updates tests 2019-04-08 19:02:02 +10:00
8086e7cf9f Fix tests directory 2019-04-08 18:46:30 +10:00
1ef7289b8d redundant replace, formatting 2019-04-08 18:44:30 +10:00
a12e9f70a1 Remove redundant param 2019-04-08 18:38:03 +10:00
873066a419 reorder imports 2019-04-08 17:43:50 +10:00
973c69dffb Updates tests 2019-04-08 17:43:15 +10:00
12e6c6d0d5 Merge pull request #166 from pemontto/feature-fix-import
Fix missing sys import
2019-04-08 09:07:26 +02:00
ec5d6cd388 Iterate through nessus download data 2019-04-08 12:25:50 +10:00
33f2a5a3d1 Use a session and don't overwrite imports 2019-04-08 12:24:22 +10:00
5edde8760a Fix missing sys import 2019-04-06 11:02:42 +11:00
7370f5b608 Merge branch 'beta-1.8' of https://github.com/HASecuritySolutions/VulnWhisperer into beta-1.8 2019-04-05 23:37:41 +02:00
0a877ce267 fix nessus download 'imported' scans 2019-04-05 23:37:04 +02:00
1ef67d48be Feature error codes (#165)
* Use error codes for failed scans

* Fix indentations

* Fix more indentation

* Continue after failed download

* Add tests for failed scans

* Add more tests

* move definition

* Update nessus.py

This function was only used by `print_scans`, which was itself unused and was deleted in this PR.
2019-04-05 11:36:13 +02:00
27412d31b4 Merge branch 'beta-1.8' of https://github.com/HASecuritySolutions/VulnWhisperer into beta-1.8 2019-04-05 11:04:29 +02:00
71352aee57 Add external API mocking and travis tests (#164)
* Fix closing logging handlers

* Fix *some* unicode issues for nessus and qualys

* Prevent multiple requests to nessus scans endpoint

* More unicode fixes

* Remove unnecessary call

* Fix whitespace

* Add mock module and argument

* Add test config and data

* Fix whitespace again

* Disable qualys_web until data is available

* Use logging module

* Delete report_tracker.db

* Cleanup mock calls

* Add httpretty to requirements

* Refactor into a class

* Updates travis tests

* Fix exit codes

* Remove print statements

* Remove test

* Add test directory as submodule
2019-04-05 10:57:39 +02:00
eae64a745d cleanup of unused code and fixes, still breaks 2019-04-04 11:24:01 +02:00
03f7a4cedb fixed line 2019-04-04 11:05:39 +02:00
a30a22ab98 fix wrong parenthesis on qualys was 2019-04-03 15:15:31 +02:00
f33644b814 fix reported tracking for jira 2019-04-02 11:58:44 +02:00
fa0b3c867b added tracking of scans processed by jira; will only process new scans now (backwards compatible) 2019-04-01 15:55:02 +02:00
e32c9bf55d Fix *some* unicode issues for nessus and qualys (#160)
* Fix *some* unicode issues for nessus and qualys

* More unicode fixes
2019-04-01 10:06:16 +02:00
9619a47d7a Fix Tenable and Nessus scan listing (#162)
* Prevent multiple requests to nessus scans endpoint

* Remove unnecessary call
2019-04-01 10:04:12 +02:00
383e7f5478 Fix closing logging handlers (#159) 2019-04-01 09:07:29 +02:00
3601ace5e1 improved file logging format 2019-03-22 10:42:30 +01:00
97e4f073bf added logging to file 2019-03-22 10:38:55 +01:00
a4b1b9cdd4 fixed issue where the asset after a removed one was ignored due to Python list iteration 2019-03-21 15:52:18 +01:00
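The "list iteration" bug above reads like the classic pitfall of removing elements from a list while iterating over it, which silently skips the element right after each removal. A minimal illustration of the pitfall and the usual fix (iterate over a copy); this is an assumption about the root cause, not the actual module code:

```python
assets = ["a", "b", "c", "d"]
for asset in assets:        # buggy: removing "b" shifts the list,
    if asset == "b":        # so iteration jumps from index 1 straight to "d"
        assets.remove(asset)
print(assets)               # ['a', 'c', 'd'] but "c" was never inspected

assets = ["a", "b", "c", "d"]
for asset in list(assets):  # fix: iterate over a snapshot of the list
    if asset == "b":
        assets.remove(asset)
print(assets)               # ['a', 'c', 'd'] and every element was inspected
```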
843aac6a83 fixed issue where new vulns on already risk-accepted issues were no longer reported; a new ticket is now raised, excluding all assets previously marked risk-accepted in another ticket 2019-03-20 16:37:50 +01:00
47df1ee538 typo 2019-03-20 10:55:54 +01:00
a4420b7df8 reverse unintended change on frameworks_example.ini 2019-03-20 09:11:18 +01:00
9d52596be9 fix xml encoding issue #156 2019-03-20 08:49:36 +01:00
5cdb2552f0 Merge branch 'beta-1.8' of https://github.com/HASecuritySolutions/VulnWhisperer into beta-1.8 2019-03-20 08:35:32 +01:00
70e1d7703f fix missing section specification on qualys was connector #156 2019-03-20 08:35:03 +01:00
2d3a140042 fix bug 2019-03-19 15:19:27 +01:00
936c4a3e1b added automatic jira server_decommission label removal after x time 2019-03-19 12:58:38 +01:00
e7bd4d2a55 deleting dependency and pulling qualysapi official library, vulnwhisperer compatible 2019-03-15 12:03:02 +01:00
401dfec2c8 fix #143, added a temporary container to upload through kibana API 2019-03-04 15:10:51 +01:00
86e792f5aa workaround regarding ignoring ticket updates after risk accepted 2019-03-01 15:18:49 +01:00
a288f416f7 added label *false positive* for reporting on jira 2019-02-27 18:06:16 +01:00
623c881928 fix jira issue index when comparing created tickets 2019-02-27 11:27:44 +01:00
4e94bef245 fix bug not detecting existent label due to string format 2019-02-26 15:26:14 +01:00
a3da41e487 added to readme openvas supported versions 2019-02-26 09:59:50 +01:00
46ddee391b confirm openvas 9 works 2019-02-25 22:09:29 +01:00
b36e31566e fix #142 2019-02-25 22:02:20 +01:00
05420ddfd0 readding docker-compose credentials template 2019-02-25 12:32:32 +01:00
bdbe31d425 resources reorg 2 2019-02-25 12:29:00 +01:00
f170dcb05f reorg resources files 2019-02-25 12:27:30 +01:00
5dd6503d38 Merge branch 'beta-1.8' of https://github.com/HASecuritySolutions/VulnWhisperer into beta-1.8 2019-02-25 12:09:46 +01:00
2c7965d2d9 fix #151 2019-02-25 12:08:04 +01:00
521184d079 Update bug_report.md
added debug trail request
2019-02-21 22:20:19 +01:00
c2d80c7fce made host resolution optional from the config file with dns_resolv var 2019-02-15 16:24:52 +01:00
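A hedged sketch of how an optional flag like dns_resolv is typically read from the ini config with stdlib configparser; only the option name comes from the commit message, while the section name, default, and usage are assumptions:

```python
import socket
from configparser import ConfigParser  # Python 3 stdlib

config = ConfigParser()
config.read("frameworks.ini")  # illustrative path

# Fall back to False when the option is missing, so older configs keep working.
dns_resolv = config.getboolean("jira", "dns_resolv", fallback=False)

if dns_resolv:
    # Resolve only when explicitly enabled in the config.
    hostname = socket.getfqdn("10.0.0.1")
```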
587546a726 fix typo 2019-02-14 14:16:31 +01:00
177c2548ba allow jira sync module to run after the rest 2019-02-12 18:18:24 +01:00
bc3367e310 exception of empty scans 2019-02-12 18:01:46 +01:00
8c53987270 tracking of processing was in debug instead of info logging 2019-02-12 16:56:00 +01:00
ccf2e4b1d1 fix #147 2019-02-12 16:51:26 +01:00
b0caccdc89 fixed issues plus jira comment formatting 2019-02-12 16:25:28 +01:00
4ea384c9cc fix issue #110 (one line) 2019-02-08 10:56:32 +01:00
699fc75446 Update README.md
Nessus v8 also supported
2019-02-08 09:10:04 +01:00
53dc65e492 fix qualysapi library dependencies 2019-02-08 09:08:21 +01:00
0ea144bf87 Qualysapi fix (#146)
* moved qualysapi to branch master-update

* fixed bug with Qualys scans that contain no vulnerabilities: vulnWhispererQualysVuln[1361] ERROR Could not process scan/1549159480.84792: 'severity'

* change to fixed qualysapi branch

* fix bug and changed to qualysapi fork master branch

* updated submodule to master branch
2019-02-06 17:00:43 +01:00
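For context, the trailing 'severity' in the error above is how Python renders a KeyError, meaning the scan data carried no severity field because the scan found no vulnerabilities. A minimal, hypothetical guard for that case (names are illustrative):

```python
import logging

def process_scan(scan):
    # A scan without vulnerabilities may lack the 'severity' field entirely;
    # str(KeyError('severity')) is just "'severity'", as seen in the log above.
    try:
        severity = scan["severity"]
    except KeyError:
        logging.warning("No severity field, scan has no vulnerabilities")
        severity = None
    return severity

print(process_scan({"id": "scan/1549159480.84792"}))  # -> None instead of a crash
```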
14b71a25b8 Created the version 6 for ELK. Fixed #135 (#145)
* Created the version 6 for ELK. Fixed #135

* Needed to make sure all the data volumes were set up properly.  Some paths had VulnWhisperer, vulnwhisperer, vulnwhisp/data.

* Delete 9998_output_broker_rabbitmq.conf

* Delete 9998_input_broker_rabbitmq.conf

* Delete 0001_input_beats.conf

* add to gitignore creds files + correct elk5 docker-compose

* elk changed to 6.6.0 from 6.5.2, output path from logstash to elasticsearch host
2019-02-05 17:30:51 +01:00
3cd13229a3 Update issue templates (#144)
* Update issue templates

Add an issue template for bug reports

* Update bug_report.md

Changing the "Desktop" label to "System in which VulnWhisperer runs"
2019-02-01 11:01:49 +01:00
177d384353 Fixed #134 (#139) 2019-01-15 23:57:09 -05:00
b1404cf0be change ./dep/qualysapi origin to https due to GitHub complaints 2018-12-14 15:47:11 +01:00
48b17c5cbe Add a Dockerfile (#132)
* updating my base to match original vulnwhisperer (#1)

* Create docker-compose.yml

* Update 9000_output_nessus.conf

* Added an argument for username and password, which takes precedence over nessus. Fixed #5

* Update README.md

* Silence NoneType object

* Put in a check to make sure that the config file exists.  FIXES austin-taylor/VulnWhisperer#4

* remove leading and trailing spaces around all input switches. Fixes austin-taylor/VulnWhisperer#6

* Update README.md

* Allow for any directories to be monitored

* Addition of Qualys WebApp Processing

* Addition of Qualys WebApp Processing

* Fixed multiple bugs, cleaned up formatting, produces solid csv output for Qualys Web App scans

* Adding custom version of QualysAPI

* Field Cleanup

* Addition of submodules, update to connectors, base class start

* Addition of submodules, update to connectors, base class start

* Addition of submodules, update to connectors, base class start

* Refactored classes to be more modular, update to ini file and submodules

* Refactored classes to be more modular, update to ini file and submodules

* Removing commented code

* Addition of category class and special class for Qualys Scanning Reports. Also added additional enrichments to reports

* Column update for scans and N/A cleanup

* Fix for str casting

* Update README.md

* Update to README

* Update to README

* Update to README

* Update to requirements.txt

* Support for json output

* Database tracking for processed Qualys scans

* Database tracking for processed Qualys scans

* Bug fix for counter in Nessus and format fix for qualys

* Check for new records

* Update to count tracker

* Update to write path logic

* Better database handling

* Addition of VulnWhisperer-Qualys logstash files

* Addition of VulnWhisperer-Qualys logstash files

* Update to logstash template

* Updated dashboard

* Update to README

* Update to README

* Logo update

* Readme Update

* Readme Update

* Readme Update

* Adding name of scan and scan reference

* Plugin name converted to scan name

* Update to README

* Documentation update

* README Update

* README Update

* Update README.md

* Add free automated flake8 testing of pull requests

[Flake8](http://flake8.pycqa.org) tests can help to find Python syntax errors, undefined names, and other code quality issues. [Travis CI](https://travis-ci.org) is a framework for running flake8 and other tests, free for open source projects like this one. The owner of this repo would need to go to https://travis-ci.org/profile and flip the repository switch __on__ to enable free automated flake8 testing of each pull request.

* Testing build with no submodules

* flake8 --ignore=deps/qualysapi

* flake8 . --exclude=/deps/qualysapi

* Remove leading slash

* Add build status to README

* Travis Config update

* README Update

* README Update

* Create CNAME

* Set theme jekyll-theme-leap-day

* README Update

* Getting started steps

* Getting started steps

* Remind user to select section if using a config

* Update to readme

* Update to readme

* Update to readme

* Update to readme

* Update to README

* Update to README

* Update to example logstash config

* Update to qualys logstash conf to reflect example config

* Update to README

* Update to README

* Readme update

* Rename logstash-nessus-template.json to logstash-vulnwhisperer-template.json

* Update 1000_nessus_process_file.conf

* Delete LICENSE

* Create LICENSE

* Update to make nessus visualizations consistent with qualys

* Update to README

* Update to README

* Badge addition

* Badge addition

* Addition of OpenVAS Connector

* Addition of OpenVAS

* Update 9000_output_nessus.conf

* Delete 9000_output_nessus.conf

* Update 1000_nessus_process_file.conf

* Automatically create filepath and directory if it does not exist

* Addition of OpenVas -- ready for alpha

* Addition of OpenVas -- ready for alpha

* Allow template defined config form IDs

* Completion of OpenVAS module

* Completion of OpenVAS module

* Remove template format

* Addition of openvas logstash config

* Update setup.py

* Update README.md

* ELK Sample Install (#37)

Updated Readme.md to include a Sample ELK Install guide addressing multiple issues around ELK Cluster/Node Configuration.

* Update vulnwhisp.py

* VulnFramework Links (#39)

Quick update regarding issue #33

* Updating config to be consistent with conf files

* Preserving newlines & carriage returns (#48)

* Preserve newlines & carriage returns

* Convert '\n' & '\r' to newlines & carriage returns

* Removed no longer supported InsecureRequestWarning workaround. (#55)

* Removed no longer supported InsecureRequestWarning workaround.

* Add dependencies to README.md

* Update vulnwhisp.py

* Fix to apt-get install

* Nessus bugfixes (#68)

* Handle cases where no scans are present

* Prevent infinite login loop with incorrect creds

* Print actual config file path

* Don't overwrite Nessus Synopsis with Description

* Tenable.io support (#70)

* Basic tenable.io support

* Add tenable config section

* Use existing variable

* Fix indent

* Fix paren

* Use ternary syntax

* Update Logstash config for tenable.io

* Update README.md

* Update template to version 5.x (#73)

* Update template to Elasticsearch 5.x

* Update template to Elasticsearch 5.x

I think the _all field is no longer needed as of ES 5.x, because searches are executed across all fields when _all is disabled

* Qualys Vulnerability Management integration (#74)

* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Fix error: "Cannot convert non-finite values (NA or inf) to integer"

When trying to download the results of Qualys Vulnerability Management scans, the following error pops up:

[FAIL] - Could not process scan/xxxxxxxxxx.xxxxx - Cannot convert non-finite values (NA or inf) to integer

This error is due to pandas operating on the scan results JSON file: the last element of the JSON doesn't fit the schema of the rest of the response. That element is "target_distribution_across_scanner_appliances", which contains the scanners used and the IP ranges that each scanner went through.

Taking out the last line solves the issue.

Also adding the qualys_vuln scheme to the frameworks_example.ini

* Update README.md

* example.ini is frameworks_example.ini (#77)

* No need to specify section to run (#88)

* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Fix error: "Cannot convert non-finite values (NA or inf) to integer"

When trying to download the results of Qualys Vulnerability Management scans, the following error pops up:

[FAIL] - Could not process scan/xxxxxxxxxx.xxxxx - Cannot convert non-finite values (NA or inf) to integer

This error is due to pandas operating on the scan results JSON file: the last element of the JSON doesn't fit the schema of the rest of the response. That element is "target_distribution_across_scanner_appliances", which contains the scanners used and the IP ranges that each scanner went through.

Taking out the last line solves the issue.

Also adding the qualys_vuln scheme to the frameworks_example.ini

* No need to specify section to run

Until now, VulnWhisperer would not run if a section was not specified,
but each module config has an "enabled" variable, so it now checks
which modules are enabled and runs them sequentially.

Done mainly to allow automation with a docker-compose instance, as the
VulnWhisperer docker image (https://github.com/HASecuritySolutions/docker_vulnwhisperer)
runs that command at the end.

* added to readme + detectify

* Silence requests warnings

* Docker-compose fully working with vulnwhisperer integrated (#90)

* ignore nessus requests warnings

* docker-compose fully working with vulnwhisperer integrated

* remove comments docker-compose

* documenting docker-compose

* Readme corrections

* fix after recheck everything works out of the box

* fix exits that break the no specified section execution mode

* fix docker qualysapi issue, updated README

* revert change on deps/qualysapi/qualysapi/util.py (no effect)

* temporarily changed Dockerfile link to the working one

* Update README.md

* Update README.md

* Fix docker-compose logstash config (#92)

* ignore nessus requests warnings

* docker-compose fully working with vulnwhisperer integrated

* remove comments docker-compose

* documenting docker-compose

* Readme corrections

* fix after recheck everything works out of the box

* fix exits that break the no specified section execution mode

* fix docker qualysapi issue, updated README

* revert change on deps/qualysapi/qualysapi/util.py (no effect)

* temporarily changed Dockerfile link to the working one

* fix docker-compose logstash config

* permissions needed for logstash container to work

* changing default path qualys, there are no folders

* Update 1000_vulnWhispererBaseVisuals.json

Update field to include keyword to prevent error: TypeError: "field" is a required parameter

* Update docker-compose.yml (#93)

increase file descriptors to allow elasticsearch to start.

* Update Slack link on README.md

* Update README.md

Added to README.md @pemontto as contributor

* Jira module fully working (#104)

* clean OS X .DS_Store files

* fix nessus end of line carriage, added JIRA args

* JIRA module fully working

* jira module working with nessus

* added check on already existing jira config, update README

* qualys_vm<->jira working, qualys_vm database entries with qualys_vm, improved checks

* JIRA module updates ticket's assets and comments update

* added JIRA auto-close function for resolved vulnerabilities

* fix if components variable empty issue

* fix creation of new ticket after updating existing one

* final fixes, added extra line in template

* added vulnerability criticality as label in order to be able to filter

* Added jira section to config file and fail check for config variable (#105)

* clean OS X .DS_Store files

* fix nessus end of line carriage, added JIRA args

* JIRA module fully working

* jira module working with nessus

* added check on already existing jira config, update README

* qualys_vm<->jira working, qualys_vm database entries with qualys_vm, improved checks

* JIRA module updates ticket's assets and comments update

* added JIRA auto-close function for resolved vulnerabilities

* fix if components variable empty issue

* fix creation of new ticket after updating existing one

* final fixes, added extra line in template

* added vulnerability criticality as label in order to be able to filter

* jira module now gets minimum criticality from the config file

* added jira config to frameworks_example.ini

* fail check for config variable in case it is left empty

* fix issue jira-qualys criticality comparison

* update qualysapi to latest + PR and refactored vulnwhisperer qualys module to qualys-web (#108)

* update qualysapi to latest + PR and refactored vulnwhisperer qualys module to qualys-web

* changing config template paths for qualys

* Update frameworks_example.ini

Will leave the qualys local folder as "qualys" for now instead of changing to one per module; this way it stays compatible with the current logstash, and we will be able to update master to drop the qualysapi fork once the new version is uploaded to the PyPI repository.
The PR on the qualysapi repo has already been merged, so the only thing missing is the upload to PyPI.

* Rework logging using the stdlib machinery (#116)

* Rework logging using the stdlib machinery
Use the verbose or debug flag to enable/disable logging.DEBUG
Remove the vprint function from all classes
Remove bcolors from all code
Cleanup [INFO], [ERROR], {success} and similar

* fix some errors my local linter missed but Travis caught

* add coloredlogs and --fancy command line flag

* qualysapi dependency removal

* Qualysapi update (#118)

* update qualysapi to latest + PR and refactored vulnwhisperer qualys module to qualys-web

* changing config template paths for qualys

* Update frameworks_example.ini

Will leave the qualys local folder as "qualys" for now instead of changing to one per module; this way it stays compatible with the current logstash, and we will be able to update master to drop the qualysapi fork once the new version is uploaded to the PyPI repository.
The PR on the qualysapi repo has already been merged, so the only thing missing is the upload to PyPI.

* delete qualysapi fork and added to requirements

* merge with testing

* Jira extras (#120)

* changing config template paths for qualys

* Update frameworks_example.ini

Will leave the qualys local folder as "qualys" for now instead of changing to one per module; this way it stays compatible with the current logstash, and we will be able to update master to drop the qualysapi fork once the new version is uploaded to the PyPI repository.
The PR on the qualysapi repo has already been merged, so the only thing missing is the upload to PyPI.

* initialize variable fullpath to avoid break

* fix get latest scan entry from db and ignore 'potential' not verified vulns

* added host resolution + a cache to speed up already-resolved hosts, jira logging

* make sure that vulnerability criticality appears as a label on ticket + automatic actions

* jira bulk report of scans, fix on nessus logging, jira time resolution and list all ticket reported assets

* added jira ticket data download + change default time window from 6 to 12 months

* small fixes

* jira logstash files

* fix variable confusion (thx Travis :)

* update readme (#121)

* Add ansible provisioning (#122)

* first ansible skeleton

* first commit of ansible installation of vulnwhisperer outside docker

* first ansible skeleton

* first commit of ansible installation of vulnwhisperer outside docker

* refactor the ansible role a bit

* update readme, add fail validation step to provision.yml and fix
typo when calling a logging function

* removing ansible from vulnwhisperer, creating a new repo for ansible deployment

* closed ticket metrics only get last 12 months tickets

* Update README.md

Fixing travis link

* Restoring custom qualys wrapper

* Restoring custom qualys wrapper

* Update README.md

* Created the dockerfile

* Updating dockerfile

* in a production system, it is not advisable to have git pulling repos from inside a docker image when there is a pypi repo.

* builds the vulnwhisperer image without any of the ELK configs.  It can also be used in the same directory as the main project

* reverted the qualys call
2018-12-14 15:23:54 +01:00
a5972cfacd V6 Dashboard (#131)
* updating my base to match original vulnwhisperer (#1)

* Create docker-compose.yml

* Update 9000_output_nessus.conf

* Added an argument for username and password, which takes precedence over nessus. Fixed #5

* Update README.md

* Silence NoneType object

* Put in a check to make sure that the config file exists.  FIXES austin-taylor/VulnWhisperer#4

* remove leading and trailing spaces around all input switches. Fixes austin-taylor/VulnWhisperer#6

* Update README.md

* Allow for any directories to be monitored

* Addition of Qualys WebApp Processing

* Addition of Qualys WebApp Processing

* Fixed multiple bugs, cleaned up formatting, produces solid csv output for Qualys Web App scans

* Adding custom version of QualysAPI

* Field Cleanup

* Addition of submodules, update to connectors, base class start

* Addition of submodules, update to connectors, base class start

* Addition of submodules, update to connectors, base class start

* Refactored classes to be more modular, update to ini file and submodules

* Refactored classes to be more modular, update to ini file and submodules

* Removing commented code

* Addition of category class and special class for Qualys Scanning Reports. Also added additional enrichments to reports

* Column update for scans and N/A cleanup

* Fix for str casting

* Update README.md

* Update to README

* Update to README

* Update to README

* Update to requirements.txt

* Support for json output

* Database tracking for processed Qualys scans

* Database tracking for processed Qualys scans

* Bug fix for counter in Nessus and format fix for qualys

* Check for new records

* Update to count tracker

* Update to write path logic

* Better database handling

* Addition of VulnWhisperer-Qualys logstash files

* Addition of VulnWhisperer-Qualys logstash files

* Update to logstash template

* Updated dashboard

* Update to README

* Update to README

* Logo update

* Readme Update

* Readme Update

* Readme Update

* Adding name of scan and scan reference

* Plugin name converted to scan name

* Update to README

* Documentation update

* README Update

* README Update

* Update README.md

* Add free automated flake8 testing of pull requests

[Flake8](http://flake8.pycqa.org) tests can help to find Python syntax errors, undefined names, and other code quality issues. [Travis CI](https://travis-ci.org) is a framework for running flake8 and other tests, free for open source projects like this one. The owner of this repo would need to go to https://travis-ci.org/profile and flip the repository switch __on__ to enable free automated flake8 testing of each pull request.

* Testing build with no submodules

* flake8 --ignore=deps/qualysapi

* flake8 . --exclude=/deps/qualysapi

* Remove leading slash

* Add build status to README

* Travis Config update

* README Update

* README Update

* Create CNAME

* Set theme jekyll-theme-leap-day

* README Update

* Getting started steps

* Getting started steps

* Remind user to select section if using a config

* Update to readme

* Update to readme

* Update to readme

* Update to readme

* Update to README

* Update to README

* Update to example logstash config

* Update to qualys logstash conf to reflect example config

* Update to README

* Update to README

* Readme update

* Rename logstash-nessus-template.json to logstash-vulnwhisperer-template.json

* Update 1000_nessus_process_file.conf

* Delete LICENSE

* Create LICENSE

* Update to make nessus visualizations consistent with qualys

* Update to README

* Update to README

* Badge addition

* Badge addition

* Addition of OpenVAS Connector

* Addition of OpenVAS

* Update 9000_output_nessus.conf

* Delete 9000_output_nessus.conf

* Update 1000_nessus_process_file.conf

* Automatically create filepath and directory if it does not exist

* Addition of OpenVas -- ready for alpha

* Addition of OpenVas -- ready for alpha

* Allow template defined config form IDs

* Completion of OpenVAS module

* Completion of OpenVAS module

* Remove template format

* Addition of openvas logstash config

* Update setup.py

* Update README.md

* ELK Sample Install (#37)

Updated Readme.md to include a Sample ELK Install guide addressing multiple issues around ELK Cluster/Node Configuration.

* Update vulnwhisp.py

* VulnFramework Links (#39)

Quick update regarding issue #33

* Updating config to be consistent with conf files

* Preserving newlines & carriage returns (#48)

* Preserve newlines & carriage returns

* Convert '\n' & '\r' to newlines & carriage returns

* Removed no longer supported InsecureRequestWarning workaround. (#55)

* Removed no longer supported InsecureRequestWarning workaround.

* Add dependencies to README.md

* Update vulnwhisp.py

* Fix to apt-get install

* Nessus bugfixes (#68)

* Handle cases where no scans are present

* Prevent infinite login loop with incorrect creds

* Print actual config file path

* Don't overwrite Nessus Synopsis with Description

* Tenable.io support (#70)

* Basic tenable.io support

* Add tenable config section

* Use existing variable

* Fix indent

* Fix paren

* Use ternary syntax

* Update Logstash config for tenable.io

* Update README.md

* Update template to version 5.x (#73)

* Update template to Elasticsearch 5.x

* Update template to Elasticsearch 5.x

I think the _all field is no longer needed as of ES 5.x, because searches are executed across all fields when _all is disabled

* Qualys Vulnerability Management integration (#74)

* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Fix error: "Cannot convert non-finite values (NA or inf) to integer"

When trying to download the results of Qualys Vulnerability Management scans, the following error pops up:

[FAIL] - Could not process scan/xxxxxxxxxx.xxxxx - Cannot convert non-finite values (NA or inf) to integer

This error is due to pandas operating on the scan results JSON file: the last element of the JSON doesn't fit the schema of the rest of the response. That element is "target_distribution_across_scanner_appliances", which contains the scanners used and the IP ranges that each scanner went through.

Taking out the last line solves the issue.

Also adding the qualys_vuln scheme to the frameworks_example.ini

* Update README.md

* example.ini is frameworks_example.ini (#77)

* No need to specify section to run (#88)

* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Fix error: "Cannot convert non-finite values (NA or inf) to integer"

When trying to download the results of Qualys Vulnerability Management scans, the following error pops up:

[FAIL] - Could not process scan/xxxxxxxxxx.xxxxx - Cannot convert non-finite values (NA or inf) to integer

This error is due to pandas operating on the scan results JSON file: the last element of the JSON doesn't fit the schema of the rest of the response. That element is "target_distribution_across_scanner_appliances", which contains the scanners used and the IP ranges that each scanner went through.

Taking out the last line solves the issue.

Also adding the qualys_vuln scheme to the frameworks_example.ini

* No need to specify section to run

Until now, VulnWhisperer would not run if a section was not specified,
but each module config has an "enabled" variable, so it now checks
which modules are enabled and runs them sequentially.

Done mainly to allow automation with a docker-compose instance, as the
VulnWhisperer docker image (https://github.com/HASecuritySolutions/docker_vulnwhisperer)
runs that command at the end.

* added to readme + detectify

* Silence requests warnings

* Docker-compose fully working with vulnwhisperer integrated (#90)

* ignore nessus requests warnings

* docker-compose fully working with vulnwhisperer integrated

* remove comments docker-compose

* documenting docker-compose

* Readme corrections

* fix after recheck everything works out of the box

* fix exits that break the no specified section execution mode

* fix docker qualysapi issue, updated README

* revert change on deps/qualysapi/qualysapi/util.py (no effect)

* temporarily changed Dockerfile link to the working one

* Update README.md

* Update README.md

* Fix docker-compose logstash config (#92)

* ignore nessus requests warnings

* docker-compose fully working with vulnwhisperer integrated

* remove comments docker-compose

* documenting docker-compose

* Readme corrections

* fix after recheck everything works out of the box

* fix exits that break the no specified section execution mode

* fix docker qualysapi issue, updated README

* revert change on deps/qualysapi/qualysapi/util.py (no effect)

* temporarily changed Dockerfile link to the working one

* fix docker-compose logstash config

* permissions needed for logstash container to work

* changing default path qualys, there are no folders

* Update 1000_vulnWhispererBaseVisuals.json

Update field to include keyword to prevent error: TypeError: "field" is a required parameter

* Update docker-compose.yml (#93)

increase file descriptors to allow elasticsearch to start.

* Update Slack link on README.md

* Update README.md

Added to README.md @pemontto as contributor

* Jira module fully working (#104)

* clean OS X .DS_Store files

* fix nessus end of line carriage, added JIRA args

* JIRA module fully working

* jira module working with nessus

* added check on already existing jira config, update README

* qualys_vm<->jira working, qualys_vm database entries with qualys_vm, improved checks

* JIRA module updates ticket's assets and comments update

* added JIRA auto-close function for resolved vulnerabilities

* fix if components variable empty issue

* fix creation of new ticket after updating existing one

* final fixes, added extra line in template

* added vulnerability criticality as label in order to be able to filter

* Added jira section to config file and fail check for config variable (#105)

* clean OS X .DS_Store files

* fix nessus end of line carriage, added JIRA args

* JIRA module fully working

* jira module working with nessus

* added check on already existing jira config, update README

* qualys_vm<->jira working, qualys_vm database entries with qualys_vm, improved checks

* JIRA module updates ticket's assets and comments update

* added JIRA auto-close function for resolved vulnerabilities

* fix if components variable empty issue

* fix creation of new ticket after updating existing one

* final fixes, added extra line in template

* added vulnerability criticality as label in order to be able to filter

* jira module now gets minimum criticality from the config file

* added jira config to frameworks_example.ini

* fail check for config variable in case it is left empty

* fix issue jira-qualys criticality comparison

* update qualysapi to latest + PR and refactored vulnwhisperer qualys module to qualys-web (#108)

* update qualysapi to latest + PR and refactored vulnwhisperer qualys module to qualys-web

* changing config template paths for qualys

* Update frameworks_example.ini

Will leave the qualys local folder as "qualys" for now instead of changing to one per module; this way it stays compatible with the current logstash, and we will be able to update master to drop the qualysapi fork once the new version is uploaded to the PyPI repository.
The PR on the qualysapi repo has already been merged, so the only thing missing is the upload to PyPI.

* Rework logging using the stdlib machinery (#116)

* Rework logging using the stdlib machinery
Use the verbose or debug flag to enable/disable logging.DEBUG
Remove the vprint function from all classes
Remove bcolors from all code
Cleanup [INFO], [ERROR], {success} and similar

* fix some errors my local linter missed but Travis caught

* add coloredlogs and --fancy command line flag

* qualysapi dependency removal

* Qualysapi update (#118)

* update qualysapi to latest + PR and refactored vulnwhisperer qualys module to qualys-web

* changing config template paths for qualys

* Update frameworks_example.ini

Will leave the qualys local folder as "qualys" for now instead of changing to one per module; this way it stays compatible with the current logstash, and we will be able to update master to drop the qualysapi fork once the new version is uploaded to the PyPI repository.
The PR on the qualysapi repo has already been merged, so the only thing missing is the upload to PyPI.

* delete qualysapi fork and added to requirements

* merge with testing

* Jira extras (#120)

* changing config template paths for qualys

* Update frameworks_example.ini

Will leave the qualys local folder as "qualys" for now instead of changing to one per module; this way it stays compatible with the current logstash, and we will be able to update master to drop the qualysapi fork once the new version is uploaded to the PyPI repository.
The PR on the qualysapi repo has already been merged, so the only thing missing is the upload to PyPI.

* initialize variable fullpath to avoid break

* fix get latest scan entry from db and ignore 'potential' not verified vulns

* added host resolution + a cache to speed up already-resolved hosts, jira logging

* make sure that vulnerability criticality appears as a label on ticket + automatic actions

* jira bulk report of scans, fix on nessus logging, jira time resolution and list all ticket reported assets

* added jira ticket data download + change default time window from 6 to 12 months

* small fixes

* jira logstash files

* fix variable confusion (thx Travis :)

* update readme (#121)

* Add ansible provisioning (#122)

* first ansible skeleton

* first commit of ansible installation of vulnwhisperer outside docker

* first ansible skeleton

* first commit of ansible installation of vulnwhisperer outside docker

* refactor the ansible role a bit

* update readme, add fail validation step to provision.yml and fix
typo when calling a logging function

* removing ansible from vulnwhisperer, creating a new repo for ansible deployment

* closed ticket metrics only get last 12 months tickets

* Update README.md

Fixing travis link

* Restoring custom qualys wrapper

* Restoring custom qualys wrapper

* Update README.md

* Updated the visualizations to support the 6.x ELK stack

* making the text message more generic

* removed visualizations that were not part of a dashboard

* Built a single file, since Kibana allows for that. Created a new scripted value in the logstash-vulnwhisperer that allows unique fingerprinting. Updated all visualizations to support the unique count of the scan_fingerprint. Fixes #130 Fixes #126 Fixes #111
2018-12-14 15:22:27 +01:00
ff8d078294 Update README.md 2018-12-04 16:33:18 -07:00
73bd289aa6 Restoring custom qualys wrapper 2018-12-04 16:23:41 -07:00
a63b19914c Restoring custom qualys wrapper 2018-12-04 16:23:28 -07:00
71227d6bd8 Update README.md
Fixing travis link
2018-12-04 15:56:32 -07:00
c88379dd2a closed ticket metrics only get last 12 months tickets 2018-11-16 09:38:18 +01:00
edabf8cda6 removing ansible from vulnwhisperer, creating a new repo for ansible deployment 2018-11-14 15:06:48 +01:00
3a09f60543 Add ansible provisioning (#122)
* first ansible skeleton

* first commit of ansible installation of vulnwhisperer outside docker

* first ansible skeleton

* first commit of ansible installation of vulnwhisperer outside docker

* refactor the ansible role a bit

* update readme, add fail validation step to provision.yml and fix
typo when calling a logging function
2018-11-14 10:14:12 +01:00
a8671a7303 update readme (#121) 2018-11-12 13:47:14 +01:00
8bd3c5cab9 Jira extras (#120)
* changing config template paths for qualys

* Update frameworks_example.ini

Will leave the qualys local folder as "qualys" for now instead of changing to one per module; this way it stays compatible with the current logstash, and we will be able to update master to drop the qualysapi fork once the new version is uploaded to the PyPI repository.
The PR on the qualysapi repo has already been merged, so the only thing missing is the upload to PyPI.

* initialize variable fullpath to avoid break

* fix get latest scan entry from db and ignore 'potential' not verified vulns

* added host resolution + a cache to speed up already-resolved hosts, jira logging

* make sure that vulnerability criticality appears as a label on ticket + automatic actions

* jira bulk report of scans, fix on nessus logging, jira time resolution and list all ticket reported assets

* added jira ticket data download + change default time window from 6 to 12 months

* small fixes

* jira logstash files

* fix variable confusion (thx Travis :)
2018-11-08 09:24:24 +01:00
0b571799dc merging testing 2018-11-05 15:18:16 +01:00
cf879b4731 merge with testing 2018-11-05 15:16:22 +01:00
b3f7144f85 Qualysapi update (#118)
* update qualysapi to latest + PR and refactored vulnwhisperer qualys module to qualys-web

* changing config template paths for qualys

* Update frameworks_example.ini

Will leave the qualys local folder as "qualys" for now instead of changing to one per module; this way it stays compatible with the current logstash, and we will be able to update master to drop the qualysapi fork once the new version is uploaded to the PyPI repository.
The PR on the qualysapi repo has already been merged, so the only thing missing is the upload to PyPI.

* delete qualysapi fork and added to requirements
2018-11-05 15:07:25 +01:00
0d5b6479ac qualysapi dependency removal 2018-11-05 15:06:29 +01:00
e3e416fe44 Rework logging using the stdlib machinery (#116)
* Rework logging using the stdlib machinery
Use the verbose or debug flag to enable/disable logging.DEBUG
Remove the vprint function from all classes
Remove bcolors from all code
Cleanup [INFO], [ERROR], {success} and similar

* fix some errors my local linter missed but Travis caught

* add coloredlogs and --fancy command line flag
2018-11-04 05:39:27 -06:00
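A minimal sketch of the rework described in #116: pick the level from a verbose/debug flag, configure stdlib logging once, and layer coloredlogs on top for the --fancy flag. The flag wiring and format string are assumptions, though the format mirrors log lines quoted elsewhere in this history (e.g. INFO:vulnWhispererOpenVAS:process_openvas_scans:Processing complete):

```python
import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument("-d", "--debug", action="store_true")
parser.add_argument("-F", "--fancy", action="store_true")
args = parser.parse_args()

# Debug flag toggles logging.DEBUG, replacing the old vprint/bcolors approach.
level = logging.DEBUG if args.debug else logging.INFO
logging.basicConfig(level=level,
                    format="%(levelname)s:%(name)s:%(funcName)s:%(message)s")

if args.fancy:
    import coloredlogs  # third-party package named in the commit message
    coloredlogs.install(level=level)

logging.getLogger("vulnWhispererOpenVAS").info("Processing complete")
```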
b7d6d6207f update qualysapi to latest + PR and refactored vulnwhisperer qualys module to qualys-web (#108)
* update qualysapi to latest + PR and refactored vulnwhisperer qualys module to qualys-web

* changing config template paths for qualys

* Update frameworks_example.ini

Will leave the qualys local folder as "qualys" for now instead of changing to one per module; this way it stays compatible with the current logstash, and we will be able to update master to drop the qualysapi fork once the new version is uploaded to the PyPI repository.
The PR on the qualysapi repo has already been merged, so the only thing missing is the upload to PyPI.
2018-10-18 04:39:08 -05:00
46955bff75 Merge pull request #109 from qmontal/master
fix issue jira-qualys criticality comparison
2018-10-17 14:20:32 +02:00
911b9910a8 fix issue jira-qualys criticality comparison 2018-10-17 14:17:49 +02:00
9383c12495 Added jira section to config file and fail check for config variable (#105)
* clean OS X .DS_Store files

* fix nessus end of line carriage, added JIRA args

* JIRA module fully working

* jira module working with nessus

* added check on already existing jira config, update README

* qualys_vm<->jira working, qualys_vm database entries with qualys_vm, improved checks

* JIRA module updates ticket's assets and comments update

* added JIRA auto-close function for resolved vulnerabilities

* fix if components variable empty issue

* fix creation of new ticket after updating existing one

* final fixes, added extra line in template

* added vulnerability criticality as label in order to be able to filter

* jira module now gets minimum criticality from the config file

* added jira config to frameworks_example.ini

* fail check for config variable in case it is left empty
2018-10-13 14:01:51 -05:00
4422db586d Jira module fully working (#104)
* clean OS X .DS_Store files

* fix nessus end of line carriage, added JIRA args

* JIRA module fully working

* jira module working with nessus

* added check on already existing jira config, update README

* qualys_vm<->jira working, qualys_vm database entries with qualys_vm, improved checks

* JIRA module updates ticket's assets and comments update

* added JIRA auto-close function for resolved vulnerabilities

* fix if components variable empty issue

* fix creation of new ticket after updating existing one

* final fixes, added extra line in template

* added vulnerability criticality as label in order to be able to filter
2018-10-12 09:30:14 -05:00
13bb288217 Update README.md
Added to README.md @pemontto as contributor
2018-10-06 20:45:38 +02:00
e1a54fc414 Update Slack link on README.md 2018-10-03 09:08:13 +02:00
bbb0cf3434 Merge pull request #95 from rogierm/rogierm-patch-3
Update 1000_vulnWhispererBaseVisuals.json
2018-09-27 08:43:42 +02:00
078bd9559e Update docker-compose.yml (#93)
increase file descriptors to allow elasticsearch to start.
2018-09-04 01:58:36 -04:00
258f9ae4ca Update 1000_vulnWhispererBaseVisuals.json
Update field to include keyword to prevent error: TypeError: "field" is a required parameter
2018-09-03 00:40:23 +02:00
fc5f9b5b7c Fix docker-compose logstash config (#92)
* ignore nessus requests warnings

* docker-compose fully working with vulnwhisperer integrated

* remove comments docker-compose

* documenting docker-compose

* Readme corrections

* fix after recheck everything works out of the box

* fix exits that break the no specified section execution mode

* fix docker qualysapi issue, updated README

* revert change on deps/qualysapi/qualysapi/util.py (no effect)

* temporarily changed Dockerfile link to the working one

* fix docker-compose logstash config

* permissions needed for logstash container to work

* changing default path qualys, there are no folders
2018-08-20 09:20:58 -04:00
a159d5b06f Update README.md 2018-08-19 12:03:38 -04:00
7b4202de52 Update README.md 2018-08-18 14:29:23 -04:00
8336b72314 Docker-compose fully working with vulnwhisperer integrated (#90)
* ignore nessus requests warnings

* docker-compose fully working with vulnwhisperer integrated

* remove comments docker-compose

* documenting docker-compose

* Readme corrections

* fix after recheck everything works out of the box

* fix exits that break the no specified section execution mode

* fix docker qualysapi issue, updated README

* revert change on deps/qualysapi/qualysapi/util.py (no effect)

* temporarily changed Dockerfile link to the working one
2018-08-17 08:51:28 -04:00
5b879e13c7 Silence requests warnings 2018-08-14 06:23:18 -04:00
a84576b551 No need to specify section to run (#88)
* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Fix error: "Cannot convert non-finite values (NA or inf) to integer"

When trying to download the results of Qualys Vulnerability Management scans, the following error pops up:

[FAIL] - Could not process scan/xxxxxxxxxx.xxxxx - Cannot convert non-finite values (NA or inf) to integer

This error is due to pandas operating on the scan results JSON file: the last element of the JSON doesn't fit the schema of the rest of the response. That element is "target_distribution_across_scanner_appliances", which contains the scanners used and the IP ranges that each scanner went through.

Taking out the last line solves the issue.

Also adding the qualys_vuln scheme to the frameworks_example.ini

* No need to specify section to run

Until now, VulnWhisperer would not run if a section was not specified,
but each module config has an "enabled" variable, so it now checks
which modules are enabled and runs them sequentially.

Done mainly to allow automation with a docker-compose instance, as the
VulnWhisperer docker image (https://github.com/HASecuritySolutions/docker_vulnwhisperer)
runs that command at the end.

* added to readme + detectify
2018-08-09 16:39:57 -07:00
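A hedged sketch of the behavior added in #88: instead of requiring a section argument, read every section of the ini and sequentially run each module whose enabled flag is true. The per-module runner is a hypothetical placeholder:

```python
from configparser import ConfigParser

config = ConfigParser()
config.read("frameworks.ini")

for section in config.sections():
    # Each module section carries an "enabled" variable per the commit message.
    if config.getboolean(section, "enabled", fallback=False):
        print("running enabled module: %s" % section)
        # run_module(section)  # hypothetical per-module entry point
```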
46be3c71ef example.ini is frameworks_example.ini (#77) 2018-07-06 22:18:26 -07:00
608a49d178 Update README.md 2018-07-05 13:47:22 -04:00
7f2c59f531 Qualys Vulnerability Management integration (#74)
* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Add Qualys vulnerability scans

* Use non-zero exit codes for failures

* Convert to strings for Logstash

* Update logstash config for vulnerability scans

* Update README

* Grab all scans statuses

* Fix error: "Cannot convert non-finite values (NA or inf) to integer"

When trying to download the results of Qualys Vulnerability Management scans, the following error pops up:

[FAIL] - Could not process scan/xxxxxxxxxx.xxxxx - Cannot convert non-finite values (NA or inf) to integer

This error is due to pandas operating on the scan results JSON file: the last element of the JSON doesn't fit the schema of the rest of the response. That element is "target_distribution_across_scanner_appliances", which contains the scanners used and the IP ranges that each scanner went through.

Taking out the last line solves the issue.

Also adding the qualys_vuln scheme to the frameworks_example.ini
2018-07-05 10:34:02 -07:00
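A minimal reproduction of the pandas failure described in #74, under the stated assumption that the trailing target_distribution_across_scanner_appliances element breaks the otherwise uniform schema; the other field names are illustrative:

```python
import pandas as pd

records = [
    {"ip": "10.0.0.1", "severity": 3},
    {"ip": "10.0.0.2", "severity": 5},
    # Trailing summary element with a different shape than the rest:
    {"target_distribution_across_scanner_appliances": "scanner1:10.0.0.0/24"},
]

df = pd.DataFrame(records)
# df["severity"].astype(int) would raise:
# ValueError: Cannot convert non-finite values (NA or inf) to integer,
# because the last record has no severity and pandas fills it with NaN.

df = pd.DataFrame(records[:-1])  # the fix: drop the trailing summary element
print(df["severity"].astype(int).tolist())  # [3, 5]
```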
3ac9a8156a Update template to version 5.x (#73)
* Update template to Elasticsearch 5.x

* Update template to Elasticsearch 5.x

I think the _all field is no longer needed from ES 5.x on, because the search executes across all fields if _all is disabled
2018-06-30 13:25:29 -07:00
9a08acb2d6 Update README.md 2018-06-26 13:04:40 -04:00
38d2eec065 Tenable.io support (#70)
* Basic tenable.io support

* Add tenable config section

* Use existing variable

* Fix indent

* Fix paren

* Use ternary syntax

* Update Logstash config for tenable.io
2018-06-26 13:03:08 -04:00
9b10711d34 Nessus bugfixes (#68)
* Handle cases where no scans are present

* Prevent infinite login loop with incorrect creds

* Print actual config file path

* Don't overwrite Nessus Synopsis with Description
2018-06-13 02:56:06 -04:00
9049b1ff0f Fix to apt-get install 2018-06-04 20:23:17 -04:00
d1d679b12f Update vulnwhisp.py 2018-05-04 10:03:58 -04:00
8ca1c3540d Removed no longer supported InsecureRequestWarning workaround. (#55)
* Removed no longer supported InsecureRequestWarning workaround.

* Add dependencies to README.md
2018-04-17 13:27:23 -04:00
e4e9ed7f28 Preserving newlines & carriage returns (#48)
* Preserve newlines & carriage returns

* Convert '\n' & '\r' to newlines & carriage returns
2018-04-10 08:54:21 -04:00
0982e26197 Updating config to be consistent with conf files 2018-04-02 17:53:24 -04:00
9fc9af37f7 VulnFramework Links (#39)
Quick update regarding issue #33
2018-03-07 14:21:15 -05:00
3984c879cd Update vulnwhisp.py 2018-03-05 07:03:49 -05:00
f83a5d89a3 ELK Sample Install (#37)
Updated Readme.md to include a Sample ELK Install guide addressing multiple issues around ELK Cluster/Node Configuration.
2018-03-04 19:14:51 -05:00
1400cacfcb Update README.md 2018-03-04 17:18:34 -05:00
6f96536145 Update setup.py 2018-03-04 17:15:32 -05:00
4a60306bdd Addition of openvas logstash config 2018-03-04 16:06:53 -05:00
d509c03d68 Remove template format 2018-03-04 15:41:23 -05:00
f6745b00fd Completion of OpenVAS module 2018-03-04 15:06:09 -05:00
21b2a03b36 Completion of OpenVAS module 2018-03-04 14:33:18 -05:00
a658b7abab Allow template defined config form IDs 2018-03-04 08:43:35 -05:00
f21d3a3f64 Addition of OpenVas -- ready for alpha 2018-03-03 15:54:24 -05:00
53b0b27cb2 Addition of OpenVas -- ready for alpha 2018-03-03 15:53:23 -05:00
d8e813ff5a Merge branch 'master' of github.com:austin-taylor/VulnWhisperer 2018-02-25 21:15:54 -05:00
a0de072394 Automatically create filepath and directory if it does not exist 2018-02-25 21:15:50 -05:00
13dbc79b27 Update 1000_nessus_process_file.conf 2018-02-17 22:57:32 -05:00
42e72c36dd Delete 9000_output_nessus.conf 2018-02-17 22:30:16 -05:00
554b739146 Update 9000_output_nessus.conf 2018-02-17 22:29:41 -05:00
54337d3bfa Addition of OpenVAS 2018-02-11 16:07:50 -05:00
8b63aa4fbc Addition of OpenVAS Connector 2018-02-11 16:02:16 -05:00
5362d6f9e8 Badge addition 2018-01-31 10:12:47 -05:00
645e5707a4 Badge addition 2018-01-31 10:11:14 -05:00
03a2125dd1 Update to README 2018-01-31 10:04:39 -05:00
8e85eb0981 Merge branch 'master' of github.com:austin-taylor/VulnWhisperer 2018-01-31 09:51:55 -05:00
136cc3ac61 Update to README 2018-01-31 09:51:51 -05:00
0c6611711c Merge pull request #23 from HASecuritySolutions/master
HA Sync
2018-01-29 22:38:39 -05:00
f3eb2fbda1 Merge pull request #5 from austin-taylor/master
Fork Sync
2018-01-29 22:38:04 -05:00
124cbf2753 Merge branch 'master' of github.com:austin-taylor/VulnWhisperer 2018-01-29 22:35:55 -05:00
13a01fbfd0 Update to make nessus visualizations consistent with qualys 2018-01-29 22:35:45 -05:00
bbfe7ad71b Merge pull request #22 from austin-taylor/add-license-1
Create LICENSE
2018-01-23 12:07:01 -05:00
330e90c7a0 Create LICENSE 2018-01-23 12:06:48 -05:00
f9af977145 Delete LICENSE 2018-01-23 12:06:03 -05:00
1a2091ac54 Update 1000_nessus_process_file.conf 2018-01-05 09:43:57 -05:00
b2c230f43b Rename logstash-nessus-template.json to logstash-vulnwhisperer-template.json 2018-01-05 07:51:51 -05:00
cdaf743435 Readme update 2018-01-04 18:48:52 -05:00
59b688a117 Update to README 2018-01-04 18:06:43 -05:00
009ccc24f6 Update to README 2018-01-04 18:06:06 -05:00
3141dcabd2 Update to qualys logstash conf to reflect example config 2018-01-04 18:01:01 -05:00
02afd9c24d Update to example logstash config 2018-01-04 17:59:20 -05:00
d70238fbeb Update to README 2018-01-04 17:52:30 -05:00
36b028a78a Merge branch 'master' of github.com:austin-taylor/VulnWhisperer 2018-01-04 17:51:29 -05:00
16b04d7763 Update to README 2018-01-04 17:51:24 -05:00
4ea72650df Merge pull request #4 from austin-taylor/master
Fork Sync
2018-01-04 16:44:30 -05:00
a1b9ff6273 Update to readme 2018-01-04 13:53:38 -05:00
bbad599a73 Update to readme 2018-01-04 13:49:45 -05:00
882a4be275 Update to readme 2018-01-04 13:49:08 -05:00
2bf8c2be8b Update to readme 2018-01-04 13:36:19 -05:00
2b057f290b Remind user to select section if using a config 2018-01-03 18:33:14 -05:00
4359478e3d Getting started steps 2018-01-02 07:53:33 -05:00
ff50354bf9 Getting started steps 2018-01-02 07:10:57 -05:00
0ab53890ca Merge pull request #3 from austin-taylor/master
sync
2018-01-02 04:15:52 -05:00
ee6d61605b Merge branch 'master' of github.com:austin-taylor/VulnWhisperer 2018-01-02 03:12:15 -05:00
ada256cc46 README Update 2018-01-02 03:12:11 -05:00
8215f4e938 Set theme jekyll-theme-leap-day 2018-01-02 03:09:49 -05:00
30f966f354 Create CNAME 2018-01-02 02:59:41 -05:00
8af1ddd9e9 README Update 2018-01-02 02:51:34 -05:00
c3850247c9 Merge branch 'master' of github.com:austin-taylor/VulnWhisperer 2018-01-02 02:51:29 -05:00
745e4b3a0b README Update 2018-01-02 02:48:52 -05:00
c80383aaa6 Merge pull request #18 from cclauss/patch-1
Remove leading slash
2018-01-02 02:34:54 -05:00
e128d8c753 Travis Config update 2018-01-02 02:31:25 -05:00
66987810df Merge branch 'master' of github.com:austin-taylor/VulnWhisperer 2018-01-02 02:29:23 -05:00
6e500a4829 Add build status to README 2018-01-02 02:29:19 -05:00
92d6a7788c Remove leading slash 2018-01-02 08:26:15 +01:00
6ce3a254e4 Merge pull request #17 from cclauss/patch-1
flake8 . --exclude=/deps/qualysapi
2018-01-02 02:24:15 -05:00
9fe048fc5f flake8 . --exclude=/deps/qualysapi 2018-01-02 08:20:37 +01:00
67f9017f92 flake8 --ignore=deps/qualysapi 2018-01-02 08:17:03 +01:00
03d7954da9 Testing build with no submodules 2018-01-02 01:46:03 -05:00
ff02340e32 Merge pull request #16 from cclauss/patch-1
Add free automated flake8 testing of pull requests
2018-01-02 01:39:18 -05:00
45f8ea55d3 Add free automated flake8 testing of pull requests
[Flake8](http://flake8.pycqa.org) tests can help find Python syntax errors, undefined names, and other code quality issues. [Travis CI](https://travis-ci.org) is a framework for running flake8 and other tests, and it is free for open source projects like this one. The owner of this repo would need to go to https://travis-ci.org/profile and flip the repository switch __on__ to enable free automated flake8 testing of each pull request.
2018-01-02 07:31:24 +01:00
05608b29bb Update README.md 2018-01-01 14:58:42 -05:00
4d6ad51b50 README Update 2018-01-01 07:28:48 -05:00
b953e1d97b README Update 2018-01-01 07:28:16 -05:00
8f536ed2ac Documentation update 2017-12-31 07:04:57 -05:00
c5115fba00 Update to README 2017-12-31 06:02:48 -05:00
ce529dd4f9 Plugin name converted to scan name 2017-12-30 23:57:58 -05:00
3d34916e4c Adding name of scan and scan reference 2017-12-30 23:54:47 -05:00
690841c4df Readme Update 2017-12-30 23:06:09 -05:00
5f3b02aa10 Readme Update 2017-12-30 22:35:16 -05:00
646a5f94ba Readme Update 2017-12-30 22:33:36 -05:00
c33fbb256a Logo update 2017-12-30 22:32:47 -05:00
a2a15094b4 Update to README 2017-12-30 22:24:40 -05:00
2bd32fd9dc Update to README 2017-12-30 22:23:24 -05:00
5c7137a606 Updated dashboard 2017-12-30 22:18:15 -05:00
78d9a077f5 Merge pull request #2 from austin-taylor/master
Sync with master
2017-12-30 21:27:36 -05:00
732237ad5a Update to logstash template 2017-12-30 21:24:19 -05:00
4a78387ce6 Merge branch 'master' of github.com:austin-taylor/VulnWhisperer 2017-12-30 21:23:14 -05:00
64751c47dd Addition of VulnWhisperer-Qualys logstash files 2017-12-30 21:20:45 -05:00
bec9cdd4d0 Addition of VulnWhisperer-Qualys logstash files 2017-12-30 20:21:08 -05:00
d7fc63c952 Better database handling 2017-12-30 14:40:49 -05:00
de62400730 Update to write path logic 2017-12-30 14:39:59 -05:00
0ba3cdf579 Update to count tracker 2017-12-30 14:07:02 -05:00
e03860d087 Check for new records 2017-12-30 13:12:58 -05:00
dc7ad082be Bug fix for counter in Nessus and format fix for qualys 2017-12-30 11:27:19 -05:00
0cd2e28ccd Database tracking for processed Qualys scans 2017-12-30 11:21:10 -05:00
07a99eda54 Database tracking for processed Qualys scans 2017-12-30 11:20:31 -05:00
469f3fee81 Support for json output 2017-12-29 22:42:32 -05:00
bb776bd9f2 Update to requirements.txt 2017-12-29 02:32:15 -05:00
55c0713baf Update to README 2017-12-28 23:52:53 -05:00
caa64b4ca2 Update to README 2017-12-28 23:50:09 -05:00
fb9f86634e Update to README 2017-12-28 23:49:16 -05:00
24417cd1bb Update README.md 2017-12-28 23:39:31 -05:00
34638bcf42 Fix for str casting 2017-12-28 23:25:05 -05:00
c041693018 Column update for scans and N/A cleanup 2017-12-28 22:47:58 -05:00
d03ba15772 Addition of category class and special class for Qualys Scanning Reports. Also added additional enrichments to reports 2017-12-28 21:57:21 -05:00
a274341d23 Removing commented code 2017-12-27 10:42:08 -05:00
ee1e79dcd5 Refactored classes to be more modular, update to ini file and submodules 2017-12-27 10:39:25 -05:00
2997e2d2b6 Refactored classes to be more modular, update to ini file and submodules 2017-12-27 10:38:44 -05:00
abe8925ebc Addition of submodules, update to connectors, base class start 2017-12-27 02:18:28 -05:00
b26ff7d9c9 Addition of submodules, update to connectors, base class start 2017-12-27 02:17:56 -05:00
cec794daa8 Addition of submodules, update to connectors, base class start 2017-12-27 02:17:01 -05:00
dd3a8bb649 Merge pull request #1 from austin-taylor/master
Sync with master
2017-12-26 17:10:07 -05:00
bf537df475 Field Cleanup 2017-12-26 07:53:46 -05:00
4f6003066e Adding custom version of QualysAPI 2017-12-25 22:47:34 -05:00
61ba3f0804 Fixed multiple bugs, cleaned up formatting, produces solid csv output for Qualys Web App scans 2017-12-25 22:44:30 -05:00
10f8809723 Merge branch 'master' of github.com:austin-taylor/VulnWhisperer 2017-12-22 17:28:38 -05:00
796db314f3 Addition of Qualys WebApp Processing 2017-12-22 17:28:33 -05:00
d9ff8532ee Addition of Qualys WebApp Processing 2017-12-22 17:28:01 -05:00
dc2491e8b0 Merge pull request #12 from HASecuritySolutions/master
Fork Sync
2017-12-20 01:39:26 -07:00
a9a21c2e90 Allow for any directories to be monitored 2017-12-20 03:00:04 -05:00
16369f0e40 Update README.md 2017-12-20 01:11:28 -05:00
2d8a50d1ad Merge pull request #10 from cybergoof/trim-input
Trim input
2017-12-07 22:43:11 -07:00
4657241b70 Merge pull request #9 from cybergoof/file-test
Checks to make sure config file exists.  Provides descriptive error if it doesn't
2017-12-07 22:42:43 -07:00
c1c4a45562 remove leading and trailing spaces around all input switches. Fixes austin-taylor/VulnWhisperer#6 2017-12-08 00:40:25 -05:00
fcd938b75a Put in a check to make sure that the config file exists. FIXES austin-taylor/VulnWhisperer#4 2017-12-08 00:25:15 -05:00
f8905e8c4b Silence NoneType object 2017-12-07 01:47:06 -05:00
ac61390b88 Update README.md 2017-12-07 01:19:45 -05:00
eb22d9475c Merge pull request #8 from cybergoof/passwords-argument
Added an argument for username and password
2017-11-28 22:18:50 -08:00
35b7093762 Added an argument for username and password, which takes precedence over nessus. Fixed #5 2017-11-27 10:02:53 -05:00
fedbb18bb2 Visualizations for elastic 5.5+ 2017-10-06 15:35:59 -04:00
8808b9e458 Update 9000_output_nessus.conf 2017-10-06 14:33:11 -05:00
b108c1fbeb Create docker-compose.yml 2017-10-06 14:25:09 -05:00
6ea508503d Renaming template 2017-10-06 15:14:25 -04:00
39662cc4cc Merge pull request #3 from alias454/master
Add logstash config for local file pickup
2017-08-06 16:30:07 -04:00
a63f69b3d4 Add logstash config for local file pickup 2017-08-06 15:15:09 -05:00
8be2527ff4 Merge pull request #2 from alias454/master
Add requirements file
2017-08-06 15:29:04 -04:00
d35645363d Update README 2017-08-06 14:08:01 -05:00
df03e7b928 Add requirements file 2017-08-06 14:03:46 -05:00
6a29cb7b84 Addition of logstash configs 2017-07-25 12:23:47 -04:00
dab91faff8 Update to README 2017-07-10 01:51:17 -04:00
db5ab0a265 Update to README 2017-07-10 01:48:38 -04:00
11d2e91321 Update to README 2017-07-10 01:48:04 -04:00
be3938daa5 Update to README 2017-07-10 01:45:11 -04:00
e8c7c5e13e Update to README 2017-07-10 01:42:53 -04:00
e645c33eea Update to README 2017-07-10 01:41:27 -04:00
8cd4e0cc19 Update to README 2017-07-10 01:38:39 -04:00
aed171de81 Update to README 2017-07-10 01:38:18 -04:00
72e6043b09 Update to README 2017-07-10 01:35:16 -04:00
68116635a2 Update to README 2017-07-10 01:32:38 -04:00
4df413d11f Update to README 2017-07-10 01:32:04 -04:00
3b64f7c27d Kibana Dashboards, filebeat and logstash configs 2017-07-10 00:20:17 -04:00
034b204255 Kibana Dashboards, filebeat and logstash configs 2017-07-10 00:15:28 -04:00
ccf774099f Only retrieves completed scans 2017-06-22 02:05:03 -04:00
34d4821c24 Update to README 2017-06-20 02:02:33 -04:00
0d96664209 Update to README 2017-06-19 22:30:54 -04:00
35c5119390 Update to README 2017-06-19 22:26:11 -04:00
69230af210 Update to README 2017-06-19 22:25:42 -04:00
14a451a492 Update to README and removed unneeded modules 2017-06-19 22:24:34 -05:00
71 changed files with 8488 additions and 410 deletions

.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@ -0,0 +1,41 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**Affected module**
Which of the modules is not working as expected, e.g. Nessus, Qualys WAS, Qualys VM, OpenVAS, ELK, Jira...
**VulnWhisperer debug trail**
If possible, attach the debug trail of the execution for further investigation (run with the `-d` flag).
**To reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. See the error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**System VulnWhisperer runs on (please complete the following information):**
- OS: [e.g. Ubuntu Server]
- Version: [e.g. 18.04.2 LTS]
- VulnWhisperer version: [e.g. 1.7.1]
**Additional context**
Add any other context about the problem here.
**Important note**
As VulnWhisperer relies on ELK for data aggregation, you are expected to already have an ELK instance or the knowledge to deploy one.
To speed up deployment, we provide an up-to-date, tested docker-compose file that deploys all the needed infrastructure; we will support its deployment, but we will not provide support for ELK instances themselves.

.gitignore vendored

@ -1,3 +1,11 @@
# Vulnwhisperer stuff
data/
logs/
elk6/vulnwhisperer.ini
resources/elk6/vulnwhisperer.ini
configs/frameworks_example.ini
tests/data
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
@ -100,3 +108,6 @@ ENV/
# mypy
.mypy_cache/
# Mac
.DS_Store

.gitmodules vendored Normal file

@ -0,0 +1,3 @@
[submodule "tests/data"]
path = tests/data
url = https://github.com/HASecuritySolutions/VulnWhisperer-tests.git

.travis.yml Normal file

@ -0,0 +1,36 @@
group: travis_latest
language: python
cache: pip
python:
- 2.7
env:
- TEST_PATH=tests/data
services:
- docker
# - 3.6
#matrix:
# allow_failures:
# - python: 3.6 - Commenting out testing for Python 3.6 until ready
before_install:
- mkdir -p ./data/esdata1
- mkdir -p ./data/es_snapshots
- sudo chown -R 1000:1000 ./data/es*
- docker build -t vulnwhisperer-local .
- docker-compose -f docker-compose-test.yml up -d
install:
- pip install -r requirements.txt
- pip install flake8 # pytest # add another testing frameworks later
before_script:
# stop the build if there are Python syntax errors or undefined names
- flake8 . --count --exclude=deps/qualysapi --select=E901,E999,F821,F822,F823 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
- flake8 . --count --exit-zero --exclude=deps/qualysapi --max-complexity=10 --max-line-length=127 --statistics
script:
- python setup.py install
- bash tests/test-vuln_whisperer.sh
- bash tests/test-docker.sh
notifications:
on_success: change
on_failure: change # `always` will be the setting once code changes slow down

CNAME Normal file

@ -0,0 +1 @@
www.vulnwhisperer.com

Dockerfile Normal file

@ -0,0 +1,26 @@
FROM centos:7
MAINTAINER Justin Henderson justin@hasecuritysolutions.com
RUN yum update -y && \
yum install -y python python-devel git gcc && \
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py && \
python get-pip.py
WORKDIR /opt/VulnWhisperer
COPY requirements.txt requirements.txt
COPY setup.py setup.py
COPY vulnwhisp/ vulnwhisp/
COPY bin/ bin/
COPY configs/frameworks_example.ini frameworks_example.ini
RUN python setup.py clean --all && \
pip install -r requirements.txt
WORKDIR /opt/VulnWhisperer
RUN python setup.py install
CMD vuln_whisperer -c /opt/VulnWhisperer/frameworks_example.ini
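For reference, the `.travis.yml` earlier in this changeset builds this image with `docker build -t vulnwhisperer-local .`; the default `CMD` assumes a filled-in `frameworks_example.ini` is present at `/opt/VulnWhisperer/frameworks_example.ini`, either baked in by the `COPY` above or mounted at runtime as the docker-compose files below do.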

LICENSE

@ -1,21 +1,201 @@
MIT License
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
Copyright (c) 2017 Austin Taylor
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
1. Definitions.
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

README.md

@ -1,2 +1,138 @@
# VulnWhisperer
Create actionable data from your Vulnerability Scans
<p align="center"><img src="https://git.gudita.com/Cyberdefense/VulnWhisperer/raw/branch/master/docs/source/vuln_whisperer_logo_s.png" width="400px"></p>
<p align="center"> <i>Créez des <u><b>données exploitables</b></u> à partir de vos scans de vulnérabilités</i> </p>
<p align="center" style="width:400px"><img src="https://git.gudita.com/Cyberdefense/VulnWhisperer/raw/branch/master/docs/source/vulnWhispererWebApplications.png" style="width:400px"></p>
VulnWhisperer est un outil de gestion des vulnérabilités et un agrégateur de rapports. VulnWhisperer récupère tous les rapports des différents scanners de vulnérabilités et crée un fichier avec un nom unique pour chacun, utilisant ensuite ces données pour se synchroniser avec Jira et alimenter Logstash. Jira effectue une synchronisation complète en cycle fermé avec les données fournies par les scanners, tandis que Logstash indexe et étiquette toutes les informations contenues dans le rapport (voir les fichiers logstash dans `/resources/elk6/pipeline/`). Les données sont ensuite envoyées à ElasticSearch pour être indexées, et finissent dans un format visuel et consultable dans Kibana avec des tableaux de bord déjà définis.
VulnWhisperer est un projet open-source financé par la communauté. VulnWhisperer est actuellement fonctionnel mais nécessite une refonte de la documentation et une revue de code. Si vous souhaitez de l'aide, si vous êtes intéressé par de nouvelles fonctionnalités, ou si vous recherchez un support payant, veuillez nous contacter à **info@sahelcyber.com**.
### Supported Vulnerability Scanners
- [X] [Nessus (**v6**/**v7**/**v8**)](https://www.tenable.com/products/nessus/nessus-professional)
- [X] [Qualys Web Applications](https://www.qualys.com/apps/web-app-scanning/)
- [X] [Qualys Vulnerability Management](https://www.qualys.com/apps/vulnerability-management/)
- [X] [OpenVAS (**v7**/**v8**/**v9**)](http://www.openvas.org/)
- [X] [Tenable.io](https://www.tenable.com/products/tenable-io)
- [ ] [Detectify](https://detectify.com/)
- [ ] [Nexpose](https://www.rapid7.com/products/nexpose/)
- [ ] [Insight VM](https://www.rapid7.com/products/insightvm/)
- [ ] [NMAP](https://nmap.org/)
- [ ] [Burp Suite](https://portswigger.net/burp)
- [ ] [OWASP ZAP](https://www.zaproxy.org/)
- [ ] And more to come
### Supported Reporting Platforms
- [X] [Elastic Stack (**v6**/**v7**)](https://www.elastic.co/elk-stack)
- [ ] [OpenSearch - Being considered for the next update](https://opensearch.org/)
- [X] [Jira](https://www.atlassian.com/software/jira)
- [ ] [Splunk](https://www.splunk.com/)
## Getting Started
1) Follow the [installation requirements](#installreq)
2) Fill out the section you want to process in the <a href="https://git.gudita.com/Cyberdefense/VulnWhisperer/src/branch/master/configs/frameworks_example.ini">frameworks_example.ini</a> file
3) [JIRA] If using Jira, fill out the Jira configuration in the config file mentioned above.
4) [ELK] Modify the IP settings in the <a href="https://git.gudita.com/Cyberdefense/VulnWhisperer/src/branch/master/resources/elk6/pipeline">Logstash files to match your environment</a> and import them into your logstash config directory (default is `/etc/logstash/conf.d/`)
5) [ELK] Import the <a href="https://git.gudita.com/Cyberdefense/VulnWhisperer/src/branch/master/resources/elk6/kibana.json">Kibana visualizations</a>
6) [Run Vulnwhisperer](#run)
> **Important note about Wiki links:** Gitea migration does not always carry over a GitHub project's Wiki (which is technically a separate repository). If the Wiki links (such as the ELK deployment guide) do not work, you may need to recreate those pages manually in the "Wiki" tab of your repository on Gitea.
Need assistance or just want to chat? Join our [Slack channel](https://join.slack.com/t/vulnwhisperer/shared_invite/enQtNDQ5MzE4OTIyODU0LWQxZTcxYTY0MWUwYzA4MTlmMWZlYWY2Y2ZmM2EzNDFmNWVlOTM4MzNjYzI0YzdkMDA0YmQyYWRhZGI2NGUxNGI)
## Requirements
* Python 2.7
* A vulnerability scanner
* A reporting system: Jira / ElasticStack 6.6
<a id="installreq"></a>
## Installation Requirements - VulnWhisperer (may require sudo)
**Install the OS package dependencies** (for Debian-based distributions; CentOS does not need them)
```shell
sudo apt-get install zlib1g-dev libxml2-dev libxslt1-dev
```
(Optional) Use a python virtualenv so you don't mess with the host's python libraries:
```shell
virtualenv venv  # will create the python 2.7 virtual environment
source venv/bin/activate  # start the env; pip will run here and should install libraries without sudo
deactivate  # to leave the virtual environment once you are done
```
Install the python library dependencies:
```shell
pip install -r /path/to/VulnWhisperer/requirements.txt
cd /path/to/VulnWhisperer
python setup.py install
```
(Optional) If you are using a proxy, export the proxy URL as an environment variable:
```shell
export HTTP_PROXY=http://example.com:8080
export HTTPS_PROXY=http://example.com:8080
```
Now you are ready to pull down scans.
## Configuration
There are a few configuration steps to set up VulnWhisperer:
* Configure the ini file
* Set up the Logstash file
* Import the ElasticSearch templates
* Import the Kibana dashboards
<a id="run"></a>
## Run
To run, fill out the configuration file with your vulnerability scanner settings. Then you can execute it from the command line:
```shell
# (optional: -F -> provides "Fancy" log coloring, useful for following along when running VulnWhisperer manually)
vuln_whisperer -c configs/frameworks_example.ini -s nessus
# or
vuln_whisperer -c configs/frameworks_example.ini -s qualys
```
If no section is specified (e.g. `-s nessus`), vulnwhisperer will check the config file for modules with the property `enabled=true` and run them sequentially.
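A rough sketch of that module scraping (the real entry point is `vwConfig.get_sections_with_attribute('enabled')`, visible in the CLI script diff later in this changeset; the helper below is illustrative, not the project's code):

```python
# Illustrative only: collect the config sections that declare enabled=true,
# mirroring what vulnwhisperer does when no -s section is given.
from ConfigParser import ConfigParser  # Python 2.7, as required above

def enabled_sections(config_path):
    config = ConfigParser()
    config.read(config_path)
    return [s for s in config.sections()
            if config.has_option(s, 'enabled') and config.getboolean(s, 'enabled')]

# With configs/frameworks_example.ini as shipped in this changeset, this
# returns ['nessus', 'tenable', 'qualys_web', 'qualys_vuln'].
```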
## Docker-compose
ELK is a whole world by itself, and for newcomers to the platform it requires basic Linux skills and usually a bit of troubleshooting until it is deployed and working as expected. As we are not able to provide support for every user's ELK problems, we set up a docker-compose which includes:
- VulnWhisperer
- Logstash 6.6
- ElasticSearch 6.6
- Kibana 6.6
The docker-compose just requires you to specify the paths where the VulnWhisperer data will be saved and where the config files reside. If run straight after a git clone, just adding the scanner config to the VulnWhisperer config file (`/resources/elk6/vulnwhisperer.ini`) makes it work out of the box.
It also takes care of loading the Kibana dashboards and visualizations automatically through the API, which otherwise has to be done manually when Kibana starts up.
For more info about the docker-compose, check the docker-compose wiki or the FAQ.
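For instance, once the scanner section is added to `/resources/elk6/vulnwhisperer.ini`, something like `docker-compose -f docker-compose.v6.yml up -d` from the repository root should bring the whole stack up (compose file name taken from the `docker-compose.v6.yml` included later in this changeset).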
## Roadmap
Our current roadmap is as follows:
- [ ] Create a vulnerability standard
- [ ] Map each scanner's results to the standard
- [ ] Create scanner-module guidelines for easy integration of new scanners
- [ ] Refactor the code to reuse functions and allow full compatibility between modules
- [ ] Change the Nessus CSV output to JSON
- [ ] Adapt the single Logstash config to the standard and the Kibana dashboards
- [ ] Implement the Detectify scanner
- [ ] Implement Splunk reporting/dashboards
On top of that, we try to focus on fixing bugs as soon as possible, which might delay development. We also warmly welcome PRs, and once the new standard is implemented it will be very easy to add compatibility with new scanners.
The vulnerability standard will initially be a simple single-level JSON with all the matching information from the different scanners under standardized variable names, while keeping the rest of the variables as-is.
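A hypothetical illustration of such a record, using only field names that already appear in the Logstash configs in this changeset (this is an illustration, not a committed spec):

```python
# Hypothetical single-level "standard" finding; field names are borrowed from
# the existing Logstash configs, values are made up. Any scanner-specific
# fields would be carried through unchanged alongside these keys.
standard_finding = {
    "asset": "192.168.0.54",
    "plugin_id": "12345",
    "plugin_name": "Example Check",
    "cve": "CVE-2018-0000",
    "cvss": 7.5,
    "risk": "High",
    "risk_score": 7.5,
    "port": 443,
    "protocol": "tcp",
    "scan_name": "weekly_external",
}
```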

_config.yml Normal file

@ -0,0 +1 @@
theme: jekyll-theme-leap-day

bin/vuln_whisperer

@ -1,39 +1,128 @@
#!/usr/bin/env python
#!/usr/bin/python
# -*- coding: utf-8 -*-
__author__ = 'Austin Taylor'
#Written by Austin Taylor
#www.austintaylor.io
from vulnwhisp.vulnwhisp import vulnWhisperer
from vulnwhisp.utils.cli import bcolors
from vulnwhisp.base.config import vwConfig
from vulnwhisp.test.mock import mockAPI
import os
import argparse
import sys
import logging
def isFileValid(parser, arg):
if not os.path.exists(arg):
parser.error("The file %s does not exist!" % arg)
else:
return arg
def main():
parser = argparse.ArgumentParser(description=""" VulnWhisperer is designed to create actionable data from\
your vulnerability scans through aggregation of historical scans.""")
parser.add_argument('-c', '--config', dest='config', required=False, default='frameworks.ini',
help='Path of config file')
help='Path of config file', type=lambda x: isFileValid(parser, x.strip()))
parser.add_argument('-s', '--section', dest='section', required=False,
help='Section in config')
parser.add_argument('--source', dest='source', required=False,
help='JIRA required only! Source scanner to report')
parser.add_argument('-n', '--scanname', dest='scanname', required=False,
help='JIRA required only! Scan name from scan to report')
parser.add_argument('-v', '--verbose', dest='verbose', action='store_true', default=True,
help='Prints status out to screen (defaults to True)')
parser.add_argument('-u', '--username', dest='username', required=False, default=None,
help='The NESSUS username', type=lambda x: x.strip())
parser.add_argument('-p', '--password', dest='password', required=False, default=None,
help='The NESSUS password', type=lambda x: x.strip())
parser.add_argument('-F', '--fancy', action='store_true',
help='Enable colourful logging output')
parser.add_argument('-d', '--debug', action='store_true',
help='Enable debugging messages')
parser.add_argument('--mock', action='store_true',
help='Enable mocked API responses')
parser.add_argument('--mock_dir', dest='mock_dir', required=False, default=None,
help='Path of test directory')
args = parser.parse_args()
# First setup logging
logging.basicConfig(
stream=sys.stdout,
#format only applies when not using -F flag for colouring
format='%(levelname)s:%(name)s:%(funcName)s:%(message)s',
level=logging.DEBUG if args.debug else logging.INFO
)
logger = logging.getLogger()
# we set up the logger to log as well to file
fh = logging.FileHandler('vulnwhisperer.log')
fh.setLevel(logging.DEBUG if args.debug else logging.INFO)
fh.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s - %(funcName)s:%(message)s", "%Y-%m-%d %H:%M:%S"))
logger.addHandler(fh)
if args.fancy:
import coloredlogs
coloredlogs.install(level='DEBUG' if args.debug else 'INFO')
if args.mock:
mock_api = mockAPI(args.mock_dir, args.verbose)
mock_api.mock_endpoints()
exit_code = 0
try:
if args.config and not args.section:
# this remains a print since we are in the main binary
print('WARNING: {warning}'.format(warning='No section was specified, vulnwhisperer will scrape enabled modules from config file. \
\nPlease specify a section using -s. \
\nExample vuln_whisperer -c config.ini -s nessus'))
logger.info('No section was specified, vulnwhisperer will scrape enabled modules from the config file.')
config = vwConfig(config_in=args.config)
enabled_sections = config.get_sections_with_attribute('enabled')
vw = vulnWhisperer(config=args.config,
verbose=args.verbose)
for section in enabled_sections:
try:
vw = vulnWhisperer(config=args.config,
profile=section,
verbose=args.verbose,
username=args.username,
password=args.password,
source=args.source,
scanname=args.scanname)
exit_code += vw.whisper_vulnerabilities()
except Exception as e:
logger.error("VulnWhisperer was unable to perform the processing on '{}'".format(args.source))
else:
logger.info('Running vulnwhisperer for section {}'.format(args.section))
vw = vulnWhisperer(config=args.config,
profile=args.section,
verbose=args.verbose,
username=args.username,
password=args.password,
source=args.source,
scanname=args.scanname)
exit_code += vw.whisper_vulnerabilities()
vw.whisper_nessus()
sys.exit(1)
close_logging_handlers(logger)
sys.exit(exit_code)
except Exception as e:
if args.verbose:
print('{red} ERROR: {error}{endc}'.format(red=bcolors.FAIL, error=e, endc=bcolors.ENDC))
# this will remain a print since we are in the main binary
logger.error('{}'.format(str(e)))
print('ERROR: {error}'.format(error=e))
# TODO: fix this to NOT be exit 2 unless in error
close_logging_handlers(logger)
sys.exit(2)
close_logging_handlers(logger)
def close_logging_handlers(logger):
for handler in logger.handlers:
handler.close()
logger.removeFilter(handler)
if __name__ == '__main__':
main()
main()

configs/frameworks_example.ini

@ -2,10 +2,93 @@
enabled=true
hostname=localhost
port=8834
access_key=
secret_key=
username=nessus_username
password=nessus_password
write_path=/opt/vulnwhisp/scans
db_path=/opt/vulnwhisp/database
write_path=/opt/VulnWhisperer/data/nessus/
db_path=/opt/VulnWhisperer/data/database
trash=false
verbose=true
[tenable]
enabled=true
hostname=cloud.tenable.com
port=443
access_key=
secret_key=
username=tenable.io_username
password=tenable.io_password
write_path=/opt/VulnWhisperer/data/tenable/
db_path=/opt/VulnWhisperer/data/database
trash=false
verbose=true
[qualys_web]
#Reference https://www.qualys.com/docs/qualys-was-api-user-guide.pdf to find your API
enabled = true
hostname = qualysapi.qg2.apps.qualys.com
username = exampleuser
password = examplepass
write_path=/opt/VulnWhisperer/data/qualys_web/
db_path=/opt/VulnWhisperer/data/database
verbose=true
# Set the maximum number of retries each connection should attempt.
#Note, this applies only to failed connections and timeouts, never to requests where the server returns a response.
max_retries = 10
# Template ID will need to be retrieved for each document. Please follow the reference guide above for instructions on how to get your template ID.
template_id = 126024
[qualys_vuln]
#Reference https://www.qualys.com/docs/qualys-api-vmpc-user-guide.pdf to find your API
enabled = true
hostname = qualysapi.qg2.apps.qualys.com
username = exampleuser
password = examplepass
write_path=/opt/VulnWhisperer/data/qualys_vuln/
db_path=/opt/VulnWhisperer/data/database
verbose=true
[detectify]
#Reference https://developer.detectify.com/
enabled = false
hostname = api.detectify.com
#username variable used as apiKey
username = exampleuser
#password variable used as secretKey
password = examplepass
write_path =/opt/VulnWhisperer/data/detectify/
db_path = /opt/VulnWhisperer/data/database
verbose = true
[openvas]
enabled = false
hostname = localhost
port = 4000
username = exampleuser
password = examplepass
write_path=/opt/VulnWhisperer/data/openvas/
db_path=/opt/VulnWhisperer/data/database
verbose=true
[jira]
enabled = false
hostname = jira-host
username = username
password = password
write_path = /opt/VulnWhisperer/data/jira/
db_path = /opt/VulnWhisperer/data/database
verbose = true
dns_resolv = False
#Sample jira report scan, will automatically be created for existent scans
#[jira.qualys_vuln.test_scan]
#source = qualys_vuln
#scan_name = Test Scan
#jira_project = PROJECT
; if multiple components, separate by "," = None
#components =
; minimum criticality to report (low, medium, high or critical) = None
#min_critical_to_report = high

configs/test.ini Executable file

@ -0,0 +1,94 @@
[nessus]
enabled=true
hostname=nessus
port=443
access_key=
secret_key=
username=nessus_username
password=nessus_password
write_path=/opt/VulnWhisperer/data/nessus/
db_path=/opt/VulnWhisperer/data/database
trash=false
verbose=true
[tenable]
enabled=true
hostname=tenable
port=443
access_key=
secret_key=
username=tenable.io_username
password=tenable.io_password
write_path=/opt/VulnWhisperer/data/tenable/
db_path=/opt/VulnWhisperer/data/database
trash=false
verbose=true
[qualys_web]
#Reference https://www.qualys.com/docs/qualys-was-api-user-guide.pdf to find your API
enabled = false
hostname = qualys_web
username = exampleuser
password = examplepass
write_path=/opt/VulnWhisperer/data/qualys_web/
db_path=/opt/VulnWhisperer/data/database
verbose=true
# Set the maximum number of retries each connection should attempt.
#Note, this applies only to failed connections and timeouts, never to requests where the server returns a response.
max_retries = 10
# Template ID will need to be retrieved for each document. Please follow the reference guide above for instructions on how to get your template ID.
template_id = 126024
[qualys_vuln]
#Reference https://www.qualys.com/docs/qualys-api-vmpc-user-guide.pdf to find your API
enabled = true
hostname = qualys_vuln
username = exampleuser
password = examplepass
write_path=/opt/VulnWhisperer/data/qualys_vuln/
db_path=/opt/VulnWhisperer/data/database
verbose=true
[detectify]
#Reference https://developer.detectify.com/
enabled = false
hostname = detectify
#username variable used as apiKey
username = exampleuser
#password variable used as secretKey
password = examplepass
write_path =/opt/VulnWhisperer/data/detectify/
db_path = /opt/VulnWhisperer/data/database
verbose = true
[openvas]
enabled = false
hostname = openvas
port = 4000
username = exampleuser
password = examplepass
write_path=/opt/VulnWhisperer/data/openvas/
db_path=/opt/VulnWhisperer/data/database
verbose=true
[jira]
enabled = false
hostname = jira-host
username = username
password = password
write_path = /opt/VulnWhisperer/data/jira/
db_path = /opt/VulnWhisperer/data/database
verbose = true
dns_resolv = False
#Sample jira report scan, will automatically be created for existent scans
#[jira.qualys_vuln.test_scan]
#source = qualys_vuln
#scan_name = Test Scan
#jira_project = PROJECT
; if multiple components, separate by "," = None
#components =
; minimum criticality to report (low, medium, high or critical) = None
#min_critical_to_report = high

docker-compose-test.yml Normal file

@ -0,0 +1,97 @@
version: '2'
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
container_name: elasticsearch
environment:
- cluster.name=vulnwhisperer
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms1g -Xmx1g"
- xpack.security.enabled=false
- cluster.routing.allocation.disk.threshold_enabled=false
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
mem_limit: 8g
volumes:
- ./data/esdata1:/usr/share/elasticsearch/data
- ./data/es_snapshots:/snapshots
ports:
- 9200:9200
#restart: always
networks:
esnet:
aliases:
- elasticsearch.local
kibana:
image: docker.elastic.co/kibana/kibana:6.6.0
container_name: kibana
environment:
SERVER_NAME: kibana
ELASTICSEARCH_URL: http://elasticsearch:9200
ports:
- 5601:5601
depends_on:
- elasticsearch
networks:
esnet:
aliases:
- kibana.local
kibana-config:
image: alpine
container_name: kibana-config
volumes:
- ./resources/elk6/init_kibana.sh:/opt/init_kibana.sh
- ./resources/elk6/kibana_APIonly.json:/opt/kibana_APIonly.json
- ./resources/elk6/logstash-vulnwhisperer-template.json:/opt/index-template.json
command: sh -c "apk add --no-cache curl bash && chmod +x /opt/init_kibana.sh && chmod +r /opt/kibana_APIonly.json && cd /opt/ && /bin/bash /opt/init_kibana.sh" # /opt/kibana_APIonly.json"
networks:
esnet:
aliases:
- kibana-config.local
logstash:
image: docker.elastic.co/logstash/logstash:6.6.0
container_name: logstash
volumes:
- ./resources/elk6/pipeline/:/usr/share/logstash/pipeline
- ./data/vulnwhisperer/:/opt/VulnWhisperer/data
# - ./resources/elk6/logstash.yml:/usr/share/logstash/config/logstash.yml
environment:
- xpack.monitoring.enabled=false
depends_on:
- elasticsearch
ports:
- 9600:9600
networks:
esnet:
aliases:
- logstash.local
vulnwhisperer:
# image: hasecuritysolutions/vulnwhisperer:latest
image: vulnwhisperer-local
container_name: vulnwhisperer
entrypoint: [
"vuln_whisperer",
"-F",
"-c",
"/opt/VulnWhisperer/vulnwhisperer.ini",
"--mock",
"--mock_dir",
"/tests/data"
]
volumes:
- ./data/vulnwhisperer/:/opt/VulnWhisperer/data
# - ./resources/elk6/vulnwhisperer.ini:/opt/VulnWhisperer/vulnwhisperer.ini
- ./configs/test.ini:/opt/VulnWhisperer/vulnwhisperer.ini
- ./tests/data/:/tests/data
network_mode: host
networks:
esnet:

docker-compose.v6.yml Normal file

@ -0,0 +1,86 @@
version: '2'
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
container_name: elasticsearch
environment:
- cluster.name=vulnwhisperer
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms1g -Xmx1g"
- xpack.security.enabled=false
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
mem_limit: 8g
volumes:
- esdata1:/usr/share/elasticsearch/data
ports:
- 9200:9200
#restart: always
networks:
esnet:
aliases:
- elasticsearch.local
kibana:
image: docker.elastic.co/kibana/kibana:6.6.0
container_name: kibana
environment:
SERVER_NAME: kibana
ELASTICSEARCH_URL: http://elasticsearch:9200
ports:
- 5601:5601
depends_on:
- elasticsearch
networks:
esnet:
aliases:
- kibana.local
kibana-config:
image: alpine
container_name: kibana-config
volumes:
- ./resources/elk6/init_kibana.sh:/opt/init_kibana.sh
- ./resources/elk6/kibana_APIonly.json:/opt/kibana_APIonly.json
- ./resources/elk6/logstash-vulnwhisperer-template.json:/opt/index-template.json
command: sh -c "apk add --no-cache curl bash && chmod +x /opt/init_kibana.sh && chmod +r /opt/kibana_APIonly.json && cd /opt/ && /bin/bash /opt/init_kibana.sh" # /opt/kibana_APIonly.json"
networks:
esnet:
aliases:
- kibana-config.local
logstash:
image: docker.elastic.co/logstash/logstash:6.6.0
container_name: logstash
volumes:
- ./resources/elk6/pipeline/:/usr/share/logstash/pipeline
- ./data/:/opt/VulnWhisperer/data
#- ./resources/elk6/logstash.yml:/usr/share/logstash/config/logstash.yml
environment:
- xpack.monitoring.enabled=false
depends_on:
- elasticsearch
networks:
esnet:
aliases:
- logstash.local
vulnwhisperer:
image: hasecuritysolutions/vulnwhisperer:latest
container_name: vulnwhisperer
entrypoint: [
"vuln_whisperer",
"-c",
"/opt/VulnWhisperer/vulnwhisperer.ini"
]
volumes:
- ./data/:/opt/VulnWhisperer/data
- ./resources/elk6/vulnwhisperer.ini:/opt/VulnWhisperer/vulnwhisperer.ini
network_mode: host
volumes:
esdata1:
driver: local
networks:
esnet:

8 binary image files added (not shown in diff): 356 KiB, 18 KiB, 81 KiB, 449 KiB, 20 KiB, 185 KiB, 273 KiB, 48 KiB.

requirements.txt Normal file

@ -0,0 +1,12 @@
pandas==0.20.3
setuptools==40.4.3
pytz==2017.2
Requests==2.20.0
lxml==4.6.5
future-fstrings
bs4
jira
bottle
coloredlogs
qualysapi==6.0.0
httpretty

docker-compose.yml

@ -0,0 +1,72 @@
version: '2'
services:
vulnwhisp-es1:
image: docker.elastic.co/elasticsearch/elasticsearch:5.6.2
container_name: vulnwhisp-es1
environment:
- cluster.name=vulnwhisperer
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
mem_limit: 8g
volumes:
- esdata1:/usr/share/elasticsearch/data
ports:
- 9200:9200
#restart: always
networks:
esnet:
aliases:
- vulnwhisp-es1.local
vulnwhisp-ks1:
image: docker.elastic.co/kibana/kibana:5.6.2
environment:
SERVER_NAME: vulnwhisp-ks1
ELASTICSEARCH_URL: http://vulnwhisp-es1:9200
ports:
- 5601:5601
depends_on:
- vulnwhisp-es1
networks:
esnet:
aliases:
- vulnwhisp-ks1.local
vulnwhisp-ls1:
image: docker.elastic.co/logstash/logstash:5.6.2
container_name: vulnwhisp-ls1
volumes:
- ./docker/1000_nessus_process_file.conf:/usr/share/logstash/pipeline/1000_nessus_process_file.conf
- ./docker/2000_qualys_web_scans.conf:/usr/share/logstash/pipeline/2000_qualys_web_scans.conf
- ./docker/3000_openvas.conf:/usr/share/logstash/pipeline/3000_openvas.conf
- ./docker/4000_jira.conf:/usr/share/logstash/pipeline/4000_jira.conf
- ./docker/logstash.yml:/usr/share/logstash/config/logstash.yml
- ./data/:/opt/VulnWhisperer/data
environment:
- xpack.monitoring.enabled=false
depends_on:
- vulnwhisp-es1
networks:
esnet:
aliases:
- vulnwhisp-ls1.local
vulnwhisp-vulnwhisperer:
image: hasecuritysolutions/vulnwhisperer:latest
container_name: vulnwhisp-vulnwhisperer
volumes:
- ./data/:/opt/VulnWhisperer/data
- ./configs/frameworks_example.ini:/opt/VulnWhisperer/frameworks_example.ini
network_mode: host
volumes:
esdata1:
driver: local
networks:
esnet:

1000_nessus_process_file.conf

@ -0,0 +1,220 @@
# Author: Austin Taylor and Justin Henderson
# Email: email@austintaylor.io
# Last Update: 12/20/2017
# Version 0.3
# Description: Take in nessus reports from vulnWhisperer and pumps into logstash
input {
file {
path => "/opt/VulnWhisperer/data/nessus/**/*"
start_position => "beginning"
tags => "nessus"
type => "nessus"
}
file {
path => "/opt/VulnWhisperer/data/tenable/*.csv"
start_position => "beginning"
tags => "tenable"
type => "tenable"
}
}
filter {
if "nessus" in [tags] or "tenable" in [tags] {
# Drop the header column
if [message] =~ "^Plugin ID" { drop {} }
csv {
# columns => ["plugin_id", "cve", "cvss", "risk", "asset", "protocol", "port", "plugin_name", "synopsis", "description", "solution", "see_also", "plugin_output"]
columns => ["plugin_id", "cve", "cvss", "risk", "asset", "protocol", "port", "plugin_name", "synopsis", "description", "solution", "see_also", "plugin_output", "asset_uuid", "vulnerability_state", "ip", "fqdn", "netbios", "operating_system", "mac_address", "plugin_family", "cvss_base", "cvss_temporal", "cvss_temporal_vector", "cvss_vector", "cvss3_base", "cvss3_temporal", "cvss3_temporal_vector", "cvss_vector", "system_type", "host_start", "host_end"]
separator => ","
source => "message"
}
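# Convert literal "\n" / "\r" escape sequences in the free-text fields back
# into real newlines and carriage returns (92.chr is the backslash character).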
ruby {
code => "if event.get('description')
event.set('description', event.get('description').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('synopsis')
event.set('synopsis', event.get('synopsis').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('solution')
event.set('solution', event.get('solution').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('see_also')
event.set('see_also', event.get('see_also').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('plugin_output')
event.set('plugin_output', event.get('plugin_output').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end"
}
#If using filebeats as your source, you will need to replace the "path" field to "source"
grok {
match => { "path" => "(?<scan_name>[a-zA-Z0-9_.\-]+)_%{INT:scan_id}_%{INT:history_id}_%{INT:last_updated}.csv$" }
tag_on_failure => []
}
date {
match => [ "last_updated", "UNIX" ]
target => "@timestamp"
remove_field => ["last_updated"]
}
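# Map the textual Nessus/Tenable risk rating to a numeric risk_number
# (0 = None ... 4 = Critical) so dashboards can sort and range-filter on severity.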
if [risk] == "None" {
mutate { add_field => { "risk_number" => 0 }}
}
if [risk] == "Low" {
mutate { add_field => { "risk_number" => 1 }}
}
if [risk] == "Medium" {
mutate { add_field => { "risk_number" => 2 }}
}
if [risk] == "High" {
mutate { add_field => { "risk_number" => 3 }}
}
if [risk] == "Critical" {
mutate { add_field => { "risk_number" => 4 }}
}
if ![cve] or [cve] == "nan" {
mutate { remove_field => [ "cve" ] }
}
if ![cvss] or [cvss] == "nan" {
mutate { remove_field => [ "cvss" ] }
}
if ![cvss_base] or [cvss_base] == "nan" {
mutate { remove_field => [ "cvss_base" ] }
}
if ![cvss_temporal] or [cvss_temporal] == "nan" {
mutate { remove_field => [ "cvss_temporal" ] }
}
if ![cvss_temporal_vector] or [cvss_temporal_vector] == "nan" {
mutate { remove_field => [ "cvss_temporal_vector" ] }
}
if ![cvss_vector] or [cvss_vector] == "nan" {
mutate { remove_field => [ "cvss_vector" ] }
}
if ![cvss3_base] or [cvss3_base] == "nan" {
mutate { remove_field => [ "cvss3_base" ] }
}
if ![cvss3_temporal] or [cvss3_temporal] == "nan" {
mutate { remove_field => [ "cvss3_temporal" ] }
}
if ![cvss3_temporal_vector] or [cvss3_temporal_vector] == "nan" {
mutate { remove_field => [ "cvss3_temporal_vector" ] }
}
if ![description] or [description] == "nan" {
mutate { remove_field => [ "description" ] }
}
if ![mac_address] or [mac_address] == "nan" {
mutate { remove_field => [ "mac_address" ] }
}
if ![netbios] or [netbios] == "nan" {
mutate { remove_field => [ "netbios" ] }
}
if ![operating_system] or [operating_system] == "nan" {
mutate { remove_field => [ "operating_system" ] }
}
if ![plugin_output] or [plugin_output] == "nan" {
mutate { remove_field => [ "plugin_output" ] }
}
if ![see_also] or [see_also] == "nan" {
mutate { remove_field => [ "see_also" ] }
}
if ![synopsis] or [synopsis] == "nan" {
mutate { remove_field => [ "synopsis" ] }
}
if ![system_type] or [system_type] == "nan" {
mutate { remove_field => [ "system_type" ] }
}
mutate {
remove_field => [ "message" ]
add_field => { "risk_score" => "%{cvss}" }
}
mutate {
convert => { "risk_score" => "float" }
}
if [risk_score] == 0 {
mutate {
add_field => { "risk_score_name" => "info" }
}
}
if [risk_score] > 0 and [risk_score] < 3 {
mutate {
add_field => { "risk_score_name" => "low" }
}
}
if [risk_score] >= 3 and [risk_score] < 6 {
mutate {
add_field => { "risk_score_name" => "medium" }
}
}
if [risk_score] >= 6 and [risk_score] < 9 {
mutate {
add_field => { "risk_score_name" => "high" }
}
}
if [risk_score] >= 9 {
mutate {
add_field => { "risk_score_name" => "critical" }
}
}
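# e.g. a finding whose cvss is 9.8 (hypothetical value) gets risk_score 9.8 and risk_score_name "critical".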
# Compensating controls - adjust risk_score
# Adobe and Java are not allowed to run in the browser unless whitelisted,
# so lower the score by dividing by 3 (the adjustment is subjective to your risk appetite).
# Modify and uncomment when ready to use.
#if [risk_score] != 0 {
# if [plugin_name] =~ "Adobe" and [risk_score] > 6 or [plugin_name] =~ "Java" and [risk_score] > 6 {
# ruby {
# code => "event.set('risk_score', event.get('risk_score') / 3)"
# }
# mutate {
# add_field => { "compensating_control" => "Adobe and Java blocked in browsers unless the site is whitelisted." }
# }
# }
#}
# Add tags for reporting based on assets or criticality
if [asset] == "dc01" or [asset] == "dc02" or [asset] == "pki01" or [asset] == "192.168.0.54" or [asset] =~ "^192\.168\.0\." or [asset] =~ "^42.42.42." {
mutate {
add_tag => [ "critical_asset" ]
}
}
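# With the patterns above, dc01/dc02/pki01 and any 192.168.0.x or 42.42.42.x address get tagged critical_asset.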
#if [asset] =~ "^192\.168\.[45][0-9][0-9]\.1$" or [asset] =~ "^192.168\.[50]\.[0-9]{1,2}\.1$"{
# mutate {
# add_tag => [ "has_hipaa_data" ]
# }
#}
#if [asset] =~ "^192\.168\.[45][0-9][0-9]\." {
# mutate {
# add_tag => [ "hipaa_asset" ]
# }
#}
if [asset] =~ "^hr" {
mutate {
add_tag => [ "pci_asset" ]
}
}
#if [asset] =~ "^10\.0\.50\." {
# mutate {
# add_tag => [ "web_servers" ]
# }
#}
}
}
output {
if "nessus" in [tags] or "tenable" in [tags] or [type] in [ "nessus", "tenable" ] {
# stdout { codec => rubydebug }
elasticsearch {
hosts => [ "vulnwhisp-es1.local:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}
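# A quick syntax check (a sketch, assuming a stock Logstash install with this file on disk):
#   bin/logstash -f 1000_nessus_process_file.conf --config.test_and_exit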


@ -0,0 +1,153 @@
# Author: Austin Taylor and Justin Henderson
# Email: austin@hasecuritysolutions.com
# Last Update: 12/30/2017
# Version 0.3
# Description: Takes in Qualys web scan reports from VulnWhisperer and pumps them into Logstash
input {
file {
path => "/opt/VulnWhisperer/data/qualys/*.json"
type => json
codec => json
start_position => "beginning"
tags => [ "qualys" ]
}
}
filter {
if "qualys" in [tags] {
grok {
match => { "path" => [ "(?<tags>qualys_vuln)_scan_%{DATA}_%{INT:last_updated}.json$", "(?<tags>qualys_web)_%{INT:app_id}_%{INT:last_updated}.json$" ] }
tag_on_failure => []
}
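# Example (hypothetical filenames): "qualys_vuln_scan_weekly_1514764800.json" retags the event
# as qualys_vuln, while "qualys_web_123_1514764800.json" retags it as qualys_web with app_id=123;
# both capture last_updated as an epoch timestamp.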
mutate {
replace => [ "message", "%{message}" ]
#gsub => [
# "message", "\|\|\|", " ",
# "message", "\t\t", " ",
# "message", " ", " ",
# "message", " ", " ",
# "message", " ", " ",
# "message", "nan", " ",
# "message",'\n',''
#]
}
if "qualys_web" in [tags] {
mutate {
add_field => { "asset" => "%{web_application_name}" }
add_field => { "risk_score" => "%{cvss}" }
}
} else if "qualys_vuln" in [tags] {
mutate {
add_field => { "asset" => "%{ip}" }
add_field => { "risk_score" => "%{cvss}" }
}
}
if [risk] == "1" {
mutate { add_field => { "risk_number" => 0 }}
mutate { replace => { "risk" => "info" }}
}
if [risk] == "2" {
mutate { add_field => { "risk_number" => 1 }}
mutate { replace => { "risk" => "low" }}
}
if [risk] == "3" {
mutate { add_field => { "risk_number" => 2 }}
mutate { replace => { "risk" => "medium" }}
}
if [risk] == "4" {
mutate { add_field => { "risk_number" => 3 }}
mutate { replace => { "risk" => "high" }}
}
if [risk] == "5" {
mutate { add_field => { "risk_number" => 4 }}
mutate { replace => { "risk" => "critical" }}
}
mutate {
remove_field => "message"
}
if [first_time_detected] {
date {
match => [ "first_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_detected"
}
}
if [first_time_tested] {
date {
match => [ "first_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_tested"
}
}
if [last_time_detected] {
date {
match => [ "last_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_detected"
}
}
if [last_time_tested] {
date {
match => [ "last_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_tested"
}
}
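# The Qualys date fields above look like "28 Feb 2018 11:30AM GMT+02:00" (hypothetical value)
# or end in a bare "GMT" with no offset, hence the two match patterns.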
date {
match => [ "last_updated", "UNIX" ]
target => "@timestamp"
remove_field => "last_updated"
}
mutate {
convert => { "plugin_id" => "integer"}
convert => { "id" => "integer"}
convert => { "risk_number" => "integer"}
convert => { "risk_score" => "float"}
convert => { "total_times_detected" => "integer"}
convert => { "cvss_temporal" => "float"}
convert => { "cvss" => "float"}
}
if [risk_score] == 0 {
mutate {
add_field => { "risk_score_name" => "info" }
}
}
if [risk_score] > 0 and [risk_score] < 3 {
mutate {
add_field => { "risk_score_name" => "low" }
}
}
if [risk_score] >= 3 and [risk_score] < 6 {
mutate {
add_field => { "risk_score_name" => "medium" }
}
}
if [risk_score] >= 6 and [risk_score] < 9 {
mutate {
add_field => { "risk_score_name" => "high" }
}
}
if [risk_score] >= 9 {
mutate {
add_field => { "risk_score_name" => "critical" }
}
}
if [asset] =~ "\.yourdomain\.(com|net)$" {
mutate {
add_tag => [ "critical_asset" ]
}
}
}
}
output {
if "qualys" in [tags] {
stdout { codec => rubydebug }
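# rubydebug output is handy while testing but verbose in production; consider commenting it out once events flow.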
elasticsearch {
hosts => [ "vulnwhisp-es1.local:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}


@ -0,0 +1,146 @@
# Author: Austin Taylor and Justin Henderson
# Email: austin@hasecuritysolutions.com
# Last Update: 03/04/2018
# Version 0.3
# Description: Takes in OpenVAS scan reports from VulnWhisperer and pumps them into Logstash
input {
file {
path => "/opt/VulnWhisperer/data/openvas/*.json"
type => json
codec => json
start_position => "beginning"
tags => [ "openvas_scan", "openvas" ]
}
}
filter {
if "openvas_scan" in [tags] {
mutate {
replace => [ "message", "%{message}" ]
gsub => [
"message", "\|\|\|", " ",
"message", "\t\t", " ",
"message", " ", " ",
"message", " ", " ",
"message", " ", " ",
"message", "nan", " ",
"message",'\n',''
]
}
grok {
match => { "path" => "openvas_scan_%{DATA:scan_id}_%{INT:last_updated}.json$" }
tag_on_failure => []
}
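# Example (hypothetical filename): "openvas_scan_a1b2c3_1520150400.json" parses to
# scan_id="a1b2c3" and last_updated=1520150400 (epoch).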
mutate {
add_field => { "risk_score" => "%{cvss}" }
}
if [risk] == "1" {
mutate { add_field => { "risk_number" => 0 }}
mutate { replace => { "risk" => "info" }}
}
if [risk] == "2" {
mutate { add_field => { "risk_number" => 1 }}
mutate { replace => { "risk" => "low" }}
}
if [risk] == "3" {
mutate { add_field => { "risk_number" => 2 }}
mutate { replace => { "risk" => "medium" }}
}
if [risk] == "4" {
mutate { add_field => { "risk_number" => 3 }}
mutate { replace => { "risk" => "high" }}
}
if [risk] == "5" {
mutate { add_field => { "risk_number" => 4 }}
mutate { replace => { "risk" => "critical" }}
}
mutate {
remove_field => "message"
}
if [first_time_detected] {
date {
match => [ "first_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_detected"
}
}
if [first_time_tested] {
date {
match => [ "first_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_tested"
}
}
if [last_time_detected] {
date {
match => [ "last_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_detected"
}
}
if [last_time_tested] {
date {
match => [ "last_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_tested"
}
}
date {
match => [ "last_updated", "UNIX" ]
target => "@timestamp"
remove_field => "last_updated"
}
mutate {
convert => { "plugin_id" => "integer"}
convert => { "id" => "integer"}
convert => { "risk_number" => "integer"}
convert => { "risk_score" => "float"}
convert => { "total_times_detected" => "integer"}
convert => { "cvss_temporal" => "float"}
convert => { "cvss" => "float"}
}
if [risk_score] == 0 {
mutate {
add_field => { "risk_score_name" => "info" }
}
}
if [risk_score] > 0 and [risk_score] < 3 {
mutate {
add_field => { "risk_score_name" => "low" }
}
}
if [risk_score] >= 3 and [risk_score] < 6 {
mutate {
add_field => { "risk_score_name" => "medium" }
}
}
if [risk_score] >= 6 and [risk_score] < 9 {
mutate {
add_field => { "risk_score_name" => "high" }
}
}
if [risk_score] >= 9 {
mutate {
add_field => { "risk_score_name" => "critical" }
}
}
# Add your critical assets by subnet or by hostname. You can comment this block out if you don't want to tag any, but the critical-asset dashboard panel will then break.
if [asset] =~ "^10\.0\.100\." {
mutate {
add_tag => [ "critical_asset" ]
}
}
}
}
output {
if "openvas" in [tags] {
stdout { codec => rubydebug }
elasticsearch {
hosts => [ "vulnwhisp-es1.local:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}


@ -0,0 +1,21 @@
# Description: Takes in Jira tickets from VulnWhisperer and pumps them into Logstash
input {
file {
path => "/opt/VulnWhisperer/data/jira/*.json"
type => json
codec => json
start_position => "beginning"
tags => [ "jira" ]
}
}
output {
if "jira" in [tags] {
stdout { codec => rubydebug }
elasticsearch {
hosts => [ "vulnwhisp-es1.local:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}
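# Note: unlike the other pipelines there is no filter block here - the Jira JSON
# is indexed as-is, assuming the export is already flat enough to query in Kibana.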


@ -0,0 +1,5 @@
path.config: /usr/share/logstash/pipeline/
xpack.monitoring.elasticsearch.password: changeme
xpack.monitoring.elasticsearch.url: vulnwhisp-es1.local:9200
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.enabled: false
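# Note: with xpack.monitoring.enabled set to false above, the elastic/changeme
# credentials are unused placeholders; change them if monitoring is ever enabled.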


@ -0,0 +1,122 @@
{
"order": 0,
"template": "logstash-vulnwhisperer-*",
"settings": {
"index": {
"routing": {
"allocation": {
"total_shards_per_node": "2"
}
},
"mapping": {
"total_fields": {
"limit": "3000"
}
},
"refresh_interval": "5s",
"number_of_shards": "1",
"number_of_replicas": "0"
}
},
"mappings": {
"_default_": {
"_all": {
"enabled": false
},
"dynamic_templates": [
{
"message_field": {
"path_match": "message",
"match_mapping_type": "string",
"mapping": {
"type": "text",
"norms": false
}
}
},
{
"string_fields": {
"match": "*",
"match_mapping_type": "string",
"mapping": {
"type": "text",
"norms": false,
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
}
],
"properties": {
"plugin_id": {
"type": "float"
},
"last_updated": {
"type": "date"
},
"geoip": {
"dynamic": true,
"type": "object",
"properties": {
"ip": {
"type": "ip"
},
"latitude": {
"type": "float"
},
"location": {
"type": "geo_point"
},
"longitude": {
"type": "float"
}
}
},
"risk_score": {
"type": "float"
},
"source": {
"type": "keyword"
},
"synopsis": {
"type": "keyword"
},
"see_also": {
"type": "keyword"
},
"@timestamp": {
"type": "date"
},
"cve": {
"type": "keyword"
},
"solution": {
"type": "keyword"
},
"port": {
"type": "integer"
},
"host": {
"type": "text"
},
"@version": {
"type": "keyword"
},
"risk": {
"type": "keyword"
},
"assign_ip": {
"type": "ip"
},
"cvss": {
"type": "float"
}
}
}
},
"aliases": {}
}
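One way to load this template by hand (a sketch, assuming an Elasticsearch 5.x/6.x node at vulnwhisp-es1.local:9200 and this file saved as logstash-vulnwhisperer.json):

curl -s -XPUT 'http://vulnwhisp-es1.local:9200/_template/logstash-vulnwhisperer' \
  -H 'Content-Type: application/json' \
  -d @logstash-vulnwhisperer.json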


@ -0,0 +1,116 @@
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.full.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
#=========================== Filebeat prospectors =============================
filebeat.prospectors:
# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.
- input_type: log
# Paths that should be crawled and fetched. Glob based paths.
paths:
# Linux Example
#- /var/log/*.log
#Windows Example
- c:\nessus\My Scans\*
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
#exclude_lines: ["^DBG"]
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
#include_lines: ["^ERR", "^WARN"]
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
#exclude_files: [".gz$"]
# Optional additional fields. These fields can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1
### Multiline options
# Multiline can be used for log messages spanning multiple lines. This is common
# for Java Stack Traces or C-Line Continuation
# The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
#multiline.pattern: ^\[
# Defines if the pattern set under pattern should be negated or not. Default is false.
#multiline.negate: false
# Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
# that was (not) matched before or after, or as long as a pattern is not matched based on negate.
# Note: After is the equivalent to previous and before is the equivalent to next in Logstash
#multiline.match: after
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
#================================ Outputs =====================================
# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
# Array of hosts to connect to.
# hosts: ["logstash01:9200"]
# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "elastic"
#password: "changeme"
#----------------------------- Logstash output --------------------------------
output.logstash:
# The Logstash hosts
hosts: ["logstashserver1:5044", "logstashserver2:5044", "logstashserver3:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
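# A quick config check (assuming filebeat is on PATH): "filebeat test config -c filebeat.yml"
# (Filebeat 6.x+; older 5.x releases used "filebeat -configtest" instead).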


@ -0,0 +1,450 @@
[
{
"_id": "80158c90-57c1-11e7-b484-a970fc9d150a",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - HIPAA TL",
"visState": "{\"type\":\"timelion\",\"title\":\"VulnWhisperer - HIPAA TL\",\"params\":{\"expression\":\".es(index=logstash-vulnwhisperer-*,q='risk_score:>9 AND tags:pci_asset').label(\\\"PCI Assets\\\"),.es(index=logstash-vulnwhisperer-*,q='risk_score:>9 AND tags:has_hipaa_data').label(\\\"Has HIPAA Data\\\"),.es(index=logstash-vulnwhisperer-*,q='risk_score:>9 AND tags:hipaa_asset').label(\\\"HIPAA Assets\\\")\",\"interval\":\"auto\"}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{}"
}
}
},
{
"_id": "479deab0-8a39-11e7-a58a-9bfcb3761a3d",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - TL - TaggedAssetsPluginNames",
"visState": "{\"title\":\"VulnWhisperer - TL - TaggedAssetsPluginNames\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-vulnwhisperer-*', q='tags:critical_asset OR tags:hipaa_asset OR tags:pci_asset', split=\\\"plugin_name.keyword:10\\\").bars(width=4).label(regex=\\\".*:(.+)>.*\\\",label=\\\"$1\\\")\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "84f5c370-8a38-11e7-a58a-9bfcb3761a3d",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - TL - CriticalAssetsPluginNames",
"visState": "{\"title\":\"VulnWhisperer - TL - CriticalAssetsPluginNames\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-vulnwhisperer-*', q='tags:critical_asset', split=\\\"plugin_name.keyword:10\\\").bars(width=4).label(regex=\\\".*:(.+)>.*\\\",label=\\\"$1\\\")\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "307cdae0-8a38-11e7-a58a-9bfcb3761a3d",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - TL - PluginNames",
"visState": "{\"title\":\"VulnWhisperer - TL - PluginNames\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-vulnwhisperer-*', split=\\\"plugin_name.keyword:25\\\").bars(width=4).label(regex=\\\".*:(.+)>.*\\\",label=\\\"$1\\\")\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "5093c620-44e9-11e7-8014-ede06a7e69f8",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Mitigation Readme",
"visState": "{\"title\":\"VulnWhisperer - Mitigation Readme\",\"type\":\"markdown\",\"params\":{\"markdown\":\"** Legend **\\n\\n* [Common Vulnerability Scoring System (CVSS)](https://nvd.nist.gov/vuln-metrics/cvss) is the NIST vulnerability scoring system\\n* Risk Number is residual risk score calculated from CVSS, which is adjusted to be specific to the netowrk owner, which accounts for services not in use such as Java and Flash\\n* Vulnerabilities by Tag are systems tagged with HIPAA and PCI identification.\\n\\n\\n** Workflow **\\n* Select 10.0 under Risk Number to identify Critical Vulnerabilities. \\n* For more information about a CVE, scroll down and click the CVE link.\\n* To filter by tags, use one of the following filters:\\n** tags:has_hipaa_data, tags:pci_asset, tags:hipaa_asset, tags:critical_asset**\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "7e7fbc90-3df2-11e7-a44e-c79ca8efb780",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer-PluginID",
"visState": "{\"title\":\"VulnWhisperer-PluginID\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"plugin_id\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "5a3c0340-3eb3-11e7-a192-93f36fbd9d05",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer-CVSSHeatmap",
"visState": "{\"title\":\"VulnWhisperer-CVSSHeatmap\",\"type\":\"heatmap\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"enableHover\":false,\"legendPosition\":\"right\",\"times\":[],\"colorsNumber\":4,\"colorSchema\":\"Yellow to Red\",\"setColorRange\":false,\"colorsRange\":[],\"invertColors\":false,\"percentageMode\":false,\"valueAxes\":[{\"show\":false,\"id\":\"ValueAxis-1\",\"type\":\"value\",\"scale\":{\"type\":\"linear\",\"defaultYExtents\":false},\"labels\":{\"show\":false,\"rotate\":0,\"color\":\"#555\"}}]},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"host.keyword\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\"}},{\"id\":\"3\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"cvss.keyword\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"_term\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"defaultColors\":{\"0 - 3500\":\"rgb(255,255,204)\",\"3500 - 7000\":\"rgb(254,217,118)\",\"7000 - 10500\":\"rgb(253,141,60)\",\"10500 - 14000\":\"rgb(227,27,28)\"}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "1de9e550-3df1-11e7-a44e-c79ca8efb780",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer-Description",
"visState": "{\"title\":\"VulnWhisperer-Description\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"description.keyword\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Description\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "13c7d4e0-3df3-11e7-a44e-c79ca8efb780",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer-Solution",
"visState": "{\"title\":\"VulnWhisperer-Solution\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"solution.keyword\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Solution\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "297df800-3f7e-11e7-bd24-6903e3283192",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Plugin Name",
"visState": "{\"title\":\"VulnWhisperer - Plugin Name\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"plugin_name.keyword\",\"size\":10,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Plugin Name\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "de1a5f40-3f85-11e7-97f9-3777d794626d",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - ScanName",
"visState": "{\"title\":\"VulnWhisperer - ScanName\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"scan_name.keyword\",\"size\":20,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Scan Name\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "ecbb99c0-3f84-11e7-97f9-3777d794626d",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Total",
"visState": "{\"title\":\"VulnWhisperer - Total\",\"type\":\"metric\",\"params\":{\"handleNoResults\":true,\"fontSize\":60},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"Total\"}}],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "471a3580-3f6b-11e7-88e7-df1abe6547fb",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Vulnerabilities by Tag",
"visState": "{\"title\":\"VulnWhisperer - Vulnerabilities by Tag\",\"type\":\"table\",\"params\":{\"perPage\":3,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"3\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"bucket\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"tags:has_hipaa_data\",\"analyze_wildcard\":true}}},\"label\":\"Systems with HIPAA data\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"tags:pci_asset\",\"analyze_wildcard\":true}}},\"label\":\"PCI Systems\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"tags:hipaa_asset\",\"analyze_wildcard\":true}}},\"label\":\"HIPAA Systems\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "35b6d320-3f7f-11e7-bd24-6903e3283192",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Residual Risk",
"visState": "{\"title\":\"VulnWhisperer - Residual Risk\",\"type\":\"table\",\"params\":{\"perPage\":15,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":0,\"direction\":\"desc\"},\"showTotal\":false,\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"risk_score\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Risk Number\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":0,\"direction\":\"desc\"}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "a9225930-3df2-11e7-a44e-c79ca8efb780",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer-Risk",
"visState": "{\"title\":\"VulnWhisperer-Risk\",\"type\":\"table\",\"params\":{\"perPage\":4,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"risk\",\"size\":10,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Risk Severity\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "2f979030-44b9-11e7-a818-f5f80dfc3590",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - ScanBarChart",
"visState": "{\"aggs\":[{\"enabled\":true,\"id\":\"1\",\"params\":{},\"schema\":\"metric\",\"type\":\"count\"},{\"enabled\":true,\"id\":\"2\",\"params\":{\"customLabel\":\"Scan Name\",\"field\":\"plugin_name.keyword\",\"order\":\"desc\",\"orderBy\":\"1\",\"size\":10},\"schema\":\"segment\",\"type\":\"terms\"}],\"listeners\":{},\"params\":{\"addLegend\":true,\"addTimeMarker\":false,\"addTooltip\":true,\"defaultYExtents\":false,\"legendPosition\":\"right\",\"mode\":\"stacked\",\"scale\":\"linear\",\"setYExtents\":false,\"times\":[]},\"title\":\"VulnWhisperer - ScanBarChart\",\"type\":\"histogram\"}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "a6508640-897a-11e7-bbc0-33592ce0be1e",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Critical Assets Aggregated",
"visState": "{\"title\":\"VulnWhisperer - Critical Assets Aggregated\",\"type\":\"heatmap\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"enableHover\":true,\"legendPosition\":\"right\",\"times\":[],\"colorsNumber\":4,\"colorSchema\":\"Green to Red\",\"setColorRange\":true,\"colorsRange\":[{\"from\":0,\"to\":3},{\"from\":3,\"to\":7},{\"from\":7,\"to\":9},{\"from\":9,\"to\":11}],\"invertColors\":false,\"percentageMode\":false,\"valueAxes\":[{\"show\":false,\"id\":\"ValueAxis-1\",\"type\":\"value\",\"scale\":{\"type\":\"linear\",\"defaultYExtents\":false},\"labels\":{\"show\":true,\"rotate\":0,\"color\":\"white\"}}]},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"max\",\"schema\":\"metric\",\"params\":{\"field\":\"risk_score\",\"customLabel\":\"Residual Risk Score\"}},{\"id\":\"3\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{},\"customLabel\":\"Date\"}},{\"id\":\"4\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"host\",\"size\":10,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Critical Asset IP\"}},{\"id\":\"5\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"split\",\"params\":{\"field\":\"plugin_name.keyword\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\",\"row\":true}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"colors\":{\"0 - 3\":\"#7EB26D\",\"3 - 7\":\"#EAB839\",\"7 - 9\":\"#EF843C\",\"8 - 10\":\"#BF1B00\",\"9 - 11\":\"#BF1B00\"},\"defaultColors\":{\"0 - 3\":\"rgb(0,104,55)\",\"3 - 7\":\"rgb(135,203,103)\",\"7 - 9\":\"rgb(255,255,190)\",\"9 - 11\":\"rgb(249,142,82)\"},\"legendOpen\":false}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[{\"$state\":{\"store\":\"appState\"},\"meta\":{\"alias\":\"Critical Asset\",\"disabled\":false,\"index\":\"logstash-vulnwhisperer-*\",\"key\":\"tags\",\"negate\":false,\"type\":\"phrase\",\"value\":\"critical_asset\"},\"query\":{\"match\":{\"tags\":{\"query\":\"critical_asset\",\"type\":\"phrase\"}}}}]}"
}
}
},
{
"_id": "099a3820-3f68-11e7-a6bd-e764d950e506",
"_type": "visualization",
"_source": {
"title": "Timelion VulnWhisperer Example",
"visState": "{\"type\":\"timelion\",\"title\":\"Timelion VulnWhisperer Example\",\"params\":{\"expression\":\".es(index=logstash-vulnwhisperer-*,q=risk:high).label(\\\"Current High Risk\\\"),.es(index=logstash-vulnwhisperer-*,q=risk:high,offset=-1y).label(\\\"Last 1 Year High Risk\\\"),.es(index=logstash-vulnwhisperer-*,q=risk:medium).label(\\\"Current Medium Risk\\\"),.es(index=logstash-vulnwhisperer-*,q=risk:medium,offset=-1y).label(\\\"Last 1 Year Medium Risk\\\")\",\"interval\":\"auto\"}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{}"
}
}
},
{
"_id": "67d432e0-44ec-11e7-a05f-d9719b331a27",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - TL-Critical Risk",
"visState": "{\"title\":\"VulnWhisperer - TL-Critical Risk\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-vulnwhisperer-*',q='(risk_score:>=9 AND risk_score:<=10)').label(\\\"Original\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk_score:>=9 AND risk_score:<=10)',offset=-1w).label(\\\"One week offset\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk_score:>=9 AND risk_score:<=10)').subtract(.es(index='logstash-vulnwhisperer-*',q='(risk_score:>=9 AND risk_score:<=10)',offset=-1w)).label(\\\"Difference\\\").lines(steps=3,fill=2,width=1)\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "a91b9fe0-44ec-11e7-a05f-d9719b331a27",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - TL-Medium Risk",
"visState": "{\"title\":\"VulnWhisperer - TL-Medium Risk\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-vulnwhisperer-*',q='(risk_score:>=4 AND risk_score:<7)').label(\\\"Original\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk_score:>=4 AND risk_score:<7)',offset=-1w).label(\\\"One week offset\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk_score:>=4 AND risk_score:<7)').subtract(.es(index='logstash-vulnwhisperer-*',q='(risk_score:>=4 AND risk_score:<7)',offset=-1w)).label(\\\"Difference\\\").lines(steps=3,fill=2,width=1)\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "8d9592d0-44ec-11e7-a05f-d9719b331a27",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - TL-High Risk",
"visState": "{\"title\":\"VulnWhisperer - TL-High Risk\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-vulnwhisperer-*',q='(risk_score:>=7 AND risk_score:<9)').label(\\\"Original\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk_score:>=7 AND risk_score:<9)',offset=-1w).label(\\\"One week offset\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk_score:>=7 AND risk_score:<9)').subtract(.es(index='logstash-vulnwhisperer-*',q='(risk_score:>=7 AND risk_score:<9)',offset=-1w)).label(\\\"Difference\\\").lines(steps=3,fill=2,width=1)\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "a2d66660-44ec-11e7-a05f-d9719b331a27",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - TL-Low Risk",
"visState": "{\"title\":\"VulnWhisperer - TL-Low Risk\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-vulnwhisperer-*',q='(risk_score:>0 AND risk_score:<4)').label(\\\"Original\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk_score:>0 AND risk_score:<4)',offset=-1w).label(\\\"One week offset\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk_score:>0 AND risk_score:<4)').subtract(.es(index='logstash-vulnwhisperer-*',q='(risk_score:>0 AND risk_score:<4)',offset=-1w)).label(\\\"Difference\\\").lines(steps=3,fill=2,width=1)\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "fb6eb020-49ab-11e7-8f8c-57ad64ec48a6",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Critical Risk Score for Tagged Assets",
"visState": "{\"title\":\"VulnWhisperer - Critical Risk Score for Tagged Assets\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index=logstash-vulnwhisperer-*,q='risk_score:>9 AND tags:hipaa_asset').label(\\\"HIPAA Assets\\\"),.es(index=logstash-vulnwhisperer-*,q='risk_score:>9 AND tags:pci_asset').label(\\\"PCI Systems\\\"),.es(index=logstash-vulnwhisperer-*,q='risk_score:>9 AND tags:has_hipaa_data').label(\\\"Has HIPAA Data\\\")\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "b2f2adb0-897f-11e7-a2d2-c57bca21b3aa",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Risk: Total",
"visState": "{\"title\":\"VulnWhisperer - Risk: Total\",\"type\":\"goal\",\"params\":{\"addLegend\":true,\"addTooltip\":true,\"gauge\":{\"autoExtend\":false,\"backStyle\":\"Full\",\"colorSchema\":\"Green to Red\",\"colorsRange\":[{\"from\":0,\"to\":10000}],\"gaugeColorMode\":\"Background\",\"gaugeStyle\":\"Full\",\"gaugeType\":\"Metric\",\"invertColors\":false,\"labels\":{\"color\":\"black\",\"show\":false},\"orientation\":\"vertical\",\"percentageMode\":false,\"scale\":{\"color\":\"#333\",\"labels\":false,\"show\":true,\"width\":2},\"style\":{\"bgColor\":true,\"bgFill\":\"white\",\"fontSize\":\"34\",\"labelColor\":false,\"subText\":\"Risk\"},\"type\":\"simple\",\"useRanges\":false,\"verticalSplit\":false},\"type\":\"gauge\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"Total\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}}},\"label\":\"Critical\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"colors\":{\"0 - 10000\":\"#64B0C8\"},\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "465c5820-8977-11e7-857e-e1d56b17746d",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Critical Assets",
"visState": "{\"title\":\"VulnWhisperer - Critical Assets\",\"type\":\"heatmap\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"enableHover\":true,\"legendPosition\":\"right\",\"times\":[],\"colorsNumber\":4,\"colorSchema\":\"Green to Red\",\"setColorRange\":true,\"colorsRange\":[{\"from\":0,\"to\":3},{\"from\":3,\"to\":7},{\"from\":7,\"to\":9},{\"from\":9,\"to\":11}],\"invertColors\":false,\"percentageMode\":false,\"valueAxes\":[{\"show\":false,\"id\":\"ValueAxis-1\",\"type\":\"value\",\"scale\":{\"type\":\"linear\",\"defaultYExtents\":false},\"labels\":{\"show\":false,\"rotate\":0,\"color\":\"white\"}}],\"type\":\"heatmap\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"max\",\"schema\":\"metric\",\"params\":{\"field\":\"risk_score\",\"customLabel\":\"Residual Risk Score\"}},{\"id\":\"2\",\"enabled\":false,\"type\":\"terms\",\"schema\":\"split\",\"params\":{\"field\":\"risk_score\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\",\"row\":true}},{\"id\":\"3\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{},\"customLabel\":\"Date\"}},{\"id\":\"4\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"asset.keyword\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Critical Asset\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"defaultColors\":{\"0 - 3\":\"rgb(0,104,55)\",\"3 - 7\":\"rgb(135,203,103)\",\"7 - 9\":\"rgb(255,255,190)\",\"9 - 11\":\"rgb(249,142,82)\"},\"colors\":{\"8 - 10\":\"#BF1B00\",\"9 - 11\":\"#BF1B00\",\"7 - 9\":\"#EF843C\",\"3 - 7\":\"#EAB839\",\"0 - 3\":\"#7EB26D\"},\"legendOpen\":false}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[{\"meta\":{\"index\":\"logstash-vulnwhisperer-*\",\"negate\":false,\"disabled\":false,\"alias\":\"Critical Asset\",\"type\":\"phrase\",\"key\":\"tags\",\"value\":\"critical_asset\"},\"query\":{\"match\":{\"tags\":{\"query\":\"critical_asset\",\"type\":\"phrase\"}}},\"$state\":{\"store\":\"appState\"}}]}"
}
}
},
{
"_id": "852816e0-3eb1-11e7-90cb-918f9cb01e3d",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer-CVSS",
"visState": "{\"title\":\"VulnWhisperer-CVSS\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\",\"type\":\"table\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"cvss.keyword\",\"size\":20,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"CVSS Score\"}},{\"id\":\"4\",\"enabled\":true,\"type\":\"cardinality\",\"schema\":\"metric\",\"params\":{\"field\":\"asset.keyword\",\"customLabel\":\"# of Assets\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":0,\"direction\":\"desc\"}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "d048c220-80b3-11e7-8790-73b60225f736",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Risk: High",
"visState": "{\"title\":\"VulnWhisperer - Risk: High\",\"type\":\"goal\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"type\":\"gauge\",\"gauge\":{\"verticalSplit\":false,\"autoExtend\":false,\"percentageMode\":false,\"gaugeType\":\"Metric\",\"gaugeStyle\":\"Full\",\"backStyle\":\"Full\",\"orientation\":\"vertical\",\"useRanges\":false,\"colorSchema\":\"Green to Red\",\"gaugeColorMode\":\"Background\",\"colorsRange\":[{\"from\":0,\"to\":1000}],\"invertColors\":false,\"labels\":{\"show\":false,\"color\":\"black\"},\"scale\":{\"show\":true,\"labels\":false,\"color\":\"#333\",\"width\":2},\"type\":\"simple\",\"style\":{\"bgFill\":\"white\",\"bgColor\":true,\"labelColor\":false,\"subText\":\"\",\"fontSize\":\"34\"},\"extendRange\":true}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"High Risk\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk_score_name:high\"}}},\"label\":\"\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"defaultColors\":{\"0 - 1000\":\"rgb(0,104,55)\"},\"legendOpen\":true,\"colors\":{\"0 - 10000\":\"#EF843C\",\"0 - 1000\":\"#E0752D\"}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "db55bce0-80b3-11e7-8790-73b60225f736",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Risk: Critical",
"visState": "{\"title\":\"VulnWhisperer - Risk: Critical\",\"type\":\"goal\",\"params\":{\"addLegend\":true,\"addTooltip\":true,\"gauge\":{\"autoExtend\":false,\"backStyle\":\"Full\",\"colorSchema\":\"Green to Red\",\"colorsRange\":[{\"from\":0,\"to\":10000}],\"gaugeColorMode\":\"Background\",\"gaugeStyle\":\"Full\",\"gaugeType\":\"Metric\",\"invertColors\":false,\"labels\":{\"color\":\"black\",\"show\":false},\"orientation\":\"vertical\",\"percentageMode\":false,\"scale\":{\"color\":\"#333\",\"labels\":false,\"show\":true,\"width\":2},\"style\":{\"bgColor\":true,\"bgFill\":\"white\",\"fontSize\":\"34\",\"labelColor\":false,\"subText\":\"Risk\"},\"type\":\"simple\",\"useRanges\":false,\"verticalSplit\":false},\"type\":\"gauge\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"Critical Risk\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk_score_name:critical\"}}},\"label\":\"Critical\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"colors\":{\"0 - 10000\":\"#BF1B00\"},\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "56f0f5f0-3ebe-11e7-a192-93f36fbd9d05",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer-RiskOverTime",
"visState": "{\"title\":\"VulnWhisperer-RiskOverTime\",\"type\":\"line\",\"params\":{\"addLegend\":true,\"addTimeMarker\":false,\"addTooltip\":true,\"categoryAxes\":[{\"id\":\"CategoryAxis-1\",\"labels\":{\"show\":true,\"truncate\":100},\"position\":\"bottom\",\"scale\":{\"type\":\"linear\"},\"show\":true,\"style\":{},\"title\":{\"text\":\"@timestamp per 12 hours\"},\"type\":\"category\"}],\"defaultYExtents\":false,\"drawLinesBetweenPoints\":true,\"grid\":{\"categoryLines\":false,\"style\":{\"color\":\"#eee\"},\"valueAxis\":\"ValueAxis-1\"},\"interpolate\":\"linear\",\"legendPosition\":\"right\",\"orderBucketsBySum\":false,\"radiusRatio\":9,\"scale\":\"linear\",\"seriesParams\":[{\"data\":{\"id\":\"1\",\"label\":\"Count\"},\"drawLinesBetweenPoints\":true,\"interpolate\":\"linear\",\"mode\":\"normal\",\"show\":\"true\",\"showCircles\":true,\"type\":\"line\",\"valueAxis\":\"ValueAxis-1\"}],\"setYExtents\":false,\"showCircles\":true,\"times\":[],\"valueAxes\":[{\"id\":\"ValueAxis-1\",\"labels\":{\"filter\":false,\"rotate\":0,\"show\":true,\"truncate\":100},\"name\":\"LeftAxis-1\",\"position\":\"left\",\"scale\":{\"mode\":\"normal\",\"type\":\"linear\"},\"show\":true,\"style\":{},\"title\":{\"text\":\"Count\"},\"type\":\"value\"}],\"type\":\"line\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{}}},{\"id\":\"3\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk_score_name:info\"}}},\"label\":\"Info\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk_score_name:low\"}}},\"label\":\"Low\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk_score_name:medium\"}}},\"label\":\"Medium\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk_score_name:high\"}}},\"label\":\"High\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk_score_name:critical\"}}},\"label\":\"Critical\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"colors\":{\"Critical\":\"#962D82\",\"High\":\"#BF1B00\",\"Low\":\"#629E51\",\"Medium\":\"#EAB839\",\"Info\":\"#65C5DB\"}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "c1361da0-80b3-11e7-8790-73b60225f736",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Risk: Medium",
"visState": "{\"title\":\"VulnWhisperer - Risk: Medium\",\"type\":\"goal\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"type\":\"gauge\",\"gauge\":{\"verticalSplit\":false,\"autoExtend\":false,\"percentageMode\":false,\"gaugeType\":\"Metric\",\"gaugeStyle\":\"Full\",\"backStyle\":\"Full\",\"orientation\":\"vertical\",\"useRanges\":false,\"colorSchema\":\"Green to Red\",\"gaugeColorMode\":\"Background\",\"colorsRange\":[{\"from\":0,\"to\":10000}],\"invertColors\":false,\"labels\":{\"show\":false,\"color\":\"black\"},\"scale\":{\"show\":true,\"labels\":false,\"color\":\"#333\",\"width\":2},\"type\":\"simple\",\"style\":{\"bgFill\":\"white\",\"bgColor\":true,\"labelColor\":false,\"subText\":\"\",\"fontSize\":\"34\"},\"extendRange\":false},\"isDisplayWarning\":false},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"Medium Risk\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk_score_name:medium\"}}},\"label\":\"Medium Risk\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":true,\"colors\":{\"0 - 10000\":\"#EAB839\"}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "e46ff7f0-897d-11e7-934b-67cec0a7da65",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Risk: Low",
"visState": "{\"title\":\"VulnWhisperer - Risk: Low\",\"type\":\"goal\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"type\":\"gauge\",\"gauge\":{\"verticalSplit\":false,\"autoExtend\":false,\"percentageMode\":false,\"gaugeType\":\"Metric\",\"gaugeStyle\":\"Full\",\"backStyle\":\"Full\",\"orientation\":\"vertical\",\"useRanges\":false,\"colorSchema\":\"Green to Red\",\"gaugeColorMode\":\"Background\",\"colorsRange\":[{\"from\":0,\"to\":10000}],\"invertColors\":false,\"labels\":{\"show\":false,\"color\":\"black\"},\"scale\":{\"show\":true,\"labels\":false,\"color\":\"#333\",\"width\":2},\"type\":\"simple\",\"style\":{\"bgFill\":\"white\",\"bgColor\":true,\"labelColor\":false,\"subText\":\"\",\"fontSize\":\"34\"},\"extendRange\":false}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"Low Risk\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk_score_name:low\"}}},\"label\":\"Low Risk\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":true,\"colors\":{\"0 - 10000\":\"#629E51\"}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "995e2280-3df3-11e7-a44e-c79ca8efb780",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer-Asset",
"visState": "{\"title\":\"VulnWhisperer-Asset\",\"type\":\"table\",\"params\":{\"perPage\":15,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\",\"type\":\"table\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"asset.keyword\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Asset\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
}
]
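These saved objects use the Kibana 5.x/6.x export format and can be re-imported through Kibana's Management > Saved Objects > Import dialog (assuming a matching Kibana version).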


@ -0,0 +1,43 @@
[
{
"_id": "72051530-448e-11e7-a818-f5f80dfc3590",
"_type": "dashboard",
"_source": {
"title": "VulnWhisperer - Reporting",
"hits": 0,
"description": "",
"panelsJSON": "[{\"col\":1,\"id\":\"2f979030-44b9-11e7-a818-f5f80dfc3590\",\"panelIndex\":5,\"row\":12,\"size_x\":6,\"size_y\":4,\"type\":\"visualization\"},{\"col\":1,\"id\":\"8d9592d0-44ec-11e7-a05f-d9719b331a27\",\"panelIndex\":12,\"row\":8,\"size_x\":6,\"size_y\":4,\"type\":\"visualization\"},{\"col\":7,\"id\":\"67d432e0-44ec-11e7-a05f-d9719b331a27\",\"panelIndex\":14,\"row\":4,\"size_x\":6,\"size_y\":4,\"type\":\"visualization\"},{\"col\":10,\"id\":\"297df800-3f7e-11e7-bd24-6903e3283192\",\"panelIndex\":15,\"row\":8,\"size_x\":3,\"size_y\":4,\"type\":\"visualization\"},{\"col\":7,\"id\":\"471a3580-3f6b-11e7-88e7-df1abe6547fb\",\"panelIndex\":20,\"row\":8,\"size_x\":3,\"size_y\":4,\"type\":\"visualization\"},{\"col\":11,\"id\":\"995e2280-3df3-11e7-a44e-c79ca8efb780\",\"panelIndex\":22,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":9,\"id\":\"b2f2adb0-897f-11e7-a2d2-c57bca21b3aa\",\"panelIndex\":23,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":7,\"id\":\"db55bce0-80b3-11e7-8790-73b60225f736\",\"panelIndex\":25,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":5,\"id\":\"d048c220-80b3-11e7-8790-73b60225f736\",\"panelIndex\":26,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":1,\"id\":\"e46ff7f0-897d-11e7-934b-67cec0a7da65\",\"panelIndex\":27,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":3,\"id\":\"c1361da0-80b3-11e7-8790-73b60225f736\",\"panelIndex\":28,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":1,\"id\":\"479deab0-8a39-11e7-a58a-9bfcb3761a3d\",\"panelIndex\":29,\"row\":4,\"size_x\":6,\"size_y\":4,\"type\":\"visualization\"}]",
"optionsJSON": "{\"darkTheme\":false}",
"uiStateJSON": "{\"P-15\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-20\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-21\":{\"vis\":{\"defaultColors\":{\"0 - 100\":\"rgb(0,104,55)\"}}},\"P-22\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-23\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-24\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-25\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-26\":{\"vis\":{\"defaultColors\":{\"0 - 1000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-27\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-28\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-5\":{\"vis\":{\"legendOpen\":false}}}",
"version": 1,
"timeRestore": true,
"timeTo": "now",
"timeFrom": "now-1y",
"refreshInterval": {
"display": "Off",
"pause": false,
"value": 0
},
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[{\"query\":{\"match_all\":{}}}],\"highlightAll\":true,\"version\":true}"
}
}
},
{
"_id": "AWCUqesWib22Ai8JwW3u",
"_type": "dashboard",
"_source": {
"title": "VulnWhisperer - Risk Mitigation",
"hits": 0,
"description": "",
"panelsJSON": "[{\"col\":11,\"id\":\"995e2280-3df3-11e7-a44e-c79ca8efb780\",\"panelIndex\":20,\"row\":8,\"size_x\":2,\"size_y\":6,\"type\":\"visualization\"},{\"col\":1,\"id\":\"852816e0-3eb1-11e7-90cb-918f9cb01e3d\",\"panelIndex\":21,\"row\":10,\"size_x\":3,\"size_y\":5,\"type\":\"visualization\"},{\"col\":4,\"id\":\"297df800-3f7e-11e7-bd24-6903e3283192\",\"panelIndex\":27,\"row\":8,\"size_x\":3,\"size_y\":5,\"type\":\"visualization\"},{\"col\":9,\"id\":\"35b6d320-3f7f-11e7-bd24-6903e3283192\",\"panelIndex\":28,\"row\":8,\"size_x\":2,\"size_y\":6,\"type\":\"visualization\"},{\"col\":11,\"id\":\"471a3580-3f6b-11e7-88e7-df1abe6547fb\",\"panelIndex\":30,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":7,\"id\":\"de1a5f40-3f85-11e7-97f9-3777d794626d\",\"panelIndex\":31,\"row\":8,\"size_x\":2,\"size_y\":5,\"type\":\"visualization\"},{\"col\":10,\"id\":\"5093c620-44e9-11e7-8014-ede06a7e69f8\",\"panelIndex\":37,\"row\":4,\"size_x\":3,\"size_y\":4,\"type\":\"visualization\"},{\"col\":1,\"columns\":[\"host\",\"risk\",\"risk_score\",\"cve\",\"plugin_name\",\"solution\",\"plugin_output\"],\"id\":\"54648700-3f74-11e7-852e-69207a3d0726\",\"panelIndex\":38,\"row\":15,\"size_x\":12,\"size_y\":6,\"sort\":[\"@timestamp\",\"desc\"],\"type\":\"search\"},{\"col\":1,\"id\":\"fb6eb020-49ab-11e7-8f8c-57ad64ec48a6\",\"panelIndex\":39,\"row\":8,\"size_x\":3,\"size_y\":2,\"type\":\"visualization\"},{\"col\":5,\"id\":\"465c5820-8977-11e7-857e-e1d56b17746d\",\"panelIndex\":40,\"row\":4,\"size_x\":5,\"size_y\":4,\"type\":\"visualization\"},{\"col\":1,\"id\":\"56f0f5f0-3ebe-11e7-a192-93f36fbd9d05\",\"panelIndex\":46,\"row\":4,\"size_x\":4,\"size_y\":4,\"type\":\"visualization\"},{\"col\":1,\"id\":\"e46ff7f0-897d-11e7-934b-67cec0a7da65\",\"panelIndex\":47,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":3,\"id\":\"c1361da0-80b3-11e7-8790-73b60225f736\",\"panelIndex\":48,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":5,\"id\":\"d048c220-80b3-11e7-8790-73b60225f736\",\"panelIndex\":49,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":7,\"id\":\"db55bce0-80b3-11e7-8790-73b60225f736\",\"panelIndex\":50,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":9,\"id\":\"b2f2adb0-897f-11e7-a2d2-c57bca21b3aa\",\"panelIndex\":51,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"}]",
"optionsJSON": "{\"darkTheme\":false}",
"uiStateJSON": "{\"P-11\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-2\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-20\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-21\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":0,\"direction\":\"desc\"}}}},\"P-27\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-28\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":0,\"direction\":\"desc\"}}}},\"P-3\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":0,\"direction\":\"asc\"}}}},\"P-30\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-31\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-40\":{\"vis\":{\"defaultColors\":{\"0 - 3\":\"rgb(0,104,55)\",\"3 - 7\":\"rgb(135,203,103)\",\"7 - 9\":\"rgb(255,255,190)\",\"9 - 11\":\"rgb(249,142,82)\"}}},\"P-41\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-42\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-43\":{\"vis\":{\"defaultColors\":{\"0 - 1000\":\"rgb(0,104,55)\"}}},\"P-44\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-45\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-46\":{\"vis\":{\"legendOpen\":true}},\"P-47\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-48\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-49\":{\"vis\":{\"defaultColors\":{\"0 - 1000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-5\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-50\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-51\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-6\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-8\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}}",
"version": 1,
"timeRestore": false,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[{\"query\":{\"match_all\":{}}}],\"highlightAll\":true,\"version\":true}"
}
}
}
]

@@ -0,0 +1,170 @@
[
{
"_id": "AWCUo-jRib22Ai8JwW1N",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Risk: High Qualys Scoring",
"visState": "{\"title\":\"VulnWhisperer - Risk: High Qualys Scoring\",\"type\":\"goal\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"type\":\"gauge\",\"gauge\":{\"verticalSplit\":false,\"autoExtend\":false,\"percentageMode\":false,\"gaugeType\":\"Metric\",\"gaugeStyle\":\"Full\",\"backStyle\":\"Full\",\"orientation\":\"vertical\",\"useRanges\":false,\"colorSchema\":\"Green to Red\",\"gaugeColorMode\":\"Background\",\"colorsRange\":[{\"from\":0,\"to\":1000}],\"invertColors\":false,\"labels\":{\"show\":false,\"color\":\"black\"},\"scale\":{\"show\":true,\"labels\":false,\"color\":\"#333\",\"width\":2},\"type\":\"simple\",\"style\":{\"bgFill\":\"white\",\"bgColor\":true,\"labelColor\":false,\"subText\":\"\",\"fontSize\":\"34\"},\"extendRange\":true}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"High Risk\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk:high\"}}},\"label\":\"\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"defaultColors\":{\"0 - 1000\":\"rgb(0,104,55)\"},\"legendOpen\":true,\"colors\":{\"0 - 10000\":\"#EF843C\",\"0 - 1000\":\"#E0752D\"}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "AWCUozGBib22Ai8JwW1B",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Risk: Medium Qualys Scoring",
"visState": "{\"title\":\"VulnWhisperer - Risk: Medium Qualys Scoring\",\"type\":\"goal\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"type\":\"gauge\",\"gauge\":{\"verticalSplit\":false,\"autoExtend\":false,\"percentageMode\":false,\"gaugeType\":\"Metric\",\"gaugeStyle\":\"Full\",\"backStyle\":\"Full\",\"orientation\":\"vertical\",\"useRanges\":false,\"colorSchema\":\"Green to Red\",\"gaugeColorMode\":\"Background\",\"colorsRange\":[{\"from\":0,\"to\":10000}],\"invertColors\":false,\"labels\":{\"show\":false,\"color\":\"black\"},\"scale\":{\"show\":true,\"labels\":false,\"color\":\"#333\",\"width\":2},\"type\":\"simple\",\"style\":{\"bgFill\":\"white\",\"bgColor\":true,\"labelColor\":false,\"subText\":\"\",\"fontSize\":\"34\"},\"extendRange\":false}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"Medium Risk\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk:medium\"}}},\"label\":\"Medium Risk\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":true,\"colors\":{\"0 - 10000\":\"#EAB839\"}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "AWCUpE3Kib22Ai8JwW1c",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Risk: Critical Qualys Scoring",
"visState": "{\"title\":\"VulnWhisperer - Risk: Critical Qualys Scoring\",\"type\":\"goal\",\"params\":{\"addLegend\":true,\"addTooltip\":true,\"gauge\":{\"autoExtend\":false,\"backStyle\":\"Full\",\"colorSchema\":\"Green to Red\",\"colorsRange\":[{\"from\":0,\"to\":10000}],\"gaugeColorMode\":\"Background\",\"gaugeStyle\":\"Full\",\"gaugeType\":\"Metric\",\"invertColors\":false,\"labels\":{\"color\":\"black\",\"show\":false},\"orientation\":\"vertical\",\"percentageMode\":false,\"scale\":{\"color\":\"#333\",\"labels\":false,\"show\":true,\"width\":2},\"style\":{\"bgColor\":true,\"bgFill\":\"white\",\"fontSize\":\"34\",\"labelColor\":false,\"subText\":\"Risk\"},\"type\":\"simple\",\"useRanges\":false,\"verticalSplit\":false},\"type\":\"gauge\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"Critical Risk\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk:critical\"}}},\"label\":\"Critical\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"colors\":{\"0 - 10000\":\"#BF1B00\"},\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "AWCUyeHGib22Ai8JwX62",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer-RiskOverTime Qualys Scoring",
"visState": "{\"title\":\"VulnWhisperer-RiskOverTime Qualys Scoring\",\"type\":\"line\",\"params\":{\"addLegend\":true,\"addTimeMarker\":false,\"addTooltip\":true,\"categoryAxes\":[{\"id\":\"CategoryAxis-1\",\"labels\":{\"show\":true,\"truncate\":100},\"position\":\"bottom\",\"scale\":{\"type\":\"linear\"},\"show\":true,\"style\":{},\"title\":{\"text\":\"@timestamp per 12 hours\"},\"type\":\"category\"}],\"defaultYExtents\":false,\"drawLinesBetweenPoints\":true,\"grid\":{\"categoryLines\":false,\"style\":{\"color\":\"#eee\"},\"valueAxis\":\"ValueAxis-1\"},\"interpolate\":\"linear\",\"legendPosition\":\"right\",\"orderBucketsBySum\":false,\"radiusRatio\":9,\"scale\":\"linear\",\"seriesParams\":[{\"data\":{\"id\":\"1\",\"label\":\"Count\"},\"drawLinesBetweenPoints\":true,\"interpolate\":\"linear\",\"mode\":\"normal\",\"show\":\"true\",\"showCircles\":true,\"type\":\"line\",\"valueAxis\":\"ValueAxis-1\"}],\"setYExtents\":false,\"showCircles\":true,\"times\":[],\"valueAxes\":[{\"id\":\"ValueAxis-1\",\"labels\":{\"filter\":false,\"rotate\":0,\"show\":true,\"truncate\":100},\"name\":\"LeftAxis-1\",\"position\":\"left\",\"scale\":{\"mode\":\"normal\",\"type\":\"linear\"},\"show\":true,\"style\":{},\"title\":{\"text\":\"Count\"},\"type\":\"value\"}],\"type\":\"line\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{}}},{\"id\":\"3\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk:info\"}}},\"label\":\"Info\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk:low\"}}},\"label\":\"Low\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk:medium\"}}},\"label\":\"Medium\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk:high\"}}},\"label\":\"High\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk:critical\"}}},\"label\":\"Critical\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"colors\":{\"Critical\":\"#962D82\",\"High\":\"#BF1B00\",\"Low\":\"#629E51\",\"Medium\":\"#EAB839\",\"Info\":\"#65C5DB\"}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[]}"
}
}
},
{
"_id": "AWCUos-Fib22Ai8JwW0y",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Risk: Low Qualys Scoring",
"visState": "{\"title\":\"VulnWhisperer - Risk: Low Qualys Scoring\",\"type\":\"goal\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"type\":\"gauge\",\"gauge\":{\"verticalSplit\":false,\"autoExtend\":false,\"percentageMode\":false,\"gaugeType\":\"Metric\",\"gaugeStyle\":\"Full\",\"backStyle\":\"Full\",\"orientation\":\"vertical\",\"useRanges\":false,\"colorSchema\":\"Green to Red\",\"gaugeColorMode\":\"Background\",\"colorsRange\":[{\"from\":0,\"to\":10000}],\"invertColors\":false,\"labels\":{\"show\":false,\"color\":\"black\"},\"scale\":{\"show\":true,\"labels\":false,\"color\":\"#333\",\"width\":2},\"type\":\"simple\",\"style\":{\"bgFill\":\"white\",\"bgColor\":true,\"labelColor\":false,\"subText\":\"\",\"fontSize\":\"34\"},\"extendRange\":false}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"Low Risk\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"risk:low\"}}},\"label\":\"Low Risk\"}]}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":true,\"colors\":{\"0 - 10000\":\"#629E51\"}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "AWCg9Wsfib22Ai8Jww3v",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Qualys: Category Description",
"visState": "{\"title\":\"VulnWhisperer - Qualys: Category Description\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\",\"type\":\"table\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"category_description.keyword\",\"size\":20,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Category Description\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"match_all\":{}},\"filter\":[]}"
}
}
},
{
"_id": "AWCg88f1ib22Ai8Jww3C",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - QualysOS",
"visState": "{\"title\":\"VulnWhisperer - QualysOS\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\",\"type\":\"table\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"operating_system.keyword\",\"size\":20,\"order\":\"desc\",\"orderBy\":\"1\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"match_all\":{}},\"filter\":[]}"
}
}
},
{
"_id": "AWCg9JUAib22Ai8Jww3Y",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - QualysOwner",
"visState": "{\"title\":\"VulnWhisperer - QualysOwner\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\",\"type\":\"table\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"owner.keyword\",\"size\":20,\"order\":\"desc\",\"orderBy\":\"1\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"match_all\":{}},\"filter\":[]}"
}
}
},
{
"_id": "AWCg9tE6ib22Ai8Jww4R",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Qualys: Impact",
"visState": "{\"title\":\"VulnWhisperer - Qualys: Impact\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\",\"type\":\"table\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"impact.keyword\",\"size\":20,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Impact\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"match_all\":{}},\"filter\":[]}"
}
}
},
{
"_id": "AWCg9igvib22Ai8Jww36",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - Qualys: Level",
"visState": "{\"title\":\"VulnWhisperer - Qualys: Level\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMeticsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\",\"type\":\"table\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"level.keyword\",\"size\":20,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Level\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"match_all\":{}},\"filter\":[]}"
}
}
},
{
"_id": "AWCUsp_3ib22Ai8JwW7R",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - TL-Critical Risk Qualys Scoring",
"visState": "{\"title\":\"VulnWhisperer - TL-Critical Risk Qualys Scoring\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-vulnwhisperer-*',q='(risk:critical)').label(\\\"Original\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk:critical)',offset=-1w).label(\\\"One week offset\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk:critical)').subtract(.es(index='logstash-vulnwhisperer-*',q='(risk:critical)',offset=-1w)).label(\\\"Difference\\\").lines(steps=3,fill=2,width=1)\",\"interval\":\"auto\",\"type\":\"timelion\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "AWCUtHETib22Ai8JwW79",
"_type": "visualization",
"_source": {
"title": "VulnWhisperer - TL-High Risk Qualys Scoring",
"visState": "{\"title\":\"VulnWhisperer - TL-High Risk Qualys Scoring\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(index='logstash-vulnwhisperer-*',q='(risk:high)').label(\\\"Original\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk:high)',offset=-1w).label(\\\"One week offset\\\"),.es(index='logstash-vulnwhisperer-*',q='(risk:high)').subtract(.es(index='logstash-vulnwhisperer-*',q='(risk:high)',offset=-1w)).label(\\\"Difference\\\").lines(steps=3,fill=2,width=1)\",\"interval\":\"auto\",\"type\":\"timelion\"},\"aggs\":[],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
}
]

@@ -0,0 +1,50 @@
[
{
"_id": "AWCUrIBqib22Ai8JwW43",
"_type": "dashboard",
"_source": {
"title": "VulnWhisperer - Reporting Qualys Scoring",
"hits": 0,
"description": "",
"panelsJSON": "[{\"col\":1,\"id\":\"2f979030-44b9-11e7-a818-f5f80dfc3590\",\"panelIndex\":5,\"row\":11,\"size_x\":6,\"size_y\":4,\"type\":\"visualization\"},{\"col\":10,\"id\":\"297df800-3f7e-11e7-bd24-6903e3283192\",\"panelIndex\":15,\"row\":7,\"size_x\":3,\"size_y\":4,\"type\":\"visualization\"},{\"col\":7,\"id\":\"471a3580-3f6b-11e7-88e7-df1abe6547fb\",\"panelIndex\":20,\"row\":7,\"size_x\":3,\"size_y\":4,\"type\":\"visualization\"},{\"col\":11,\"id\":\"995e2280-3df3-11e7-a44e-c79ca8efb780\",\"panelIndex\":22,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":9,\"id\":\"b2f2adb0-897f-11e7-a2d2-c57bca21b3aa\",\"panelIndex\":23,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":1,\"id\":\"479deab0-8a39-11e7-a58a-9bfcb3761a3d\",\"panelIndex\":29,\"row\":4,\"size_x\":6,\"size_y\":4,\"type\":\"visualization\"},{\"size_x\":6,\"size_y\":3,\"panelIndex\":30,\"type\":\"visualization\",\"id\":\"AWCUtHETib22Ai8JwW79\",\"col\":1,\"row\":8},{\"size_x\":6,\"size_y\":3,\"panelIndex\":31,\"type\":\"visualization\",\"id\":\"AWCUsp_3ib22Ai8JwW7R\",\"col\":7,\"row\":4},{\"size_x\":2,\"size_y\":3,\"panelIndex\":33,\"type\":\"visualization\",\"id\":\"AWCUozGBib22Ai8JwW1B\",\"col\":3,\"row\":1},{\"size_x\":2,\"size_y\":3,\"panelIndex\":34,\"type\":\"visualization\",\"id\":\"AWCUo-jRib22Ai8JwW1N\",\"col\":5,\"row\":1},{\"size_x\":2,\"size_y\":3,\"panelIndex\":35,\"type\":\"visualization\",\"id\":\"AWCUpE3Kib22Ai8JwW1c\",\"col\":7,\"row\":1},{\"size_x\":2,\"size_y\":3,\"panelIndex\":36,\"type\":\"visualization\",\"id\":\"AWCUos-Fib22Ai8JwW0y\",\"col\":1,\"row\":1}]",
"optionsJSON": "{\"darkTheme\":false}",
"uiStateJSON": "{\"P-15\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-20\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-21\":{\"vis\":{\"defaultColors\":{\"0 - 100\":\"rgb(0,104,55)\"}}},\"P-22\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-23\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-24\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-5\":{\"vis\":{\"legendOpen\":false}},\"P-33\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-34\":{\"vis\":{\"defaultColors\":{\"0 - 1000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-35\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-27\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-28\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-26\":{\"vis\":{\"defaultColors\":{\"0 - 1000\":\"rgb(0,104,55)\"}}},\"P-25\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-32\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-36\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}}}",
"version": 1,
"timeRestore": true,
"timeTo": "now",
"timeFrom": "now-30d",
"refreshInterval": {
"display": "Off",
"pause": false,
"value": 0
},
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[{\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"-vulnerability_category:\\\"INFORMATION_GATHERED\\\"\"}}}],\"highlightAll\":true,\"version\":true}"
}
}
},
{
"_id": "5dba30c0-3df3-11e7-a44e-c79ca8efb780",
"_type": "dashboard",
"_source": {
"title": "VulnWhisperer - Risk Mitigation Qualys Web Scoring",
"hits": 0,
"description": "",
"panelsJSON": "[{\"col\":11,\"id\":\"995e2280-3df3-11e7-a44e-c79ca8efb780\",\"panelIndex\":20,\"row\":8,\"size_x\":2,\"size_y\":7,\"type\":\"visualization\"},{\"col\":1,\"id\":\"852816e0-3eb1-11e7-90cb-918f9cb01e3d\",\"panelIndex\":21,\"row\":10,\"size_x\":3,\"size_y\":5,\"type\":\"visualization\"},{\"col\":4,\"id\":\"297df800-3f7e-11e7-bd24-6903e3283192\",\"panelIndex\":27,\"row\":8,\"size_x\":3,\"size_y\":4,\"type\":\"visualization\"},{\"col\":9,\"id\":\"35b6d320-3f7f-11e7-bd24-6903e3283192\",\"panelIndex\":28,\"row\":8,\"size_x\":2,\"size_y\":7,\"type\":\"visualization\"},{\"col\":11,\"id\":\"471a3580-3f6b-11e7-88e7-df1abe6547fb\",\"panelIndex\":30,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":7,\"id\":\"de1a5f40-3f85-11e7-97f9-3777d794626d\",\"panelIndex\":31,\"row\":8,\"size_x\":2,\"size_y\":4,\"type\":\"visualization\"},{\"col\":10,\"id\":\"5093c620-44e9-11e7-8014-ede06a7e69f8\",\"panelIndex\":37,\"row\":4,\"size_x\":3,\"size_y\":4,\"type\":\"visualization\"},{\"col\":1,\"columns\":[\"host\",\"risk\",\"risk_score\",\"cve\",\"plugin_name\",\"solution\",\"plugin_output\"],\"id\":\"54648700-3f74-11e7-852e-69207a3d0726\",\"panelIndex\":38,\"row\":15,\"size_x\":12,\"size_y\":6,\"sort\":[\"@timestamp\",\"desc\"],\"type\":\"search\"},{\"col\":1,\"id\":\"fb6eb020-49ab-11e7-8f8c-57ad64ec48a6\",\"panelIndex\":39,\"row\":8,\"size_x\":3,\"size_y\":2,\"type\":\"visualization\"},{\"col\":5,\"id\":\"465c5820-8977-11e7-857e-e1d56b17746d\",\"panelIndex\":40,\"row\":4,\"size_x\":5,\"size_y\":4,\"type\":\"visualization\"},{\"col\":9,\"id\":\"b2f2adb0-897f-11e7-a2d2-c57bca21b3aa\",\"panelIndex\":45,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":1,\"id\":\"AWCUos-Fib22Ai8JwW0y\",\"panelIndex\":47,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":3,\"id\":\"AWCUozGBib22Ai8JwW1B\",\"panelIndex\":48,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":5,\"id\":\"AWCUo-jRib22Ai8JwW1N\",\"panelIndex\":49,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":7,\"id\":\"AWCUpE3Kib22Ai8JwW1c\",\"panelIndex\":50,\"row\":1,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"},{\"col\":1,\"id\":\"AWCUyeHGib22Ai8JwX62\",\"panelIndex\":51,\"row\":4,\"size_x\":4,\"size_y\":4,\"type\":\"visualization\"},{\"col\":4,\"id\":\"AWCg88f1ib22Ai8Jww3C\",\"panelIndex\":52,\"row\":12,\"size_x\":3,\"size_y\":3,\"type\":\"visualization\"},{\"col\":7,\"id\":\"AWCg9JUAib22Ai8Jww3Y\",\"panelIndex\":53,\"row\":12,\"size_x\":2,\"size_y\":3,\"type\":\"visualization\"}]",
"optionsJSON": "{\"darkTheme\":false}",
"uiStateJSON": "{\"P-11\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-2\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-20\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-21\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":0,\"direction\":\"desc\"}}}},\"P-27\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-28\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":0,\"direction\":\"desc\"}}}},\"P-3\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":0,\"direction\":\"asc\"}}}},\"P-30\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-31\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-40\":{\"vis\":{\"defaultColors\":{\"0 - 3\":\"rgb(0,104,55)\",\"3 - 7\":\"rgb(135,203,103)\",\"7 - 9\":\"rgb(255,255,190)\",\"9 - 11\":\"rgb(249,142,82)\"}}},\"P-41\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-42\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-43\":{\"vis\":{\"defaultColors\":{\"0 - 1000\":\"rgb(0,104,55)\"}}},\"P-44\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-45\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-47\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-48\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-49\":{\"vis\":{\"defaultColors\":{\"0 - 1000\":\"rgb(0,104,55)\"},\"legendOpen\":false}},\"P-5\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-50\":{\"vis\":{\"defaultColors\":{\"0 - 10000\":\"rgb(0,104,55)\"}}},\"P-6\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-8\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-52\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}},\"P-53\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}}",
"version": 1,
"timeRestore": true,
"timeTo": "now",
"timeFrom": "now-30d",
"refreshInterval": {
"display": "Off",
"pause": false,
"value": 0
},
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[{\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"-vulnerability_category:\\\"INFORMATION_GATHERED\\\"\"}}}],\"highlightAll\":true,\"version\":true}"
}
}
}
]

@@ -0,0 +1,28 @@
[
{
"_id": "54648700-3f74-11e7-852e-69207a3d0726",
"_type": "search",
"_source": {
"title": "VulnWhisperer - Saved Search",
"description": "",
"hits": 0,
"columns": [
"host",
"risk",
"risk_score",
"cve",
"plugin_name",
"solution",
"plugin_output"
],
"sort": [
"@timestamp",
"desc"
],
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-vulnwhisperer-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}},\"filter\":[],\"highlight\":{\"pre_tags\":[\"@kibana-highlighted-field@\"],\"post_tags\":[\"@/kibana-highlighted-field@\"],\"fields\":{\"*\":{}},\"require_field_match\":false,\"fragment_size\":2147483647}}"
}
}
}
]

@@ -0,0 +1,14 @@
input {
beats {
port => 5044
tags => "beats"
}
}
filter {
if [beat][hostname] == "filebeathost" {
mutate {
add_tag => ["nessus"]
}
}
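# A hedged sketch for tagging a second shipper by hostname; "qualysbeathost"
# is an illustrative name, not from the source:
#if [beat][hostname] == "qualysbeathost" {
#  mutate {
#    add_tag => ["qualys"]
#  }
#}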
}

@@ -0,0 +1,220 @@
# Author: Austin Taylor and Justin Henderson
# Email: email@austintaylor.io
# Last Update: 12/20/2017
# Version 0.3
# Description: Take in Nessus reports from VulnWhisperer and pump them into Logstash
input {
file {
path => "/opt/VulnWhisperer/nessus/**/*"
start_position => "beginning"
tags => "nessus"
type => "nessus"
}
file {
path => "/opt/VulnWhisperer/tenable/*.csv"
start_position => "beginning"
tags => "tenable"
type => "tenable"
}
}
filter {
if "nessus" in [tags] or "tenable" in [tags] {
# Drop the CSV header row
if [message] =~ "^Plugin ID" { drop {} }
csv {
# columns => ["plugin_id", "cve", "cvss", "risk", "asset", "protocol", "port", "plugin_name", "synopsis", "description", "solution", "see_also", "plugin_output"]
columns => ["plugin_id", "cve", "cvss", "risk", "asset", "protocol", "port", "plugin_name", "synopsis", "description", "solution", "see_also", "plugin_output", "asset_uuid", "vulnerability_state", "ip", "fqdn", "netbios", "operating_system", "mac_address", "plugin_family", "cvss_base", "cvss_temporal", "cvss_temporal_vector", "cvss_vector", "cvss3_base", "cvss3_temporal", "cvss3_temporal_vector", "cvss3_vector", "system_type", "host_start", "host_end"]
separator => ","
source => "message"
}
ruby {
code => "if event.get('description')
event.set('description', event.get('description').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('synopsis')
event.set('synopsis', event.get('synopsis').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('solution')
event.set('solution', event.get('solution').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('see_also')
event.set('see_also', event.get('see_also').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('plugin_output')
event.set('plugin_output', event.get('plugin_output').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end"
}
# If using Filebeat as your source, replace the "path" field with "source"; a commented sketch follows the grok block below.
grok {
match => { "path" => "(?<scan_name>[a-zA-Z0-9_.\-]+)_%{INT:scan_id}_%{INT:history_id}_%{INT:last_updated}.csv$" }
tag_on_failure => []
}
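# Hedged sketch of the Filebeat variant mentioned above; it assumes Filebeat's
# default "source" field and is otherwise identical to the grok block above:
#grok {
#  match => { "source" => "(?<scan_name>[a-zA-Z0-9_.\-]+)_%{INT:scan_id}_%{INT:history_id}_%{INT:last_updated}.csv$" }
#  tag_on_failure => []
#}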
date {
match => [ "last_updated", "UNIX" ]
target => "@timestamp"
remove_field => ["last_updated"]
}
if [risk] == "None" {
mutate { add_field => { "risk_number" => 0 }}
}
if [risk] == "Low" {
mutate { add_field => { "risk_number" => 1 }}
}
if [risk] == "Medium" {
mutate { add_field => { "risk_number" => 2 }}
}
if [risk] == "High" {
mutate { add_field => { "risk_number" => 3 }}
}
if [risk] == "Critical" {
mutate { add_field => { "risk_number" => 4 }}
}
if ![cve] or [cve] == "nan" {
mutate { remove_field => [ "cve" ] }
}
if ![cvss] or [cvss] == "nan" {
mutate { remove_field => [ "cvss" ] }
}
if ![cvss_base] or [cvss_base] == "nan" {
mutate { remove_field => [ "cvss_base" ] }
}
if ![cvss_temporal] or [cvss_temporal] == "nan" {
mutate { remove_field => [ "cvss_temporal" ] }
}
if ![cvss_temporal_vector] or [cvss_temporal_vector] == "nan" {
mutate { remove_field => [ "cvss_temporal_vector" ] }
}
if ![cvss_vector] or [cvss_vector] == "nan" {
mutate { remove_field => [ "cvss_vector" ] }
}
if ![cvss3_base] or [cvss3_base] == "nan" {
mutate { remove_field => [ "cvss3_base" ] }
}
if ![cvss3_temporal] or [cvss3_temporal] == "nan" {
mutate { remove_field => [ "cvss3_temporal" ] }
}
if ![cvss3_temporal_vector] or [cvss3_temporal_vector] == "nan" {
mutate { remove_field => [ "cvss3_temporal_vector" ] }
}
if ![description] or [description] == "nan" {
mutate { remove_field => [ "description" ] }
}
if ![mac_address] or [mac_address] == "nan" {
mutate { remove_field => [ "mac_address" ] }
}
if ![netbios] or [netbios] == "nan" {
mutate { remove_field => [ "netbios" ] }
}
if ![operating_system] or [operating_system] == "nan" {
mutate { remove_field => [ "operating_system" ] }
}
if ![plugin_output] or [plugin_output] == "nan" {
mutate { remove_field => [ "plugin_output" ] }
}
if ![see_also] or [see_also] == "nan" {
mutate { remove_field => [ "see_also" ] }
}
if ![synopsis] or [synopsis] == "nan" {
mutate { remove_field => [ "synopsis" ] }
}
if ![system_type] or [system_type] == "nan" {
mutate { remove_field => [ "system_type" ] }
}
mutate {
remove_field => [ "message" ]
add_field => { "risk_score" => "%{cvss}" }
}
mutate {
convert => { "risk_score" => "float" }
}
if [risk_score] == 0 {
mutate {
add_field => { "risk_score_name" => "info" }
}
}
if [risk_score] > 0 and [risk_score] < 3 {
mutate {
add_field => { "risk_score_name" => "low" }
}
}
if [risk_score] >= 3 and [risk_score] < 6 {
mutate {
add_field => { "risk_score_name" => "medium" }
}
}
if [risk_score] >= 6 and [risk_score] < 9 {
mutate {
add_field => { "risk_score_name" => "high" }
}
}
if [risk_score] >= 9 {
mutate {
add_field => { "risk_score_name" => "critical" }
}
}
# Compensating controls - adjust risk_score
# Adobe and Java are not allowed to run in the browser unless whitelisted,
# so lower the score by dividing it by 3 (the divisor is a subjective risk judgment)
# Modify and uncomment when ready to use
#if [risk_score] != 0 {
# if [plugin_name] =~ "Adobe" and [risk_score] > 6 or [plugin_name] =~ "Java" and [risk_score] > 6 {
# ruby {
# code => "event.set('risk_score', event.get('risk_score') / 3)"
# }
# mutate {
# add_field => { "compensating_control" => "Adobe and Flash removed from browsers unless whitelisted site." }
# }
# }
#}
# Add tags for reporting based on assets or criticality
if [asset] == "dc01" or [asset] == "dc02" or [asset] == "pki01" or [asset] == "192.168.0.54" or [asset] =~ "^192\.168\.0\." or [asset] =~ "^42.42.42." {
mutate {
add_tag => [ "critical_asset" ]
}
}
#if [asset] =~ "^192\.168\.[45][0-9][0-9]\.1$" or [asset] =~ "^192.168\.[50]\.[0-9]{1,2}\.1$"{
# mutate {
# add_tag => [ "has_hipaa_data" ]
# }
#}
#if [asset] =~ "^192\.168\.[45][0-9][0-9]\." {
# mutate {
# add_tag => [ "hipaa_asset" ]
# }
#}
if [asset] =~ "^hr" {
mutate {
add_tag => [ "pci_asset" ]
}
}
#if [asset] =~ "^10\.0\.50\." {
# mutate {
# add_tag => [ "web_servers" ]
# }
#}
}
}
output {
if "nessus" in [tags] or "tenable" in [tags] or [type] in [ "nessus", "tenable" ] {
# stdout { codec => rubydebug }
elasticsearch {
hosts => [ "localhost:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}

@@ -0,0 +1,153 @@
# Author: Austin Taylor and Justin Henderson
# Email: austin@hasecuritysolutions.com
# Last Update: 12/30/2017
# Version 0.3
# Description: Take in Qualys scan reports (VM and web app) from VulnWhisperer and pump them into Logstash
input {
file {
path => [ "/opt/VulnWhisperer/data/qualys/*.json" , "/opt/VulnWhisperer/data/qualys_web/*.json", "/opt/VulnWhisperer/data/qualys_vuln/*.json" ]
type => json
codec => json
start_position => "beginning"
tags => [ "qualys" ]
}
}
filter {
if "qualys" in [tags] {
grok {
match => { "path" => [ "(?<tags>qualys_vuln)_scan_%{DATA}_%{INT:last_updated}.json$", "(?<tags>qualys_web)_%{INT:app_id}_%{INT:last_updated}.json$" ] }
tag_on_failure => []
}
mutate {
replace => [ "message", "%{message}" ]
#gsub => [
# "message", "\|\|\|", " ",
# "message", "\t\t", " ",
# "message", " ", " ",
# "message", " ", " ",
# "message", " ", " ",
# "message", "nan", " ",
# "message",'\n',''
#]
}
if "qualys_web" in [tags] {
mutate {
add_field => { "asset" => "%{web_application_name}" }
add_field => { "risk_score" => "%{cvss}" }
}
} else if "qualys_vuln" in [tags] {
mutate {
add_field => { "asset" => "%{ip}" }
add_field => { "risk_score" => "%{cvss}" }
}
}
if [risk] == "1" {
mutate { add_field => { "risk_number" => 0 }}
mutate { replace => { "risk" => "info" }}
}
if [risk] == "2" {
mutate { add_field => { "risk_number" => 1 }}
mutate { replace => { "risk" => "low" }}
}
if [risk] == "3" {
mutate { add_field => { "risk_number" => 2 }}
mutate { replace => { "risk" => "medium" }}
}
if [risk] == "4" {
mutate { add_field => { "risk_number" => 3 }}
mutate { replace => { "risk" => "high" }}
}
if [risk] == "5" {
mutate { add_field => { "risk_number" => 4 }}
mutate { replace => { "risk" => "critical" }}
}
mutate {
remove_field => "message"
}
if [first_time_detected] {
date {
match => [ "first_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_detected"
}
}
if [first_time_tested] {
date {
match => [ "first_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_tested"
}
}
if [last_time_detected] {
date {
match => [ "last_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_detected"
}
}
if [last_time_tested] {
date {
match => [ "last_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_tested"
}
}
date {
match => [ "last_updated", "UNIX" ]
target => "@timestamp"
remove_field => "last_updated"
}
mutate {
convert => { "plugin_id" => "integer"}
convert => { "id" => "integer"}
convert => { "risk_number" => "integer"}
convert => { "risk_score" => "float"}
convert => { "total_times_detected" => "integer"}
convert => { "cvss_temporal" => "float"}
convert => { "cvss" => "float"}
}
if [risk_score] == 0 {
mutate {
add_field => { "risk_score_name" => "info" }
}
}
if [risk_score] > 0 and [risk_score] < 3 {
mutate {
add_field => { "risk_score_name" => "low" }
}
}
if [risk_score] >= 3 and [risk_score] < 6 {
mutate {
add_field => { "risk_score_name" => "medium" }
}
}
if [risk_score] >= 6 and [risk_score] < 9 {
mutate {
add_field => { "risk_score_name" => "high" }
}
}
if [risk_score] >= 9 {
mutate {
add_field => { "risk_score_name" => "critical" }
}
}
if [asset] =~ "\.yourdomain\.(com|net)$" {
mutate {
add_tag => [ "critical_asset" ]
}
}
}
}
output {
if "qualys" in [tags] {
stdout { codec => rubydebug }
elasticsearch {
hosts => [ "localhost:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}

@@ -0,0 +1,146 @@
# Author: Austin Taylor and Justin Henderson
# Email: austin@hasecuritysolutions.com
# Last Update: 03/04/2018
# Version 0.3
# Description: Take in OpenVAS scan reports from VulnWhisperer and pump them into Logstash
input {
file {
path => "/opt/VulnWhisperer/openvas/*.json"
type => json
codec => json
start_position => "beginning"
tags => [ "openvas_scan", "openvas" ]
}
}
filter {
if "openvas_scan" in [tags] {
mutate {
replace => [ "message", "%{message}" ]
gsub => [
"message", "\|\|\|", " ",
"message", "\t\t", " ",
"message", " ", " ",
"message", " ", " ",
"message", " ", " ",
"message", "nan", " ",
"message",'\n',''
]
}
grok {
match => { "path" => "openvas_scan_%{DATA:scan_id}_%{INT:last_updated}.json$" }
tag_on_failure => []
}
mutate {
add_field => { "risk_score" => "%{cvss}" }
}
if [risk] == "1" {
mutate { add_field => { "risk_number" => 0 }}
mutate { replace => { "risk" => "info" }}
}
if [risk] == "2" {
mutate { add_field => { "risk_number" => 1 }}
mutate { replace => { "risk" => "low" }}
}
if [risk] == "3" {
mutate { add_field => { "risk_number" => 2 }}
mutate { replace => { "risk" => "medium" }}
}
if [risk] == "4" {
mutate { add_field => { "risk_number" => 3 }}
mutate { replace => { "risk" => "high" }}
}
if [risk] == "5" {
mutate { add_field => { "risk_number" => 4 }}
mutate { replace => { "risk" => "critical" }}
}
mutate {
remove_field => "message"
}
if [first_time_detected] {
date {
match => [ "first_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_detected"
}
}
if [first_time_tested] {
date {
match => [ "first_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_tested"
}
}
if [last_time_detected] {
date {
match => [ "last_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_detected"
}
}
if [last_time_tested] {
date {
match => [ "last_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_tested"
}
}
date {
match => [ "last_updated", "UNIX" ]
target => "@timestamp"
remove_field => "last_updated"
}
mutate {
convert => { "plugin_id" => "integer"}
convert => { "id" => "integer"}
convert => { "risk_number" => "integer"}
convert => { "risk_score" => "float"}
convert => { "total_times_detected" => "integer"}
convert => { "cvss_temporal" => "float"}
convert => { "cvss" => "float"}
}
if [risk_score] == 0 {
mutate {
add_field => { "risk_score_name" => "info" }
}
}
if [risk_score] > 0 and [risk_score] < 3 {
mutate {
add_field => { "risk_score_name" => "low" }
}
}
if [risk_score] >= 3 and [risk_score] < 6 {
mutate {
add_field => { "risk_score_name" => "medium" }
}
}
if [risk_score] >= 6 and [risk_score] < 9 {
mutate {
add_field => { "risk_score_name" => "high" }
}
}
if [risk_score] >= 9 {
mutate {
add_field => { "risk_score_name" => "critical" }
}
}
# Add your critical assets by subnet or by hostname (a hostname-based sketch follows the block below). Comment this block out if you don't want to tag any assets, but note the asset panel will break without the tag.
if [asset] =~ "^10\.0\.100\." {
mutate {
add_tag => [ "critical_asset" ]
}
}
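# A hostname-based variant of the tag above, as mentioned in the comment;
# the "^dc" prefix is illustrative, not from the source:
#if [asset] =~ "^dc" {
#  mutate {
#    add_tag => [ "critical_asset" ]
#  }
#}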
}
}
output {
if "openvas" in [tags] {
stdout { codec => rubydebug }
elasticsearch {
hosts => [ "localhost:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}

@@ -0,0 +1,21 @@
# Description: Take in Jira tickets from VulnWhisperer and pump them into Logstash
input {
file {
path => "/opt/VulnWhisperer/jira/*.json"
type => json
codec => json
start_position => "beginning"
tags => [ "jira" ]
}
}
output {
if "jira" in [tags] {
stdout { codec => rubydebug }
elasticsearch {
hosts => [ "localhost:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}

@@ -0,0 +1,13 @@
input {
rabbitmq {
key => "nessus"
queue => "nessus"
durable => true
exchange => "nessus"
user => "logstash"
password => "yourpassword"
host => "buffer01"
port => 5672
tags => [ "queue_nessus", "rabbitmq" ]
}
}

@@ -0,0 +1,16 @@
output {
if "nessus" in [tags]{
rabbitmq {
key => "nessus"
exchange => "nessus"
exchange_type => "direct"
user => "logstash"
password => "yourbufferpassword"
host => "buffer01"
port => 5672
durable => true
persistent => true
}
}
}

116 resources/elk6/filebeat.yml Normal file
@@ -0,0 +1,116 @@
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.full.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
#=========================== Filebeat prospectors =============================
filebeat.prospectors:
# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.
- input_type: log
# Paths that should be crawled and fetched. Glob based paths.
paths:
# Linux Example
#- /var/log/*.log
#Windows Example
- c:\nessus\My Scans\*
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
#exclude_lines: ["^DBG"]
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
#include_lines: ["^ERR", "^WARN"]
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
#exclude_files: [".gz$"]
# Optional additional fields. These fields can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1
### Multiline options
# Multiline can be used for log messages spanning multiple lines. This is common
# for Java Stack Traces or C-Line Continuation
# The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
#multiline.pattern: ^\[
# Defines if the pattern set under pattern should be negated or not. Default is false.
#multiline.negate: false
# Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
# that was (not) matched before or after, or as long as a pattern is not matched based on negate.
# Note: "after" is the equivalent to "previous" and "before" is the equivalent to "next" in Logstash
#multiline.match: after
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
#================================ Outputs =====================================
# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
# Array of hosts to connect to.
# hosts: ["logstash01:9200"]
# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "elastic"
#password: "changeme"
#----------------------------- Logstash output --------------------------------
output.logstash:
# The Logstash hosts
hosts: ["logstashserver1:5044", "logstashserver2:5044", "logstashserver3:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

52 resources/elk6/init_kibana.sh Executable file
@@ -0,0 +1,52 @@
#!/bin/bash
#kibana_url="localhost:5601"
kibana_url="kibana.local:5601"
elasticsearch_url="elasticsearch.local:9200"
add_saved_objects="curl -s -u elastic:changeme -k -XPOST 'http://"$kibana_url"/api/saved_objects/_bulk_create' -H 'Content-Type: application/json' -H \"kbn-xsrf: true\" -d @"
#Create all saved objects - including index pattern
saved_objects_file="kibana_APIonly.json"
#if [ `curl -I localhost:5601/status | head -n1 |cut -d$' ' -f2` -eq '200' ]; then echo "Loading VulnWhisperer Saved Objects"; eval $(echo $add_saved_objects$saved_objects_file); else echo "waiting for kibana"; fi
until curl -s "$elasticsearch_url/_cluster/health?pretty" | grep '"status"' | grep -qE "green|yellow"; do
curl -s "$elasticsearch_url/_cluster/health?pretty"
echo "Waiting for Elasticsearch..."
sleep 5
done
count=0
until curl -s --fail -XPUT "http://$elasticsearch_url/_template/vulnwhisperer" -H 'Content-Type: application/json' -d '@/opt/index-template.json'; do
echo "Loading VulnWhisperer index template..."
((count++)) && ((count==60)) && break
sleep 1
done
if [[ count -le 60 && $(curl -s -I http://$elasticsearch_url/_template/vulnwhisperer | head -n1 |cut -d$' ' -f2) == "200" ]]; then
echo -e "\n✅ VulnWhisperer index template loaded"
else
echo -e "\n❌ VulnWhisperer index template failed to load"
fi
until [ "`curl -s -I "$kibana_url"/status | head -n1 |cut -d$' ' -f2`" == "200" ]; do
curl -s -I "$kibana_url"/status
echo "Waiting for Kibana..."
sleep 5
done
echo "Loading VulnWhisperer Saved Objects"
echo $add_saved_objects$saved_objects_file
eval $(echo $add_saved_objects$saved_objects_file)
#set "*" as default index
#id_default_index="87f3bcc0-8b37-11e8-83be-afaed4786d8c"
#os.system("curl -X POST -H \"Content-Type: application/json\" -H \"kbn-xsrf: true\" -d '{\"value\":\""+id_default_index+"\"}' http://elastic:changeme@"+kibana_url+"kibana/settings/defaultIndex")
#Create vulnwhisperer index pattern
#index_name = "logstash-vulnwhisperer-*"
#os.system(add_index+index_name+"' '-d{\"attributes\":{\"title\":\""+index_name+"\",\"timeFieldName\":\"@timestamp\"}}'")
#Create jira index pattern, separated for not fill of crap variables the Discover tab by default
#index_name = "logstash-jira-*"
#os.system(add_index+index_name+"' '-d{\"attributes\":{\"title\":\""+index_name+"\",\"timeFieldName\":\"@timestamp\"}}'")
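# A minimal bash sketch of the commented-out Python calls above, assuming the
# Kibana 6.x saved-objects API; the index pattern name comes from the notes above:
#index_name="logstash-vulnwhisperer-*"
#curl -s -XPOST "http://$kibana_url/api/saved_objects/index-pattern/$index_name" \
#  -H 'Content-Type: application/json' -H 'kbn-xsrf: true' \
#  -d "{\"attributes\":{\"title\":\"$index_name\",\"timeFieldName\":\"@timestamp\"}}"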

433 resources/elk6/kibana.json Normal file

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

@@ -0,0 +1,233 @@
{
"index_patterns": "logstash-vulnwhisperer-*",
"mappings": {
"doc": {
"properties": {
"@timestamp": {
"type": "date"
},
"@version": {
"type": "keyword"
},
"asset": {
"type": "text",
"norms": false,
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"asset_uuid": {
"type": "keyword"
},
"assign_ip": {
"type": "ip"
},
"category": {
"type": "keyword"
},
"cve": {
"type": "keyword"
},
"cvss_base": {
"type": "float"
},
"cvss_temporal_vector": {
"type": "keyword"
},
"cvss_temporal": {
"type": "float"
},
"cvss_vector": {
"type": "keyword"
},
"cvss": {
"type": "float"
},
"cvss3_base": {
"type": "float"
},
"cvss3_temporal_vector": {
"type": "keyword"
},
"cvss3_temporal": {
"type": "float"
},
"cvss3_vector": {
"type": "keyword"
},
"cvss3": {
"type": "float"
},
"description": {
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
},
"norms": false,
"type": "text"
},
"dns": {
"type": "keyword"
},
"exploitability": {
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
},
"norms": false,
"type": "text"
},
"fqdn": {
"type": "keyword"
},
"geoip": {
"dynamic": true,
"type": "object",
"properties": {
"ip": {
"type": "ip"
},
"latitude": {
"type": "float"
},
"location": {
"type": "geo_point"
},
"longitude": {
"type": "float"
}
}
},
"history_id": {
"type": "keyword"
},
"host": {
"type": "keyword"
},
"host_end": {
"type": "date"
},
"host_start": {
"type": "date"
},
"impact": {
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
},
"norms": false,
"type": "text"
},
"ip_status": {
"type": "keyword"
},
"ip": {
"type": "ip"
},
"last_updated": {
"type": "date"
},
"operating_system": {
"type": "keyword"
},
"path": {
"type": "keyword"
},
"pci_vuln": {
"type": "keyword"
},
"plugin_family": {
"type": "keyword"
},
"plugin_id": {
"type": "keyword"
},
"plugin_name": {
"type": "keyword"
},
"plugin_output": {
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
},
"norms": false,
"type": "text"
},
"port": {
"type": "integer"
},
"protocol": {
"type": "keyword"
},
"results": {
"type": "text"
},
"risk_number": {
"type": "integer"
},
"risk_score_name": {
"type": "keyword"
},
"risk_score": {
"type": "float"
},
"risk": {
"type": "keyword"
},
"scan_id": {
"type": "keyword"
},
"scan_name": {
"type": "keyword"
},
"scan_reference": {
"type": "keyword"
},
"see_also": {
"type": "keyword"
},
"solution": {
"type": "keyword"
},
"source": {
"type": "keyword"
},
"ssl": {
"type": "keyword"
},
"synopsis": {
"type": "keyword"
},
"system_type": {
"type": "keyword"
},
"tags": {
"type": "keyword"
},
"threat": {
"type": "text"
},
"type": {
"type": "keyword"
},
"vendor_reference": {
"type": "keyword"
},
"vulnerability_state": {
"type": "keyword"
}
}
}
}
}

@@ -0,0 +1,231 @@
{
"index_patterns": "logstash-vulnwhisperer-*",
"mappings": {
"properties": {
"@timestamp": {
"type": "date"
},
"@version": {
"type": "keyword"
},
"asset": {
"type": "text",
"norms": false,
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"asset_uuid": {
"type": "keyword"
},
"assign_ip": {
"type": "ip"
},
"category": {
"type": "keyword"
},
"cve": {
"type": "keyword"
},
"cvss_base": {
"type": "float"
},
"cvss_temporal_vector": {
"type": "keyword"
},
"cvss_temporal": {
"type": "float"
},
"cvss_vector": {
"type": "keyword"
},
"cvss": {
"type": "float"
},
"cvss3_base": {
"type": "float"
},
"cvss3_temporal_vector": {
"type": "keyword"
},
"cvss3_temporal": {
"type": "float"
},
"cvss3_vector": {
"type": "keyword"
},
"cvss3": {
"type": "float"
},
"description": {
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
},
"norms": false,
"type": "text"
},
"dns": {
"type": "keyword"
},
"exploitability": {
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
},
"norms": false,
"type": "text"
},
"fqdn": {
"type": "keyword"
},
"geoip": {
"dynamic": true,
"type": "object",
"properties": {
"ip": {
"type": "ip"
},
"latitude": {
"type": "float"
},
"location": {
"type": "geo_point"
},
"longitude": {
"type": "float"
}
}
},
"history_id": {
"type": "keyword"
},
"host": {
"type": "keyword"
},
"host_end": {
"type": "date"
},
"host_start": {
"type": "date"
},
"impact": {
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
},
"norms": false,
"type": "text"
},
"ip_status": {
"type": "keyword"
},
"ip": {
"type": "ip"
},
"last_updated": {
"type": "date"
},
"operating_system": {
"type": "keyword"
},
"path": {
"type": "keyword"
},
"pci_vuln": {
"type": "keyword"
},
"plugin_family": {
"type": "keyword"
},
"plugin_id": {
"type": "keyword"
},
"plugin_name": {
"type": "keyword"
},
"plugin_output": {
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
},
"norms": false,
"type": "text"
},
"port": {
"type": "integer"
},
"protocol": {
"type": "keyword"
},
"results": {
"type": "text"
},
"risk_number": {
"type": "integer"
},
"risk_score_name": {
"type": "keyword"
},
"risk_score": {
"type": "float"
},
"risk": {
"type": "keyword"
},
"scan_id": {
"type": "keyword"
},
"scan_name": {
"type": "keyword"
},
"scan_reference": {
"type": "keyword"
},
"see_also": {
"type": "keyword"
},
"solution": {
"type": "keyword"
},
"source": {
"type": "keyword"
},
"ssl": {
"type": "keyword"
},
"synopsis": {
"type": "keyword"
},
"system_type": {
"type": "keyword"
},
"tags": {
"type": "keyword"
},
"threat": {
"type": "text"
},
"type": {
"type": "keyword"
},
"vendor_reference": {
"type": "keyword"
},
"vulnerability_state": {
"type": "keyword"
}
}
}
}

@@ -0,0 +1,9 @@
node.name: logstash
path.config: /usr/share/logstash/pipeline/
path.data: /tmp
queue.drain: true
queue.type: persisted
xpack.monitoring.elasticsearch.password: changeme
xpack.monitoring.elasticsearch.url: elasticsearch:9200
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.enabled: false


@ -0,0 +1,182 @@
# Author: Austin Taylor and Justin Henderson
# Email: email@austintaylor.io
# Last Update: 12/20/2017
# Version 0.3
# Description: Takes in Nessus reports from vulnWhisperer and pumps them into logstash
input {
file {
path => "/opt/VulnWhisperer/data/nessus/**/*"
mode => "read"
start_position => "beginning"
file_completed_action => "delete"
tags => "nessus"
}
file {
path => "/opt/VulnWhisperer/data/tenable/*.csv"
mode => "read"
start_position => "beginning"
file_completed_action => "delete"
tags => "tenable"
}
}
filter {
if "nessus" in [tags] or "tenable" in [tags] {
# Drop the header column
if [message] =~ "^Plugin ID" { drop {} }
csv {
# columns => ["plugin_id", "cve", "cvss", "risk", "asset", "protocol", "port", "plugin_name", "synopsis", "description", "solution", "see_also", "plugin_output"]
columns => ["plugin_id", "cve", "cvss", "risk", "asset", "protocol", "port", "plugin_name", "synopsis", "description", "solution", "see_also", "plugin_output", "asset_uuid", "vulnerability_state", "ip", "fqdn", "netbios", "operating_system", "mac_address", "plugin_family", "cvss_base", "cvss_temporal", "cvss_temporal_vector", "cvss_vector", "cvss3_base", "cvss3_temporal", "cvss3_temporal_vector", "cvss3_vector", "system_type", "host_start", "host_end"]
separator => ","
source => "message"
}
ruby {
code => "if event.get('description')
event.set('description', event.get('description').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('synopsis')
event.set('synopsis', event.get('synopsis').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('solution')
event.set('solution', event.get('solution').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('see_also')
event.set('see_also', event.get('see_also').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end
if event.get('plugin_output')
event.set('plugin_output', event.get('plugin_output').gsub(92.chr + 'n', 10.chr).gsub(92.chr + 'r', 13.chr))
end"
}
#If using filebeats as your source, you will need to replace the "path" field to "source"
# Remove when scan name is included in event (current method is error prone)
grok {
match => { "path" => "(?<scan_name>[a-zA-Z0-9_.\-]+)_%{INT:scan_id}_%{INT:history_id}_%{INT:last_updated}.csv$" }
tag_on_failure => []
}
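# For example, a hypothetical Filebeat variant of the grok above would match
# on "source" instead of "path":
# grok {
#   match => { "source" => "(?<scan_name>[a-zA-Z0-9_.\-]+)_%{INT:scan_id}_%{INT:history_id}_%{INT:last_updated}.csv$" }
#   tag_on_failure => []
# }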
# TODO remove when @timestamp is included in event
date {
match => [ "last_updated", "UNIX" ]
target => "@timestamp"
remove_field => ["last_updated"]
}
if [risk] == "None" {
mutate { add_field => { "risk_number" => 0 }}
}
if [risk] == "Low" {
mutate { add_field => { "risk_number" => 1 }}
}
if [risk] == "Medium" {
mutate { add_field => { "risk_number" => 2 }}
}
if [risk] == "High" {
mutate { add_field => { "risk_number" => 3 }}
}
if [risk] == "Critical" {
mutate { add_field => { "risk_number" => 4 }}
}
if ![cve] or [cve] == "nan" {
mutate { remove_field => [ "cve" ] }
}
if ![cvss] or [cvss] == "nan" {
mutate { remove_field => [ "cvss" ] }
}
if ![cvss_base] or [cvss_base] == "nan" {
mutate { remove_field => [ "cvss_base" ] }
}
if ![cvss_temporal] or [cvss_temporal] == "nan" {
mutate { remove_field => [ "cvss_temporal" ] }
}
if ![cvss_temporal_vector] or [cvss_temporal_vector] == "nan" {
mutate { remove_field => [ "cvss_temporal_vector" ] }
}
if ![cvss_vector] or [cvss_vector] == "nan" {
mutate { remove_field => [ "cvss_vector" ] }
}
if ![cvss3_base] or [cvss3_base] == "nan" {
mutate { remove_field => [ "cvss3_base" ] }
}
if ![cvss3_temporal] or [cvss3_temporal] == "nan" {
mutate { remove_field => [ "cvss3_temporal" ] }
}
if ![cvss3_temporal_vector] or [cvss3_temporal_vector] == "nan" {
mutate { remove_field => [ "cvss3_temporal_vector" ] }
}
if ![description] or [description] == "nan" {
mutate { remove_field => [ "description" ] }
}
if ![mac_address] or [mac_address] == "nan" {
mutate { remove_field => [ "mac_address" ] }
}
if ![netbios] or [netbios] == "nan" {
mutate { remove_field => [ "netbios" ] }
}
if ![operating_system] or [operating_system] == "nan" {
mutate { remove_field => [ "operating_system" ] }
}
if ![plugin_output] or [plugin_output] == "nan" {
mutate { remove_field => [ "plugin_output" ] }
}
if ![see_also] or [see_also] == "nan" {
mutate { remove_field => [ "see_also" ] }
}
if ![synopsis] or [synopsis] == "nan" {
mutate { remove_field => [ "synopsis" ] }
}
if ![system_type] or [system_type] == "nan" {
mutate { remove_field => [ "system_type" ] }
}
mutate {
remove_field => [ "message" ]
add_field => { "risk_score" => "%{cvss}" }
}
mutate {
convert => { "risk_score" => "float" }
}
if [risk_score] == 0 {
mutate {
add_field => { "risk_score_name" => "info" }
}
}
if [risk_score] > 0 and [risk_score] < 3 {
mutate {
add_field => { "risk_score_name" => "low" }
}
}
if [risk_score] >= 3 and [risk_score] < 6 {
mutate {
add_field => { "risk_score_name" => "medium" }
}
}
if [risk_score] >=6 and [risk_score] < 9 {
mutate {
add_field => { "risk_score_name" => "high" }
}
}
if [risk_score] >= 9 {
mutate {
add_field => { "risk_score_name" => "critical" }
}
}
}
}
output {
if "nessus" in [tags] or "tenable" in [tags]{
stdout {
codec => dots
}
elasticsearch {
hosts => [ "elasticsearch:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}
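# To validate this pipeline before deploying it (a sketch; the .conf filename
# is hypothetical, as this view does not show it):
#   bin/logstash -f 1000_nessus_process_file.conf --config.test_and_exit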


@ -0,0 +1,160 @@
# Author: Austin Taylor and Justin Henderson
# Email: austin@hasecuritysolutions.com
# Last Update: 12/30/2017
# Version 0.3
# Description: Takes in Qualys scan reports from vulnWhisperer and pumps them into logstash
input {
file {
path => [ "/opt/VulnWhisperer/data/qualys/*.json" , "/opt/VulnWhisperer/data/qualys_web/*.json", "/opt/VulnWhisperer/data/qualys_vuln/*.json"]
type => json
codec => json
start_position => "beginning"
tags => [ "qualys" ]
mode => "read"
file_completed_action => "delete"
}
}
filter {
if "qualys" in [tags] {
grok {
match => { "path" => [ "(?<tags>qualys_vuln)_scan_%{DATA}_%{INT:last_updated}.json$", "(?<tags>qualys_web)_%{INT:app_id}_%{INT:last_updated}.json$" ] }
tag_on_failure => []
}
mutate {
replace => [ "message", "%{message}" ]
#gsub => [
# "message", "\|\|\|", " ",
# "message", "\t\t", " ",
# "message", " ", " ",
# "message", " ", " ",
# "message", " ", " ",
# "message", "nan", " ",
# "message",'\n',''
#]
}
if "qualys_web" in [tags] {
mutate {
add_field => { "asset" => "%{web_application_name}" }
add_field => { "risk_score" => "%{cvss}" }
}
} else if "qualys_vuln" in [tags] {
mutate {
add_field => { "asset" => "%{ip}" }
add_field => { "risk_score" => "%{cvss}" }
}
}
if [risk] == "1" {
mutate { add_field => { "risk_number" => 0 }}
mutate { replace => { "risk" => "info" }}
}
if [risk] == "2" {
mutate { add_field => { "risk_number" => 1 }}
mutate { replace => { "risk" => "low" }}
}
if [risk] == "3" {
mutate { add_field => { "risk_number" => 2 }}
mutate { replace => { "risk" => "medium" }}
}
if [risk] == "4" {
mutate { add_field => { "risk_number" => 3 }}
mutate { replace => { "risk" => "high" }}
}
if [risk] == "5" {
mutate { add_field => { "risk_number" => 4 }}
mutate { replace => { "risk" => "critical" }}
}
mutate {
remove_field => "message"
}
if [first_time_detected] {
date {
match => [ "first_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_detected"
}
}
if [first_time_tested] {
date {
match => [ "first_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_tested"
}
}
if [last_time_detected] {
date {
match => [ "last_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_detected"
}
}
if [last_time_tested] {
date {
match => [ "last_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_tested"
}
}
# TODO remove when @timestamp is included in event
date {
match => [ "last_updated", "UNIX" ]
target => "@timestamp"
remove_field => "last_updated"
}
mutate {
convert => { "plugin_id" => "integer"}
convert => { "id" => "integer"}
convert => { "risk_number" => "integer"}
convert => { "risk_score" => "float"}
convert => { "total_times_detected" => "integer"}
convert => { "cvss_temporal" => "float"}
convert => { "cvss" => "float"}
}
if [risk_score] == 0 {
mutate {
add_field => { "risk_score_name" => "info" }
}
}
if [risk_score] > 0 and [risk_score] < 3 {
mutate {
add_field => { "risk_score_name" => "low" }
}
}
if [risk_score] >= 3 and [risk_score] < 6 {
mutate {
add_field => { "risk_score_name" => "medium" }
}
}
if [risk_score] >=6 and [risk_score] < 9 {
mutate {
add_field => { "risk_score_name" => "high" }
}
}
if [risk_score] >= 9 {
mutate {
add_field => { "risk_score_name" => "critical" }
}
}
if [asset] =~ "\.yourdomain\.(com|net)$" {
mutate {
add_tag => [ "critical_asset" ]
}
}
}
}
output {
if "qualys" in [tags] {
stdout {
codec => dots
}
elasticsearch {
hosts => [ "elasticsearch:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}


@ -0,0 +1,154 @@
# Author: Austin Taylor and Justin Henderson
# Email: austin@hasecuritysolutions.com
# Last Update: 03/04/2018
# Version 0.3
# Description: Takes in OpenVAS scan reports from vulnWhisperer and pumps them into logstash
input {
file {
path => "/opt/VulnWhisperer/data/openvas/*.json"
type => json
codec => json
start_position => "beginning"
tags => [ "openvas_scan", "openvas" ]
mode => "read"
file_completed_action => "delete"
}
}
filter {
if "openvas_scan" in [tags] {
mutate {
replace => [ "message", "%{message}" ]
gsub => [
"message", "\|\|\|", " ",
"message", "\t\t", " ",
"message", " ", " ",
"message", " ", " ",
"message", " ", " ",
"message", "nan", " ",
"message",'\n',''
]
}
grok {
match => { "path" => "openvas_scan_%{DATA:scan_id}_%{INT:last_updated}.json$" }
tag_on_failure => []
}
mutate {
add_field => { "risk_score" => "%{cvss}" }
}
if [risk] == "1" {
mutate { add_field => { "risk_number" => 0 }}
mutate { replace => { "risk" => "info" }}
}
if [risk] == "2" {
mutate { add_field => { "risk_number" => 1 }}
mutate { replace => { "risk" => "low" }}
}
if [risk] == "3" {
mutate { add_field => { "risk_number" => 2 }}
mutate { replace => { "risk" => "medium" }}
}
if [risk] == "4" {
mutate { add_field => { "risk_number" => 3 }}
mutate { replace => { "risk" => "high" }}
}
if [risk] == "5" {
mutate { add_field => { "risk_number" => 4 }}
mutate { replace => { "risk" => "critical" }}
}
mutate {
remove_field => "message"
}
if [first_time_detected] {
date {
match => [ "first_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_detected"
}
}
if [first_time_tested] {
date {
match => [ "first_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "first_time_tested"
}
}
if [last_time_detected] {
date {
match => [ "last_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_detected"
}
}
if [last_time_tested] {
date {
match => [ "last_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
target => "last_time_tested"
}
}
# TODO remove when @timestamp is included in event
date {
match => [ "last_updated", "UNIX" ]
target => "@timestamp"
remove_field => "last_updated"
}
mutate {
convert => { "plugin_id" => "integer"}
convert => { "id" => "integer"}
convert => { "risk_number" => "integer"}
convert => { "risk_score" => "float"}
convert => { "total_times_detected" => "integer"}
convert => { "cvss_temporal" => "float"}
convert => { "cvss" => "float"}
}
if [risk_score] == 0 {
mutate {
add_field => { "risk_score_name" => "info" }
}
}
if [risk_score] > 0 and [risk_score] < 3 {
mutate {
add_field => { "risk_score_name" => "low" }
}
}
if [risk_score] >= 3 and [risk_score] < 6 {
mutate {
add_field => { "risk_score_name" => "medium" }
}
}
if [risk_score] >=6 and [risk_score] < 9 {
mutate {
add_field => { "risk_score_name" => "high" }
}
}
if [risk_score] >= 9 {
mutate {
add_field => { "risk_score_name" => "critical" }
}
}
# Add your critical assets by subnet or by hostname. Comment this block out if you don't want to tag any, but note that the asset dashboard panel will then break.
if [asset] =~ "^10\.0\.100\." {
mutate {
add_tag => [ "critical_asset" ]
}
}
}
}
output {
if "openvas" in [tags] {
stdout {
codec => dots
}
elasticsearch {
hosts => [ "elasticsearch:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}


@ -0,0 +1,25 @@
# Description: Takes in JIRA tickets from vulnWhisperer and pumps them into logstash
input {
file {
path => "/opt/VulnWhisperer/data/jira/*.json"
type => json
codec => json
start_position => "beginning"
mode => "read"
file_completed_action => "delete"
tags => [ "jira" ]
}
}
output {
if "jira" in [tags] {
stdout { codec => rubydebug }
elasticsearch {
hosts => [ "elasticsearch:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
}


@ -0,0 +1,109 @@
[nessus]
enabled=true
hostname=localhost
port=8834
username=nessus_username
password=nessus_password
write_path=/opt/VulnWhisperer/data/nessus/
db_path=/opt/VulnWhisperer/database
trash=false
verbose=true
[tenable]
enabled=true
hostname=cloud.tenable.com
port=443
username=tenable.io_username
password=tenable.io_password
write_path=/opt/VulnWhisperer/data/tenable/
db_path=/opt/VulnWhisperer/data/database
trash=false
verbose=true
[qualys_web]
#Reference https://www.qualys.com/docs/qualys-was-api-user-guide.pdf to find your API hostname
enabled = true
hostname = qualysapi.qg2.apps.qualys.com
username = exampleuser
password = examplepass
write_path=/opt/VulnWhisperer/data/qualys/
db_path=/opt/VulnWhisperer/data/database
verbose=true
# Set the maximum number of retries each connection should attempt.
#Note, this applies only to failed connections and timeouts, never to requests where the server returns a response.
max_retries = 10
# Template ID will need to be retrieved for each document. Please follow the reference guide above for instructions on how to get your template ID.
template_id = 126024
[qualys_vuln]
#Reference https://www.qualys.com/docs/qualys-was-api-user-guide.pdf to find your API hostname
enabled = true
hostname = qualysapi.qg2.apps.qualys.com
username = exampleuser
password = examplepass
write_path=/opt/VulnWhisperer/data/qualys/
db_path=/opt/VulnWhisperer/data/database
verbose=true
# Set the maximum number of retries each connection should attempt.
#Note, this applies only to failed connections and timeouts, never to requests where the server returns a response.
max_retries = 10
# Template ID will need to be retrieved for each document. Please follow the reference guide above for instructions on how to get your template ID.
template_id = 126024
[detectify]
#Reference https://developer.detectify.com/
enabled = false
hostname = api.detectify.com
#username variable used as apiKey
username = exampleuser
#password variable used as secretKey
password = examplepass
write_path =/opt/VulnWhisperer/data/detectify/
db_path = /opt/VulnWhisperer/data/database
verbose = true
[openvas]
enabled = false
hostname = localhost
port = 4000
username = exampleuser
password = examplepass
write_path=/opt/VulnWhisperer/data/openvas/
db_path=/opt/VulnWhisperer/data/database
verbose=true
#[proxy]
; This section is optional. Leave it out if you're not using a proxy.
; You can use environmental variables as well: http://www.python-requests.org/en/latest/user/advanced/#proxies
; proxy_protocol set to https, if not specified.
#proxy_url = proxy.mycorp.com
; proxy_port will override any port specified in proxy_url
#proxy_port = 8080
; proxy authentication
#proxy_username = proxyuser
#proxy_password = proxypass
[jira]
hostname = jira-host
username = username
password = password
write_path = /opt/VulnWhisperer/data/jira/
db_path = /opt/VulnWhisperer/data/database
verbose = true
dns_resolv = False
#Sample JIRA report scan; a section like this is created automatically for existing scans
#[jira.qualys_vuln.test_scan]
#source = qualys_vuln
#scan_name = Test Scan
#jira_project = PROJECT
; if multiple components, separate by "," = None
#components =
; minimum criticality to report (low, medium, high or critical) = None
#min_critical_to_report = high
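# A minimal sketch of running VulnWhisperer against this config (assuming it
# is saved as configs/frameworks_example.ini; -s limits the run to a single
# scanner section):
#   vuln_whisperer -c configs/frameworks_example.ini
#   vuln_whisperer -c configs/frameworks_example.ini -s nessus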


@ -4,7 +4,7 @@ from setuptools import setup, find_packages
setup(
name='VulnWhisperer',
version='1.0.1',
version='1.8',
packages=find_packages(),
url='https://github.com/austin-taylor/vulnwhisperer',
license="""MIT License
@ -26,7 +26,7 @@ setup(
SOFTWARE.""",
author='Austin Taylor',
author_email='email@austintaylor.io',
description='Vulnerability assessment framework aggregator',
description='Vulnerability Assessment Framework Aggregator',
scripts=['bin/vuln_whisperer']
)

tests/data Submodule

Submodule tests/data added at 55dc6832f8

tests/test-docker.sh Executable file

@ -0,0 +1,109 @@
#!/usr/bin/env bash
NORMAL=$(tput sgr0)
GREEN=$(tput setaf 2)
YELLOW=$(tput setaf 3)
RED=$(tput setaf 1)
function red() {
echo -e "$RED$*$NORMAL"
}
function green() {
echo -e "$GREEN$*$NORMAL"
}
function yellow() {
echo -e "$YELLOW$*$NORMAL"
}
return_code=0
elasticsearch_url="localhost:9200"
logstash_url="localhost:9600"
until curl -s "$elasticsearch_url/_cluster/health?pretty" | grep '"status"' | grep -qE "green|yellow"; do
yellow "Waiting for Elasticsearch..."
sleep 5
done
green "✅ Elasticsearch status is green..."
count=0
until [[ $(curl -s "$logstash_url/_node/stats" | jq '.events.out') -ge 1236 ]]; do
yellow "Waiting for Logstash load to finish... $(curl -s "$logstash_url/_node/stats" | jq '.events.out') of 1236 (attempt $count of 60)"
((count++)) && ((count==60)) && break
sleep 5
done
if [[ count -le 60 && $(curl -s "$logstash_url/_node/stats" | jq '.events.out') -ge 1236 ]]; then
green "✅ Logstash load finished..."
else
red "❌ Logstash load didn't complete... $(curl -s "$logstash_url/_node/stats" | jq '.events.out')"
fi
count=0
until [[ $(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_count" | jq '.count') -ge 1232 ]] ; do
yellow "Waiting for Elasticsearch index to sync... $(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_count" | jq '.count') of 1232 logs loaded (attempt $count of 150)"
((count++)) && ((count==150)) && break
sleep 2
done
if [[ count -le 150 && $(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_count" | jq '.count') -ge 1232 ]]; then
green "✅ logstash-vulnwhisperer-2019.03 document count >= 1232"
else
red "❌ TIMED OUT waiting for logstash-vulnwhisperer-2019.03 document count: $(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_count" | jq) != 1232"
fi
# if [[ $(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_count" | jq '.count') == 1232 ]]; then
# green "✅ Passed: logstash-vulnwhisperer-2019.03 document count == 1232"
# else
# red "❌ Failed: logstash-vulnwhisperer-2019.03 document count == 1232 was: $(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_count") instead"
# ((return_code = return_code + 1))
# fi
# Test Nessus plugin_name:Backported Security Patch Detection (FTP)
nessus_doc=$(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_search?q=plugin_name:%22Backported%20Security%20Patch%20Detection%20(FTP)%22%20AND%20asset:176.28.50.164%20AND%20tags:nessus" | jq '.hits.hits[]._source')
if echo $nessus_doc | jq '.risk' | grep -q "None"; then
green "✅ Passed: Nessus risk == None"
else
red "❌ Failed: Nessus risk == None was: $(echo $nessus_doc | jq '.risk') instead"
((return_code = return_code + 1))
fi
# Test Tenable plugin_name:Backported Security Patch Detection (FTP)
tenable_doc=$(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_search?q=plugin_name:%22Backported%20Security%20Patch%20Detection%20(FTP)%22%20AND%20asset:176.28.50.164%20AND%20tags:tenable" | jq '.hits.hits[]._source')
# Test asset
if echo $tenable_doc | jq .asset | grep -q '176.28.50.164'; then
green "✅ Passed: Tenable asset == 176.28.50.164"
else
red "❌ Failed: Tenable asset == 176.28.50.164 was: $(echo $tenable_doc | jq .asset) instead"
((return_code = return_code + 1))
fi
# Test @timestamp
if echo $tenable_doc | jq '.["@timestamp"]' | grep -q '2019-03-30T15:45:44.000Z'; then
green "✅ Passed: Tenable @timestamp == 2019-03-30T15:45:44.000Z"
else
red "❌ Failed: Tenable @timestamp == 2019-03-30T15:45:44.000Z was: $(echo $tenable_doc | jq '.["@timestamp"]') instead"
((return_code = return_code + 1))
fi
# Test Qualys plugin_name:OpenSSL Multiple Remote Security Vulnerabilities
qualys_vuln_doc=$(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_search?q=tags:qualys_vuln%20AND%20ip:%22176.28.50.164%22%20AND%20plugin_name:%22OpenSSL%20Multiple%20Remote%20Security%20Vulnerabilities%22%20AND%20port:465" | jq '.hits.hits[]._source')
# Test @timestamp
if echo $qualys_vuln_doc | jq '.["@timestamp"]' | grep -q '2019-03-30T10:17:41.000Z'; then
green "✅ Passed: Qualys VM @timestamp == 2019-03-30T10:17:41.000Z"
else
red "❌ Failed: Qualys VM @timestamp == 2019-03-30T10:17:41.000Z was: $(echo $qualys_vuln_doc | jq '.["@timestamp"]') instead"
((return_code = return_code + 1))
fi
# Test cvss
if echo $qualys_vuln_doc | jq '.cvss' | grep -q '6.8'; then
green "✅ Passed: Qualys VM cvss == 6.8"
else
red "❌ Failed: Qualys VM cvss == 6.8 was: $(echo $qualys_vuln_doc | jq '.cvss') instead"
((return_code = return_code + 1))
fi
exit $return_code

tests/test-vuln_whisperer.sh Executable file

@ -0,0 +1,97 @@
#!/usr/bin/env bash
NORMAL=$(tput sgr0)
GREEN=$(tput setaf 2)
YELLOW=$(tput setaf 3)
RED=$(tput setaf 1)
function red() {
echo -e "$RED$*$NORMAL"
}
function green() {
echo -e "$GREEN$*$NORMAL"
}
function yellow() {
echo -e "$YELLOW$*$NORMAL"
}
return_code=0
TEST_PATH=${TEST_PATH:-"tests/data"}
yellow "\n*********************************************"
yellow "* Test successful scan download and parsing *"
yellow "*********************************************"
rm -rf /opt/VulnWhisperer/*
if vuln_whisperer -F -c configs/test.ini --mock --mock_dir "${TEST_PATH}"; then
green "\n✅ Passed: Test successful scan download and parsing"
else
red "\n❌ Failed: Test successful scan download and parsing"
((return_code = return_code + 1))
fi
yellow "\n*********************************************"
yellow "* Test run with no scans to import *"
yellow "*********************************************"
if vuln_whisperer -F -c configs/test.ini --mock --mock_dir "${TEST_PATH}"; then
green "\n✅ Passed: Test run with no scans to import"
else
red "\n❌ Failed: Test run with no scans to import"
((return_code = return_code + 1))
fi
yellow "\n*********************************************"
yellow "* Test one failed scan *"
yellow "*********************************************"
rm -rf /opt/VulnWhisperer/*
yellow "Removing ${TEST_PATH}/nessus/GET_scans_exports_164_download"
mv "${TEST_PATH}/nessus/GET_scans_exports_164_download"{,.bak}
if vuln_whisperer -F -c configs/test.ini --mock --mock_dir "${TEST_PATH}"; [[ $? -eq 1 ]]; then
green "\n✅ Passed: Test one failed scan"
else
red "\n❌ Failed: Test one failed scan"
((return_code = return_code + 1))
fi
yellow "\n*********************************************"
yellow "* Test two failed scans *"
yellow "*********************************************"
rm -rf /opt/VulnWhisperer/*
yellow "Removing ${TEST_PATH}/qualys_vuln/scan_1553941061.87241"
mv "${TEST_PATH}/qualys_vuln/scan_1553941061.87241"{,.bak}
if vuln_whisperer -F -c configs/test.ini --mock --mock_dir "${TEST_PATH}"; [[ $? -eq 2 ]]; then
green "\n✅ Passed: Test two failed scans"
else
red "\n❌ Failed: Test two failed scans"
((return_code = return_code + 1))
fi
yellow "\n*********************************************"
yellow "* Test only nessus with one failed scan *"
yellow "*********************************************"
rm -rf /opt/VulnWhisperer/*
if vuln_whisperer -F -c configs/test.ini -s nessus --mock --mock_dir "${TEST_PATH}"; [[ $? -eq 1 ]]; then
green "\n✅ Passed: Test only nessus with one failed scan"
else
red "\n❌ Failed: Test only nessus with one failed scan"
((return_code = return_code + 1))
fi
yellow "*********************************************"
yellow "* Test only Qualys VM with one failed scan *"
yellow "*********************************************"
rm -rf /opt/VulnWhisperer/*
if vuln_whisperer -F -c configs/test.ini -s qualys_vuln --mock --mock_dir "${TEST_PATH}"; [[ $? -eq 1 ]]; then
green "\n✅ Passed: Test only Qualys VM with one failed scan"
else
red "\n❌ Failed: Test only Qualys VM with one failed scan"
((return_code = return_code + 1))
fi
# Restore the removed files
mv "${TEST_PATH}/qualys_vuln/scan_1553941061.87241.bak" "${TEST_PATH}/qualys_vuln/scan_1553941061.87241"
mv "${TEST_PATH}/nessus/GET_scans_exports_164_download.bak" "${TEST_PATH}/nessus/GET_scans_exports_164_download"
exit $return_code


@ -1,8 +1,8 @@
import os
import sys
import logging
# Support for python3
if (sys.version_info > (3, 0)):
if sys.version_info > (3, 0):
import configparser as cp
else:
import ConfigParser as cp
@ -14,9 +14,70 @@ class vwConfig(object):
self.config_in = config_in
self.config = cp.RawConfigParser()
self.config.read(self.config_in)
self.logger = logging.getLogger('vwConfig')
def get(self, section, option):
self.logger.debug('Calling get for {}:{}'.format(section, option))
return self.config.get(section, option)
def getbool(self, section, option):
return self.config.getboolean(section, option)
self.logger.debug('Calling getbool for {}:{}'.format(section, option))
return self.config.getboolean(section, option)
def get_sections_with_attribute(self, attribute):
sections = []
# TODO: does this not also need the "yes" case?
check = ["true", "True", "1"]
for section in self.config.sections():
try:
if self.get(section, attribute) in check:
sections.append(section)
except:
self.logger.warn("Section {} has no option '{}'".format(section, attribute))
return sections
def exists_jira_profiles(self, profiles):
# get list of profiles source_scanner.scan_name
for profile in profiles:
if not self.config.has_section(self.normalize_section(profile)):
self.logger.warn("JIRA Scan Profile missing")
return False
return True
def update_jira_profiles(self, profiles):
# create JIRA profiles in the ini config file
self.logger.debug('Updating Jira profiles: {}'.format(str(profiles)))
for profile in profiles:
#IMPORTANT: profile scan names are normalized to lowercase with "_" instead of spaces for the ini file section
section_name = self.normalize_section(profile)
try:
self.get(section_name, "source")
self.logger.info("Skipping creating of section '{}'; already exists".format(section_name))
except:
self.logger.warn("Creating config section for '{}'".format(section_name))
self.config.add_section(section_name)
self.config.set(section_name, 'source', profile.split('.')[0])
# in case any scan name contains '.' character
self.config.set(section_name, 'scan_name', '.'.join(profile.split('.')[1:]))
self.config.set(section_name, 'jira_project', '')
self.config.set(section_name, '; if multiple components, separate by ","')
self.config.set(section_name, 'components', '')
self.config.set(section_name, '; minimum criticality to report (low, medium, high or critical)')
self.config.set(section_name, 'min_critical_to_report', 'high')
self.config.set(section_name, '; automatically report, boolean value ')
self.config.set(section_name, 'autoreport', 'false')
# TODO: try/catch this
# writing changes back to file
with open(self.config_in, 'w') as configfile:
self.config.write(configfile)
self.logger.debug('Written configuration to {}'.format(self.config_in))
# FIXME: this is the same as return None, that is the default return for return-less functions
return
def normalize_section(self, profile):
profile = "jira.{}".format(profile.lower().replace(" ", "_"))
self.logger.debug('Normalized profile as: {}'.format(profile))
return profile
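# A minimal usage sketch (hypothetical config path; section and option names
# follow the example ini shipped with the project):
#   config = vwConfig('configs/frameworks_example.ini')
#   if config.getbool('nessus', 'enabled'):
#       print(config.get('nessus', 'hostname'))
#   print(config.get_sections_with_attribute('enabled'))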


@ -1,219 +1,184 @@
import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
import pandas as pd
from pandas.io.json import json_normalize
import pytz
from datetime import datetime
import json
import sys
import os
import time
import io
class NessusAPI(object):
SESSION = '/session'
FOLDERS = '/folders'
SCANS = '/scans'
SCAN_ID = SCANS + '/{scan_id}'
HOST_VULN = SCAN_ID + '/hosts/{host_id}'
PLUGINS = HOST_VULN + '/plugins/{plugin_id}'
EXPORT = SCAN_ID + '/export'
EXPORT_TOKEN_DOWNLOAD = '/scans/exports/{token_id}/download'
EXPORT_FILE_DOWNLOAD = EXPORT + '/{file_id}/download'
EXPORT_STATUS = EXPORT + '/{file_id}/status'
EXPORT_HISTORY = EXPORT + '?history_id={history_id}'
def __init__(self, hostname=None, port=None, username=None, password=None, verbose=True):
if username is None or password is None:
raise Exception('ERROR: Missing username or password.')
self.user = username
self.password = password
self.base = 'https://{hostname}:{port}'.format(hostname=hostname, port=port)
self.verbose = verbose
self.headers = {
'Origin': self.base,
'Accept-Encoding': 'gzip, deflate, br',
'Accept-Language': 'en-US,en;q=0.8',
'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.96 Safari/537.36',
'Content-Type': 'application/json',
'Accept': 'application/json, text/javascript, */*; q=0.01',
'Referer': self.base,
'X-Requested-With': 'XMLHttpRequest',
'Connection': 'keep-alive',
'X-Cookie': None
}
self.login()
self.scan_ids = self.get_scan_ids()
def vprint(self, msg):
if self.verbose:
print(msg)
def login(self):
resp = self.get_token()
if resp.status_code is 200:
self.headers['X-Cookie'] = 'token={token}'.format(token=resp.json()['token'])
else:
raise Exception('[FAIL] Could not login to Nessus')
def request(self, url, data=None, headers=None, method='POST', download=False, json=False):
if headers is None:
headers = self.headers
timeout = 0
success = False
url = self.base + url
methods = {'GET': requests.get,
'POST': requests.post,
'DELETE': requests.delete}
while (timeout <= 10) and (not success):
data = methods[method](url, data=data, headers=self.headers, verify=False)
if data.status_code == 401:
try:
self.login()
timeout += 1
self.vprint('[INFO] Token refreshed')
except Exception as e:
self.vprint('[FAIL] Could not refresh token\nReason: %s' % e)
else:
success = True
if json:
data = data.json()
if download:
return data.content
return data
def get_token(self):
auth = '{"username":"%s", "password":"%s"}' % (self.user, self.password)
token = self.request(self.SESSION, data=auth, json=False)
return token
def logout(self):
self.request(self.SESSION, method='DELETE')
def get_folders(self):
folders = self.request(self.FOLDERS, method='GET', json=True)
return folders
def get_scans(self):
scans = self.request(self.SCANS, method='GET', json=True)
return scans
def get_scan_ids(self):
scans = self.get_scans()
scan_ids = [scan_id['id'] for scan_id in scans['scans']]
return scan_ids
def count_scan(self, scans, folder_id):
count = 0
for scan in scans:
if scan['folder_id'] == folder_id: count = count + 1
return count
def print_scans(self, data):
for folder in data['folders']:
print("\\{0} - ({1})\\".format(folder['name'], self.count_scan(data['scans'], folder['id'])))
for scan in data['scans']:
if scan['folder_id'] == folder['id']:
print(
"\t\"{0}\" - sid:{1} - uuid: {2}".format(scan['name'].encode('utf-8'), scan['id'], scan['uuid']))
def get_scan_details(self, scan_id):
data = self.request(self.SCAN_ID.format(scan_id=scan_id), method='GET', json=True)
return data
def get_scan_history(self, scan_id):
data = self.request(self.SCAN_ID.format(scan_id=scan_id), method='GET', json=True)
return data['history']
def get_scan_hosts(self, scan_id):
data = self.request(self.SCAN_ID.format(scan_id=scan_id), method='GET', json=True)
return data['hosts']
def get_host_vulnerabilities(self, scan_id, host_id):
query = self.HOST_VULN.format(scan_id=scan_id, host_id=host_id)
data = self.request(query, method='GET', json=True)
return data
def get_plugin_info(self, scan_id, host_id, plugin_id):
query = self.PLUGINS.format(scan_id=scan_id, host_id=host_id, plugin_id=plugin_id)
data = self.request(query, method='GET', json=True)
return data
def export_scan(self, scan_id, history_id):
data = {'format': 'csv'}
query = self.EXPORT_REPORT.format(scan_id=scan_id, history_id=history_id)
req = self.request(query, data=data, method='POST')
return req
def download_scan(self, scan_id=None, history=None, export_format="", chapters="", dbpasswd=""):
running = True
counter = 0
data = {'format': export_format}
if not history:
query = self.EXPORT.format(scan_id=scan_id)
else:
query = self.EXPORT_HISTORY.format(scan_id=scan_id, history_id=history)
scan_id = str(scan_id)
req = self.request(query, data=json.dumps(data), method='POST', json=True)
try:
file_id = req['file']
token_id = req['token']
except Exception as e:
print("[ERROR] %s" % e)
print('Download for file id ' + str(file_id) + '.')
while running:
time.sleep(2)
counter += 2
report_status = self.request(self.EXPORT_STATUS.format(scan_id=scan_id, file_id=file_id), method='GET',
json=True)
running = report_status['status'] != 'ready'
sys.stdout.write(".")
sys.stdout.flush()
if counter % 60 == 0:
print("")
print("")
content = self.request(self.EXPORT_TOKEN_DOWNLOAD.format(token_id=token_id), method='GET', download=True)
return content
@staticmethod
def merge_dicts(self, *dict_args):
"""
Given any number of dicts, shallow copy and merge into a new dict,
precedence goes to key value pairs in latter dicts.
"""
result = {}
for dictionary in dict_args:
result.update(dictionary)
return result
def get_utc_from_local(self, date_time, local_tz=None, epoch=True):
date_time = datetime.fromtimestamp(date_time)
if local_tz is None:
local_tz = pytz.timezone('US/Central')
else:
local_tz = pytz.timezone(local_tz)
# print date_time
local_time = local_tz.normalize(local_tz.localize(date_time))
local_time = local_time.astimezone(pytz.utc)
if epoch:
naive = local_time.replace(tzinfo=None)
local_time = int((naive - datetime(1970, 1, 1)).total_seconds())
return local_time
def tz_conv(self, tz):
time_map = {'Eastern Standard Time': 'US/Eastern',
'Central Standard Time': 'US/Central',
'Pacific Standard Time': 'US/Pacific',
'None': 'US/Central'}
return time_map.get(tz, None)
import json
import logging
import sys
import time
from datetime import datetime
import pytz
import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
class NessusAPI(object):
SESSION = '/session'
FOLDERS = '/folders'
SCANS = '/scans'
SCAN_ID = SCANS + '/{scan_id}'
HOST_VULN = SCAN_ID + '/hosts/{host_id}'
PLUGINS = HOST_VULN + '/plugins/{plugin_id}'
EXPORT = SCAN_ID + '/export'
EXPORT_TOKEN_DOWNLOAD = '/scans/exports/{token_id}/download'
EXPORT_FILE_DOWNLOAD = EXPORT + '/{file_id}/download'
EXPORT_STATUS = EXPORT + '/{file_id}/status'
EXPORT_HISTORY = EXPORT + '?history_id={history_id}'
def __init__(self, hostname=None, port=None, username=None, password=None, verbose=True, profile=None, access_key=None, secret_key=None):
self.logger = logging.getLogger('NessusAPI')
if verbose:
self.logger.setLevel(logging.DEBUG)
if not all((username, password)) and not all((access_key, secret_key)):
raise Exception('ERROR: Missing username, password or API keys.')
self.profile = profile
self.user = username
self.password = password
self.api_keys = False
self.access_key = access_key
self.secret_key = secret_key
self.base = 'https://{hostname}:{port}'.format(hostname=hostname, port=port)
self.verbose = verbose
self.session = requests.Session()
self.session.verify = False
self.session.stream = True
self.session.headers = {
'Origin': self.base,
'Accept-Encoding': 'gzip, deflate, br',
'Accept-Language': 'en-US,en;q=0.8',
'User-Agent': 'VulnWhisperer for Nessus',
'Content-Type': 'application/json',
'Accept': 'application/json, text/javascript, */*; q=0.01',
'Referer': self.base,
'X-Requested-With': 'XMLHttpRequest',
'Connection': 'keep-alive',
'X-Cookie': None
}
if all((self.access_key, self.secret_key)):
self.logger.debug('Using {} API keys'.format(self.profile))
self.api_keys = True
self.session.headers['X-ApiKeys'] = 'accessKey={}; secretKey={}'.format(self.access_key, self.secret_key)
else:
self.login()
self.scans = self.get_scans()
self.scan_ids = self.get_scan_ids()
def login(self):
auth = '{"username":"%s", "password":"%s"}' % (self.user, self.password)
resp = self.request(self.SESSION, data=auth, json_output=False)
if resp.status_code == 200:
self.session.headers['X-Cookie'] = 'token={token}'.format(token=resp.json()['token'])
else:
raise Exception('[FAIL] Could not login to Nessus')
def request(self, url, data=None, headers=None, method='POST', download=False, json_output=False):
timeout = 0
success = False
method = method.lower()
url = self.base + url
self.logger.debug('Requesting to url {}'.format(url))
while (timeout <= 10) and (not success):
response = getattr(self.session, method)(url, data=data)
if response.status_code == 401:
if url == self.base + self.SESSION:
break
try:
timeout += 1
if self.api_keys:
continue
self.login()
self.logger.info('Token refreshed')
except Exception as e:
self.logger.error('Could not refresh token\nReason: {}'.format(str(e)))
else:
success = True
if json_output:
return response.json()
if download:
self.logger.debug('Returning data.content')
response_data = ''
count = 0
for chunk in response.iter_content(chunk_size=8192):
count += 1
if chunk:
response_data += chunk
self.logger.debug('Processed {} chunks'.format(count))
return response_data
return response
def get_scans(self):
scans = self.request(self.SCANS, method='GET', json_output=True)
return scans
def get_scan_ids(self):
scans = self.scans
scan_ids = [scan_id['id'] for scan_id in scans['scans']] if scans['scans'] else []
self.logger.debug('Found {} scan_ids'.format(len(scan_ids)))
return scan_ids
def get_scan_history(self, scan_id):
data = self.request(self.SCAN_ID.format(scan_id=scan_id), method='GET', json_output=True)
return data['history']
def download_scan(self, scan_id=None, history=None, export_format=""):
running = True
counter = 0
data = {'format': export_format}
if not history:
query = self.EXPORT.format(scan_id=scan_id)
else:
query = self.EXPORT_HISTORY.format(scan_id=scan_id, history_id=history)
scan_id = str(scan_id)
req = self.request(query, data=json.dumps(data), method='POST', json_output=True)
try:
file_id = req['file']
if self.profile == 'nessus':
token_id = req['token'] if 'token' in req else req['temp_token']
except Exception as e:
self.logger.error('{}'.format(str(e)))
self.logger.info('Download for file id {}'.format(str(file_id)))
while running:
time.sleep(2)
counter += 2
report_status = self.request(self.EXPORT_STATUS.format(scan_id=scan_id, file_id=file_id), method='GET',
json_output=True)
running = report_status['status'] != 'ready'
sys.stdout.write(".")
sys.stdout.flush()
# FIXME: why? can this be removed in favour of a counter?
if counter % 60 == 0:
self.logger.info("Completed: {}".format(counter))
self.logger.info("Done: {}".format(counter))
if self.profile == 'tenable' or self.api_keys:
content = self.request(self.EXPORT_FILE_DOWNLOAD.format(scan_id=scan_id, file_id=file_id), method='GET', download=True)
else:
content = self.request(self.EXPORT_TOKEN_DOWNLOAD.format(token_id=token_id), method='GET', download=True)
return content
def get_utc_from_local(self, date_time, local_tz=None, epoch=True):
date_time = datetime.fromtimestamp(date_time)
if local_tz is None:
local_tz = pytz.timezone('UTC')
else:
local_tz = pytz.timezone(local_tz)
local_time = local_tz.normalize(local_tz.localize(date_time))
local_time = local_time.astimezone(pytz.utc)
if epoch:
naive = local_time.replace(tzinfo=None)
local_time = int((naive - datetime(1970, 1, 1)).total_seconds())
self.logger.debug('Converted timestamp {} in datetime {}'.format(date_time, local_time))
return local_time
def tz_conv(self, tz):
time_map = {'Eastern Standard Time': 'US/Eastern',
'Central Standard Time': 'US/Central',
'Pacific Standard Time': 'US/Pacific',
'None': 'US/Central'}
return time_map.get(tz, None)
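# A minimal usage sketch (hypothetical host and credentials; API keys are the
# alternative to username/password handled in __init__ above):
#   api = NessusAPI(hostname='localhost', port=8834, profile='tenable',
#                   access_key='ACCESS_KEY', secret_key='SECRET_KEY')
#   csv_report = api.download_scan(scan_id=api.scan_ids[0], export_format='csv')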


@ -0,0 +1,192 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
__author__ = 'Austin Taylor'
import datetime as dt
import io
import logging
import pandas as pd
import requests
from bs4 import BeautifulSoup
class OpenVAS_API(object):
OMP = '/omp'
def __init__(self,
hostname=None,
port=None,
username=None,
password=None,
report_format_id=None,
verbose=True):
self.logger = logging.getLogger('OpenVAS_API')
if verbose:
self.logger.setLevel(logging.DEBUG)
if username is None or password is None:
raise Exception('ERROR: Missing username or password.')
self.username = username
self.password = password
self.base = 'https://{hostname}:{port}'.format(hostname=hostname, port=port)
self.verbose = verbose
self.processed_reports = 0
self.report_format_id = report_format_id
self.headers = {
'Origin': self.base,
'Accept-Encoding': 'gzip, deflate, br',
'Accept-Language': 'en-US,en;q=0.8',
'User-Agent': 'VulnWhisperer for OpenVAS',
'Content-Type': 'application/x-www-form-urlencoded',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
'Cache-Control': 'max-age=0',
'Referer': self.base,
'X-Requested-With': 'XMLHttpRequest',
'Connection': 'keep-alive',
}
self.login()
self.openvas_reports = self.get_reports()
self.report_formats = self.get_report_formats()
def login(self):
resp = self.get_token()
if resp.status_code == 200:
xml_response = BeautifulSoup(resp.content, 'lxml')
self.token = xml_response.find(attrs={'id': 'gsa-token'}).text
self.cookies = resp.cookies.get_dict()
else:
raise Exception('[FAIL] Could not login to OpenVAS')
def request(self, url, data=None, params=None, headers=None, cookies=None, method='POST', download=False,
json=False):
if headers is None:
headers = self.headers
if cookies is None:
cookies = self.cookies
timeout = 0
success = False
url = self.base + url
methods = {'GET': requests.get,
'POST': requests.post,
'DELETE': requests.delete}
while (timeout <= 10) and (not success):
data = methods[method](url,
data=data,
headers=self.headers,
params=params,
cookies=cookies,
verify=False)
if data.status_code == 401:
try:
self.login()
timeout += 1
self.logger.info(' Token refreshed')
except Exception as e:
self.logger.error('Could not refresh token\nReason: {}'.format(str(e)))
else:
success = True
if json:
data = data.json()
if download:
return data.content
return data
def get_token(self):
data = [
('cmd', 'login'),
('text', '/omp?r=1'),
('login', self.username),
('password', self.password),
]
token = requests.post(self.base + self.OMP, data=data, verify=False)
return token
def get_report_formats(self):
params = (
('cmd', 'get_report_formats'),
('token', self.token)
)
self.logger.info('Retrieving available report formats')
data = self.request(url=self.OMP, method='GET', params=params)
bs = BeautifulSoup(data.content, "lxml")
table_body = bs.find('tbody')
rows = table_body.find_all('tr')
format_mapping = {}
for row in rows:
cols = row.find_all('td')
for x in cols:
for y in x.find_all('a'):
if y.get_text() != '':
format_mapping[y.get_text()] = \
[h.split('=')[1] for h in y['href'].split('&') if 'report_format_id' in h][0]
return format_mapping
def get_reports(self, complete=True):
self.logger.info('Retrieving OpenVAS report data...')
params = (('cmd', 'get_reports'),
('token', self.token),
('max_results', 1),
('ignore_pagination', 1),
('filter', 'apply_overrides=1 min_qod=70 autofp=0 first=1 rows=0 levels=hml sort-reverse=severity'),
)
reports = self.request(self.OMP, params=params, method='GET')
soup = BeautifulSoup(reports.text, 'lxml')
data = []
links = []
table = soup.find('table', attrs={'class': 'gbntable'})
table_body = table.find('tbody')
rows = table_body.find_all('tr')
for row in rows:
cols = row.find_all('td')
links.extend([a['href'] for a in row.find_all('a', href=True) if 'get_report' in str(a)])
cols = [ele.text.strip() for ele in cols]
data.append([ele for ele in cols if ele])
report = pd.DataFrame(data, columns=['date', 'status', 'task', 'scan_severity', 'high', 'medium', 'low', 'log',
'false_pos'])
if report.shape[0] != 0:
report['links'] = links
report['report_ids'] = report.links.str.extract('.*report_id=([a-z-0-9]*)', expand=False)
report['epoch'] = (pd.to_datetime(report['date']) - dt.datetime(1970, 1, 1)).dt.total_seconds().astype(int)
else:
raise Exception("Could not retrieve OpenVAS Reports - Please check your settings and try again")
if complete:
report = report[report.status == 'Done']
severity_extraction = report.scan_severity.str.extract('([0-9.]*) \(([\w]+)\)', expand=False)
severity_extraction.columns = ['scan_highest_severity', 'severity_rate']
report_with_severity = pd.concat([report, severity_extraction], axis=1)
return report_with_severity
def process_report(self, report_id):
params = (
('token', self.token),
('cmd', 'get_report'),
('report_id', report_id),
('filter', 'apply_overrides=0 min_qod=70 autofp=0 levels=hml first=1 rows=0 sort-reverse=severity'),
('ignore_pagination', '1'),
('report_format_id', '{report_format_id}'.format(report_format_id=self.report_formats['CSV Results'])),
('submit', 'Download'),
)
self.logger.info('Retrieving {}'.format(report_id))
req = self.request(self.OMP, params=params, method='GET')
report_df = pd.read_csv(io.BytesIO(req.text.encode('utf-8')))
report_df['report_ids'] = report_id
self.processed_reports += 1
merged_df = pd.merge(report_df, self.openvas_reports, on='report_ids').reset_index().drop('index', axis=1)
return merged_df
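# A minimal usage sketch (hypothetical host and credentials):
#   api = OpenVAS_API(hostname='localhost', port=4000,
#                     username='admin', password='secret')
#   finished = api.openvas_reports  # DataFrame of completed ('Done') reports
#   df = api.process_report(finished['report_ids'].iloc[0])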


@ -0,0 +1,124 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
__author__ = 'Nathan Young'
import logging
import sys
import xml.etree.ElementTree as ET
import dateutil.parser as dp
import pandas as pd
import qualysapi
class qualysWhisperAPI(object):
SCANS = 'api/2.0/fo/scan'
def __init__(self, config=None):
self.logger = logging.getLogger('qualysWhisperAPI')
self.config = config
try:
self.qgc = qualysapi.connect(config, 'qualys_vuln')
# Fail early if we can't make a request or auth is incorrect
self.qgc.request('about.php')
self.logger.info('Connected to Qualys at {}'.format(self.qgc.server))
except Exception as e:
self.logger.error('Could not connect to Qualys: {}'.format(str(e)))
sys.exit(1)
def scan_xml_parser(self, xml):
all_records = []
root = ET.XML(xml.encode("utf-8"))
for child in root.find('.//SCAN_LIST'):
all_records.append({
'name': child.find('TITLE').text,
'id': child.find('REF').text,
'date': child.find('LAUNCH_DATETIME').text,
'type': child.find('TYPE').text,
'duration': child.find('DURATION').text,
'status': child.find('.//STATE').text,
})
return pd.DataFrame(all_records)
def get_all_scans(self):
parameters = {
'action': 'list',
'echo_request': 0,
'show_op': 0,
'launched_after_datetime': '0001-01-01'
}
scans_xml = self.qgc.request(self.SCANS, parameters)
return self.scan_xml_parser(scans_xml)
def get_scan_details(self, scan_id=None):
parameters = {
'action': 'fetch',
'echo_request': 0,
'output_format': 'json_extended',
'mode': 'extended',
'scan_ref': scan_id
}
scan_json = self.qgc.request(self.SCANS, parameters)
# First two columns are metadata we already have
# Last column corresponds to "target_distribution_across_scanner_appliances" element
# which doesn't follow the schema and breaks the pandas data manipulation
return pd.read_json(scan_json).iloc[2:-1]
class qualysUtils:
def __init__(self):
self.logger = logging.getLogger('qualysUtils')
def iso_to_epoch(self, dt):
out = dp.parse(dt).strftime('%s')
self.logger.info('Converted {} to {}'.format(dt, out))
return out
class qualysVulnScan:
def __init__(
self,
config=None,
file_in=None,
file_stream=False,
delimiter=',',
quotechar='"',
):
self.logger = logging.getLogger('qualysVulnScan')
self.file_in = file_in
self.file_stream = file_stream
self.report = None
self.utils = qualysUtils()
if config:
try:
self.qw = qualysWhisperAPI(config=config)
except Exception as e:
self.logger.error('Could not load config! Please check settings. Error: {}'.format(str(e)))
if file_stream:
self.open_file = file_in.splitlines()
elif file_in:
self.open_file = open(file_in, 'rb')
self.downloaded_file = None
def process_data(self, scan_id=None):
"""Downloads a file from Qualys and normalizes it"""
self.logger.info('Downloading scan ID: {}'.format(scan_id))
scan_report = self.qw.get_scan_details(scan_id=scan_id)
if not scan_report.empty:
keep_columns = ['category', 'cve_id', 'cvss3_base', 'cvss3_temporal', 'cvss_base',
'cvss_temporal', 'dns', 'exploitability', 'fqdn', 'impact', 'ip', 'ip_status',
'netbios', 'os', 'pci_vuln', 'port', 'protocol', 'qid', 'results', 'severity',
'solution', 'ssl', 'threat', 'title', 'type', 'vendor_reference']
scan_report = scan_report.filter(keep_columns)
scan_report['severity'] = scan_report['severity'].astype(int).astype(str)
scan_report['qid'] = scan_report['qid'].astype(int).astype(str)
else:
self.logger.warn('Scan ID {} has no vulnerabilities, skipping.'.format(scan_id))
return scan_report
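# A minimal usage sketch (hypothetical config path):
#   scan = qualysVulnScan(config='configs/frameworks_example.ini')
#   scans = scan.qw.get_all_scans()  # DataFrame of available scans
#   df = scan.process_data(scan_id=scans['id'].iloc[0])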


@ -0,0 +1,465 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
__author__ = 'Austin Taylor'
from lxml import objectify
from lxml.builder import E
import xml.etree.ElementTree as ET
import pandas as pd
import qualysapi
import qualysapi.config as qcconf
import requests
import sys
import os
import csv
import logging
import dateutil.parser as dp
class qualysWhisperAPI(object):
COUNT_WEBAPP = '/count/was/webapp'
COUNT_WASSCAN = '/count/was/wasscan'
DELETE_REPORT = '/delete/was/report/{report_id}'
GET_WEBAPP_DETAILS = '/get/was/webapp/{was_id}'
QPS_REST_3 = '/qps/rest/3.0'
REPORT_DETAILS = '/get/was/report/{report_id}'
REPORT_STATUS = '/status/was/report/{report_id}'
REPORT_CREATE = '/create/was/report'
REPORT_DOWNLOAD = '/download/was/report/{report_id}'
SCAN_DETAILS = '/get/was/wasscan/{scan_id}'
SCAN_DOWNLOAD = '/download/was/wasscan/{scan_id}'
SEARCH_REPORTS = '/search/was/report'
SEARCH_WEB_APPS = '/search/was/webapp'
SEARCH_WAS_SCAN = '/search/was/wasscan'
VERSION = '/qps/rest/portal/version'
def __init__(self, config=None):
self.logger = logging.getLogger('qualysWhisperAPI')
self.config = config
try:
self.qgc = qualysapi.connect(config, 'qualys_web')
self.logger.info('Connected to Qualys at {}'.format(self.qgc.server))
except Exception as e:
self.logger.error('Could not connect to Qualys: {}'.format(str(e)))
self.headers = {
#"content-type": "text/xml"}
"Accept" : "application/json",
"Content-Type": "application/json"}
self.config_parse = qcconf.QualysConnectConfig(config, 'qualys_web')
try:
self.template_id = self.config_parse.get_template_id()
except:
self.logger.error('Could not retrieve template ID')
####
#### GET SCANS TO PROCESS
####
def get_was_scan_count(self, status):
"""
Checks number of scans, used to control the api limits
"""
parameters = (
E.ServiceRequest(
E.filters(
E.Criteria({'field': 'status', 'operator': 'EQUALS'}, status))))
xml_output = self.qgc.request(self.COUNT_WASSCAN, parameters)
root = objectify.fromstring(xml_output.encode('utf-8'))
return root.count.text
def generate_scan_result_XML(self, limit=1000, offset=1, status='FINISHED'):
report_xml = E.ServiceRequest(
E.filters(
E.Criteria({'field': 'status', 'operator': 'EQUALS'}, status
),
),
E.preferences(
E.startFromOffset(str(offset)),
E.limitResults(str(limit))
),
)
return report_xml
def get_scan_info(self, limit=1000, offset=1, status='FINISHED'):
""" Returns XML of ALL WAS Scans"""
data = self.generate_scan_result_XML(limit=limit, offset=offset, status=status)
return self.qgc.request(self.SEARCH_WAS_SCAN, data)
def xml_parser(self, xml, dupfield=None):
all_records = []
root = ET.XML(xml)
for i, child in enumerate(root):
for subchild in child:
record = {}
dup_tracker = 0
for p in subchild:
record[p.tag] = p.text
for o in p:
if o.tag in record:
dup_tracker += 1
record[o.tag + '_%s' % dup_tracker] = o.text
else:
record[o.tag] = o.text
all_records.append(record)
return pd.DataFrame(all_records)
def get_all_scans(self, limit=1000, offset=1, status='FINISHED'):
qualys_api_limit = limit
dataframes = []
_records = []
try:
total = int(self.get_was_scan_count(status=status))
self.logger.debug('Already have WAS scan count')
self.logger.info('Retrieving information for {} scans'.format(total))
for i in range(0, total):
if i % limit == 0:
if (total - i) < limit:
qualys_api_limit = total - i
self.logger.info('Making a request with a limit of {} at offset {}'.format((str(qualys_api_limit)), str(i + 1)))
scan_info = self.get_scan_info(limit=qualys_api_limit, offset=i + 1, status=status)
_records.append(scan_info)
self.logger.debug('Converting XML to DataFrame')
dataframes = [self.xml_parser(xml) for xml in _records]
except Exception as e:
self.logger.error("Couldn't process all scans: {}".format(e))
return pd.concat(dataframes, axis=0).reset_index().drop('index', axis=1)
####
#### CREATE VULNERABILITY REPORT AND DOWNLOAD IT
####
def get_report_status(self, report_id):
return self.qgc.request(self.REPORT_STATUS.format(report_id=report_id))
def download_report(self, report_id):
return self.qgc.request(self.REPORT_DOWNLOAD.format(report_id=report_id))
def generate_scan_report_XML(self, scan_id):
"""Generates a CSV report for an asset based on template defined in .ini file"""
report_xml = E.ServiceRequest(
E.data(
E.Report(
E.name('<![CDATA[API Scan Report generated by VulnWhisperer]]>'),
E.description('<![CDATA[CSV Scanning report for VulnWhisperer]]>'),
E.format('CSV'),
#type is not needed, as the template already has it
E.type('WAS_SCAN_REPORT'),
E.template(
E.id(self.template_id)
),
E.config(
E.scanReport(
E.target(
E.scans(
E.WasScan(
E.id(scan_id)
)
),
),
),
)
)
)
)
return report_xml
def create_report(self, report_id, kind='scan'):
mapper = {'scan': self.generate_scan_report_XML}
try:
data = mapper[kind](report_id)
except Exception as e:
self.logger.error('Error creating report: {}'.format(str(e)))
return self.qgc.request(self.REPORT_CREATE, data).encode('utf-8')
def delete_report(self, report_id):
return self.qgc.request(self.DELETE_REPORT.format(report_id=report_id))
class qualysReportFields:
CATEGORIES = ['VULNERABILITY',
'SENSITIVECONTENT',
'INFORMATION_GATHERED']
# URL Vulnerability Information
VULN_BLOCK = [
CATEGORIES[0],
'ID',
'QID',
'Url',
'Param',
'Function',
'Form Entry Point',
'Access Path',
'Authentication',
'Ajax Request',
'Ajax Request ID',
'Ignored',
'Ignore Reason',
'Ignore Date',
'Ignore User',
'Ignore Comments',
'First Time Detected',
'Last Time Detected',
'Last Time Tested',
'Times Detected',
'Payload #1',
'Request Method #1',
'Request URL #1',
'Request Headers #1',
'Response #1',
'Evidence #1',
]
INFO_HEADER = [
'Vulnerability Category',
'ID',
'QID',
'Response #1',
'Last Time Detected',
]
INFO_BLOCK = [
CATEGORIES[2],
'ID',
'QID',
'Results',
'Detection Date',
]
QID_HEADER = [
'QID',
'Id',
'Title',
'Category',
'Severity Level',
'Groups',
'OWASP',
'WASC',
'CWE',
'CVSS Base',
'CVSS Temporal',
'Description',
'Impact',
'Solution',
]
GROUP_HEADER = ['GROUP', 'Name', 'Category']
OWASP_HEADER = ['OWASP', 'Code', 'Name']
WASC_HEADER = ['WASC', 'Code', 'Name']
SCAN_META = ['Web Application Name', 'URL', 'Owner', 'Scope', 'Operating System']
CATEGORY_HEADER = ['Category', 'Severity', 'Level', 'Description']
class qualysUtils:
def __init__(self):
self.logger = logging.getLogger('qualysUtils')
def grab_section(
self,
report,
section,
end=[],
pop_last=False,
):
temp_list = []
max_col_count = 0
with open(report, 'rb') as csvfile:
q_report = csv.reader(csvfile, delimiter=',', quotechar='"')
for line in q_report:
if set(line) == set(section):
break
# Reads text until the end of the block:
for line in q_report: # This keeps reading the file
temp_list.append(line)
if line in end:
break
if pop_last and len(temp_list) > 1:
temp_list.pop(-1)
return temp_list
def iso_to_epoch(self, dt):
return dp.parse(dt).strftime('%s')
def cleanser(self, _data):
repls = (('\n', '|||'), ('\r', '|||'), (',', ';'), ('\t', '|||'))
if _data:
_data = reduce(lambda a, kv: a.replace(*kv), repls, str(_data))
return _data
class qualysScanReport:
# URL Vulnerability Information
WEB_SCAN_VULN_BLOCK = list(qualysReportFields.VULN_BLOCK)
WEB_SCAN_VULN_BLOCK.insert(WEB_SCAN_VULN_BLOCK.index('QID'), 'Detection ID')
WEB_SCAN_VULN_HEADER = list(WEB_SCAN_VULN_BLOCK)
WEB_SCAN_VULN_HEADER[WEB_SCAN_VULN_BLOCK.index(qualysReportFields.CATEGORIES[0])] = \
'Vulnerability Category'
WEB_SCAN_SENSITIVE_HEADER = list(WEB_SCAN_VULN_HEADER)
WEB_SCAN_SENSITIVE_HEADER.insert(WEB_SCAN_SENSITIVE_HEADER.index('Url'
), 'Content')
WEB_SCAN_SENSITIVE_BLOCK = list(WEB_SCAN_SENSITIVE_HEADER)
WEB_SCAN_SENSITIVE_BLOCK.insert(WEB_SCAN_SENSITIVE_BLOCK.index('QID'), 'Detection ID')
WEB_SCAN_SENSITIVE_BLOCK[WEB_SCAN_SENSITIVE_BLOCK.index('Vulnerability Category'
)] = qualysReportFields.CATEGORIES[1]
WEB_SCAN_INFO_HEADER = list(qualysReportFields.INFO_HEADER)
WEB_SCAN_INFO_HEADER.insert(WEB_SCAN_INFO_HEADER.index('QID'), 'Detection ID')
WEB_SCAN_INFO_BLOCK = list(qualysReportFields.INFO_BLOCK)
WEB_SCAN_INFO_BLOCK.insert(WEB_SCAN_INFO_BLOCK.index('QID'), 'Detection ID')
QID_HEADER = list(qualysReportFields.QID_HEADER)
GROUP_HEADER = list(qualysReportFields.GROUP_HEADER)
OWASP_HEADER = list(qualysReportFields.OWASP_HEADER)
WASC_HEADER = list(qualysReportFields.WASC_HEADER)
SCAN_META = list(qualysReportFields.SCAN_META)
CATEGORY_HEADER = list(qualysReportFields.CATEGORY_HEADER)
def __init__(
self,
config=None,
file_in=None,
file_stream=False,
delimiter=',',
quotechar='"',
):
self.logger = logging.getLogger('qualysScanReport')
self.file_in = file_in
self.file_stream = file_stream
self.report = None
self.utils = qualysUtils()
if config:
try:
self.qw = qualysWhisperAPI(config=config)
except Exception as e:
self.logger.error('Could not load config! Please check settings. Error: {}'.format(str(e)))
if file_stream:
self.open_file = file_in.splitlines()
elif file_in:
self.open_file = open(file_in, 'rb')
self.downloaded_file = None
def grab_sections(self, report):
all_dataframes = []
dict_tracker = {}
# note: no file handle is needed here, grab_section() opens the report itself
dict_tracker['WEB_SCAN_VULN_BLOCK'] = pd.DataFrame(self.utils.grab_section(report,
self.WEB_SCAN_VULN_BLOCK,
end=[
self.WEB_SCAN_SENSITIVE_BLOCK,
self.WEB_SCAN_INFO_BLOCK],
pop_last=True),
columns=self.WEB_SCAN_VULN_HEADER)
dict_tracker['WEB_SCAN_SENSITIVE_BLOCK'] = pd.DataFrame(self.utils.grab_section(report,
self.WEB_SCAN_SENSITIVE_BLOCK,
end=[
self.WEB_SCAN_INFO_BLOCK,
self.WEB_SCAN_SENSITIVE_BLOCK],
pop_last=True),
columns=self.WEB_SCAN_SENSITIVE_HEADER)
dict_tracker['WEB_SCAN_INFO_BLOCK'] = pd.DataFrame(self.utils.grab_section(report,
self.WEB_SCAN_INFO_BLOCK,
end=[self.QID_HEADER],
pop_last=True),
columns=self.WEB_SCAN_INFO_HEADER)
dict_tracker['QID_HEADER'] = pd.DataFrame(self.utils.grab_section(report,
self.QID_HEADER,
end=[self.GROUP_HEADER],
pop_last=True),
columns=self.QID_HEADER)
dict_tracker['GROUP_HEADER'] = pd.DataFrame(self.utils.grab_section(report,
self.GROUP_HEADER,
end=[self.OWASP_HEADER],
pop_last=True),
columns=self.GROUP_HEADER)
dict_tracker['OWASP_HEADER'] = pd.DataFrame(self.utils.grab_section(report,
self.OWASP_HEADER,
end=[self.WASC_HEADER],
pop_last=True),
columns=self.OWASP_HEADER)
dict_tracker['WASC_HEADER'] = pd.DataFrame(self.utils.grab_section(report,
self.WASC_HEADER, end=[['APPENDIX']],
pop_last=True),
columns=self.WASC_HEADER)
dict_tracker['SCAN_META'] = pd.DataFrame(self.utils.grab_section(report,
self.SCAN_META,
end=[self.CATEGORY_HEADER],
pop_last=True),
columns=self.SCAN_META)
dict_tracker['CATEGORY_HEADER'] = pd.DataFrame(self.utils.grab_section(report,
self.CATEGORY_HEADER),
columns=self.CATEGORY_HEADER)
all_dataframes.append(dict_tracker)
return all_dataframes
def data_normalizer(self, dataframes):
"""
Merge and clean data
:param dataframes:
:return:
"""
df_dict = dataframes[0]
merged_df = pd.concat([df_dict['WEB_SCAN_VULN_BLOCK'], df_dict['WEB_SCAN_SENSITIVE_BLOCK'],
df_dict['WEB_SCAN_INFO_BLOCK']], axis=0,
ignore_index=False)
merged_df = pd.merge(merged_df, df_dict['QID_HEADER'], left_on='QID',
right_on='Id')
if 'Content' not in merged_df:
merged_df['Content'] = ''
columns_to_cleanse = ['Payload #1', 'Request Method #1', 'Request URL #1',
'Request Headers #1', 'Response #1', 'Evidence #1',
'Description', 'Impact', 'Solution', 'Url', 'Content']
for col in columns_to_cleanse:
merged_df[col] = merged_df[col].apply(self.utils.cleanser)
merged_df = merged_df.drop(['QID_y', 'QID_x'], axis=1)
merged_df = merged_df.rename(columns={'Id': 'QID'})
merged_df = merged_df.assign(**df_dict['SCAN_META'].to_dict(orient='records')[0])
merged_df = pd.merge(merged_df, df_dict['CATEGORY_HEADER'], how='left', left_on=['Category', 'Severity Level'],
right_on=['Category', 'Severity'], suffixes=('Severity', 'CatSev'))
merged_df = merged_df.replace('N/A', '').fillna('')
try:
merged_df = \
merged_df[~merged_df.Title.str.contains('Links Crawled|External Links Discovered')]
except Exception as e:
self.logger.error('Error normalizing: {}'.format(str(e)))
return merged_df
def download_file(self, path='', file_id=None):
report = self.qw.download_report(file_id)
filename = path + str(file_id) + '.csv'
with open(filename, 'w') as file_out:
for line in report.splitlines():
file_out.write(line + '\n')
self.logger.info('File written to {}'.format(filename))
return filename
def process_data(self, path='', file_id=None, cleanup=True):
"""Downloads a file from qualys and normalizes it"""
self.logger.info('Downloading file ID: {}'.format(file_id))
download_file = self.download_file(path=path, file_id=file_id)
report_data = self.grab_sections(download_file)
merged_data = self.data_normalizer(report_data)
merged_data.sort_index(axis=1, inplace=True)
return merged_data
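# Illustrative end-to-end sketch (hypothetical config and file id): download a
# Qualys WAS CSV, split it into per-section DataFrames and flatten the result.
#
#   scan = qualysScanReport(config=config)
#   df = scan.process_data(path='/tmp/', file_id='12345')
#   df.to_csv('/tmp/normalized.csv', index=False)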

@@ -0,0 +1,669 @@
import json
import os
from datetime import datetime, date, timedelta
from jira import JIRA
import requests
import logging
from bottle import template
import re
class JiraAPI(object):
def __init__(self, hostname=None, username=None, password=None, path="", debug=False, clean_obsolete=True, max_time_window=12, decommission_time_window=3):
self.logger = logging.getLogger('JiraAPI')
if debug:
self.logger.setLevel(logging.DEBUG)
if not hostname.startswith("https://"):
hostname = "https://{}".format(hostname)
self.username = username
self.password = password
self.jira = JIRA(options={'server': hostname}, basic_auth=(self.username, self.password))
self.logger.info("Created vjira service for {}".format(hostname))
self.all_tickets = []
self.excluded_tickets = []
self.JIRA_REOPEN_ISSUE = "Reopen Issue"
self.JIRA_CLOSE_ISSUE = "Close Issue"
self.JIRA_RESOLUTION_OBSOLETE = "Obsolete"
self.JIRA_RESOLUTION_FIXED = "Fixed"
self.template_path = 'vulnwhisp/reporting/resources/ticket.tpl'
self.max_ips_ticket = 30
self.attachment_filename = "vulnerable_assets.txt"
self.max_time_tracking = max_time_window #in months
if path:
self.download_tickets(path)
else:
self.logger.warn("No local path specified, skipping Jira ticket download.")
self.max_decommission_time = decommission_time_window #in months
# [HYGIENE] close tickets older than max_time_window months (default 12) as obsolete
if clean_obsolete:
self.close_obsolete_tickets()
# removes the "server_decommission" label from tickets that have been closed for more than decommission_time_window months
self.decommission_cleanup()
self.jira_still_vulnerable_comment = '''This ticket has been reopened because the vulnerability has not been fixed (if multiple assets are affected, all of them need to be fixed; if the server is down, the latest known vulnerability might be the one reported).
- If the team accepts the risk and wants to close the ticket, please add the label "*risk_accepted*" to the ticket before closing it.
- If the server has been decommissioned, please add the label "*server_decommission*" to the ticket before closing it.
- If, when checking the vulnerability, it looks like a false positive, _+please elaborate in a comment+_ and add the label "*false_positive*" before closing it; we will review it and report it to the vendor.
If you have further doubts, please contact the Security Team.'''
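# Illustrative sketch (hostname, credentials and path are hypothetical):
# instantiating the class connects to Jira, optionally snapshots tickets to
# `path`, and runs the hygiene routines before any sync.
#
#   jira_api = JiraAPI(hostname='jira.example.com', username='svc_vulnwhisperer',
#                      password='secret', path='/opt/VulnWhisperer/data/jira/')
#   if jira_api.project_exists('IS'):
#       jira_api.sync(vulnerabilities, project='IS')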
def create_ticket(self, title, desc, project="IS", components=[], tags=[], attachment_contents = []):
labels = ['vulnerability_management']
for tag in tags:
labels.append(str(tag))
self.logger.info("Creating ticket for project {} title: {}".format(project, title[:20]))
self.logger.debug("project {} has a component requirement: {}".format(project, components))
project_obj = self.jira.project(project)
components_ticket = []
for component in components:
exists = False
for c in project_obj.components:
if component == c.name:
self.logger.debug("resolved component name {} to id {}".format(c.name, c.id))
components_ticket.append({ "id": c.id })
exists=True
if not exists:
self.logger.error("Error creating Ticket: component {} not found".format(component))
return 0
try:
new_issue = self.jira.create_issue(project=project,
summary=title,
description=desc,
issuetype={'name': 'Bug'},
labels=labels,
components=components_ticket)
self.logger.info("Ticket {} created successfully".format(new_issue))
if attachment_contents:
self.add_content_as_attachment(new_issue, attachment_contents)
except Exception as e:
self.logger.error("Failed to create ticket on Jira Project '{}'. Error: {}".format(project, e))
new_issue = False
return new_issue
#Basic JIRA Metrics
def metrics_open_tickets(self, project=None):
jql = "labels= vulnerability_management and resolution = Unresolved"
if project:
jql += " and (project='{}')".format(project)
self.logger.debug('Executing: {}'.format(jql))
return len(self.jira.search_issues(jql, maxResults=0))
def metrics_closed_tickets(self, project=None):
jql = "labels= vulnerability_management and NOT resolution = Unresolved AND created >=startOfMonth(-{})".format(self.max_time_tracking)
if project:
jql += " and (project='{}')".format(project)
return len(self.jira.search_issues(jql, maxResults=0))
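# e.g. metrics_open_tickets('IS') runs the JQL
#   labels= vulnerability_management and resolution = Unresolved and (project='IS')
# and returns the number of matching issues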
def sync(self, vulnerabilities, project, components=[]):
# each vulnerability is a dict with the keys: source, scan_name, title, diagnosis, consequence, solution, ips, risk, references
self.logger.info("JIRA Sync started")
for vuln in vulnerabilities:
# JIRA doesn't allow labels with spaces, so any spaces in scan_name are replaced by "_"
if " " in vuln['scan_name']:
vuln['scan_name'] = vuln['scan_name'].replace(" ", "_")
# we exclude from the vulnerabilities to report those assets that already exist with *risk_accepted*/*server_decommission*
vuln = self.exclude_accepted_assets(vuln)
# make sure after exclusion of risk_accepted assets there are still assets
if vuln['ips']:
exists, to_update, ticketid, ticket_assets = self.check_vuln_already_exists(vuln)
if exists:
# If ticket "resolved" -> reopen, as vulnerability is still existent
self.reopen_ticket(ticketid=ticketid, comment=self.jira_still_vulnerable_comment)
self.add_label(ticketid, vuln['risk'])
continue
elif to_update:
self.ticket_update_assets(vuln, ticketid, ticket_assets)
self.add_label(ticketid, vuln['risk'])
continue
attachment_contents = []
# if assets >30, add as attachment
# create local text file with assets, attach it to ticket
if len(vuln['ips']) > self.max_ips_ticket:
attachment_contents = vuln['ips']
vuln['ips'] = ["Affected hosts ({assets}) exceed Jira's allowed character limit, added as an attachment.".format(assets = len(attachment_contents))]
try:
tpl = template(self.template_path, vuln)
except Exception as e:
self.logger.error('Exception templating: {}'.format(str(e)))
return 0
self.create_ticket(title=vuln['title'], desc=tpl, project=project, components=components, tags=[vuln['source'], vuln['scan_name'], 'vulnerability', vuln['risk']], attachment_contents = attachment_contents)
else:
self.logger.info("Ignoring vulnerability as all assets are already reported in a risk_accepted ticket")
self.close_fixed_tickets(vulnerabilities)
# we reinitialize so the next sync redoes the query with their specific variables
self.all_tickets = []
self.excluded_tickets = []
return True
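# Illustrative sketch of the vulnerability dict sync() expects (values are
# hypothetical); template() renders it through ticket.tpl to build the body:
#
#   vuln = {'source': 'qualys_web', 'scan_name': 'external_scan',
#           'title': 'SSL Certificate Expired', 'diagnosis': '...',
#           'consequence': '...', 'solution': '...',
#           'ips': ['10.0.0.1 - tcp/443 - web01'], 'risk': 'high',
#           'references': ['https://example.com/advisory']}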
def exclude_accepted_assets(self, vuln):
# we want to check JIRA tickets with risk_accepted/server_decommission or false_positive labels sharing the same source
# tickets older than max_time_tracking months are excluded; old tickets get closed for hygiene and recreated if still vulnerable
labels = [vuln['source'], vuln['scan_name'], 'vulnerability_management', 'vulnerability']
if not self.excluded_tickets:
jql = "{} AND labels in (risk_accepted,server_decommission, false_positive) AND NOT labels=advisory AND created >=startOfMonth(-{})".format(" AND ".join(["labels={}".format(label) for label in labels]), self.max_time_tracking)
self.excluded_tickets = self.jira.search_issues(jql, maxResults=0)
title = vuln['title']
# WARNING: this function ignores duplicates; after finding the first "duplicate" it returns that it exists
# and won't iterate over the rest of the tickets looking for other possible duplicates/similar issues
self.logger.info("Comparing vulnerability to risk_accepted tickets")
assets_to_exclude = []
tickets_excluded_assets = []
for ticket in self.excluded_tickets:
checking_ticketid, checking_title, checking_assets = self.ticket_get_unique_fields(ticket)
if title.encode('ascii') == checking_title.encode('ascii'):
if checking_assets:
# checking_assets is a list; we add it to our full list so all those assets get removed later
assets_to_exclude += checking_assets
tickets_excluded_assets.append(checking_ticketid)
if assets_to_exclude:
self.logger.warn("Vulnerable Assets seen on an already existing risk_accepted Jira ticket: {}".format(', '.join(tickets_excluded_assets)))
self.logger.debug("Original assets: {}".format(vuln['ips']))
# assets in the vulnerability have the structure "ip - hostname - port", so we match on the IP part
for exclusion in assets_to_exclude:
# for efficiency, we walk the array of scanner IPs backwards, as we will be popping out the matches
# and we don't want that to affect the rest of the processing (otherwise, it would miss the asset right after the removed one)
for index in range(len(vuln['ips']))[::-1]:
if exclusion == vuln['ips'][index].split(" - ")[0]:
self.logger.debug("Deleting asset {} from vulnerability {}, seen in risk_accepted.".format(vuln['ips'][index], title))
vuln['ips'].pop(index)
self.logger.debug("Modified assets: {}".format(vuln['ips']))
return vuln
def check_vuln_already_exists(self, vuln):
'''
Compares a vulnerability against the collection of existing tickets, so we know whether
it has already been reported and, if so, the ticket ID for further processing.
Returns a tuple: (exists (bool), needs_update (bool), ticketid (str), ticket_assets (list))
'''
title = vuln['title']
labels = [vuln['source'], vuln['scan_name'], 'vulnerability_management', 'vulnerability']
#list(set()) to remove duplicates
assets = list(set(re.findall(r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b", ",".join(vuln['ips']))))
if not self.all_tickets:
self.logger.info("Retrieving all JIRA tickets with the following tags {}".format(labels))
# we want to check all JIRA tickets, to include tickets moved to other queues
# tickets older than max_time_tracking months are excluded; old tickets get closed for hygiene and recreated if still vulnerable
jql = "{} AND NOT labels=advisory AND created >=startOfMonth(-{})".format(" AND ".join(["labels={}".format(label) for label in labels]), self.max_time_tracking)
self.all_tickets = self.jira.search_issues(jql, maxResults=0)
# WARNING: this function ignores duplicates; after finding the first "duplicate" it returns that it exists
# and won't iterate over the rest of the tickets looking for other possible duplicates/similar issues
self.logger.info("Comparing Vulnerabilities to created tickets")
for ticket in self.all_tickets:
checking_ticketid, checking_title, checking_assets = self.ticket_get_unique_fields(ticket)
# added "not risk_accepted", as if it is risk_accepted, we will create a new ticket excluding the accepted assets
if title.encode('ascii') == checking_title.encode('ascii') and not self.is_risk_accepted(self.jira.issue(checking_ticketid)):
difference = list(set(assets).symmetric_difference(checking_assets))
#to check intersection - set(assets) & set(checking_assets)
if difference:
self.logger.info("Asset mismatch, ticket to update. Ticket ID: {}".format(checking_ticketid))
return False, True, checking_ticketid, checking_assets
else:
self.logger.info("Confirmed duplicate. Ticket ID: {}".format(checking_ticketid))
return True, False, checking_ticketid, []
return False, False, "", []
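# Return value semantics (sketch, with a hypothetical ticket id):
#   (True,  False, 'IS-123', [])     -> exact duplicate, reopen IS-123
#   (False, True,  'IS-123', [...])  -> same title but asset set changed, update IS-123
#   (False, False, '',       [])     -> not reported yet, create a new ticket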
def ticket_get_unique_fields(self, ticket):
title = ticket.raw.get('fields', {}).get('summary').encode("ascii").strip()
ticketid = ticket.key.encode("ascii")
assets = self.get_assets_from_description(ticket)
if not assets:
#check if attachment, if so, get assets from attachment
assets = self.get_assets_from_attachment(ticket)
return ticketid, title, assets
def get_assets_from_description(self, ticket, _raw = False):
# Get the assets as a string "host - protocol/port - hostname" separated by "\n"
# structure the text to have the same structure as the assets from the attachment
affected_assets = ""
try:
affected_assets = ticket.raw.get('fields', {}).get('description').encode("ascii").split("{panel:title=Affected Assets}")[1].split("{panel}")[0].replace('\n','').replace(' * ','\n').replace('\n', '', 1)
except Exception as e:
self.logger.error("Unable to process the Ticket's 'Affected Assets'. Ticket ID: {}. Reason: {}".format(ticket, e))
if affected_assets:
if _raw:
# check whether the panel text indicates the assets were added as an attachment instead
if "added as an attachment" in affected_assets:
return False
return affected_assets
try:
# if _raw is not true, we return only the IPs of the affected assets
return list(set(re.findall(r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b", affected_assets)))
except Exception as e:
self.logger.error("Ticket IPs regex failed. Ticket ID: {}. Reason: {}".format(ticket, e))
return False
def get_assets_from_attachment(self, ticket, _raw=False):
# Get the assets as a string "host - protocol/port - hostname" separated by "\n"
affected_assets = ""
try:
fields = self.jira.issue(ticket.key).raw.get('fields', {})
attachments = fields.get('attachment', {})
# we will make sure we get the latest version of the file
latest = ''
attachment_id = ''
if attachments:
for item in attachments:
if item.get('filename') == self.attachment_filename:
if not latest:
latest = item.get('created')
attachment_id = item.get('id')
else:
if latest < item.get('created'):
latest = item.get('created')
attachment_id = item.get('id')
affected_assets = self.jira.attachment(attachment_id).get()
except Exception as e:
self.logger.error("Failed to get assets from ticket attachment. Ticket ID: {}. Reason: {}".format(ticket, e))
if affected_assets:
if _raw:
return affected_assets
try:
# if _raw is not true, we return only the IPs of the affected assets
affected_assets = list(set(re.findall(r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b", affected_assets)))
return affected_assets
except Exception as e:
self.logger.error("Ticket IPs Attachment regex failed. Ticket ID: {}. Reason: {}".format(ticket, e))
return False
def parse_asset_to_json(self, asset):
hostname, protocol, port = "", "", ""
asset_info = asset.split(" - ")
ip = asset_info[0]
proto_port = asset_info[1]
# in case there is some case where hostname is not reported at all
if len(asset_info) == 3:
hostname = asset_info[2]
if proto_port != "N/A/N/A":
protocol, port = proto_port.split("/")
port = int(float(port))
asset_dict = {
"host": ip,
"protocol": protocol,
"port": port,
"hostname": hostname
}
return asset_dict
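# e.g. parse_asset_to_json('10.0.0.1 - tcp/443 - web01') ->
#   {'host': '10.0.0.1', 'protocol': 'tcp', 'port': 443, 'hostname': 'web01'}
# while a 'N/A/N/A' protocol/port slot leaves protocol and port empty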
def clean_old_attachments(self, ticket):
fields = ticket.raw.get('fields')
attachments = fields.get('attachment')
if attachments:
for item in attachments:
if item.get('filename') == self.attachment_filename:
self.jira.delete_attachment(item.get('id'))
def add_content_as_attachment(self, issue, contents):
try:
# create the file locally with the data
with open(self.attachment_filename, "w") as attachment_file:
attachment_file.write("\n".join(contents))
# push the created file to the ticket
with open(self.attachment_filename, "rb") as attachment_file:
self.jira.add_attachment(issue, attachment_file, self.attachment_filename)
# remove the temp file
os.remove(self.attachment_filename)
self.logger.info("Added attachment successfully.")
except Exception as e:
self.logger.error("Error while attaching file to ticket: {}".format(e))
return False
return True
def get_ticket_reported_assets(self, ticket):
#[METRICS] return a list with all the affected assets for that vulnerability (including already resolved ones)
return list(set(re.findall(r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b",str(self.jira.issue(ticket).raw))))
def get_resolution_time(self, ticket):
#get time a ticket took to be resolved
ticket_obj = self.jira.issue(ticket)
if self.is_ticket_resolved(ticket_obj):
ticket_data = ticket_obj.raw.get('fields')
#dates follow format '2018-11-06T10:36:13.849+0100'
created = [int(x) for x in ticket_data['created'].split('.')[0].replace('T', '-').replace(':','-').split('-')]
resolved = [int(x) for x in ticket_data['resolutiondate'].split('.')[0].replace('T', '-').replace(':','-').split('-')]
start = datetime(created[0],created[1],created[2],created[3],created[4],created[5])
end = datetime(resolved[0],resolved[1],resolved[2],resolved[3],resolved[4],resolved[5])
return (end-start).days
else:
self.logger.error("Ticket {ticket} is not resolved, can't calculate resolution time".format(ticket=ticket))
return False
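# e.g. with created '2018-11-06T10:36:13.849+0100' and resolutiondate
# '2018-11-16T09:00:00.000+0100' this returns 9: whole days between the two
# naive datetimes (the timezone offset is discarded by the split above)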
def ticket_update_assets(self, vuln, ticketid, ticket_assets):
# correct description will always be in the vulnerability to report, only needed to update description to new one
self.logger.info("Ticket {} exists, UPDATE requested".format(ticketid))
# for now, if a vulnerability has been accepted ('risk_accepted'), the ticket is completely ignored and not updated (no new assets)
# TODO when a vulnerability is accepted, create a new ticket with only the non-accepted vulnerable assets
# this would require going through the downloaded tickets, checking for duplicates/accepted ones, and if found,
# checking their assets to exclude them from the new ticket
ticket_obj = self.jira.issue(ticketid)
if self.is_ticket_resolved(ticket_obj):
if self.is_risk_accepted(ticket_obj):
return 0
self.reopen_ticket(ticketid=ticketid, comment=self.jira_still_vulnerable_comment)
#First will do the comparison of assets
ticket_obj.update()
assets = list(set(re.findall(r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b", ",".join(vuln['ips']))))
difference = list(set(assets).symmetric_difference(ticket_assets))
comment = ''
added = ''
removed = ''
#put a comment with the assets that have been added/removed
for asset in difference:
if asset in assets:
if not added:
added = '\nThe following assets *have been newly detected*:\n'
added += '* {}\n'.format(asset)
elif asset in ticket_assets:
if not removed:
removed = '\nThe following assets *have been resolved*:\n'
removed += '* {}\n'.format(asset)
comment = added + removed
# then check whether there are too many assets and they need to go in an attachment instead
attachment_contents = []
if len(vuln['ips']) > self.max_ips_ticket:
attachment_contents = vuln['ips']
vuln['ips'] = ["Affected hosts ({assets}) exceed Jira's allowed character limit, added as an attachment.".format(assets = len(attachment_contents))]
#fill the ticket description template
try:
tpl = template(self.template_path, vuln)
except Exception as e:
self.logger.error('Exception updating assets: {}'.format(str(e)))
return 0
#proceed checking if it requires adding as an attachment
try:
#update attachment with hosts and delete the old versions
if attachment_contents:
self.clean_old_attachments(ticket_obj)
self.add_content_as_attachment(ticket_obj, attachment_contents)
ticket_obj.update(description=tpl, comment=comment, fields={"labels":ticket_obj.fields.labels})
self.logger.info("Ticket {} updated successfully".format(ticketid))
self.add_label(ticketid, 'updated')
except Exception as e:
self.logger.error("Error while trying to update ticket {ticketid}.\nReason: {e}".format(ticketid=ticketid, e=e))
return 0
def add_label(self, ticketid, label):
ticket_obj = self.jira.issue(ticketid)
if label not in [x.encode('utf8') for x in ticket_obj.fields.labels]:
ticket_obj.fields.labels.append(label)
try:
ticket_obj.update(fields={"labels":ticket_obj.fields.labels})
self.logger.info("Added label {label} to ticket {ticket}".format(label=label, ticket=ticketid))
except Exception as e:
self.logger.error("Error while trying to add label {label} to ticket {ticket}: {e}".format(label=label, ticket=ticketid, e=e))
return 0
def remove_label(self, ticketid, label):
ticket_obj = self.jira.issue(ticketid)
if label in [x.encode('utf8') for x in ticket_obj.fields.labels]:
ticket_obj.fields.labels.remove(label)
try:
ticket_obj.update(fields={"labels":ticket_obj.fields.labels})
self.logger.info("Removed label {label} from ticket {ticket}".format(label=label, ticket=ticketid))
except Exception as e:
self.logger.error("Error while trying to remove label {label} from ticket {ticket}: {e}".format(label=label, ticket=ticketid, e=e))
else:
self.logger.error("Error: label {label} not in ticket {ticket}".format(label=label, ticket=ticketid))
return 0
def close_fixed_tickets(self, vulnerabilities):
'''
Close still-open tickets whose vulnerabilities have been resolved.
The hygiene cleanup affects all tickets created by the module; it filters by the label 'vulnerability_management'.
'''
found_vulns = [vuln['title'] for vuln in vulnerabilities]
comment = '''This ticket is being closed as it appears that the vulnerability no longer exists.
If the vulnerability reappears, a new ticket will be opened.'''
for ticket in self.all_tickets:
if ticket.raw['fields']['summary'].strip() in found_vulns:
self.logger.info("Ticket {} is still vulnerable".format(ticket))
continue
self.logger.info("Ticket {} is no longer vulnerable".format(ticket))
self.close_ticket(ticket, self.JIRA_RESOLUTION_FIXED, comment)
return 0
def is_ticket_reopenable(self, ticket_obj):
transitions = self.jira.transitions(ticket_obj)
for transition in transitions:
if transition.get('name') == self.JIRA_REOPEN_ISSUE:
self.logger.debug("Ticket is reopenable")
return True
self.logger.error("Ticket {} can't be opened. Check Jira transitions.".format(ticket_obj))
return False
def is_ticket_closeable(self, ticket_obj):
transitions = self.jira.transitions(ticket_obj)
for transition in transitions:
if transition.get('name') == self.JIRA_CLOSE_ISSUE:
return True
self.logger.error("Ticket {} can't be closed. Check Jira transitions.".format(ticket_obj))
return False
def is_ticket_resolved(self, ticket_obj):
#Checks if a ticket is resolved or not
if ticket_obj is not None:
if ticket_obj.raw['fields'].get('resolution') is not None:
if ticket_obj.raw['fields'].get('resolution').get('name') != 'Unresolved':
self.logger.debug("Checked ticket {} is already closed".format(ticket_obj))
self.logger.info("Ticket {} is closed".format(ticket_obj))
return True
self.logger.debug("Checked ticket {} is already open".format(ticket_obj))
return False
def is_risk_accepted(self, ticket_obj):
if ticket_obj is not None:
if ticket_obj.raw['fields'].get('labels') is not None:
labels = ticket_obj.raw['fields'].get('labels')
if "risk_accepted" in labels:
self.logger.warn("Ticket {} accepted risk, will be ignored".format(ticket_obj))
return True
elif "server_decommission" in labels:
self.logger.warn("Ticket {} server decommissioned, will be ignored".format(ticket_obj))
return True
elif "false_positive" in labels:
self.logger.warn("Ticket {} flagged false positive, will be ignored".format(ticket_obj))
return True
self.logger.info("Ticket {} risk has not been accepted".format(ticket_obj))
return False
def reopen_ticket(self, ticketid, ignore_labels=False, comment=""):
self.logger.debug("Ticket {} exists, REOPEN requested".format(ticketid))
# this will reopen a ticket by ticketid
ticket_obj = self.jira.issue(ticketid)
if self.is_ticket_resolved(ticket_obj):
if (not self.is_risk_accepted(ticket_obj) or ignore_labels):
try:
if self.is_ticket_reopenable(ticket_obj):
self.jira.transition_issue(issue=ticketid, transition=self.JIRA_REOPEN_ISSUE, comment=comment)
self.logger.info("Ticket {} reopened successfully".format(ticketid))
if not ignore_labels:
self.add_label(ticketid, 'reopened')
return 1
except Exception as e:
# continue with ticket data so that a new ticket is created in place of the "lost" one
self.logger.error("error reopening ticket {}: {}".format(ticketid, e))
return 0
return 0
def close_ticket(self, ticketid, resolution, comment):
# this will close a ticket by ticketid
self.logger.debug("Ticket {} exists, CLOSE requested".format(ticketid))
ticket_obj = self.jira.issue(ticketid)
if not self.is_ticket_resolved(ticket_obj):
try:
if self.is_ticket_closeable(ticket_obj):
#need to add the label before closing the ticket
self.add_label(ticketid, 'closed')
self.jira.transition_issue(issue=ticketid, transition=self.JIRA_CLOSE_ISSUE, comment=comment, resolution={"name": resolution})
self.logger.info("Ticket {} closed successfully".format(ticketid))
return 1
except Exception as e:
# continue with ticket data so that a new ticket is created in place of the "lost" one
self.logger.error("error closing ticket {}: {}".format(ticketid, e))
return 0
return 0
def close_obsolete_tickets(self):
# Close tickets older than max_time_tracking months; vulnerabilities that are still unresolved will get a new ticket
self.logger.info("Closing obsolete tickets older than {} months".format(self.max_time_tracking))
jql = "labels=vulnerability_management AND NOT labels=advisory AND created <startOfMonth(-{}) and resolution=Unresolved".format(self.max_time_tracking)
tickets_to_close = self.jira.search_issues(jql, maxResults=0)
comment = '''This ticket is being closed for hygiene, as it is more than {} months old.
If the vulnerability still exists, a new ticket will be opened.'''.format(self.max_time_tracking)
for ticket in tickets_to_close:
self.close_ticket(ticket, self.JIRA_RESOLUTION_OBSOLETE, comment)
return 0
def project_exists(self, project):
try:
self.jira.project(project)
return True
except Exception:
return False
def download_tickets(self, path):
'''
Saves all tickets locally, as a local snapshot of the vulnerability_management tickets
'''
#check if file already exists
check_date = str(date.today())
fname = '{}jira_{}.json'.format(path, check_date)
if os.path.isfile(fname):
self.logger.info("File {} already exists, skipping ticket download".format(fname))
return True
try:
self.logger.info("Saving tickets from the last {} months locally".format(self.max_time_tracking))
jql = "labels=vulnerability_management AND NOT labels=advisory AND created >=startOfMonth(-{})".format(self.max_time_tracking)
tickets_data = self.jira.search_issues(jql, maxResults=0)
# process tickets, creating a new field called "_metadata" with all the affected assets well structured,
# for future processing in ELK/Splunk; this includes downloading attachments with assets and processing them
processed_tickets = []
for ticket in tickets_data:
assets = self.get_assets_from_description(ticket, _raw=True)
if not assets:
# check if attachment, if so, get assets from attachment
assets = self.get_assets_from_attachment(ticket, _raw=True)
# process the affected assets to save them as json structure on a new field from the JSON
_metadata = {"affected_hosts": []}
if assets:
if "\n" in assets:
for asset in assets.split("\n"):
assets_json = self.parse_asset_to_json(asset)
_metadata["affected_hosts"].append(assets_json)
else:
assets_json = self.parse_asset_to_json(assets)
_metadata["affected_hosts"].append(assets_json)
temp_ticket = ticket.raw.get('fields')
temp_ticket['_metadata'] = _metadata
processed_tickets.append(temp_ticket)
# end of line needed, as writelines() doesn't add it automatically; otherwise we would get one big line
# save the processed tickets, so the "_metadata" field is included in the snapshot
to_save = [json.dumps(ticket) + "\n" for ticket in processed_tickets]
with open(fname, 'w') as outfile:
outfile.writelines(to_save)
self.logger.info("Tickets saved successfully.")
return True
except Exception as e:
self.logger.error("Tickets could not be saved locally: {}.".format(e))
return False
def decommission_cleanup(self):
'''
Deletes the server_decommission label from tickets that have already been closed
for more than x months (default is 3) in order to clean up solved issues
for statistics purposes
'''
self.logger.info("Deleting 'server_decommission' tag from tickets closed more than {} months ago".format(self.max_decommission_time))
jql = "labels=vulnerability_management AND labels=server_decommission and resolutiondate <=startOfMonth(-{})".format(self.max_decommission_time)
decommissioned_tickets = self.jira.search_issues(jql, maxResults=0)
comment = '''The *server_decommission* label is being removed from this ticket, as it is more than {} months old and the server is expected to have been decommissioned by now.
If that is not the case and the vulnerability still exists, the ticket will be reopened.'''.format(self.max_decommission_time)
for ticket in decommissioned_tickets:
# we reopen the ticket first, to make sure the process is not blocked by a missing
# jira workflow transition or a disallowed edit on closed tickets
self.reopen_ticket(ticketid=ticket, ignore_labels=True)
self.remove_label(ticket, 'server_decommission')
self.close_ticket(ticket, self.JIRA_RESOLUTION_FIXED, comment)
return 0

@@ -0,0 +1,34 @@
{panel:title=Description}
{{ !diagnosis}}
{panel}
{panel:title=Consequence}
{{ !consequence}}
{panel}
{panel:title=Solution}
{{ !solution}}
{panel}
{panel:title=Affected Assets}
% for ip in ips:
* {{ip}}
% end
{panel}
{panel:title=References}
% for ref in references:
* {{ref}}
% end
{panel}
Please do not delete or modify the ticket's assigned tags or title, as they are used for syncing. If the ticket ceases to be recognised, a new ticket will be raised.
If the team accepts the risk and wants to close the ticket, please add the label "*risk_accepted*" to the ticket before closing it.
If the server has been decommissioned, please add the label "*server_decommission*" to the ticket before closing it.
If, when checking the vulnerability, it looks like a false positive, _+please elaborate in a comment+_ and add the label "*false_positive*" before closing it; we will review it and report it to the vendor.

vulnwhisp/test/mock.py

@@ -0,0 +1,76 @@
import os
import logging
import httpretty
class mockAPI(object):
def __init__(self, mock_dir=None, debug=False):
self.mock_dir = mock_dir
if not self.mock_dir:
# Try to guess the mock_dir if python setup.py develop was used
self.mock_dir = '/'.join(__file__.split('/')[:-3]) + '/tests/data'
self.logger = logging.getLogger('mockAPI')
if debug:
self.logger.setLevel(logging.DEBUG)
self.logger.info('mockAPI initialised, API requests will be mocked')
self.logger.debug('Test path resolved as {}'.format(self.mock_dir))
def get_directories(self, path):
_, subdirs, _ = next(os.walk(path))
return subdirs
def get_files(self, path):
_, _, files = next(os.walk(path))
return files
def qualys_vuln_callback(self, request, uri, response_headers):
self.logger.debug('Simulating response for {} ({})'.format(uri, request.body))
if 'list' in request.parsed_body['action']:
return [200,
response_headers,
open('{}/{}'.format(self.qualys_vuln_path, 'scans')).read()]
elif 'fetch' in request.parsed_body['action']:
try:
response_body = open('{}/{}'.format(
self.qualys_vuln_path,
request.parsed_body['scan_ref'][0].replace('/', '_'))
).read()
except IOError:
# Can't find the mock data file, just send an empty response
response_body = ''
return [200, response_headers, response_body]
def create_nessus_resource(self, framework):
for filename in self.get_files('{}/{}'.format(self.mock_dir, framework)):
method, resource = filename.split('_', 1)
resource = resource.replace('_', '/')
self.logger.debug('Adding mocked {} endpoint {} {}'.format(framework, method, resource))
httpretty.register_uri(
getattr(httpretty, method), 'https://{}:443/{}'.format(framework, resource),
body=open('{}/{}/{}'.format(self.mock_dir, framework, filename)).read()
)
def create_qualys_vuln_resource(self, framework):
# Create health check endpoint
self.logger.debug('Adding mocked {} endpoint {} {}'.format(framework, 'GET', 'msp/about.php'))
httpretty.register_uri(
httpretty.GET,
'https://{}:443/{}'.format(framework, 'msp/about.php'),
body='')
self.logger.debug('Adding mocked {} endpoint {} {}'.format(framework, 'POST', 'api/2.0/fo/scan/'))
httpretty.register_uri(
httpretty.POST, 'https://{}:443/{}'.format(framework, 'api/2.0/fo/scan/'),
body=self.qualys_vuln_callback)
def mock_endpoints(self):
for framework in self.get_directories(self.mock_dir):
if framework in ['nessus', 'tenable']:
self.create_nessus_resource(framework)
elif framework == 'qualys_vuln':
self.qualys_vuln_path = self.mock_dir + '/' + framework
self.create_qualys_vuln_resource(framework)
httpretty.enable()
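# Illustrative test sketch (hypothetical test body): with the endpoints mocked,
# requests made by the scanner modules are served from tests/data instead of
# the network.
#
#   mock_api = mockAPI(debug=True)
#   mock_api.mock_endpoints()
#   # ... exercise scanner code against https://nessus:443 or https://qualys_vuln:443 ...
#   httpretty.disable()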

@@ -1,16 +0,0 @@
class bcolors:
"""
Utility to add colors to shell for scripts
"""
HEADERS = '\033[95m'
OKBLUE = '\033[94m'
OKGREEN = '\033[92m'
WARNING = '\033[93m'
FAIL = '\033[91m'
ENDC = '\033[0m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
INFO = '{info}[INFO]{endc}'.format(info=OKBLUE, endc=ENDC)
SUCCESS = '{green}[SUCCESS]{endc}'.format(green=OKGREEN, endc=ENDC)
FAIL = '{red}[FAIL]{endc}'.format(red=FAIL, endc=ENDC)

File diff suppressed because it is too large